San Francisco Unified School District’s handling of a proposed OpenAI agreement has prompted questions about transparency and governance in school technology decisions. Credit: Courtesy of Wikimedia Commons CC BY-SA 3.0

The San Francisco Unified School District signed a contract with OpenAI three weeks before seeking approval from its school board, authorizing district-wide use of artificial intelligence tools without the public oversight that usually precedes such decisions.

The move caused concern among union leaders heading into strike negotiations in which teachers have raised the specter of artificial intelligence threatening jobs. But what most alarmed an organization that monitors the use of surveillance technology in schools was the appearance of a deal struck with little to no discussion of how student data privacy would be safeguarded.

District records show that technology service officer Eddie H. Ngo signed an agreement with OpenAI on Jan. 22, with an effective start date of Feb. 2. That start date fell eight days before the contract, which redacts its cost, was scheduled to appear for approval by the Board of Education. It was on the meeting’s consent agenda, a list of several items typically passed as part of a single vote without public debate.

The district has removed the item from the Feb. 10 meeting materials without clarifying whether any of the activities in what a document calls a “proposal” have already launched, whether the issue would ever go to a board vote, why the contract was redacted or what kinds of district or student data could be involved. The district’s communications office did say, in an unsigned email, that “the proposed agreement does not involve any financial cost to the district.”

The contract authorized deployment of ChatGPT EDU, an education-focused product, for up to 12,000 end users. A similar agreement with OpenAI in November 2024 cost the San Bernardino City Unified School District $19 per month for each end user. The San Francisco district did not explain why the agreement would be cost-free.

The document did not spell out how students’ work and personal information would be handled, or whether it would change educators’ workloads and instructional roles. It did not specify whether students could use the tool, though it did not appear to rule that out.

ChatGPT EDU is commonly used by schools and universities as a general-purpose AI tool for staff, and sometimes students. It supports writing, document analysis, lesson planning and basic data or coding tasks.

Even if students do not have direct access, data such as schoolwork, academic records, behavioral information and digital interactions can be especially sensitive because minors have special legal protections. Once shared with vendors, student information can be stored, analyzed or reused beyond public view.

Artificial intelligence chatbots can present privacy problems for schools, said Lee Tien, legislative director at the San Francisco-based Electronic Frontier Foundation, which scrutinizes how public institutions, including schools, use technology and collect private data.

The timing and handling of the agreement raised questions about how San Francisco school administrators evaluate and approve technology tools, and whether meaningful oversight occurs, Tien said. When procurement decisions come in advance of review by accountable leadership, public discussions about surveillance and transparency can be shortchanged. “It’s simply rubber-stamping decisions that were being made, and you don’t know why they were being made,” Tien said.

The signed OpenAI order form redacts pricing and omits key details, such as the scope of use and data-handling terms, making it unclear what the district was agreeing to.

The email from the district stated that formalizing an agreement with OpenAI would come with appropriate oversight and support the district’s “commitments to security, transparency and accountability.”

“SFUSD is establishing a centralized governance framework for artificial intelligence to ensure its use is safe, ethical, and responsible across the district,” the email stated. “This proposed agreement would establish clear guardrails around data protection, privacy, and responsible use.”

San Francisco schools have already embraced AI in limited, curriculum-specific settings, including literacy tutoring tools that district leaders credit with early gains. But the district has not publicly released guidance spelling out what staff may or may not put into AI systems, whether student-related information must be anonymized, or whether AI may be used for sensitive tasks, such as individualized special education plans or disciplinary files.

The agreement surfaced in the middle of high-stakes labor negotiations with the teachers’ union, United Educators of San Francisco, which has called for a strike to begin on Monday. Beyond wages and health care, the union raised concerns about the district’s growing use of AI in schools.

During the 2023-24 school year contract talks, the union sought limits on classroom use of artificial intelligence, protections against displacement of educator jobs and a requirement that the district discuss the educational impact of AI tools with labor representatives before deploying them. Those provisions remain unresolved, according to the union.

Tien said those narrower, curriculum-specific uses contrast sharply with a districtwide, general-purpose AI system adopted through retroactive approval and minimal public scrutiny. The real concern, he emphasized, is not what the technology does but how procurement decisions shape the way people end up interacting with it.

The Board of Education, which oversees district staff, did not respond to requests for comment.

Vendor-written rules

There is no comprehensive federal or California law governing how generative AI may be used in K-12 schools. Existing student privacy laws, including the Family Educational Rights and Privacy Act and California’s Student Online Personal Information Protection Act, were written long before chatbots and large language models entered classrooms.

While a recent state law requires education leaders to be convened to develop guidelines for “safe and effective” AI use in schools, it does not restrict districts from adopting AI tools in the meantime, making decisions like San Francisco’s OpenAI contract a matter of local procurement processes.

When school districts lack the staff or expertise to develop their own rules, they often default to vendor-written agreements misaligned with the public interest, Tien said. “If my incentive is how we get cheap tech, I may not care very much about how much student data ends up being leaked out to some other entity or what they do with it,” he said.

San Francisco’s agreement adopts the same student data privacy terms that OpenAI offered in a prior deal with the San Bernardino City Unified School District. It prohibits OpenAI from selling student data, training its models on it or using it for targeted advertising. It also states that OpenAI will store student data securely, restrict law-enforcement disclosures, treat the data as school-controlled, limit its use to educational purposes and delete it when the contract ends.

But those assurances do not spell out what kind of student data would be processed, who may use the tools, with what guardrails and at what cost.

“We don’t know a whole lot about how these processes work, and that means a lot of the real consequences end up playing out at the level of individual schools,” Tien said. Privacy risks around new technologies are especially acute as governments collect more detailed data about people’s movements and behavior, including children’s. Tien added that the lack of clear governance over that data raises serious concerns, particularly when immigration authorities might seek access to information never intended for enforcement purposes.

Tien noted that California has seen how surveillance technologies adopted for administrative convenience, such as license plate readers and school tracking tools, can enable uses far beyond their original scope.

Warning signs across the state

San Francisco is not the first California district to move quietly into large contracts with AI providers offering education-focused packages.

In August 2024, a CalMatters investigation into AI rollouts in California schools found that technology agreements sparked backlash in the Los Angeles and San Diego unified school districts when the public discovered that systems were already in use, raising alarms about surveillance of children, algorithmic bias, data security and cost.

Los Angeles district leaders faced criticism over AI-driven tools that monitored students and flagged behavior, with parents and advocates questioning whether children were being exposed to opaque systems without consent. In San Diego, similar concerns emerged around technology adopted with limited upfront scrutiny.

CalMatters quoted experts who urged districts to slow down and consult with nonprofit organizations that evaluate and certify education tech, arguing that independent vetting helps determine whether tools are effective, appropriate and aligned with student interests. Specialists said even basic evaluation of contract language by external experts can help districts avoid costly or harmful missteps.

Sylvie Sturm is an award-winning print journalist with 20 years of experience writing and editing for Canadian community newspapers. Since moving to the Bay Area in 2014, she’s shifted her attention towards audio journalism. She’s currently contributing to the “Civic” podcast from the Public Press. She also mentors science writers at UC San Francisco in print journalism and podcasting, and has taught media at San Francisco State University.