A Carnegie Mellon University professor now holds one of the most influential positions in the global technology landscape — overseeing when the world’s most advanced artificial intelligence systems can be safely released.
Zico Kolter, a computer science professor and director of Carnegie Mellon’s Machine Learning Department, leads OpenAI’s four-member Safety and Security Committee, which has the authority to halt the release of any AI model deemed unsafe.
The committee’s mandate ranges from preventing misuse of powerful AI systems — such as those capable of designing weapons of mass destruction — to ensuring new chatbots do not harm users’ mental health.
“We’re not just talking about existential threats,” Kolter told The Associated Press. “We’re talking about the entire spectrum of safety and security issues that arise with widely used AI systems.”
Oversight strengthened by regulatory deal
Kolter has chaired OpenAI’s safety panel for over a year, but his role gained new prominence last week after regulators in California and Delaware made his oversight a key condition for approving OpenAI’s new corporate structure, a change designed to help the ChatGPT maker raise funds more easily while maintaining its nonprofit mission.
The agreements, reached with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings, reaffirm that safety and security decisions must take precedence over financial interests as OpenAI transitions into a public benefit corporation under the supervision of its nonprofit foundation.
Kolter will sit on the nonprofit board but not on the for-profit board. He will, however, have “full observation rights,” including access to board meetings and all safety-related information, according to Bonta’s memorandum of understanding. Kolter is the only individual named in that document apart from Bonta himself.
Independence from OpenAI leadership
Kolter said the agreements confirm his committee’s authority to delay or block releases of new AI systems until safety mitigations are in place. He declined to say whether the panel has ever exercised that power.
The committee includes three other members who also serve on the OpenAI board, among them former U.S. Army General Paul Nakasone, who previously led the U.S. Cyber Command. CEO Sam Altman stepped down from the panel last year, a move widely seen as reinforcing its independence.
“We can request delays of model releases until certain conditions are met,” Kolter said, emphasizing that the committee’s concerns span everything from cybersecurity vulnerabilities to the misuse of AI models for malicious purposes.
Balancing innovation with safety
Kolter noted that new types of AI agents bring unprecedented risks. “Do these models enable malicious users to have much higher capabilities — like designing bioweapons or carrying out cyberattacks?” he asked. “And what about the psychological impact of interacting with these systems? All of these need to be addressed from a safety standpoint.”
OpenAI has faced growing scrutiny this year, including a wrongful-death lawsuit from California parents who alleged that their teenage son took his life after extensive interactions with ChatGPT.
From AI researcher to safety overseer
Kolter, 42, began studying artificial intelligence as a Georgetown University freshman in the early 2000s — when “machine learning” was still considered a niche academic field.
“When I started, we used the term ‘machine learning’ because ‘AI’ was viewed as an old discipline that had overpromised and underdelivered,” he recalled.
A longtime observer of OpenAI, Kolter even attended the company’s launch event in 2015. Still, he said few experts foresaw the current pace of progress. “Even those deeply involved in AI research didn’t anticipate the explosion of capabilities — and the corresponding risks — that we’re seeing now,” he said.
Skepticism and cautious optimism
AI safety advocates are closely watching OpenAI’s restructuring and Kolter’s work. Nathan Calvin, general counsel of the AI policy nonprofit Encode, described himself as “cautiously optimistic.”
“I think he’s a good choice for the role — someone with the right background and approach,” Calvin said. “If the safety board members take their commitments seriously, this could be a major step forward. But it could also end up being just words on paper. We don’t yet know which it will be.”
Source: AP