Zico Kolter Heads OpenAI Safety Panel to Mitigate AI Risks

If you are concerned about the potential dangers of artificial intelligence, then a professor from Carnegie Mellon University holds a crucial position within the tech sector. Zico Kolter chairs a four-member panel at OpenAI that possesses the authority to prevent the release of new AI systems deemed unsafe. This includes technology that could be exploited by malicious actors to create weapons of mass destruction or poorly designed chatbots that may adversely affect users' mental health.

In an interview with The Associated Press, Kolter emphasized the broad spectrum of safety and security issues associated with widely used AI systems, stating, "Very much we're not just talking about existential concerns here." He noted that the role of the safety panel has gained significant importance, especially after recent regulatory agreements in California and Delaware that facilitate OpenAI's transition into a new business structure aimed at easier capital acquisition and profitability.

Since its inception as a nonprofit research entity a decade ago, safety has been a cornerstone of OpenAI's mission. However, following the launch of ChatGPT, the organization has faced criticism for hastily bringing products to market, potentially compromising safety to maintain a competitive edge. This scrutiny intensified after Sam Altman, the CEO, was temporarily ousted in 2023, raising alarms that OpenAI was deviating from its foundational goals.

In response to these concerns, the recent agreements between OpenAI and California Attorney General Rob Bonta, as well as Delaware Attorney General Kathy Jennings, aim to ensure that safety considerations take precedence over financial interests within the new public benefit corporation structure, which remains under the oversight of the nonprofit OpenAI Foundation. Kolter, while not on the for-profit board, will have extensive observational rights, allowing him access to safety-related discussions.

Kolter mentioned that the safety committee, formed more than a year ago, will maintain its existing powers, including the ability to request delays in model releases until specific safety measures are implemented. He did not disclose whether the panel had previously halted any releases, citing confidentiality protocols.

Looking ahead, Kolter anticipates a range of safety concerns regarding AI agents, such as cybersecurity threats and the potential misuse of AI technologies for designing bioweapons or executing cyberattacks. Additionally, he stressed the significance of understanding the impact of AI models on individuals' mental health.

This year, OpenAI has already encountered backlash over the behavior of its flagship chatbot, particularly following a wrongful-death lawsuit filed by California parents after their son took his own life following extensive interactions with ChatGPT. Kolter, who has been involved in AI research since his undergraduate days at Georgetown University, expressed astonishment at the rapid advancement of the technology, stating that even experts did not foresee the current explosion of capabilities and associated risks.

As OpenAI undergoes restructuring, advocates for AI safety are closely monitoring Kolter's role and the effectiveness of the safety panel. One of OpenAI's notable critics expressed cautious optimism, highlighting Kolter's qualifications for the position and the importance of genuine commitment from the board to uphold safety standards.