OpenAI Disbands Safety Team Focused on Risk of Artificial Intelligence Causing “Human Extinction”
OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed—and a departing executive warned Friday (May 17) that safety has “taken a backseat to shiny products” at the company.
The Microsoft-backed ChatGPT maker disbanded its so-called “Superalignment” team, which was tasked with creating safety measures for artificial general intelligence (AGI) systems that “could lead to the disempowerment of humanity or even human extinction,” according to a blog post last July.
The team’s dissolution, which was first reported by Wired, came just days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.
“OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote in a series of X posts on Friday.
“But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI.”
Sutskever and Leike, who headed OpenAI’s safety team, quit shortly after the company unveiled an updated version of ChatGPT capable of holding conversations and translating languages for users in real time.
Source: New York Post