OpenAI Staffers Responsible for Safety Are Jumping Ship

OpenAI launched its Superalignment team almost a year ago with the ultimate goal of controlling hypothetical super-intelligent AI systems and preventing them from turning against humans. Naturally, many people were concerned—why did a team like this need to exist in the first place? Now, something more concerning has occurred: the team’s leaders, Ilya Sutskever and Jan Leike, just quit OpenAI.

The resignation of Superalignment’s leadership is the latest in a series of notable departures from the company, some of which came from within Sutskever and Leike’s safety-focused team. Back in November of 2023, Sutskever and OpenAI’s board led a failed effort to oust CEO Sam Altman. In the six months since, several OpenAI staff members who were either outspoken about AI safety or worked on key safety teams have left the company.

Sutskever ended up apologizing for the coup (my bad, dude!) and signed a letter alongside 738 OpenAI employees (out of 770 total) asking to reinstate Altman and President Greg Brockman. However, according to a copy of the letter obtained by The New York Times with 702 signatures (the most complete public copy Gizmodo could find), several staffers who have now quit either did not sign the show of support for OpenAI’s leadership or were slow to do so.

The names of Superalignment team members Jan Leike, Leopold Aschenbrenner, and William Saunders—all of whom have since quit—do not appear alongside more than 700 other OpenAI staffers showing support for Altman and Brockman in the Times’ copy. World-renowned AI researcher Andrej Karpathy and former OpenAI staffers Daniel Kokotajlo and Cullen O’Keefe also do not appear in this early version of the letter and have since left OpenAI. These individuals may have signed the later version of the letter to signal support, but if so, they appear to have been among the last to do so.

Gizmodo has reached out to OpenAI for comment on who will be leading the Superalignment team from here on out, but we did not immediately hear back.

More broadly, safety at OpenAI has always been a divisive issue. That division is what led Dario and Daniela Amodei to leave in 2021 and start their own AI company, Anthropic, alongside nine other former OpenAI staffers. The safety concerns were also what reportedly led OpenAI’s nonprofit board members to oust Altman and Brockman. Those board members were replaced with some infamous Silicon Valley entrepreneurs.

OpenAI still has a lot of people working on safety at the company. After all, the startup’s stated mission is to safely create AGI that benefits humanity! That said, here is Gizmodo’s running list of notable AI safety advocates who have left OpenAI since Altman’s ousting. Click through on desktop or just keep scrolling on mobile.