AI Employees Should Have a “Right To Warn” About Looming Trouble

Rules allowing employees at AI research houses to warn about impending problems are sensible given the technology’s increasing power.

Alex Kantrowitz
Jul 3, 2024

By now, we’re starting to understand why so many OpenAI safety employees left in recent months. It’s not due to some secret, unsafe breakthrough (so we can put “what did Ilya see?” to rest). Rather, it’s process-oriented, stemming from an unease that the company, as it operates today, might overlook future dangers.

After a long period of silence, the quotes are starting to pile up. “Safety culture and processes have taken a back seat to shiny products,” said ex-OpenAI Superalignment co-lead Jan Leike last month. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, an ex-governance team employee, soon afterward. Safety questions are “taking a backseat to releasing the next new shiny product,” added ex-OpenAI researcher William Saunders.

Whether or not these assertions are correct (OpenAI disputes them), they make clear that AI employees need much better processes to report concerns about the technology to the public. Ordinary whistleblower protections tend to cover the illegal, not the potentially dangerous, so they fall short of what these employees need to speak freely without fear. And though today’s cutting-edge AI models aren’t a threat to society, there’s currently no good way to flag potentially dangerous future developments to third parties — at least for those working within the companies. That’s why we’ve almost exclusively heard from those who’ve exited. Right now, the alert system is left to the corporations themselves.

So Saunders, Kokotajlo, and more than a dozen current and ex-OpenAI employees are calling for a “Right to Warn” that would let them express concerns about potentially dangerous AI breakthroughs to external monitors. They went public in an open letter earlier this month, calling for an end to non-disparagement agreements and for an anonymous process to flag concerns to third parties and regulators. And after speaking with Saunders and Harvard Law Professor Lawrence Lessig, who is representing the group pro bono, I find their demands sensible.

“Your P(doom) does not have to be extremely high to believe that it makes sense to have a system of warning,” Lessig told me. “You don’t put a fire alarm inside of a school because you really believe the school is going to burn down. It’s just that if the school’s on fire, there ought to be a way to pull an alarm.”

The AI doom narrative has been way overblown lately, but that doesn’t mean the technology is without risk. And while a right to warn might signal that the tech is more dangerous than it is, and even do some marketing for OpenAI’s capabilities, it’s worth establishing new rules to ensure that employees can talk when they see something, even if it’s not species-threatening.

The alternative — trusting companies to self-report troubling developments or meaningfully slow product cadence — has never worked. Even for those entities with novel corporate structures built around safety, an employee “Right to Warn” is essential.

My full conversation with Saunders and Lessig is live on Big Technology Podcast. To get it in your feed, you can subscribe on Apple Podcasts, Spotify, or your app of choice.


Written by Alex Kantrowitz

Veteran journalist covering Big Tech and society. Subscribe to my newsletter here: https://bigtechnology.com.
