According to Computerworld, OpenAI is searching for a new head of AI preparedness, a job CEO Sam Altman explicitly called “stressful” in an X post. Altman stated the candidate will be “jumping into the deep end almost immediately.” He also noted that in 2025, the world saw the impact AI models can have on human mental health, with OpenAI facing several negligence lawsuits related to user well-being. The role was previously held by Joaquin Quiñonero Candela and Lilian Weng. Weng left the company in November 2024, and Quiñonero Candela was moved to Head of Recruitment in July 2025.
A Job No One Envies
Let’s be real. This is a crisis-management role disguised as a forward-looking safety position. The key detail is the year: 2025. Altman isn’t talking about a hypothetical future risk; he’s pointing to a specific, recent year when things went sideways badly enough to generate lawsuits. So this new hire isn’t just planning for sci-fi scenarios. They’re walking into a company that’s already in legal hot water over tangible harm. Their first task? Probably dealing with the fallout from the very impact Altman just described. Talk about a baptism by fire.
The Revolving Door of Safety
Here’s another thing. The timeline of who held this job, and when they left it, is telling. Lilian Weng, a well-known figure in AI safety circles, departed in November 2024. Then, just months later in July 2025, Quiñonero Candela was reassigned. That’s two senior safety leads gone or moved within about eight months, right before and during the very period Altman cites for major mental-health impacts. It makes you wonder, doesn’t it? Was there internal conflict over how to handle these risks? Is the role being reconfigured because the old approach failed? The personnel shuffle suggests this isn’t a simple hiring search; it’s a reset.
Preparedness vs. Product
And that’s the core tension. “Preparedness” sounds proactive, but at a company racing to develop and deploy ever-more-powerful models, it’s often a defensive, after-the-fact function. The lawsuits allege negligence: that OpenAI didn’t do enough to foresee or prevent harm. So this new head will be tasked with building guardrails for systems already out in the wild causing problems, while also trying to anticipate the next wave of issues from even more advanced tech. It’s a nearly impossible mandate. You’re trying to install seatbelts in a car that’s already speeding down the highway, while also designing airbags for a rocket ship being built in the next garage. The stress Altman mentions? Yeah, that tracks.
