"What Have We Done?"
Sam Altman Is Alarmed By GPT-5
July 31, 2025 (original publication date)
by Jaymie Johns

In a candid podcast appearance, OpenAI CEO Sam Altman has revealed deep concerns about the company's upcoming GPT-5 model, likening its development to historical scientific breakthroughs that left creators questioning their own inventions. As OpenAI gears up for a potential August 2025 launch, Altman's remarks highlight the rapid pace of AI advancement and the ethical dilemmas it poses.
During an episode of "This Past Weekend with Theo Von," Altman opened up about his experiences testing GPT-5, describing moments that left him profoundly unsettled.
“There are moments in the history of science where you have a group of scientists look at their creation and just say, you know: ‘What have we done?’” Altman stated, drawing a parallel to the Manhattan Project, the secretive World War II effort that developed the atomic bomb.
He elaborated on the speed of progress, noting that “it feels very fast” and expressing unease about the lack of oversight: “It feels like there are no adults in the room.”

Altman admitted to feeling nervous about the technology, suggesting that AI's evolution might be outpacing humanity's ability to manage its risks. While he didn't specify exact threats, his words imply worries about existential dangers, uncontrolled deployment, or unintended consequences that could spiral beyond OpenAI's grasp.
Altman also shared a personal reflection, saying that interacting with GPT-5 made him feel “useless” at times, underscoring the model's advanced capabilities. This sentiment echoes his earlier criticisms of GPT-4, which he once called “kind of sucks” and “mildly embarrassing at best,” while promising that future iterations like GPT-5 would represent a quantum leap in intelligence.
OpenAI's journey to GPT-5 has been marked by iterative releases and strategic shifts. In February 2025, the company launched GPT-4.5, described by Altman as a “giant, expensive model” that felt like “talking to a thoughtful person.” He highlighted its unique “magic” and noted enthusiastic user feedback, with people pleading not to discontinue or replace it. By April 2025, OpenAI adjusted its roadmap, opting to release intermediate models like o3 and o4-mini before GPT-5, aiming to integrate advanced reasoning tools and ensure sufficient computing capacity for high demand.
Altman has emphasized that GPT-5 will unify OpenAI's technologies, combining elements from the GPT and o-series models into a versatile system capable of handling diverse tasks, including voice, search, and deep research. Free users will gain unlimited access to a standard version, while Plus and Pro subscribers will unlock higher intelligence levels. Recent leaks and reports indicate the launch could be imminent, with features like automatic model selection, which would route each prompt to a suitable model and spare users from navigating OpenAI's confusing lineup of model names.
However, this progress comes amid internal challenges. OpenAI is grappling with financial pressures, projecting up to $5 billion in losses and facing the risk of bankruptcy within a year if it doesn't transition to a for-profit structure. Investors are pushing for this shift by year's end to safeguard funding and prevent hostile takeovers.
A key subplot in OpenAI's story is its strained partnership with Microsoft, which has invested $13.5 billion. The agreement defines artificial general intelligence (AGI) as a system generating $100 billion in profits, but ambiguities in this definition have fueled disputes. Reports suggest OpenAI might declare AGI prematurely—possibly with GPT-5—to sever ties and limit Microsoft's access to its tech before 2030.
OpenAI insiders have reportedly weighed accusing Microsoft of anti-competitive practices, a move described internally as a “nuclear option,” while Microsoft has considered walking away from the for-profit negotiations altogether. Despite the friction, talks to extend the partnership continue, even covering the period after any AGI achievement. Altman has previously downplayed AGI's societal impact, claiming it might “whoosh by” with minimal disruption, but his recent fears suggest a more cautious outlook.
Altman's admissions have sparked widespread discussion in the tech community. Industry observers see them as a rare moment of vulnerability from a leader often optimistic about AI's potential. Some praise the transparency, while others worry it could heighten public anxiety about AI risks. With GPT-5 poised to integrate into tools like Microsoft's Copilot—potentially via a new “smart mode”—the model's rollout could reshape AI accessibility and competition.
As August 2025 unfolds, the world watches closely. Will GPT-5 fulfill Altman's promise of superior intelligence, or will his fears prove prescient? OpenAI's trajectory underscores the double-edged sword of innovation: immense promise shadowed by profound responsibility. In Altman's words, the rapid pace feels like “no adults in the room,” a reminder that even pioneers can be daunted by their creations.
Long before Sam Altman began likening his own AI models to the Manhattan Project, Elon Musk walked away from OpenAI—because he didn’t like where it was headed.
Musk was one of OpenAI’s original founders and funders. But by 2018, he’d quietly distanced himself, officially citing a conflict of interest with Tesla’s AI team. Unofficially, the concern ran deeper. Musk had long warned about the existential risks of unregulated AI—describing it as humanity’s “biggest threat”—and was reportedly disturbed by OpenAI’s growing power without sufficient constraints.
In the years since, he hasn’t softened. Musk criticized OpenAI’s decision to shift from its open-source, nonprofit roots to a capped-profit structure that now leans hard toward commercialization. He’s blasted the company’s alliance with Microsoft. And he’s founded his own alternative—xAI—to build artificial general intelligence that, in his words, “maximally benefits humanity.”
Musk warned them. Not out of rivalry or theater, but because he’s been tracking this curve for decades. The fact that Altman now admits to being afraid of what OpenAI created is not just a shift—it’s a mirror of what drove Musk away in the first place.
Maybe the world should’ve listened sooner.
Jaymie Johns
Media & Technology Morality Analyst
