Sam Altman is not a household name the way Elon Musk is. Musk has spent decades in the public eye—building electric cars, launching rockets, and acquiring social media platforms.
Altman's rise, by contrast, has been quieter, faster, and far less scrutinized. That is changing.
As CEO of OpenAI, the company behind ChatGPT, Altman now sits at the center of the most consequential technology boom in recent history.
He oversees a company valued at $852 billion. He has brokered deals with Microsoft, Amazon, and the Pentagon.
He has testified before Congress and appeared on magazine covers. And now, in 2026, he finds himself in a federal courthouse in Oakland, California, sued by his co-founder for allegedly betraying the founding principles of the company they built together.
The question being asked—inside courtrooms, inside OpenAI's own offices, and increasingly in public—is a simple one: Can Sam Altman be trusted?
Altman did not invent artificial intelligence. What he did, by most accounts, was sell it.
He dropped out of Stanford at 19, joined the startup incubator Y Combinator, and became its president by the time he was 28.
Paul Graham, Y Combinator's co-founder and one of Silicon Valley's most influential voices, once described Altman's instincts this way: you could drop him on an island of cannibals, and he would come back five years later as their king.
The description captured something real. Altman has a documented ability to make people believe he shares their priorities.
Engineers worried about AI safety found him persuasive. Investors worried about returns found him compelling. Regulators found him measured and thoughtful.
In 2015, he co-founded OpenAI alongside Musk and others, explicitly framing it as a nonprofit dedicated to ensuring that artificial general intelligence (AGI), the theoretical stage at which AI surpasses human cognition, would benefit humanity rather than a handful of corporations.
That founding story, and the moral authority it carried, became the foundation of the Altman mythology.
In April 2026, The New Yorker published an investigation based on more than 200 pages of internal OpenAI documents and over 100 interviews with current and former employees and board members.
The portrait that emerged was not flattering.
Multiple sources described a consistent pattern. Former chief scientist Ilya Sutskever compiled a 70-page document detailing instances in which Altman had allegedly misled the board.
Dario Amodei, who left OpenAI in 2020 to found the rival company Anthropic, kept detailed notes on his interactions with Altman.
According to The New Yorker, Amodei's notes described Altman's reassurances on critical issues as "almost certainly nonsense."
One former OpenAI board member offered a more clinical assessment. Altman, they told the publication, possesses two traits rarely found together: an intense desire to be liked in every interaction, and an almost complete indifference to the consequences of deceiving others.
A senior Microsoft executive went further, suggesting there was a small but real possibility that Altman would one day be remembered alongside figures such as Bernie Madoff or Sam Bankman-Fried.
Microsoft, which has invested billions in OpenAI, has itself experienced strained relations with Altman.
Multiple executives at the company described him as repeatedly breaking his word. Earlier this year, on the same day OpenAI reaffirmed Microsoft as its exclusive provider for certain AI models, it announced a $50 billion deal with Amazon as an exclusive reseller, a move that led Microsoft to signal it was prepared to pursue legal action.
The New Yorker investigation also noted something that struck many observers as significant: Altman, the public face of the AI revolution, has limited technical expertise.
Engineers interviewed for the piece described a CEO who struggles with basic machine learning concepts and confuses fundamental AI terms.
Former board member Sue Yoon offered a different framing. Altman was not, in her view, a calculated deceiver.
He was someone who had come to believe his own shifting narratives. "He's too caught up in his own self-belief," she told The New Yorker. "So he does things that, if you live in the real world, make no sense."
OpenAI was founded in 2015 with roughly $38 million in initial funding from Musk and others, structured explicitly as a nonprofit.
The stated reason was straightforward: if the development of AGI was left to profit-driven corporations, the risk to humanity would be too great.
By 2019, OpenAI had added a for-profit subsidiary. By 2023, ChatGPT had become the fastest-growing consumer application in history.
By 2025, the company was in advanced discussions for an initial public offering that would cement its status as one of the most valuable companies in the world.
Musk, who had left OpenAI's board in 2018 following a falling out with Altman and other co-founders, filed a lawsuit in 2024 alleging that OpenAI had violated its founding charter by prioritizing profit over its stated mission.
He is seeking damages that could exceed $150 billion. OpenAI has denied wrongdoing, arguing that Musk himself understood a for-profit structure was necessary to compete and that his lawsuit is motivated by the competitive ambitions of his own AI company, xAI.
What is not in dispute is the scale of the transformation. The company Altman now runs bears little structural resemblance to the one he helped create.
In early 2026, Altman signed a contract allowing the U.S. government to use OpenAI's technology in classified operations, a deal that drew significant backlash.
The company that had positioned itself as a safeguard against AI misuse was now a Pentagon contractor.
The decision took on additional significance in light of what happened at Anthropic. The rival company, led by Amodei, had previously refused a similar arrangement, citing concerns about autonomous weapons.
As a result, Anthropic was blacklisted by the Pentagon. OpenAI then took the contract Anthropic had turned down.
Amodei, already a vocal critic of Altman's leadership style, became a sharper point of contrast—a CEO who had walked away from a defense contract on principle, versus one who accepted it.
Internally, OpenAI has also shown signs of strain. In April 2026, three senior executives departed on the same day: the chief product officer, the chief technology officer for enterprise, and the head of Sora, the company's video generation model.
OpenAI's chief financial officer has separately expressed concerns about whether the company's spending commitments—reportedly up to $600 billion over five years—are sustainable given its current revenue trajectory. The company has missed its own revenue targets.
As the trial in Oakland continues, Altman is expected to testify later in May. The outcome will have consequences not only for OpenAI's corporate structure but also for its plans to go public—plans that now face scrutiny from investors, regulators, and a public paying closer attention than it once did.
Meanwhile, Anthropic has overtaken OpenAI as the most-downloaded AI app, according to March 2026 data. The competitive pressure is real.
What is perhaps most significant about this moment is not any single lawsuit, departure, or missed revenue figure.
It is that Sam Altman, the man who built his influence on the premise that he could be trusted with the most powerful technology in human history, is now being asked to prove it—by a federal court, by his own shareholders, and by former colleagues who were once close enough to watch him work.