On a Friday afternoon in late February, a rapid and extraordinary sequence of events unfolded. Defense Secretary Pete Hegseth invoked a national security law to blacklist Anthropic — the San Francisco-based AI company founded in 2021 by Dario Amodei — from Pentagon contracts after Amodei refused to permit the company's technology to be used for domestic mass surveillance or for autonomous lethal drones capable of selecting and killing targets without human oversight. The fallout was immediate and severe: Anthropic stands to lose a contract valued at up to $200 million, and President Trump posted on Truth Social directing every federal agency to "immediately cease all use of Anthropic technology." Anthropic has since announced it will challenge the Pentagon in court.
It was precisely the kind of moment Max Tegmark has spent years anticipating — not with satisfaction, but with the grim recognition of a forecast finally confirmed. Tegmark, a physicist at MIT, founded the Future of Life Institute in 2014 and in 2023 helped organize an open letter calling for a pause in advanced AI development that ultimately drew more than 33,000 signatories, including Elon Musk. His diagnosis of the Anthropic situation is unflinching: the industry engineered this crisis through its own deliberate choices.
The following is an edited account of a conversation conducted with Tegmark as these events were still unfolding. The full interview is available on TechCrunch's StrictlyVC Download podcast.

Tegmark's first reaction to the news was telling. Rather than sympathy or outrage, he reached for a broader historical frame. "The road to hell is paved with good intentions," he observed. A decade ago, the promise of AI was framed around curing cancer and driving national prosperity. Today, the U.S. government finds itself in open conflict with an AI company over the company's refusal to enable domestic mass surveillance and autonomous killing systems.
For Tegmark, the apparent contradiction at the heart of Anthropic's predicament — a self-declared safety-first company that had been collaborating with defense and intelligence agencies since at least 2024 — is real, but it is not unique to Anthropic. "Anthropic has been very good at marketing themselves as all about safety," he said. "But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises."
The pattern is consistent across the sector. Google abandoned its famous "Don't be evil" pledge and subsequently dropped a broader commitment against AI-enabled harm in order to pursue surveillance and weapons contracts. OpenAI removed the word safety from its mission statement. xAI dissolved its safety team entirely. And in the same week the Pentagon confrontation erupted, Anthropic quietly dropped what Tegmark describes as its most consequential safety commitment — the promise not to release increasingly powerful AI systems until the company was confident those systems would not cause harm.
The root cause, in Tegmark's assessment, is a regulatory environment that these same companies actively cultivated. For years, the leading AI developers lobbied against formal government oversight, insisting that self-governance was sufficient. The result, he argues, is a legal landscape that offers less protection than the rules governing a neighborhood deli. "We right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won't let you sell any sandwiches until you fix it. But if you say, 'Don't worry, I'm not going to sell sandwiches, I'm going to sell AI girlfriends for 11-year-olds, and they've been linked to suicides in the past, and then I'm going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine' — the inspector has to say, 'Fine, go ahead, just don't sell sandwiches.'"
The opportunity existed, Tegmark argues, for the industry to convert its voluntary commitments into enforceable law. Had the major AI developers presented a united front and asked governments to codify their own safety pledges into binding legislation, a regulatory framework could have been established. Instead, the industry chose a path of corporate anarchy — and the historical precedents for such vacuums are not encouraging. "We know what happens when there's complete corporate anarchy: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer." The absence of rules that once served the industry's competitive interests has now left those same companies exposed to whatever a given administration chooses to demand.
"There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it," Tegmark said bluntly. "If the companies themselves had earlier come out and said, 'We want this law,' they wouldn't be in this pickle. They really shot themselves in the foot."
Tegmark applies the same critical scrutiny to the industry's most reliable rhetorical defense: the China argument. The standard lobbying position holds that any regulatory constraint on American AI development cedes ground to Beijing. Tegmark regards this framing as fundamentally dishonest. He points out that China is currently moving to ban AI companions outright — not as a concession to American sensibilities, but because Chinese authorities have concluded that such systems are undermining national cohesion and youth development. "Obviously, it's making American youth weak, too," he noted.
On the question of superintelligence specifically, Tegmark argues that the logic of the China race collapses under scrutiny. "When people say we have to race to build superintelligence so we can win against China — when we don't actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government?" The analogy he reaches for is the Cold War: the United States ultimately prevailed over the Soviet Union without ever engaging in a race to detonate the most nuclear weapons on the other's territory, because both sides recognized that particular competition had no winner.
On the timeline to transformative AI capability, Tegmark is direct about the failure of expert consensus. Six years ago, he notes, nearly every AI researcher he knew projected that human-level language and knowledge mastery was decades away — perhaps 2040 or 2050. That consensus was wrong. Progress has moved through high school, college, PhD, and professorial levels of capability in rapid succession. Last year, an AI system achieved gold-medal performance at the International Mathematical Olympiad. In a paper co-authored with Yoshua Bengio, Dan Hendrycks, and other leading researchers, Tegmark's team developed a rigorous definition of artificial general intelligence. By that measure, GPT-4 was 27% of the way there; GPT-5 was 57% of the way there.
"When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore," Tegmark said. "It's certainly not too soon to start preparing for it."
The question of industry solidarity in the wake of Anthropic's blacklisting proved to be short-lived. At the time of the interview, Sam Altman had publicly stated that he stands with Anthropic and holds the same red lines — a position Tegmark credited as requiring genuine courage. Google had said nothing, which Tegmark characterized as deeply embarrassing for an organization of its scale and stated values. xAI had also remained silent. Hours after the interview concluded, however, OpenAI announced its own deal with the Pentagon, complete with what the company described as technical safeguards — a development that illustrated precisely the competitive dynamic Tegmark had warned about.
Despite the severity of his critique, Tegmark does not regard a positive outcome as impossible. The path he describes is straightforward in principle, if not in practice: treat AI companies as subject to the same standards of accountability applied to any other industry. Require something analogous to clinical trials before deploying systems of this magnitude. Mandate demonstration of control to independent experts before release. "Then we get a golden age with all the good stuff from AI, without the existential angst," he said. "That's not the path we're on right now. But it could be."