Uninstalls up 295%. OpenAI justifies its agreement with the Pentagon by invoking ethical principles and compliance with the law.
In mid-February, Anthropic refused to give the Pentagon full access to its AI models.
According to Anthropic CEO Dario Amodei, removing all ethical protections would make the US look like the countries it fights against or demonizes.
Thus, the US Department of War decided to end its partnership with Anthropic, worth about 200 million dollars, and rely instead on OpenAI, the company run by Sam Altman and maker of ChatGPT, perhaps the best-known symbol of the AI industry.
A boycott that punished OpenAI and rewarded Anthropic. OpenAI lost customers after the deal with the Pentagon: uninstalls of ChatGPT rose by 295%, while Claude, Anthropic's AI model, climbed to the top of the most-downloaded apps, so much so that its app and website went down for several hours.

The issue is more complex than it might appear at first glance: at stake are not only war activities but also the potential use of artificial intelligence to undermine delicate and already fragile democratic balances. For example, despite the friction with Anthropic, the Pentagon reportedly used the company's models to coordinate the first strike against Iran, meaning the White House's order was not respected.
Sam Altman's statement about respecting the law
The announcement with which Sam Altman confirmed the start of cooperation with the US Department of War (the new name for the Department of Defense, inaugurated by President Donald J. Trump) emphasizes the protection of moral and democratic foundations.
The statement, as The Verge points out, does not explicitly rule out that OpenAI could evade public scrutiny and soften its stance on the question of accountability.

According to sources interviewed by The Verge, a deeper analysis reveals Sam Altman's statements to be little more than smoke and mirrors.

In practice, OpenAI will not push back; it will simply capitulate to the Pentagon's demands, masking that capitulation in vague language guaranteeing full compliance with existing laws: written, approved rules that do not place mass surveillance outside the bounds of legality.
Details of the agreement between the Pentagon and OpenAI
The agreement signed with the Pentagon includes a clause that makes OpenAI models available for any lawful use. Three key legal elements are at play.
The first is the Fourth Amendment, which protects people from unreasonable searches and seizures. The second is the Foreign Intelligence Surveillance Act (FISA), a 1978 federal law that regulates electronic surveillance activities.
The third is Executive Order 12333, signed by then-US President Ronald Reagan in 1981, which, through subsequent amendments, became the regulatory basis for surveillance operations outside the United States, authorizing warrantless activities precisely because they are not subject to FISA's domestic requirements.
It is through the lens of these three pillars, and of how they have been applied over the years, that Altman's words must be read.
Algorithmic democracy
Thirteen years have passed since the revelations of Edward Snowden, the American analyst who demonstrated how the PRISM program allowed data to be collected from large technology companies such as Apple and Google, and thus communications within the United States to be intercepted, thanks to the powers established in Executive Order 12333.
In such a context, the promise to respect the law is not a guarantee of privacy but a license for the Pentagon to continue its practice of AI-enhanced mass surveillance, says Mike Masnick, founder of Techdirt, a website that since 1997 has covered innovation, technology policy and civil liberties.
The message echoes Dario Amodei's claim that the level of surveillance enabled by AI is contrary to democratic values. While Amodei refuses to permit such blanket analysis, critics argue that Sam Altman uses vague language that lets military forces operate without OpenAI employees being able, technically, to accuse them of violating the contract terms.
The danger is not only in war
OpenAI has said its models will only be hosted in the cloud, not deployed directly on drones. Experts, however, say the distinction is irrelevant, because the "autonomous kill chain", which includes target identification, mapping and selection, occurs almost entirely through algorithms running in the cloud.
Beyond supporting military operations, AI can also facilitate the identification of behavioral patterns, returning detailed information on every citizen, for example by cross-referencing geolocation, websites visited, financial and electoral data, and images captured by surveillance cameras.
The agreement between OpenAI and the Pentagon marks a point of no return. When the law is interpreted so loosely by intelligence agencies, not only are the guarantees offered by OpenAI weak, but the laws themselves are drained of the trust they are supposed to foster.
The clash between the positions of OpenAI and Anthropic is stirring controversy in the United States. For Washington, Anthropic is a company that threatens the government's supply chain, a label usually reserved for hostile entities, above all Chinese and other foreign companies.
Defense Secretary Pete Hegseth made it clear that the military destiny of the United States will not be held hostage by the ideologies of domestic technology companies.
OpenAI's stance therefore looks like a strategic move to avoid government retaliation, at the expense of genuine protection of citizens' data.
