When Anthropic CEO Dario Amodei balked at the Pentagon’s demands, warning that the contract language the Department of War wanted could allow safeguards to be bypassed, Michael responded by taking the fight public. He accused Amodei of having a “God complex,” called him “a liar,” and warned that no private company should be able to dictate the military’s options. The Pentagon, he insisted, “will ALWAYS follow the law but will not yield to the desires of any profit-driven tech firm.”
"Using firm contracts in uncertain situations carries significant commercial risks," says Albert Sanchez-Graells, a professor at the University of Bristol Law School who has been researching NHS contracts for more than 15 years.
Anthropic had refused Pentagon demands that it remove safeguards on its Claude model that restrict its use for domestic mass surveillance or fully autonomous weapons, even as defense officials insisted that AI models must be available for “all lawful purposes.” The Pentagon, including Secretary of War Pete Hegseth, had warned Anthropic it could lose a contract worth up to $200 million if it did not comply. Altman has previously said OpenAI shares Anthropic’s “red lines” on limiting certain military uses of AI, underscoring that even as OpenAI negotiates with the U.S. government, it faces the same core tension now playing out publicly between Anthropic and the Pentagon.