OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. https://www.axios.com/2026/02/27/altman-openai-anthropic-pentagon
At the same time, OpenAI is negotiating with the US military over the Pentagon using its AI technology. https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-escalation-in-anthropic-showdown-with-hegseth-03ecbac8
If OpenAI sticks to both elements of its commitment, not permitting the US military to use its AI for mass surveillance or for autonomous lethal weapons, through the end of 2026, this resolves to Yes. Note that if OpenAI chooses not to strike a deal with the US military at all, this will still resolve to Yes, because presumably the reason for not making the deal is the same red lines Anthropic drew. Note that if they do strike a deal and the military somehow does use OpenAI's tools for mass surveillance or autonomous lethal weapons without OpenAI's explicit approval, such as through subterfuge, fudging definitions, or the Defense Production Act, this still resolves to Yes. The No condition requires OpenAI explicitly and voluntarily failing to stick to either element of its commitment. The determination will be made through credible reporting by at least three top-10 media outlets, by the end of 2026, that OpenAI gave the US military explicit permission to use its AI for mass surveillance or autonomous lethal weapons; otherwise this resolves to Yes.
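For traders who want the resolution rule spelled out, here is a minimal sketch in Python of how I intend to apply the criteria above. The function, its parameters, and the threshold names are purely illustrative inventions of mine, not any official resolution mechanism:

```python
def resolve(deal_signed: bool,
            explicit_permission_reports: int,
            misuse_without_approval: bool) -> str:
    """Illustrative resolution logic for this market (names are hypothetical).

    explicit_permission_reports: number of top-10 media outlets credibly
    reporting, by EOY 2026, that OpenAI explicitly permitted the US military
    to use its AI for mass surveillance or autonomous lethal weapons.
    """
    # No deal at all: resolves Yes (the red lines are presumably the reason).
    if not deal_signed:
        return "YES"
    # Misuse via subterfuge, fudged definitions, or the Defense Production
    # Act, without OpenAI's explicit approval, does not count against OpenAI.
    if misuse_without_approval and explicit_permission_reports < 3:
        return "YES"
    # Only explicit, voluntary permission, credibly reported by at least
    # three top-10 outlets by EOY 2026, resolves No.
    if explicit_permission_reports >= 3:
        return "NO"
    return "YES"
```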
Just wanted to highlight for traders that the deal struck by OpenAI and the Pentagon on Feb 27 seems prima facie to stick to OpenAI's commitments: https://x.com/sama/status/2027578652477821175?s=20
Note that the description of this question says that "if they do strike a deal and the military somehow does use OpenAI's tools for mass surveillance or autonomous lethal weapons without OpenAI's explicit approval, such as through subterfuge, fudging definitions, or the Defense Production Act, this still resolves to Yes."
To have a clear sense of contrast: Elon Musk allowed the Pentagon to use Grok as it sees fit "in all lawful applications," which is the stipulation that Anthropic rejected and that OpenAI also appears to have rejected as of the signing of this deal. https://www.calcalistech.com/ctechnews/article/c9258mqla
However, per the description, if by EOY 2026 at least three top-10 media sources report that OpenAI signed the deal with explicit permission for the US military to use its AI for mass surveillance or autonomous lethal weapons, and that Sam Altman is lying in his public statement, this question will resolve to No.
"Note that if they do strike a deal and the military somehow does use OpenAI's tools for mass surveillance or autonomous lethal weapons without OpenAI's explicit approval, such as through subterfuge, fudging definitions, or the Defense Production Act, this still resolves to Yes."
That makes this a totally uninteresting question. The reason Anthropic couldn't sign a deal with the DoW was that the DoW were clearly planning on fudging definitions. And it's very likely that OpenAI weren't very scrupulous in making their agreement.
We already knew that the DoW were willing to sign an agreement that said nice words without actually making the limitations binding, and it's not meaningful that they were willing to do that.
OpenAI confirms that their agreement with the DoW is meaningless https://openai.com/index/our-agreement-with-the-department-of-war/
"The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control."
Though:
"Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement."
Note that the original wording of the question included this in the stipulations; I just added the comment for additional clarity. I'm not trying to defend OpenAI here (I have a lot of skepticism about Sam Altman and co.), but I'm trying to stick to the question as I formulated it, to be fair to the bettors.
@JoeandSeth For example, it could be more about Trump getting power over private companies, and less about an actual need for surveillance and defence.
@AlanTennant this is a market about whether Altman can be trusted to have principles in the face of money
@JoeandSeth Which hopefully he has; I'd like him to stick to his morals on this. It's still predicated, though, on assertions like:
• The military or surveillance systems want a chatbot.
• It'll be helpful to them.
• They cannot just use a different one that they are free to use.
• AI slop won't bury the organisations in issues.
• Sam Altman allowing the military and surveillance systems to use an LLM is immoral even if it won't be very useful to them.
• The real moral wrong here is what that regime is telling him to do, not Sam Altman allowing a problematic governmental regime to manipulate him.
@AlanTennant you seem to be under the impression that your objections in general to the current administration are somehow relevant to this market. They're noted but unnecessary.
And you're saying strange things about _if_ the government wants this? Obviously they do? Have you not been following the Anthropic saga?
Elon said they could use Grok but nobody wants to if they can get a better model. OpenAI is in talks with the government about taking over that contract for this purpose.
This market asks whether the benefits to OpenAI of doing what Grok would do for the government will outweigh Altman's recently stated principles, not whether the government could do something different. This whole thread feels a little bizarre, hence my initial 'what'.