The data, provided by market intelligence firm Sensor Tower, demonstrates the public’s distaste for OpenAI’s deal with the DOD, which will see its AI implemented in defence tools and services.
The company’s usual day-over-day uninstall rate of 9 per cent has surged to 295 per cent, while its US downloads dropped 13 per cent day-over-day.
However, rival AI firm Anthropic, which was originally set to strike the deal with the Pentagon, has benefited from the OpenAI news, with its download rate up 37 per cent day-over-day on Friday, 27 February, and 51 per cent the following day.
Anthropic decided against partnering with the US DOD due to concerns that its AI would be used in fully autonomous weaponry, something the technology is not yet capable of doing safely, and in the surveillance of Americans.
In response to the backlash, OpenAI has gone into damage control, clarifying that it would not allow its technology to be used to spy on the American people.
“Throughout our discussions, the department made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance. To make our principles as clear as possible, we worked together to add additional language to our agreement,” the company said.
“This language makes explicit that our tools will not be used to conduct domestic surveillance of US persons, including through the procurement or use of commercially acquired personal or identifiable information. The department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement.”
OpenAI went as far as to say that its agreement with the DOD “has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”, adding that the technology cannot be used for “mass” domestic surveillance, is not to be used to direct autonomous weapons systems, and is not to be used for “high-stakes” automated decisions, giving the example of social credit.
“Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use,” it said.
This raises the question, however, of why the DOD preferred OpenAI’s apparently more restrictive deal over Anthropic’s.
Daniel Croft