
5 years of experience required: OpenAI, Anthropic looking to hire chemical weapons experts

AI giants OpenAI and Anthropic have begun advertising job openings for experts in chemical weapons.

Wed, 18 Mar 2026

Anthropic, which posted its job ad roughly a week ago on LinkedIn, is looking for someone to fill the position of policy manager for chemical weapons and high-yield explosives.

According to the job listing, the role will tackle how AI systems handle data relating to chemical weapons and explosives, and the salary will be between US$245,000 and US$285,000.

“This role offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information,” Anthropic said.

“You’ll work with leading AI safety researchers while tackling critical problems in preventing catastrophic misuse. If you’re excited about using your expertise to ensure AI systems remain safe and beneficial, we want to hear from you.”

Anthropic said a minimum of “five to eight years of experience in chemical weapons and/or explosive defence” and “a track record of translating specialised technical knowledge into actionable safety policies or guidelines” were key requirements.

Based on the listing, it appears Anthropic wants data relating to explosives and chemical weapons handled carefully and is working out how its AI models should manage that information, saying that good applicants would be “passionate about preventing misuse of dangerous technical knowledge while enabling beneficial applications”.

Similarly, OpenAI is advertising for a frontier biological and chemical risks researcher, a role it said will help the company prepare for its most capable frontier AI models.

“Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks,” OpenAI said.

“To ensure that AI promotes positive change, the preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.”

The position will be guided by the company’s Preparedness Framework and fall under the preparedness team.

“We are looking to hire exceptional research engineers [who] can push the boundaries of our frontier models. Specifically, we are looking for those that will help us shape our empirical grasp of the whole spectrum of AI safety concerns and will own individual threads within this endeavour end-to-end,” OpenAI said.

“You will own the scientific validity of our frontier preparedness capability evaluations – designing new evals grounded in real threat models (including high-consequence domains like CBRN as well as cyber and other frontier-risk areas), and maintaining existing evals so they don’t stale or silently regress. You’ll define datasets, graders, rubrics, and threshold guidance, and produce auditable artifacts (evaluation cards, capability reports, system-card inputs) that leadership can trust during high-stakes launches.”

The role pays almost double the Anthropic position, with a salary of up to US$455,000, and will involve identifying AI risks related to chemical and biological threats and building scalable systems to evaluate and mitigate them.

The role requires an applicant to be a “US person” under the definitions outlined in the US Export Administration Regulations, 15 C.F.R. § 772.1, and the International Traffic in Arms Regulations, 22 C.F.R. § 120.62. This means US citizens, lawful permanent residents, people granted asylum, or those admitted as refugees.

The department of AI war

The search for chemical weapons and explosives experts by OpenAI and Anthropic comes as the former has accepted a deal with the US Department of Defence (DOD) to have its AI implemented in defence tools and services, a deal Anthropic was originally set to secure before it fell through.

Anthropic decided against partnering with the US DOD over concerns that its AI would be used in fully autonomous weaponry, something the technology cannot yet do safely, and for the surveillance of Americans.

OpenAI said its technology would also not be used for fully autonomous weapons or domestic surveillance.

The company went so far as to say that its agreement with the DOD “has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”, adding that the technology cannot be used for “mass” domestic surveillance, is not to be used to direct autonomous weapons systems, and is not to be used for “high-stakes” automated decisions, such as social credit scoring.

“Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use,” it said.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.