
OpenAI CEO’s home targeted in attempted drive-by just days after Molotov attack

OpenAI CEO Sam Altman's San Francisco home has been targeted twice in a matter of days, with a drive-by shooting following a Molotov cocktail attack on the same residence.

Tue, 14 Apr 2026

According to police, Altman’s San Francisco home was targeted in a drive-by shooting in which the passenger of a Honda fired upon the building.

The car was first observed scoping the property, before returning at 1:40am GMT-7 (around 6:40pm AEST), at which time the passenger reached out the window, fired a round at the house and fled.

Security cameras at the home recorded the vehicle’s license plate, and guards heard the gunshots, allowing police to track down the gunman and his accomplices.


Officers arrested Amanda Tom, 25, and Muhamad Tarik Hussein, 23, after recovering the Honda nearby on Taylor Street. A raid of the home found three firearms.

The incident occurred only days after a 20-year-old Texas man was arrested for allegedly throwing a Molotov cocktail at the same San Francisco residence.

According to ABC 7 Eyewitness News, Daniel Alejandro Moreno-Gama allegedly launched a “Molotov cocktail slash sticky bomb” at the gate of Altman’s home, before running away on foot at around 4am.

One hour later, he allegedly showed up at OpenAI’s office building holding a jug filled with what he claimed was kerosene and threatened to set the building alight.

When police arrived at the scene, they identified him as the same person who had attacked Altman’s house and arrested him.

“We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe,” OpenAI said in a statement.

Why target Altman?

The attacks on Altman’s home come at a time when opinions on AI are highly divided, with many frustrated and angry at the technology and its creators for displacing workers, harming the environment, undermining creativity and copyright, and threatening privacy and security, among other concerns.

Following the first incident, Altman posted a photo with his family on his personal blog in an effort to deter future attacks.

While he said that AI is “the most powerful tool for expanding human capability and potential that anyone has ever seen”, he also sympathised with those who are fearful of what is to come.

“The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever,” he said.

“We have to get safety right, which is not just about aligning a model – we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.”

Altman said that AI needed to be democratised and built for the people, adding that he doesn’t think it is right that “a few AI labs would make the most consequential decisions about the shape of our future” – a remark potentially referring to himself and OpenAI.

He also reflected on his own triumphs and mistakes during his time with OpenAI.

“I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I’m proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed,” he said.

Altman celebrated the work that OpenAI had done, saying that the company achieved what it set out to do.

“Mostly though, I am extremely proud that we are delivering on our mission, which seemed incredibly unlikely when we started. Against all odds, we figured out how to build very powerful AI, figured out how to amass enough capital to build the infrastructure to deliver it, figured out how to build a product company and business, figured out how to deliver reasonably safe and robust services at a massive scale, and much more,” he said.

“A lot of companies say they are going to change the world; we actually did.”

Finally, Altman discussed how the industry is changing and its impact on the world, particularly in the context of artificial general intelligence (AGI) – in basic terms, AI with capabilities matching or exceeding those of humans across a broad range of tasks.

“My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: ‘Once you see AGI, you can’t unsee it.’ It has a real ‘ring of power’ dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalising philosophy of ‘being the one to control AGI’,” he said.

“The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and making sure democratic systems stay in control.

“A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. This is quite valid, and we welcome good-faith criticism and debate. I empathise with anti-technology sentiments, and clearly, technology isn’t always good for everyone. But overall, I believe technological progress can make the future unbelievably good, for your family and mine.”


Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding for and experience writing in the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.