“This is not what we signed up for.”

Something shifted in Silicon Valley this week.
More than 200 Google and OpenAI employees have asked their employers to better define the limits on how AI can be used for military purposes. Publicly. Out loud. As Axios reported, the employees have made it clear that they are increasingly uncomfortable with how the AI tools they build are being used.
And honestly? You can see why.
AI no longer just helps write emails and generate graphics. It is now discussed in terms of military hardware, surveillance, and autonomous weapons on the battlefield. That is harder to sit with. At least one person involved in the effort has wondered aloud whether corporate pledges are enough, or whether they amount to aspirational prose that can be bent when political circumstances demand it.
The reason this feels like déjà vu is that we have been here before. In 2018, Google employees protested the company's work on Project Maven, the Pentagon's drone-imagery analysis project. Google responded with its own AI principles, promising that it would not develop AI for use in weapons or in surveillance that violates international norms. The problem is that technology moves faster than policy, and things that seemed out of bounds in 2018 may look less clear in 2023.
OpenAI, too, has publicly accessible usage policies that bar weapons development. On paper, that is reassuring. But workers seem to be grappling with a harder question: what happens when AI is dual-use? What if a model that helps doctors do research can also be put to work by a military? Where is the line?
Step back a bit and the geopolitical context comes into view: AI has been named one of the Defense Department's top modernization priorities, and the Pentagon has stood up an entire office for it, the Chief Digital and Artificial Intelligence Office. The official line is that AI will enable faster decision-making, reduce loss of life, and deter threats. It all sounds very "operational."
But critics, including some inside the tech companies, worry that this is the thin end of the wedge. AI in defense systems can blur accountability. Autonomous systems, even non-lethal ones, are another step toward handing off decisions that some believe should remain in human hands.
The international debate, for its part, is far from settled. The UN has been discussing lethal autonomous weapons for years and, as recent reports show, countries remain far from agreement on what should happen next. Some want an outright ban. Others prefer looser guidelines. The models, meanwhile, keep getting better every month.
The part that feels most human is that the people speaking up are not against the technology. Many of them are AI enthusiasts. They envisioned their systems enabling early disease detection, real-time language translation, and easier access to learning. They signed up to build good things. That is what makes this such a charged situation. It is not rebellion for its own sake; it is a disagreement about values.
There is a generational element, too. Younger engineers are less willing to fall back on "If we don't do it, someone else will." That old Silicon Valley refrain no longer settles the argument. Instead, they ask: if we are going to build it, shouldn't we build the boundaries, too?
Corporate leaders, unsurprisingly, see it differently. Governments are big customers. National security concerns are a factor. And with the AI race accelerating, especially between the US and China, nobody wants to be left behind. Walking away is not simple. It is strategy, money, and politics all at once.
But the internal pressure reveals something important. AI is not just algorithms. AI is people. AI is a group of engineers sitting in front of monitors, beginning to understand that what they build may one day weigh on questions of life and death.
Perhaps that is the essence of it. This looks like a policy argument, but the workers' message is plain: "We want guardrails." Not because they are against progress, but because they see its gravity.
What's next? It is not clear. Companies may strengthen their commitments. Governments may write more defined policies. Or the conflict may simply be papered over with PR announcements.
But one thing is clear: the debate over military AI is no longer theoretical. It is personal. And it is happening in the rooms where the future is being built.