
In February 2025, Google quietly removed a long-standing pledge from its publicly stated AI Principles: the commitment not to build artificial intelligence for weapons or surveillance. What might look like a minor update to corporate policy is, in reality, a seismic shift in the politics of technology. When one of the world’s largest tech companies decides that it is open to developing AI for warfare, the implications for human rights, accountability, and the future of digital freedom are enormous.
From “Don’t be Evil” to “Ethically Flexible”
Back in 2018, Google introduced its “AI Principles,” in which it committed not to design or deploy AI for use in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” or in “technologies that gather or use information for surveillance violating internationally accepted norms.” Fast-forward to early 2025, and those explicit prohibitions are gone. Google removed the section titled “Applications we will not pursue” (which included weapons) from its website. Instead, the updated policy says Google will work to align its AI with “widely accepted principles of international law and human rights” and support national security missions among governments that share certain values.
Why does this matter? Because when a company with Google’s reach decides it is open to building AI for weapons, for weapons decisions, or for enabling military functions, such systems become more accessible and more “normal.” That means faster kill chains, less human oversight, and less clarity about responsibility and accountability. And when the threshold for deployment is “we believe the benefits substantially outweigh the risks,” you open a Pandora’s box of ethical implications.
The company’s new language focuses on “alignment with human rights” and “human oversight,” but offers no clear mechanism, no binding standards, and no guarantee that AI systems will be safe, transparent or auditable. As Human Rights Watch put it: voluntary guidelines are not a substitute for enforceable law.
Why This Matters for Digital Rights, Especially in the Gulf
This shift isn’t only about weapons. It’s about the deepening merger between Big Tech, state power, and systems of surveillance, which has severe implications for digital rights across the Gulf and beyond.
Around the world, AI systems built by private companies are already being used to analyse battlefield data, predict threats, and even select targets, with algorithmic decisions potentially determining who lives, who dies, and who gets a say. But what is new, and deeply concerning, is that global tech giants are now openly positioning themselves as partners in this militarised AI future.
In the Gulf, the stakes are particularly high. The region has seen a rapid expansion of state surveillance infrastructure, often framed as “smart governance” or “digital transformation.” When AI developed by companies like Google becomes embedded in these systems, it can dramatically amplify governments’ capacity to monitor, profile, and suppress dissent, especially in contexts where freedom of expression and privacy protections are already limited.
Three Key Concerns
- Erosion of Ethical Boundaries
Once a company removes a clear “we won’t build this” clause, there is no line left to defend. The ethics of AI become negotiable, a matter of interpretation rather than conviction. The risk is not only what gets built, but the slippery slope: once the boundary moves, what stops further erosion?
- Accountability Black Hole
AI in warfare and surveillance operates in complex, opaque systems where errors or abuses are hard to trace. Who is responsible when an algorithm misidentifies a civilian as a target? The engineer? The military? The company? Without enforceable transparency and oversight, accountability disappears. In some cases, this may be by design: if the algorithm simply “misfired,” the person who wanted to pull the trigger in the first place walks free.
- A Race to the Bottom
Google justifies its change by invoking global competition and the need for “democracies” to lead in AI. But this logic risks triggering an arms race in which ethical restraint is seen as weakness. If every company or country acts on that logic, no one wins and everyone loses.
Why We Cannot Rely Only on Self-Regulation
Google’s decision to remove its pledge against AI for weapons is a flashing warning light. It tells us that the AI era’s ethical foundations are shifting. We must ask ourselves: when the company that once said “don’t be evil” says it is comfortable working on AI that might harm, surveil, or kill, what does that say about the future we’re designing?
The use of this technology is far from “neutral,” and how we choose to regulate, govern, and deploy it will shape societies for decades. Let’s not allow the narrative to be set solely by corporate boardrooms or military budgets. The public stake is too high.
