AI Is Reshaping Activism 

Recently, the ethics of AI has become a major point of contention across almost every digital rights platform. Among digital rights professionals there is a clear consensus that AI is transforming both how activists engage in dissent and how states may weaponize the same technology to suppress them. At the core of this discussion lies a deep ethical conflict: AI’s promise of progress is inseparable from its potential for control. This blog will briefly explore these discussions and consider potential solutions to better protect civil liberties.

AI as an Ally: Tools for Resistance

Despite the influx of bad press given to AI, human rights organizations have reported real success in integrating AI into building cases against perpetrators. For example, many now use machine learning to analyze satellite imagery and detect evidence of war crimes or environmental destruction, as seen in projects such as Amnesty International’s Decoders, which enlisted thousands of digital volunteers and AI-assisted platforms to document abuses in countries like Syria, South Sudan, and the Niger Delta. However, this project appears to be inactive at the time of writing for undisclosed reasons, raising questions about its effectiveness.

More recently, investigative outlets such as Bellingcat have leveraged AI tools to geolocate videos from conflict zones and verify open-source intelligence, often disseminating findings faster than traditional media. Alternative platforms have also emerged to protect against more localised forms of digital exploitation: legal tech services like DoNotPay use AI chatbots to help users challenge fines, contest evictions, or navigate complex immigration processes.

In more practical terms, AI can even offer physical protection. Tools and research projects such as CV Dazzle and Fawkes help activists mask their identities in photos by confusing facial recognition software, giving them the freedom to protest while protecting their identities.
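The core idea behind cloaking tools like Fawkes can be illustrated, in heavily simplified form, as adding small, tightly bounded pixel perturbations that are nearly invisible to a human viewer but alter the raw values a recognition model consumes. The sketch below is only a toy illustration of that bounded-perturbation constraint (using random noise), not the actual Fawkes algorithm, which computes targeted perturbations against face-embedding models; the function name and parameters are hypothetical.

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 4.0, seed: int = 0) -> np.ndarray:
    """Toy 'cloaking': add a small bounded perturbation to an image.

    Real tools such as Fawkes compute perturbations that specifically
    distort the embeddings produced by face recognition models; here we
    simply add bounded random noise to show the key constraint: the
    change must be small enough to preserve the image's appearance.
    """
    rng = np.random.default_rng(seed)
    # Perturbation bounded by +/- epsilon on the 0..255 pixel scale.
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + delta, 0, 255)
    return cloaked.astype(np.uint8)

# A dummy 8x8 grayscale "photo" (a flat mid-grey image).
photo = np.full((8, 8), 128, dtype=np.uint8)
cloaked_photo = cloak(photo)

# Per-pixel change stays within the epsilon bound, so the image looks
# unchanged to a person even though its raw values have shifted.
max_diff = int(np.abs(cloaked_photo.astype(int) - photo.astype(int)).max())
print(max_diff)  # never exceeds 4 (the epsilon bound)
```

In the real systems, the perturbation is optimized rather than random, so the cloaked photo's face embedding lands far from the original's while the pixel budget stays just as tight.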

In this light, AI enhances activism and extends human agency, letting organizations reach beyond traditional methods and mobilise faster and more effectively. However, these tools are not without their risks or limitations.

AI as a Threat: Automating Repression

In contrast to the developments listed above, AI is equally adept at serving authoritarian interests. Around the world, governments are deploying facial recognition, predictive policing, and surveillance algorithms to monitor and neutralize dissent. A catalyst for this discussion is China’s use of surveillance tools in Xinjiang: AI-powered systems track Uyghur Muslims in real time, profiling individuals on everything from facial expressions to smartphone metadata, as reported by Human Rights Watch.

One tool mentioned frequently on our site is AI-enabled spyware such as Pegasus, which has been deployed against journalists and human rights defenders; reports from Citizen Lab detail zero-click exploits that turn phones into surveillance devices without the user ever clicking a link. These tools give state actors the power to eavesdrop on private conversations, track movement, and preemptively disrupt organizing efforts. As described in our blog posts, this has caused severe psychological harm to activists in the Middle East.

Even supposedly neutral content moderation systems, run by major platforms like Facebook and YouTube, are far from impartial. Automated moderation tools routinely misidentify activist content, particularly from the Global South, as harmful or extremist. Reports from organizations such as Access Now and Article 19 show that algorithms often suppress protest videos, testimonies of abuse, or political speech, especially when expressed in non-Western languages or dialects. This is not simply a flaw in execution; it is a structural problem. AI systems reflect the biases embedded in their data and development. From hiring algorithms that discriminate against women to facial recognition systems that perform poorly on darker skin tones, these technologies routinely reinforce systemic inequality rather than correcting it.

Regulating the Future

As AI continues to evolve, it is imperative that our approaches to governance and accountability evolve with it. Early frameworks such as the UNESCO Recommendation on the Ethics of AI and the EU AI Act are promising first steps, aiming to set limits on surveillance technologies and mandate transparency. But these regulations remain uneven, with few enforcement mechanisms and limited global reach.

Meanwhile, activists, technologists, and researchers are pushing back. A growing number of civil society groups and NGOs are calling for bans on facial recognition in public spaces, advocating for open-source audit tools, and demanding algorithmic transparency from Big Tech. The movement for “AI for Social Good” is growing, but it must keep the protection of civil liberties at its centre.

Conclusion: The Fight Is Still Human

AI will not liberate us or oppress us on its own. It is simply a tool shaped by the intentions, interests, and ethics of those who use it. For every protest it helps organize, there may be another it helps suppress. The real battle is not between humans and machines, but between justice and control. The future of protest, and the right to dissent, depends on how we navigate this.
