AI is not a future concern—it is already reshaping the very terrain of human rights advocacy.
This reflection stems from the second installment of Articulate Foundation’s conversation series, “The Future of Civil Society and Human Rights Advocacy: Between Adaptation and Resistance.” The series explores how organizations can navigate—and influence—the technological and social transformations that are redefining the civic landscape.
The Double-Edged Sword of AI
Our second conversation, “Artificial Intelligence and the Defense of Human Rights: Challenges and Opportunities for Our Sector,” examined AI’s dual nature as both an engine of innovation and a source of profound risk.
Moderated by Faisal Yamil Meneses, Director of Strategic Operations at Articulate Foundation, the discussion featured Francisco “Paco” Quintana, international human rights lawyer and advisor on the intersection of AI and human rights, and Beatriz Borges, Director of Articulate Foundation.
Together, they unpacked how AI is transforming the way we work, communicate, and make decisions—and why this transformation cannot be left unexamined.
The Non-Neutrality of Algorithms
A key insight emerged early in the discussion: AI is not neutral.
As Francisco Quintana reminded the audience, every algorithm reflects the values, intentions, and asymmetries of those who design and deploy it. AI systems are already shaping economic opportunity, public discourse, and political participation—and, by extension, every human right.
The panel identified several areas of particular concern:
- Privacy and Surveillance: The mass collection and use of personal data, often without consent, pose direct threats to autonomy and dignity.
- Bias and Discrimination: Algorithmic systems reproduce and magnify social inequities, influencing everything from hiring and credit scoring to access to information.
- Behavioral Manipulation: AI-driven platforms have unprecedented power to influence perception and decision-making, with implications for democratic processes, consumption, and civic trust.
But the conversation did not end in caution. It also illuminated possibility. Over 600 documented applications of AI for social impact demonstrate that these same technologies can be mobilized to strengthen transparency, expand access to justice, and enhance humanitarian response—if guided by ethical governance and inclusive design.
Humanizing Innovation
The central message of the discussion was both a warning and a call to action:
Civil society cannot afford to treat AI as a distant or purely technical issue. It must be understood as an immediate, political, and moral force that demands engagement.
The task ahead is twofold:
- To resist algorithmic harms through regulation, advocacy, and public accountability.
- To leverage AI responsibly as a tool for democratic empowerment and institutional transformation.
This requires a new kind of capacity within the human rights community—digital competence rooted in ethical purpose. Training, experimentation, and cross-sector collaboration are no longer optional; they are essential to sustain relevance and impact in an AI-driven world.
As Beatriz Borges summarized, “The challenge is not to stop innovation—it is to humanize it.”
In a time when algorithms increasingly mediate power, ensuring that technology serves humanity is the next frontier of human rights defense.
This article was developed by the Articulate Foundation team with the support of artificial intelligence tools for content writing and curation, demonstrating how technology can be used for social good. AI can help you too: enroll in our Certification on Artificial Intelligence for CSOs and discover how.