In Conversation with Emsie Erastus on AI Governance and Digital Rights

Emsie Erastus, Digital Rights Advocate and Head of African Voices at Women in AI Ethics™ (WAIE+), discusses policy reform, tech justice, and building accountable AI beyond traditional power centres.

Women Building the Guardrails of AI: Behind the rapid rise of artificial intelligence are the people asking the difficult questions about responsibility, safety, ethics, and trust. Across academia, policy, and regulatory institutions, women are playing a vital role in shaping the guardrails that guide how these technologies evolve. Their work focuses on ensuring that the rapid expansion of AI is matched by stronger accountability, rights protections, and responsible governance.

AI governance is often discussed in global forums, corporate boardrooms, and policy white papers. But its consequences are negotiated in far more complex spaces — newsrooms navigating algorithmic censorship, civil society organisations confronting digital harm, and governments drafting laws under uneven global power dynamics. 

Emsie Erastus’s work has taken shape at precisely that intersection. 

A Tech Rights Consultant and Head of African Voices at Women in AI Ethics™ (WAIE+), she has worked across journalism, policy reform, and multi-stakeholder initiatives to advance ethical AI and human rights-aligned digital frameworks in Africa. From influencing Zambia’s National AI Strategy to researching online violence against women and girls, her work moves beyond principle into practice. 

She reflects on digital colonialism, platform power, generative AI in emerging markets, and the structural guardrails needed to ensure AI serves people — not concentrated power. 

You started your career in journalism. When did you first realise technology was quietly shaping power behind the scenes? 

Emsie: I realised it while working on an online news and current affairs desk in Namibia. We would publish stories about the OvaHimba community, particularly stories about maternal health and education affecting women and girls. Those posts were repeatedly flagged by Facebook for violating community standards because the women are traditionally bare-breasted. 

At the same time, tourists and influencers with large followings could post images of the same women and girls, and those posts stayed up. Even international broadcasters had similar content online that remained untouched. That was the first moment I felt something was deeply wrong. Our reporting was about citizens, public interest, and nation-building. It was not fetishism or tourism. Yet our stories were being treated as violations. 

When we appealed, there was no real person to speak to. That stayed with me. It made me realise this was bigger than a newsroom problem.

It was about who gets recognised as legitimate, whose culture is understood, and whose stories are allowed to exist online. 

That awareness deepened in other ways, too. I saw voice recognition tools fail to understand my accent because my pronunciation is shaped by my language, my region, and my country. I also remember being at an airport and struggling to get an automated tap to work, only to see it respond immediately when someone with lighter skin stepped forward. Those moments forced me to ask bigger questions. They made me realise these were not random glitches. They were patterns of bias built into the systems themselves. 

As a journalist from Africa dealing with content moderation and algorithms, what patterns did you notice that felt unfair or invisible to the rest of the world?

Emsie: One of the biggest patterns is that technology is presented as something too complex for ordinary people to question. It is framed as inevitable, neutral, and beyond public understanding. That framing is powerful because it shields the people building these systems from accountability. 

We have seen this before with other forms of media. Radio and television were once treated as if they simply existed outside public control, until societies recognised that they should be regulated. The same is true of social platforms, algorithms, and AI systems. 

A lot of what feels invisible is made invisible by design. The language around these technologies is dense, abstract, and highly technical. It makes people assume whatever is happening must simply be natural or unavoidable. But these systems are not neutral. They are shaped by human choices, power structures, and economic interests. When people do not understand that, they are less likely to question it. 

In your work on information integrity and online violence against women and girls, how have you seen AI amplify existing inequalities? 

Emsie: AI does not appear in a vacuum. It grows out of histories that are already there. That is why I often say everything is connected. Racism, colonialism, and inequality have shaped the world long before AI entered the picture. 

Take information integrity. For a long time, Africa has been framed through narratives of poverty, dependency, and lack. These narratives were built through media and history, and they continue to influence how the continent is seen. When those assumptions flow into digital systems, they are not erased. They are amplified. 

With online violence against women and girls, the harms are even more visible now. There are obvious forms, such as deepfakes and synthetic sexual imagery. But there is also a quieter and very damaging form of violence directed at women in public life: politicians, activists, experts, and women in leadership roles. Narratives and images are pushed in ways that make women seem aggressive, unreasonable, or out of place simply for speaking with authority.

That too is gender-based violence. It undermines confidence, distorts public perception, and punishes women for taking up space. We need to pay closer attention not just to what harmful content exists, but to how algorithms help distribute and reward that harm at scale. 

AI harms are often described as “edge cases,” meaning rare or exceptional situations. From your experience on the ground, what do global tech conversations tend to miss? 

Emsie: They miss the power of narrative. 

So much of the world is shaped by framing, political ideologies, public fear, international relationships, and even the language of “help” and “development.” AI systems now sit within those same structures of narrative power. They shape what people see, what they do not see, and what is presented as truth. 

You can already see this happening on platforms today. Some posts gain extraordinary reach while others seem to disappear from view. We may not always have direct access to the systems behind this, but the patterns are visible. That is why algorithmic auditing matters so much. 

We should not focus only on chatbots and generative AI tools. We should also be asking how information is being distributed across platforms, what is being left out, and which messages are being pushed forward. In a world where politics, public debate, and even news are shaped online, this becomes a serious democratic issue. 

You helped shape discussions around Zambia’s National AI Strategy. What does ethical AI actually look like when it is being negotiated in real policy rooms? 

Emsie: It does not begin with walking into a room and announcing what should happen. It begins much earlier, with advocacy, relationship-building, and trust. 

Our team and partners were fortunate to be working at a time when the government was willing to open up to dialogue, but that did not happen overnight. It took sustained work to build credibility and make sure the government trusted us enough to invite us into those spaces. That is an important reality in this field. African governments are often encouraged to rely on Western consultants because technology is framed as something only external experts can understand.

In Zambia, much of the groundwork happened through training, capacity-building and a multi-stakeholder initiative that brought together government, civil society, and the private sector.

Those spaces allowed civil society to say clearly that they were not being invited to the table and needed to be part of these decisions. The government could then respond directly. 

That is what ethical AI looks like in practice. It means building the conditions for participation. It means ensuring policy is not written above people’s heads, but shaped with the people who will actually live with its consequences. 

Governments often prioritise innovation and economic growth in AI strategies. How do you ensure human rights do not become an afterthought? 

Emsie: One of the things we kept saying is that there is no need to rewrite the basic principles. The standards already exist. 

There is the Universal Declaration of Human Rights, the African Charter on Human and Peoples’ Rights, and national constitutions that guarantee privacy, freedom of expression and other fundamental rights. The same rights people enjoy offline should apply online. There should not be a separate exception simply because the space is digital. 

If someone cannot walk into my home and take my ID without consequence, they should not be able to take my biometric data without consent and call it innovation.

The digital world did not arrive outside our legal and moral systems. It entered them. So those same protections must still apply. That argument was powerful in policy spaces because it shifted the conversation away from technological novelty. It reminded governments that digital governance is still governance. Rights do not disappear simply because the tools have changed. 

You’ve written about digital colonialism. How does control over infrastructure translate into power in the AI ecosystem? 

Emsie: Infrastructure is power because it determines who benefits, who participates and who remains dependent. 

Digital infrastructure is still concentrated in a small number of places, which means entire regions are forced to depend on external systems while generating enormous value for global tech companies. Africa has 54 countries, yet investment remains uneven and highly selective. The same pattern can be seen in other parts of the Global South. 

At the same time, these tech companies make immense profits from users around the world. That is why data matters so much. There is a reason why so many of the richest people in the world are tech owners. Their platforms collect information, shape markets, and extract value across borders. 

What is striking is how little reinvestment often follows. In most industries, if you do business somewhere, you create local jobs, build local presence, invest in local capacity, and allow skills transfer. Big tech has largely been allowed to escape those expectations. It makes money from countries while contributing far less than other sectors would be expected to. 

That is one of the clearest expressions of digital colonialism. Value is extracted globally while power remains concentrated elsewhere. 

What concerns you most about the adoption of generative AI in emerging markets? 

Emsie: What concerns me most is not technology itself, but the concentration of ownership and power behind it. 

A relatively small group controls many of these systems, and some of the political and racial ideologies surrounding that power are deeply troubling. Alternatives are not being meaningfully built or supported. Instead, the rest of the world is expected to move in step with Silicon Valley. 

Most activists are not anti-technology. The real danger is unregulated AI in the hands of people and institutions with disproportionate power. That is where the destruction comes from. It is not AI on its own. It is unregulated AI controlled by those who already dominate the system. 

We have spent too long playing defence, reacting to technologies after they arrive. We need to become more strategic and more proactive. If concentrated tech power has already shown how destructive it can be, then we cannot keep pretending regulation is the enemy. The real question is why so many of these actors fear being governed by the same rules that apply elsewhere.

From your perspective, how should ethical AI work in practice?

Emsie: Ethical AI would begin with meaningful choice. 

People should be able to opt out of technologies without facing social or economic exclusion. They should not be forced into systems they do not trust simply to remain part of public life. Consent should be real, understandable, and free from coercion. 

Ethical AI would also require honesty. We need to stop pretending these systems are operating independently of human control. Technology does not simply go rogue. It is designed, instructed, constrained, and deployed by people. Once we become more truthful about that, public debate becomes more grounded and more responsible. 

The media has a role here, too. It should question the hype far more rigorously. Instead of repeating dramatic claims about AI autonomy or inevitability, it should investigate who funds these systems, what interests shape them, and why certain narratives are being amplified. The media must continue to act as the fourth estate it is meant to be. 

Ethical AI, in practice, would be accountable, regulated, transparent, and built to assist human beings. 

For younger generations navigating digital spaces today, what should they remember about protecting their rights?

Emsie: The first thing is that technology should remain a choice. It should not become something people are forced into without room to question, refuse, or step out. 

Too much responsibility has been pushed onto users, especially young people. We keep telling them not to share too much, not to reveal too much, not to trust too much. Of course, people should be careful, but that cannot be the whole model of safety. The burden cannot rest solely on the individual. 

What young people should demand is this: respect our rights. Respect our privacy, our autonomy. Respect what we choose to share. And do not hide the terms of that exchange behind language so dense that nobody can understand it. 

The answer cannot simply be endless self-policing. The bigger demand has to be for better systems — systems that respect people by default. 

Many people have shaped the field of AI ethics and digital rights. Are there women whose work has particularly inspired your journey? 

Emsie: Yes, absolutely. A big shout-out to the many women who have shaped my life and continue to shape this field.

People like Dr Seeta Peña Gangadharan, Timnit Gebru, Ruha Benjamin, Hlengiwe Dube, and the late Sarry Xoagus-Eises have all influenced how I think about technology, power, and justice. Their work opened up space for deeper conversations about race, governance, accountability, and the social consequences of technology. 

There are also many women across the African continent doing powerful work in digital rights and technology governance. Their voices continue to shape the field in meaningful ways. 

On a lighter note, if you were to write an autobiography one day, what title might capture your journey? 

Emsie: It would have to be something joyful. Even in difficult conversations about technology, power, and governance, I still believe there is room for hope. 

Maybe something like Love in Us: How Humanity Was Restored.

About the Speaker: Emsie Erastus is a Tech Rights Consultant and Head of African Voices at Women in AI Ethics™. Her work sits at the intersection of technology policy, journalism, and human rights, with a focus on AI ethics, data protection, privacy, and digital governance across African contexts. She has led multi-stakeholder initiatives influencing national AI policy, including contributing to the integration of ethics and research priorities within Zambia’s National AI Strategy. Her work spans capacity-building with government institutions, civil society organisations, and media professionals to strengthen digital rights frameworks aligned with international standards. Emsie holds an MSc in Media and Communications (with distinction) from the London School of Economics and Political Science (LSE) as a Chevening Scholar. She was recognised among the Global 100 Brilliant Women in AI Ethics and nominated for the Women in Tech Africa Awards in Tech Diplomacy. Her commentary and research have been featured on the BBC and Democracy Now!

Drawing from her diverse experience in journalism, media marketing, and digital advertising, Meera is proficient in crafting engaging tech narratives. As a trusted voice in the tech landscape and a published author, she shares insightful perspectives on the latest IT trends and workplace dynamics in Digital Digest.