
Dr. Aimee van Wynsberghe on Ethics at the Heart of AI

Recognized by UNESCO and trusted by the European Commission, Prof. Dr. Aimee van Wynsberghe blends empathy, ethics, and innovation to shape the future of AI. 

Code Her Future: Women Leading with Heart

In a world where artificial intelligence and automation often outpace reflection, women leaders are reminding us that innovation needs integrity. This series celebrates those who weave empathy, ethics, and purpose into the fabric of technology: leaders reshaping the narrative from speed to sustainability and from ambition to accountability. Their journeys show that the future of tech isn't just about progress; it's about people.

Ethics in AI isn’t just a classroom discussion; it’s the foundation on which the future of technology must stand. Around the world, governments, researchers, and industries are wrestling with questions of accountability, transparency, and sustainability. At the center of these conversations stands Prof. Dr. Aimee van Wynsberghe, Alexander von Humboldt Professor for Applied Ethics of AI at the University of Bonn. 

As a woman shaping the global AI conversation, Aimee’s journey is remarkable. From pioneering responsible robotics to advising the European Commission on policy, she has dedicated her career to ensuring that technology serves humanity responsibly. Her achievements, from being named a UNESCO Women in Science awardee to becoming a Humboldt Professor, underscore her influence as both a scholar and a global leader. 

Through her teaching, research, and leadership of the Bonn Sustainable AI Lab, Aimee shows how education can shape a future where AI is built on responsibility. Her work is an inspiration not only to her students and peers, but also to the broader world, reminding us that ethical AI isn't optional; it's essential.

What first drew you to the intersection of robotics and ethics, and how did your early work with surgical robots shape your perspective? 

Dr. Aimee: I’m originally from Canada, where I studied cell biology and worked at the Canadian Surgical Technologies and Advanced Robotics (CSTAR) center over twenty years ago. I was part of one of the first teams in the world working with the Zeus telesurgical system, training surgeons to use the da Vinci robotic surgical system. 

While part of an engineering team focused on performance, I became fascinated by the surgeon’s experience. What does it feel like to operate through a robot instead of physically touching a patient? My mentor, Dr. Christopher Schlachta, once said, “Aimee, this sounds like ethics to me.” 

That moment changed everything. I left natural sciences to study applied ethics, bioethics, and the ethics of technology, bringing together my two worlds of science and humanities. Since then, my work has focused on connecting them, drawing insights from ethics, and sharing them with engineers, surgeons, and designers to help them reflect on the human impact of their innovations. 

You co-founded the Foundation for Responsible Robotics. What gap did you hope to fill, and what impact has it had since? 

Dr. Aimee: I realized that publishing papers and speaking at conferences, while valuable, kept the conversation within academia. Meanwhile, robots were already being built and deployed with real-world consequences. That inspired me to co-found the Foundation for Responsible Robotics with Noel Sharkey, a renowned roboticist. Our goal was to connect voices across academia, industry, and civil society, and to spark broader reflection on how robotics affects people and communities.

Our advisory board included figures such as Garry Kasparov and Sherry Turkle, combining expertise from far beyond engineering. Through collaborations and events, we worked with companies and researchers to challenge assumptions and promote responsible design. The idea was simple: bring everyone to the same table before, not after, technology is released into the world.

What has been the most rewarding part of this journey so far? 

Dr. Aimee: Moments of resonance, when people truly connect with what I’m saying, feel incredibly rewarding. 

Recently, I was on a panel in the UK with the Head of AI at The Economist and the Head of Sustainability at Microsoft. The topic was sustainable AI. Everyone was talking about carbon emissions and efficiency, so I asked a different question: Who is this technology really benefiting?  AI may revolutionize healthcare, but for whom? The climate crisis isn’t just about carbon. It’s rooted in capitalism, overconsumption, and extraction. AI won’t fix that. 

Afterward, several people thanked me for voicing what they’d been feeling but couldn’t articulate. Those are the moments that matter most to me, when my words give shape to unspoken concerns and help others feel seen. 

So, what does “sustainable AI” really mean to you in practice? 

Dr. Aimee: Five years ago, I published my first paper on sustainable AI, challenging how narrowly the term was being used. Most people discussed the use of AI for sustainability, specifically utilizing AI to accelerate the achievement of the UN’s Sustainable Development Goals. That’s important, but it’s only half the story. 

We must also examine the sustainability of AI itself. My research highlights the often-overlooked environmental and social costs of data extraction, mineral mining, and electronic waste. In Europe, we prioritize privacy and fairness, which are important yet sometimes considered luxuries. Elsewhere, people are simply asking, "Please don't destroy my drinking water."

After years of identifying what's wrong, my work now focuses on solutions. Can we recycle GPUs? Can we run data centers entirely on solar energy? Can we reuse the heat they generate instead of consuming water for cooling?

Sustainable AI isn’t just about smarter algorithms. It’s about reimagining the entire system that powers them. 

You’ve become one of the leading voices in AI ethics and robotics. How has that visibility shaped your advocacy, and why is embedding ethics in education so important? 

Dr. Aimee: When I began, I was one of the very few women in robotics and AI. That’s changed, but visibility brings responsibility. I now see my role as helping future technologists think critically and empathetically. Ethics isn’t about memorizing moral theories; it’s a mindset. It means asking hard questions, reflecting on consequences, and articulating why something feels right or wrong. 

Generative AI has transformed my teaching approach. I can no longer assume essays are original, so I've shifted to direct conversation. Every student must speak in class, defend their ideas, and reflect in real time. It's more personal and more powerful. When students learn to think for themselves, they become not just engineers or designers but responsible builders of the future.

You served on the European Commission’s High-Level Expert Group on AI. What were the biggest challenges and rewards of that experience? 

Dr. Aimee: It was an extraordinary experience and a complex one. Bringing together industry, academia, and civil society meant balancing competing priorities. And ethics takes time, yet we had limited hours outside our full-time roles. Still, the reward was immense. We reached a shared understanding that trustworthiness must be the foundation of AI in Europe. Even if not everyone agreed on every detail, we agreed that AI must earn and deserve public trust. That sense of collective purpose made all the difference. 

When it comes to developing AI ethically and responsibly, how important is international collaboration? 

Dr. Aimee: It's essential, though rarely simple. Many countries claim "AI made in Europe" or "AI made in Germany," but that's a myth. The minerals come from Africa, the hardware from Asia, the data centers sit in Europe or the US, and the waste often ends up elsewhere. AI isn't national; it's global, which means responsibility must be global too. If Germany benefits from a GPU built with minerals from Ghana, then Germany shares responsibility for the mining conditions there.

Sustainable AI presents a unique opportunity for global collaboration. We're entering an era where nations might compete for finite resources such as lithium, water, and rare earths. This could be the start of an arms race, or it could mark a turning point. We can choose shared accountability instead of competition. Of course, reaching a universal agreement isn't simple. Cultures, economies, and values differ widely. But if there's one area where the world truly must align, it's on how we treat the planet while building AI.

Some people say that focusing too much on ethics slows innovation. What’s your take? 

Dr. Aimee: I hear that a lot. Ethics does ask us to pause, reflect, and sometimes stop. That can frustrate those who value speed, but not every invention deserves acceleration. Autonomous weapons, for instance, where machines decide life and death, should absolutely be slowed, if not stopped.

I don't see ethics as a brake; I see it as a compass. True innovation isn't about how fast we move but about creating what's genuinely good for people and the planet. Ethics doesn't stifle progress; it strengthens it. If we're capable of building artificial intelligence, surely we can figure out how to do it responsibly.

You’ve received many awards over your career. Which feels most meaningful to you? 

Dr. Aimee: It’s hard to choose just one. Of course, being awarded the Humboldt Professorship was a huge milestone; it validated years of research and commitment to building the field of ethical AI.

But for me, the real achievement goes beyond titles or awards. What feels most meaningful is being able to balance recognition in my professional life with the reality of raising my children. It’s an incredibly challenging dual role, being a woman in academia, a mother, and a voice in global AI ethics. There’s always that sense of guilt for not being home enough, and at the same time, the deep satisfaction of contributing to something bigger. 

Every time someone says, “I read your work, and it inspired me,” it reminds me why I do this. 

As a mentor and professor, what do you hope your students carry forward? 

Dr. Aimee: I think we’re living in a time where technology makes it too easy to avoid the hard work of thinking. There are now numerous tools, from AI writing assistants to auto-summarizers, that can reduce the effort required for reflection and reasoning. It’s convenient, yes, but it’s also dangerous in the long run. 

When you stop practicing how to think, you lose the ability to form your own ideas. I tell my students and my kids that the brain is a muscle. It needs exercise. Writing your own email, reading a research paper instead of a summary, or analyzing something deeply: that's how you strengthen it.

So, what I want my students to remember is simple: think for yourself. Don’t outsource the process of thinking. Critical reflection, creativity, and awareness are the true skills that will matter most in the future of AI. 

With so many global challenges, what keeps you motivated? 

Dr. Aimee: There are many overwhelming moments, and this work isn’t easy. But what keeps me going is a mix of optimism and the visible progress I’ve seen over the years. 

When I started in this field two decades ago, it would’ve been unthinkable to imagine a High-Level Expert Group on AI in the European Commission. And now, here we are, Europe leading with frameworks that insist technology must respect the Charter of Rights and Freedoms. That shift gives me hope. 

Even in Germany, when I began researching sustainable AI five years ago, people thought it was just two buzzwords stitched together. Today, I’m part of an expert group under a federal ministry. In November, I’ll be in Brussels for a meeting where European states are forming a coalition on sustainable AI. The fact that these conversations are actually happening, that policymakers and civil society are paying attention, is incredibly motivating. 

And on a more personal level, I’m inspired when someone comes up to me after a talk and says, “You put into words what I’ve been feeling but couldn’t express.” That means everything to me. Because in the end, what I’m really trying to do is protect people, their rights, their freedoms, and the planet we all share. 

Those moments of resonance, recognition, and shared purpose are what keep me hopeful and grounded, even when the challenges feel immense. 

Is there a question from a student or your children that’s really stayed with you? 

Dr. Aimee: Yes, one from my son. He’s nine and recently came home from school after an AI activity where they used a program to create digital portraits of themselves.

He was so proud, saying, “Look, Mum, it really looks like me!” That led to a conversation about my work and how I try to teach people that, while technology is fascinating, it can also go wrong. And he said something so simple yet profound: “But Mum, we made the technology. If we don’t like it, we can stop making it.” 

It struck me how clear his thinking was. He wasn’t overwhelmed by complexity or profit. To him, it was common sense: if we built it, we could decide when to stop. If only more adults approached technology with that same clarity and courage to ask, “Do we like the direction we’re heading?”

Sometimes the deepest ethical insights come from the simplest, most innocent questions. 

Discover more voices shaping Responsible AI.

About the Speaker: Prof. Dr. Aimee van Wynsberghe is the Alexander von Humboldt Professor for Applied Ethics of Artificial Intelligence at the University of Bonn and founder of the Bonn Sustainable AI Lab. A global leader in AI ethics and robotics, she co-founded the Foundation for Responsible Robotics and served on the European Commission’s High-Level Expert Group on AI. Her accolades include the UNESCO Women in Science Award and the prestigious Humboldt Professorship. As an educator, researcher, and public voice, she continues to shape global conversations on building a responsible and sustainable future for AI.


Meera Nair

Drawing from her diverse experience in journalism, media marketing, and digital advertising, Meera is proficient in crafting engaging tech narratives. As a trusted voice in the tech landscape and a published author, she shares insightful perspectives on the latest IT trends and workplace dynamics in Digital Digest.