AI in Healthcare

Elisabetta Biasin on Building Guardrails for AI in Healthcare

In Conversation with Elisabetta Biasin, Doctoral Researcher at the KU Leuven Centre for IT & IP Law (CiTiP) 

Women Building the Guardrails of AI: Behind the rapid rise of artificial intelligence are the people asking the difficult questions about responsibility, safety, ethics, and trust. Across academia, policy, and regulatory institutions, women are playing a vital role in shaping the guardrails that guide how these technologies evolve. From the principles of data accuracy to the governance of AI in healthcare, their work is helping ensure that innovation advances alongside accountability.

In the global conversation around artificial intelligence, headlines focus on speed, scale, and breakthrough innovation. Yet many overlook the principles that keep AI systems fair, safe, and trustworthy. 

Elisabetta Biasin works precisely in that space. 

As a Doctoral Researcher at the KU Leuven Centre for IT & IP Law (CiTiP), she examines one of the most overlooked concepts in data protection law: accuracy. Today, AI systems inform medical diagnoses, clinical simulations, and digital health platforms. In this landscape, accurate data is no longer abstract; it is deeply human. 

Biasin’s research spans AI-based medical device cybersecurity, in silico trials, and health data governance, fields that sit at the structural core of healthcare innovation. As AI reshapes medicine and clinical decision-making, her work highlights a central truth: innovation must advance alongside responsibility. 

In this conversation, she reflects on accountability, AI in healthcare, and the principles guiding the technologies shaping our future. 

Your work sits at the intersection of AI, healthcare, and data protection, fields that directly affect human lives. What first drew you to this space, and what keeps you committed to it? 

Elisabetta: It all started during my university studies, with Technology Law and Data Protection. I had a professor, Roberto Caso, who inspired me to think about what technology regulation means for society, going beyond “only” legal principles. 

I then began working and experienced the impact of technology regulation on policy: first as a trainee at European Digital Rights (EDRi), working on network neutrality, and later in practice as a legal advisor for GDPR compliance programmes. I felt it was the only field that truly mattered to me at the time, and although many things have changed, it still feels that way today. 

You focus on something many overlook: accuracy in data protection law under the General Data Protection Regulation. What does “accuracy” mean to you, not just legally, but socially? 

Elisabetta: Accuracy is a multifaceted concept. Of course, it has a meaning in data protection, rooted in history and in the evolution of laws across the world, and influenced by several disciplines. 

Beyond that legal meaning, what fascinates me about this concept is its etymology and its roots. It comes from the Latin word cura, which means “care”. I think this aspect has significant societal implications. It makes me think of theories of care and collective care, and it often makes me wonder: how would the world look if we interpreted accuracy as data “handled with care” for and towards individuals, groups, and society? 


When did you realise that the work you were doing wasn’t just academic, but foundational to how AI systems should behave? 

Elisabetta: Academic work, when driven with purpose, is never “just academic”; at least, that is my ambition. 

In my case, it became clearer when I began engaging with developers, members of civil society, policymakers, and decision-makers. Working with institutions such as the EMA and the WHO made me realise that my expertise could help shape policy documents. For example, I provided academic perspectives on regulatory documents related to AI in the medicinal product lifecycle, as well as international blueprints for the use of AI to help prevent burnout in the health workforce. 

AI governance is often discussed in technical or economic terms. From your perspective, how important is diverse legal scholarship, especially women’s voices, in shaping these frameworks? 

Elisabetta: Diverse scholarship should be the baseline for all research, including legal studies. Legal regulation is deeply intertwined with policy and, as in many other fields, is affected by bias and imbalances of power. 

Plurality enriches decision-making and research processes. It can introduce new perspectives or amplify overlooked ones. In law, feminist legal studies offer an example of how concepts can be elaborated through a different lens. Last year, for example, a workshop organised by Vogiatzoglou’s team at the University of Amsterdam dissected the concept of digital sovereignty beyond the usual “securitarian” narrative. I am also thinking of the importance of decolonisation studies in law. 

Have there been moments in your career where you felt the need to assert your perspective more strongly in policy or regulatory spaces? 

Elisabetta: Working in policy and regulatory spaces requires a good dose of assertiveness. Starting a career as a young, non-male researcher, one quickly learns that contributions at the policy table must be crafted strategically. 

Yes, there have been moments when I felt this was particularly important. For example, I stressed the importance of data diversity to avoid underrepresentation of patient populations in AI for medicinal products. On another occasion, I emphasised the importance of GDPR’s data minimisation principle to avoid excessive harvesting of patient data in certain healthcare technologies. 

You’ve contributed to major EU-funded projects on AI-based medical devices and cybersecurity. What does responsible innovation in healthcare truly look like? 

Elisabetta: To me, responsible innovation in healthcare means conducting research with full respect for ethical principles and legal requirements. It involves anticipating potential issues, discussing them with relevant stakeholders, and reducing the risks and impacts that technologies may entail. 

It may sound simple, but in reality, this does not always happen in a meaningful way. 

As simulations and “in silico” trials evolve, how can regulation ensure safety without slowing down life-saving innovation? 

Elisabetta: There is often a false dichotomy in discussions about technology and innovation, and this becomes even stronger in healthcare, where technologies directly affect people’s lives. 

If I may reverse the perspective, I would provocatively ask: what would be the price of health innovation without safety requirements? History shows that health regulations were introduced to prevent “snake oil” remedies and other forms of deception or harmful experimentation. 

The answers to these questions are always nuanced and detailed, and perhaps people do not always want to engage with that complexity. 

If there is one aspect that needs improvement, I would say data sharing. Navigating the existing requirements remains too complicated for researchers, and research should be better facilitated. 

In your view, what is the most urgent ethical question facing AI in healthcare today? 

Elisabetta: Perhaps urgent precisely because it is not sufficiently considered: transparency in locally developed health AI systems. 

Together with my colleagues Eylem Karakaya and Sofia Palmieri, I am currently researching this topic. Much health AI research is being developed within local healthcare institutions, but there is limited oversight due to simplified regulatory requirements. In the near future, this may become a significant issue affecting communities at the local level. 

Having worked with institutions like the European Medicines Agency and research centres across Europe and the US, what have you learned about how global governance cultures differ? 

Elisabetta: In legal studies, we are taught early on that legal systems differ significantly. Each system has its peculiarities, and legal cultures are shaped by them. I find this beautiful. 

Comparative research allows us to identify both similarities and differences. My former colleague Kamenjasevic and I, for example, compared regulatory approaches to AI-based medical device cybersecurity. We found that some countries, including the US, rely more on principle-based approaches, whereas European systems provide more detailed requirements and stronger local-level oversight. 

These have certainly been fascinating times to study AI governance. It has been remarkable to observe how it began internationally, through policy principles and discussions, and was later translated into local legal systems. 

Looking back, what moments shaped your confidence as a legal scholar working in such a rapidly evolving technological field? 

Elisabetta: I had to learn confidence. For many young scholars, confidence is a roller coaster with many ups and downs. Looking back, I feel like a different person compared to when I started, but it happened step by step. 

I focused on my work, applied for conferences, research stays, and institutional collaborations. Each time, I pushed slightly beyond my comfort zone, sought feedback, pursued opportunities, and tried to be creative. There was no specific moment when I suddenly felt confident. Instead, over time, I realised I had gained enough experience to navigate discomfort. 

What advice would you give young women who want to enter AI law, health technology regulation, or digital policy? 

Elisabetta: I would first ask them: what drives you? Motivation and clarity are key; they create paths and shape connections with others. 

I would also encourage them not to fear interdisciplinarity. Stay up to date and create your own information digest with new events, articles, news, and fresh perspectives. Build a network of supportive professionals who are willing to give feedback and act with kindness. 

Finally, be knowledgeable about AI broadly, but also identify a niche topic that is not yet fully explored. As AI research expands and competition intensifies, arriving early in a field can be an advantage. 

On International Women’s Day, what message would you share with women working quietly behind the scenes to make technology safer and fairer? 

Elisabetta: I would say: “Don’t be quiet.” 😊 

If we met you in a parallel universe where you weren’t shaping AI guardrails, what would you be doing? 

Elisabetta: Somewhere in the music space, perhaps as a musician. Maybe even in A&R. I love discovering new music and digging into old tunes. 

About the Speaker: Elisabetta Biasin is a Doctoral Researcher at the KU Leuven Centre for IT & IP Law (CiTiP), where she examines the evolving role of accuracy in data protection law and its implications for AI in healthcare and digital health governance. Her research sits at the intersection of privacy, medical device regulation, cybersecurity, and digital health governance. She has contributed to several EU-funded research initiatives, including HEREDITARY, InSilicoWorld, and CORE-MD, focusing on AI-based medical devices, simulation-driven clinical trials, and regulatory accountability in healthcare technologies. Her work bridges law, ethics, and innovation, ensuring that emerging medical systems are not only advanced but also trustworthy. Beyond academia, Biasin has served as an External Collaborating Expert on Data Protection of Big Data and Real-World Data at the European Medicines Agency, contributed to the World Health Organisation’s Strategic Partnership on Digital Health, and held research positions at Stanford Law School and the University of Oxford. Through her transatlantic work, she continues to shape conversations around responsible AI governance in healthcare.

Drawing from her diverse experience in journalism, media marketing, and digital advertising, Meera is proficient in crafting engaging tech narratives. As a trusted voice in the tech landscape and a published author, she shares insightful perspectives on the latest IT trends and workplace dynamics in Digital Digest.