
Data Consent, AI Privacy and the EU AI Act Delay

In November 2025, the European Commission confirmed it would delay parts of the EU AI Act that apply to so-called “high-risk” systems, pushing some obligations to 2027. Officials framed the move as a simplification. Critics called it a step backwards. One EU lawmaker described the broader reform package as risking a “massive rollback” of digital protections, according to reporting by Reuters and other outlets. 

Beyond the political disagreement, the episode has revived a deeper concern about data consent. Europe already has some of the world’s strongest privacy laws. The GDPR has been in force for years. Yet many people still experience data consent as a banner that flashes across the screen before they can read an article. As AI systems expand into hiring tools, medical diagnostics, fraud detection, and education platforms, the everyday experience of consent becomes more important.

The debate is no longer just about compliance timelines, but about whether current data consent models are strong enough for the systems we are building. 

What GDPR consent requirements were meant to achieve 

When GDPR (General Data Protection Regulation) came into force in 2018, it was presented as a reset moment for digital privacy in Europe. Lawmakers described it as a way to return control to individuals and make organisations more accountable for how they handle personal data. At its core, the regulation was designed to rebalance power through stronger data consent standards. 

Consent, under the regulation, must be: 

  • Freely given 
  • Specific 
  • Informed 
  • Unambiguous 
  • Easy to withdraw 

In theory, that creates real control. People should understand what data is collected, why it is used, and what happens if they say no. In practice, the experience often feels different. 
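The five conditions above can be made concrete as a data structure. The following is a minimal, illustrative sketch only (the class and field names are invented, not drawn from any standard or compliance tool); it treats consent as valid only while every condition holds:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Illustrative consent record; field names are hypothetical."""
    freely_given: bool       # not bundled with access to an unrelated service
    purpose: str             # specific: one named purpose, not a catch-all
    informed: bool           # user saw a clear, plain-language notice
    unambiguous: bool        # affirmative act, not a pre-ticked box
    withdrawn: bool = False  # withdrawal must be as easy as giving consent

    def is_valid(self) -> bool:
        # Consent stands only while all conditions hold and it is not withdrawn.
        return (self.freely_given
                and bool(self.purpose.strip())
                and self.informed
                and self.unambiguous
                and not self.withdrawn)
```

The point of the sketch is the conjunction: if any single condition fails, or the person later withdraws, the record no longer authorises processing.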

Cookie banners appear before articles load. Privacy notices stretch across screens. App permissions arrive in batches. The information exists, but few people read it carefully. 

Regulators have noticed. In earlier enforcement action, the UK Information Commissioner's Office (ICO) reported that several major websites changed their cookie banners under pressure. The principle was simple: rejecting tracking must be as easy as accepting it. This highlights something often overlooked: data consent is shaped as much by interface design as by legal wording. Europe is not alone in facing these questions.

Around the world, different jurisdictions have adopted their own approaches to data consent and AI governance. 

[Infographic] World map of major privacy laws influencing data consent and AI oversight across regions: GDPR, CPRA, PIPL, the DPDP Act, LGPD, PIPA, and APPI.


Consent fatigue and interface pressure 

The problem is not only legal complexity. It is volume. Most users face consent prompts daily — often multiple times. Over time, decision-making turns automatic. People click quickly to access content. This is consent fatigue. 

Research shows that small design choices shape outcomes: 

  • Prominent “accept” buttons increase agreement 
  • Extra clicks to refuse reduce refusal rates 
  • Complex menus discourage review 

These patterns operate quietly but consistently. When refusal requires effort and acceptance does not, user data privacy depends more on interface friction than on genuine understanding. This is where the practical limits of data consent become visible. 

Why data consent is strained in AI systems 

The central problem is not simply poor banner design. AI changes the nature of personal data itself and exposes the weaknesses in traditional data consent frameworks. 

AI systems infer, not just collect 

In older digital systems, organisations mostly stored what people directly submitted. AI models work differently. They analyse patterns and generate new insights. A browsing history can become a political prediction. Shopping behaviour can suggest health conditions. 

Consent notices usually focus on what data is collected. They rarely explain what can be inferred from it. This creates ethical AI privacy issues that standard data consent language struggles to address. Users may agree to processing without fully realising that predictive layers are built on their information. 

Secondary use expands quietly 

Purpose limitation is a core GDPR principle. Data collected for one reason should not automatically serve another. 

AI development complicates that boundary. Historical logs and behavioural data are often reused to train models. Customer support transcripts may become chatbot training material. Search histories may refine recommendation systems. 

These uses may technically fall under broad consent wording, such as “service improvement”. Yet they stretch expectations around data consent and increase concern about AI data misuse. 
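Purpose limitation can be expressed as a simple containment check. In this hedged sketch (the function name and purpose strings are invented for illustration), a secondary use is permitted only when it matches a purpose the user explicitly consented to, rather than a catch-all such as "service improvement":

```python
def secondary_use_allowed(consented_purposes: set[str], proposed_use: str) -> bool:
    """Allow a new processing purpose only if it was explicitly consented to.

    Illustrative only: real purpose compatibility is a legal assessment,
    not a string comparison.
    """
    return proposed_use in consented_purposes


consented = {"customer support", "order fulfilment"}

# The original purpose is covered:
assert secondary_use_allowed(consented, "customer support")

# Reusing support transcripts as model training data is a new purpose:
assert not secondary_use_allowed(consented, "chatbot model training")
```

The sketch makes the article's point visible: under broad wording, "chatbot model training" might be argued into "service improvement", but against a list of specific purposes it plainly does not appear.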

Long retention increases AI data privacy risks 

AI systems depend on large datasets accumulated over time.

Data stored for years creates compounding exposure. Even if a user deletes an account, copies may persist in backups, aggregated datasets, or trained models until retention limits run out. That complexity weakens public understanding of control. Data privacy in AI systems, therefore, extends beyond collection.

It involves lifecycle management, model governance, and oversight that goes beyond a one-time consent interaction. 
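At its most basic, lifecycle management means applying a retention check to every copy of a data item, including backups and derived datasets. A minimal sketch, with a hypothetical two-year policy (the function name and retention value are assumptions, not drawn from any regulation):

```python
from datetime import date, timedelta

# Hypothetical retention policy: two years from collection.
RETENTION = timedelta(days=365 * 2)


def past_retention(collected_on: date, today: date) -> bool:
    """True if a data item has outlived the retention window and is due for purging."""
    return today - collected_on > RETENTION


# A record collected three years ago is due for deletion in every copy:
assert past_retention(date(2022, 1, 1), date(2025, 6, 1))

# A recent record is still within the window:
assert not past_retention(date(2025, 1, 1), date(2025, 6, 1))
```

The hard part in practice is not the date arithmetic but the inventory: knowing where every copy lives, which is exactly the governance burden a one-time consent click cannot carry.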

The EU AI Act and the politics of delay 

The current debate is not a simple policy disagreement; it follows a sequence of developments.

The original framework 

The EU AI Act introduced a risk-based model. AI systems were divided into categories based on impact. High-risk systems, including those used in recruitment, credit assessment, healthcare, education, and law enforcement, were placed under stricter obligations. 

What high-risk systems must do 

Under the Act, these systems are expected to meet requirements such as: 

  • Documenting how the system works 
  • Ensuring data quality and traceability 
  • Providing transparency to users 
  • Maintaining human oversight 
  • Undergoing conformity assessments 

The timeline adjustment and its context 

In late 2025, the European Commission proposed extending certain high-risk compliance milestones to 2027. The proposal concerns implementation timing rather than the definition of high-risk systems. The legal framework itself remains intact. 

Data consent management cannot carry the full burden 

Data consent management platforms help organisations record preferences and demonstrate compliance. Yet documentation alone does not equal protection. The digital ecosystem is complex. A single service may involve analytics providers, cloud vendors, AI model suppliers, and advertising networks. Data flows across borders and platforms. 

No individual can realistically track every downstream use. This structural imbalance explains why many observers argue that data consent is under strain. The model assumes informed individuals can manage large-scale data ecosystems. That assumption no longer holds. 

Distilled 

Step back for a moment, and the imbalance becomes obvious. The systems running underneath everyday apps are complex, data-hungry, and constantly learning. The approval process for all of that is usually a small banner that disappears in seconds. Most people are not making detailed privacy calculations each time they click. They are moving through a task. At the same time, research keeps showing that people want strong safeguards around AI and feel uneasy when those safeguards seem uncertain. That contrast says more about the limits of current data consent practices than any policy document does. 

The EU’s evolving approach to AI regulation shows how difficult this balance really is. Data consent still matters, but it cannot carry the weight of AI governance alone. Clear limits on data reuse, stronger technical protections, and steady enforcement will likely shape trust more than another layer of pop-ups. If data consent is going to remain credible in the age of predictive systems, it has to connect to something tangible — not just a momentary click. 

Meera Nair


Drawing from her diverse experience in journalism, media marketing, and digital advertising, Meera is proficient in crafting engaging tech narratives. As a trusted voice in the tech landscape and a published author, she shares insightful perspectives on the latest IT trends and workplace dynamics in Digital Digest.