Dutch Data Protection Authority Wants Government to Speed Up AI Regulation

The Dutch Data Protection Authority warns that the Netherlands is running out of time to put proper AI oversight in place, as deepfakes, discriminatory hiring tools, and AI fraud spread faster than regulators can respond.

by Lisa Vinogradova

The Netherlands' data protection authority, the Autoriteit Persoonsgegevens (AP), published the sixth edition of its Report AI & Algorithms Netherlands today, and for the first time its AI Impact Barometer has turned red. The signal is deliberate: the AP believes the risks of artificial intelligence in the Netherlands have reached a level that demands immediate action from both government and businesses.

The AP is calling on the new Dutch cabinet to move quickly on implementing AI regulation and setting up proper oversight. The rules are already in force, but they are not yet being enforced.


What the red barometer means

The AI Impact Barometer is the AP's way of tracking how serious the overall risk of AI use in the Netherlands has become. Green means risks are manageable. Red means they are not.

The AP warns of serious risks from unsafe and discriminatory algorithms against which no enforcement action can currently be taken. That enforcement gap is one of its core concerns: the rules exist on paper, but the structures needed to actually apply them are still being built.

AP chair Aleid Wolfsen put it directly: "Five years after the childcare benefits scandal, the lessons are clear but follow-up remains lacking. That is mainly because robust rules for algorithms and AI and enforcement of those rules are missing."

The childcare benefits scandal, in which the Dutch tax authority used discriminatory algorithms to wrongly flag tens of thousands of families as fraudsters, remains a defining reference point for AI risk in the Netherlands. The AP's message is that the country has not yet done enough to stop something similar from happening again.

Organisations are gaming the rules

One of the more troubling findings in the report is that some organisations are actively trying to avoid stricter AI rules by mislabelling their systems.

Some organisations attempt to escape the AI Act by classifying their systems as regular algorithms rather than AI systems. One example is OxRec, a tool used by probation organisations to predict reoffending, which was registered as an algorithm despite being an AI system.

This matters because AI systems face far stricter requirements under European law than regular algorithms, so registering them incorrectly lets organisations avoid those requirements entirely. The AP reports seeing new AI systems entered in the register as algorithms every single week, which means the risks to the people affected by them keep growing.

AI in job applications

The report also focuses on the use of AI in hiring, a rapidly growing area where the AP sees serious problems.

Many employers are using AI in recruitment and selection. That use must be accurate, non-discriminatory and explainable to candidates. But research and practical tests show that transparency and explainability often fall short. It is frequently unclear how a decision is reached and how candidates can challenge it. As a result, some candidates have virtually no chance of being selected from the start.

Under the EU AI Act, AI systems used in recruitment are classified as high-risk systems. From August 2026, they will have to comply with strict requirements. That deadline is now less than six months away.

Deepfakes, fraud, and chatbot harm

The main dangers to emerge in 2025 and 2026 are the uncontrolled rise of deepfakes, AI-driven fraud, psychological harm caused by chatbots, and AI security measures that increasingly lag behind technological developments.

The AP points to specific recent incidents, including the proliferation of AI voting guides and problems with Grok, which could generate fake nude images of any person that were indistinguishable from real photographs. Both cases illustrate how quickly AI tools can cause real harm before any regulatory response is possible.

What needs to happen

The AP emphasises that intervention after the fact is often difficult or even impossible with AI systems. Once systems are operational and personal data has been processed, errors and undesirable effects are not easy to reverse.

For organisations using or developing AI in the Netherlands, August 2026 marks a major enforcement milestone under the EU AI Act. High-risk AI systems, including those used in hiring, credit decisions, and law enforcement, will need to meet strict requirements by then, covering documentation, transparency, and human oversight.

For the Dutch government, the AP is asking for urgency on three fronts: setting up the national oversight structure, improving the algorithm register so it accurately reflects which systems are actually AI, and closing the gap between rules on paper and enforcement in practice. Whether the new cabinet will move fast enough is, for now, an open question.
