The EU AI Act just made understanding your code a legal requirement

On February 2, 2025, Article 4 of the EU AI Act became applicable. It reads:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.

If your company uses GitHub Copilot, Cursor, Claude Code, or any other AI coding tool, you are a "deployer" under this regulation. Your developers are "persons dealing with the operation and use of AI systems." And you are now legally required to ensure they have "sufficient AI literacy."

The question is: how do you prove that?

The compliance gap

Most companies today have no mechanism for measuring whether their developers understand the code that AI generates for them. The best they can offer is a training workshop, maybe a certificate, maybe a slide deck about prompt engineering.

But Article 4 doesn't say "train your staff." It says "ensure a sufficient level of AI literacy." That's a continuous obligation, not a one-time event. A developer who completed an AI training course in January can still blindly accept code they don't understand in March.

The Anthropic study from January 2026 showed this isn't hypothetical. Engineers who used AI assistance scored 17% lower on comprehension tests. They shipped code that worked but that they couldn't explain. Junior developers were hit hardest.

What regulators will actually ask

When the enforcement cycle starts, regulators won't ask "did you run a training session?" They'll ask:

  • What measures have you taken to ensure AI literacy? (Article 4)
  • How do you know your developers understand the AI-generated code they're shipping?
  • Can you show evidence that competence is maintained over time, not just at onboarding?

For high-risk AI systems, the bar goes higher. Article 14 requires that people assigned to human oversight can "properly understand the relevant capacities and limitations" of the AI system and "correctly interpret" its output. Article 26 requires deployers to assign oversight to people with "the necessary competence, training and authority."

If a developer is reviewing AI-generated code but doesn't actually understand what it does, that's not human oversight. That's a rubber stamp.

What we're building

Entendi sits inside the development environment and watches the technical concepts that come up as developers work with AI. At natural moments, it asks a question. Not a quiz, not an exam. Just a check: do you actually understand what was just generated for you?

Over time, this builds a per-developer, per-concept knowledge profile. A Bayesian model tracks what each person truly comprehends versus where they're trusting the machine. It detects when understanding degrades, surfaces which parts of the codebase the team doesn't actually understand, and produces timestamped, audit-ready evidence of literacy levels.
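To make that concrete, here is a minimal sketch of the kind of per-check update such a model can perform, using standard Bayesian Knowledge Tracing. The function, parameter names, and values are illustrative assumptions for this post, not Entendi's actual model.

    # Hypothetical sketch: a Bayesian Knowledge Tracing (BKT) update for
    # one developer/concept pair. Parameters are illustrative, not
    # Entendi's actual model.

    def bkt_update(p_known: float, correct: bool,
                   p_slip: float = 0.1,   # knows the concept, answers wrong
                   p_guess: float = 0.2,  # doesn't know it, answers right
                   p_learn: float = 0.05) -> float:  # learning between checks
        """Posterior probability the concept is understood after one check."""
        if correct:
            evidence = p_known * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_known) * p_guess)
        else:
            evidence = p_known * p_slip
            posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
        # Allow for learning that may happen before the next check.
        return posterior + (1 - posterior) * p_learn

    # Example: a developer starts at 50% on "SQL injection prevention"
    # and answers two in-editor checks correctly.
    p = 0.5
    for answer in (True, True):
        p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")  # 0.96

Run the same update on wrong answers, or let estimates decay between checks, and the same machinery surfaces where understanding is degrading rather than growing.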

For Article 4 compliance, this means:

  • Continuous measurement, not a one-time checkbox
  • Per-concept granularity, not "developer X completed a course"
  • Audit trail, not "we believe our team is competent"
  • Automation bias detection, which directly addresses Article 14(4)(b)'s requirement that people "remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system" (a toy version of this check is sketched below)
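On that last point, one simple way to flag over-reliance is to compare how often a developer accepts AI output against how well they've demonstrated they understand the concepts involved. The heuristic and thresholds below are illustrative assumptions, not Entendi's actual detection logic.

    # Hypothetical automation-bias heuristic: flag a developer/concept
    # pair when AI suggestions are accepted at a high rate while
    # estimated mastery of that concept stays low. Thresholds are
    # illustrative.

    def over_reliance_flag(accept_rate: float, mastery: float,
                           accept_threshold: float = 0.9,
                           mastery_threshold: float = 0.5) -> bool:
        return accept_rate >= accept_threshold and mastery < mastery_threshold

    # A developer accepting 95% of AI-generated SQL while scoring 0.35
    # on SQL injection checks would be flagged for review.
    print(over_reliance_flag(accept_rate=0.95, mastery=0.35))  # True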

The uncomfortable truth

The EU AI Act created a legal obligation that most companies have no tooling to meet. You can't ensure literacy if you can't measure it. You can't measure it with workshops and certificates. You need something that operates where the work happens, continuously, passively, and with enough granularity to tell you which developer understands JWT authentication but has a gap in SQL injection prevention.

That's what we built. For individuals, it's free. For teams and organizations that need compliance dashboards and reporting, reach out.


The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. Article 4 on AI literacy has been applicable since February 2, 2025. The full regulation becomes applicable on August 2, 2026. The official text is available on EUR-Lex.