The AI Label Is the New Cookie Banner
We're about to make the same mistake we made with cookie banners. And we're going to spend a decade on it.
Yuval Noah Harari has raised the alarm. The EU is drafting legislation. The argument: people should know when content is AI-generated. Transparency, they say. Informed consent. The right to know.
It sounds reasonable. It's the wrong solution to the right problem.
The cookie banner lesson
In 2011, the EU's amended ePrivacy Directive took effect, requiring websites to obtain users' consent before setting tracking cookies. The intention was genuine: give people control over their digital privacy.
Fifteen years later, everyone clicks "accept all" because the alternative takes twenty clicks and nobody understands what they're accepting. The banners are everywhere. The understanding is nowhere. The legislation created compliance theater, not informed citizens.
We are about to do this with AI.
The problem with the AI label
Here's the question nobody is asking: what exactly would the label say?
This essay is partially AI-generated. The news article you read this morning was edited with AI assistance. The photo on that magazine cover was AI-enhanced. The summary your doctor sent you was drafted by an AI system. The presentation your colleague gave yesterday was built with AI tools.
What goes on the label? "Contains AI"? That's like labeling food "contains atoms."
In 2026, AI is not a discrete ingredient you can identify and disclose. It's infrastructure. The question "was AI involved?" has become as meaningful as "was electricity involved?"
Harari is right about the problem
Harari's concern is real. If people can't distinguish authentic human expression from synthetic content, trust collapses. Misinformation becomes untraceable. Authority becomes indistinguishable from fabrication.
But the label doesn't solve this. It creates the illusion of a solution while the actual problem goes unaddressed.
Trust has never come from disclaimers. It comes from track record. It comes from consistency between what someone says and what they do over time. It comes from being able to verify claims against reality.
The questions that have always determined whether you should trust a source: Who is saying this? What's their history? Can I check this against other evidence? AI doesn't change those questions. It makes them more urgent.
The real work
What we actually need isn't a label. We need better epistemics — a cultural upgrade in how people evaluate information.
We need education that teaches people to ask: who is behind this claim, and what's their stake in it? We need platforms that make provenance visible — not "AI or not AI", but "who, when, with what sources". We need a public discourse that rewards intellectual honesty over volume.
None of that fits on a banner. None of it can be solved by legislation that forces a checkbox.
What will actually happen
Large platforms will add disclaimers that nobody reads. Bad actors will ignore them. The regulatory burden will fall hardest on small creators who are playing by the rules anyway. And in ten years, we'll have a generation of people who've learned to click "I understand this was made with AI" the same way they click "accept all cookies" — reflexively, meaninglessly, without any actual understanding.
Meanwhile, the actual crisis of trust will have deepened, because we spent our attention on compliance theater instead of the real work.
The better question
Instead of asking "was this made with AI?", ask: does this person's thinking hold up over time? Do their claims square with reality? Are they transparent about their reasoning, their sources, their uncertainty?
Those are the questions that have always distinguished reliable thinkers from unreliable ones. They work whether the tools involved are quill pens, word processors, or large language models.
AI is a tool. An extraordinarily powerful one, with genuinely new risks that deserve serious attention. But the response to those risks should be proportionate and targeted — not a cookie banner for the mind.
We have a choice right now. We can spend a decade on labels that create the feeling of transparency without the substance of it. Or we can invest in the harder, more important work: building a culture that knows how to think.
The clock is running.