AI Policy's Blind Spot
- Derek

- Nov 18
- 3 min read

Policy is all about balancing conflicting values. But when it comes to AI, we often entirely ignore the crucial conflict. We focus on, talk about, and trumpet social values, but seem to forget that another value is often at play - one that is uncomfortable and awkward to acknowledge. We ignore it at our peril.
Take big AI companies' use of unauthorized copyrighted training data as an example. This has been flagged, here in Canada and elsewhere, as an enduring, complicated policy issue.
But is it? Whatever the particulars of copyright law might say, policymakers I interact with don't seem confused about what should be happening: AI companies are making money off artists’ works and should be compensating them. They're not doing that right now.
Most countries have plenty of policy tools to quickly move their position in the right direction. Here in Canada, some options include: block all infringing LLM providers at the Canadian border; impose severe penalties on these companies to recover damages done to Canadian creatives; sue these companies in local, foreign, and international courts; build local capacity for creating domestic LLMs that abide by copyright law. In short, enforce the existing copyright law.
However, Canada has been excruciatingly slow to implement any of these... and for good reason. Implementing any of these could have sudden and severe economic consequences. Cutting ourselves off from foreign (infringing) LLMs would disadvantage our workforce and industry. Building domestic, copyright-compliant LLMs could bankrupt us. And even if we survive these outcomes, as we saw with the ill-fated Digital Sales Tax, powerful foreign entities will retaliate quickly and decisively against policy actions they don't like.
While we may be concerned about artistic exploitation, it’s economic pain that’s keeping us from implementing policy. In effect, this AI issue boils down to a tradeoff between artistic rights and economic vitality. This might seem obvious - but, if it is, it's strange that we're not talking much about economic consequences. I recently spent 5 days attending multiple international AI policy events, listening to and speaking with leading policy minds. Candid discussion of the economic consequences of policy decisions almost never came up. If public policy is about resolving conflicts between two values and we aren’t acknowledging one of those values, then what kind of outcomes do we expect to get? The answer: something that looks very much like where we are right now.
Why is it hard to rein in copyright infringement by AI companies? It's not fundamentally because we lack policy tools. It's because we are prioritizing certain kinds of economic health above artistic rights and compensation. If we want a different solution, we need to choose to strike a different balance between economic health and artistic rights - and embrace the consequences. This involves looking at those economic factors and asking critical questions: what aspect of economic health matters most? International trade? Public health care? Banking regulations? Which of these are less important than protecting artistic ownership? If we don’t embrace the discomfort of looking at economic consequences, then we have no hope of making progress on this AI issue.
Copyright isn't the only place this crops up. Another ready example is sovereign AI: here in Canada, we talk about the principles of data and compute control while often ignoring the relative economic clout of hyperscalers. Why is it hard to make policies around the sovereignty of Canadian data in US-owned hyperscaler data centres? It's not because policy tools don't exist. It's because hyperscalers are so economically powerful that they would simply leave and take their data centre capacity and know-how with them. We don't use available policy tools, in large part, because we don’t want to acknowledge that we’re too scared of the economic repercussions of using them.
There is a way forward. There is hope. Public policy has a long history of enabling societies to resolve once-intractable value conflicts and move toward new ways of living and being: seatbelts, vaccines, taxes. Other countries are even demonstrating that policy can shape AI issues in the face of economic pain: how was the UK able to keep both its Online Safety Act and its sales tax in its trade deal with the US? How is the EU enforcing its copyright directive on OpenAI without eliciting economic retaliation? Progress is being made in these and other places because policymakers have included gritty, distressing economic consequences as a fundamental part of their policy design framework.
Too often, we are treating AI policy as a one-sided balance: social benefits against the-thing-that-shall-not-be-named. We would do well to speak its name: the pain of economic consequences. The sooner we do this, the sooner we will find we can align the AI ecosystem with both our social values and economic needs.


