Quantum computing is usually framed as a future problem. Regulate it once it’s real, scalable, deployed.
That framing is wrong.
Quantum technologies are already moving into early applications. The question is no longer whether we regulate them – but whether we do it in time to matter. If regulation arrives after the architecture is locked in, it won’t shape anything. It will only react.
Quantum awareness is not optional
Before governance, there’s a more basic problem: most lawyers don’t understand what quantum systems actually do.
I’m not suggesting we all become physicists. But there’s a minimum threshold – enough to ask the right questions. Where are the risks? How do quantum systems interact with existing infrastructure? What happens when they fail, and who is responsible?
Without that baseline, legal frameworks will be either too abstract to apply or too narrow to matter. Right now, we’re not at the stage of detailed regulation. We’re at the stage of building a shared language. That’s the actual work.
AI governance is a starting point – not the answer
Over the past decade, AI forced legal systems to engage with risk, accountability, transparency, and fairness. That work matters – and it’s not finished.
But quantum will stretch those frameworks in ways we haven’t mapped yet. Quantum systems operate on fundamentally different principles from those of traditional software. And in practice, they’ll often be combined with AI – creating hybrid architectures that don’t fit neatly into the categories we’ve spent years building.
AI governance is necessary. It is not sufficient.
The timing problem
Technological systems follow paths. Once standards, infrastructure, and business models are set, they create dependencies that are genuinely hard to undo. This is path dependence – and it’s the real reason timing matters more in quantum than almost anywhere else.
I watched this pattern play out with AI. By the time serious governance conversations started, many of the foundational decisions had already been made. Quantum is earlier. That’s the only structural advantage we have – and it’s worth using deliberately.
The window is open. It won’t stay open.
The risks aren’t new. The scale is.
Most of the public conversation about quantum focuses on potential applications: cryptography, drug discovery, logistics, materials science. Those possibilities are real.
But so are the risks – and most of them aren’t unfamiliar. Concentration of power. Unequal access. Security vulnerabilities. Deliberate misuse. What changes is the scale at which these problems can materialize.
A “quantum divide” is already a credible concern. Access to quantum capabilities may concentrate among a small number of companies and states, deepening asymmetries that already exist in AI. Without deliberate intervention, that’s not a worst-case scenario – it’s the default.
From compliance to assessment
One practical response is the idea of a Quantum Impact Assessment – a structured process for evaluating not just whether a quantum system is legally compliant, but how it affects society, how risks evolve over time, and where responsibility sits across complex value chains.
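To make the idea a little more concrete, here is a minimal, purely illustrative sketch in Python of how the skeleton of such an assessment might be captured as structured data. The field names, categories, and checks are my own assumptions for illustration, not a proposed standard or an existing framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class QuantumImpactAssessment:
    """Illustrative skeleton of a Quantum Impact Assessment record.

    The fields mirror the dimensions discussed above: legal compliance,
    societal impact, how risks evolve over time, and where responsibility
    sits across the value chain. All of them are assumptions made for
    the sake of example.
    """
    system_name: str
    legally_compliant: bool                                              # baseline compliance check
    societal_impacts: list[str] = field(default_factory=list)            # e.g. "quantum divide", "security exposure"
    risk_over_time: dict[str, RiskLevel] = field(default_factory=dict)   # milestone -> assessed risk level
    responsibility_map: dict[str, str] = field(default_factory=dict)     # actor in the value chain -> obligation

    def open_questions(self) -> list[str]:
        """Flag assessment dimensions that have not been addressed yet."""
        gaps = []
        if not self.societal_impacts:
            gaps.append("societal impact not assessed")
        if not self.risk_over_time:
            gaps.append("risk evolution over time not assessed")
        if not self.responsibility_map:
            gaps.append("responsibility across the value chain not mapped")
        return gaps
```

The point of the sketch is not the code itself but the shape of the exercise: compliance is one field among several, and the unanswered dimensions are surfaced explicitly rather than left implicit.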
We’ve seen this logic work in AI governance. Quantum will require the same – and probably more, given how early we are and how fast the technology is moving.
The difference is that with quantum, we still have time to build these tools before they’re urgently needed. That window is narrowing.
What this actually requires
Quantum computing won’t just need new rules. It will need a different way of thinking about the relationship between technology and law – not as separate domains, but as systems that shape each other in real time.
I’ve spent years working at that intersection with AI. Quantum is the next layer – and the one where the choices we make now will matter the longest.
The real challenge isn’t catching up. It’s acting while there’s still room to decide what comes next.
Dr Agata Konieczna | @DrKonieczna
For legal and strategic advisory on AI governance, visit AI Business Studio.