I recently attended the Digital Regulation Cooperation Forum (DRCF) Thematic Innovation Hub Roundtable on Agentic AI at Digital Catapult. The discussion brought regulators and industry together to explore how existing UK frameworks can respond to increasingly autonomous AI systems.
Across the themes of regulatory approach, data protection and liability, one message stood out clearly: we do not need to reinvent regulation. We need to apply it better.
Outcomes, Not Technology-Specific Rules
There was strong consensus that the UK’s regulatory architecture remains fundamentally sound. Businesses are not asking for entirely new AI regimes. They are asking for clarity.
The difficulty arises when broad principles meet machine execution.
Concepts such as “fairness” work in human systems because interpretation is expected. Machines, however, require specificity. Guard rails only function effectively when they are precisely defined. Ambiguity that is manageable for humans becomes a showstopper for autonomous systems.
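To make that contrast concrete, here is a purely illustrative sketch (the checks, field names and refusal logic are all hypothetical, not drawn from any regulator's rulebook) of what it looks like to reduce a broad principle to guard rails precise enough for a machine to enforce:

```python
# Purely illustrative: all field names and checks are hypothetical.
# A human reviewer can interpret "treat customers fairly" in context;
# an autonomous agent needs that principle reduced to testable checks.

def passes_fairness_guard_rail(decision: dict) -> bool:
    """Return True only if every precisely defined check passes."""
    checks = [
        # Charged price must not exceed the advertised price (explicit, testable).
        decision["charged_price"] <= decision["advertised_price"],
        # Any refusal must carry a recorded, human-readable reason.
        decision["approved"] or bool(decision.get("refusal_reason")),
    ]
    return all(checks)

decision = {
    "charged_price": 100,
    "advertised_price": 100,
    "approved": False,
    "refusal_reason": "income below affordability threshold",
}
print(passes_fairness_guard_rail(decision))  # True: every check is explicit
```

The point is not the specific checks but the shape: each guard rail is unambiguous, so an agent can evaluate it the same way every time, whereas "be fair" on its own gives it nothing to execute.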
Regulatory overlap adds another layer of complexity. Financial advertising, for example, can sit across both the Financial Conduct Authority and the Advertising Standards Authority. AI is inherently cross-cutting, so it is bound to expose inconsistencies in regulatory remit across different regimes.
A harm-focused, risk-based model feels like the right anchor. B2B firms already operate within robust regulatory environments. B2C contexts often carry higher inherent risk. Setting risk appetite upfront enables governance structures to be embedded early, rather than retrofitted later.
A structured, staged “waterfall” approach to deployment with governance designed in from the outset makes it far easier to evidence compliance and best practice.
Data Protection: Principles Are Sound, Application Is Harder
Data protection emerged as the defining tension.
As I have said, AI systems are cross-cutting by nature. Regulatory frameworks are not. That structural mismatch creates uncertainty.
Questions raised included:
- How to apply consent and purpose limitation when decisions are made autonomously and in real time?
- How to define controller and processor roles in multi-party ecosystems?
- How to ensure transparency when systems operate across services and borders?
- How to prevent harm from inaccurate inference or silent data leakage?
The principles themselves are not broken. But they need clearer implementation models and practical guidance if they are to be applied confidently in agentic environments.
Data portability was also highlighted as a structural issue, central to competition, personalisation and avoiding lock-in as systems become more capable.
The challenge is coordination. AI does not sit neatly within regulatory silos. Our frameworks need to reflect that reality.
During the discussion, I raised the question of how we practically “join up” cross-cutting obligations. One answer lies in how regulatory information itself is structured. If legislation, standards and guidance are authored in machine-readable, interoperable formats, it becomes far easier to map obligations across domains. For example, aligning data protection requirements with financial conduct rules or advertising standards.
Unlocking authoritative content in structured form enables clearer traceability between principles, implementation controls and supervisory oversight. Without that structure, both organisations and regulators are left interpreting fragmented guidance in isolation.
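As a hypothetical sketch of what machine-readable, interoperable regulatory content could enable (the obligation records, regulator names and topic tags below are invented for illustration, not taken from any real data model), structured obligations can be indexed by topic so that cross-regime overlaps surface automatically:

```python
# Hypothetical sketch: these obligation records and tags are invented
# for illustration, not drawn from any regulator's actual data model.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Obligation:
    regulator: str        # issuing body
    reference: str        # clause or rule identifier
    topics: frozenset     # machine-readable subject tags

obligations = [
    Obligation("ICO", "transparency-duty", frozenset({"transparency", "personal-data"})),
    Obligation("FCA", "financial-promos",  frozenset({"advertising", "transparency"})),
    Obligation("ASA", "ad-claims",         frozenset({"advertising"})),
]

# Index obligations by topic so cross-cutting duties can be traced.
by_topic = defaultdict(list)
for ob in obligations:
    for topic in ob.topics:
        by_topic[topic].append(ob.regulator)

# Topics owned by more than one regulator flag potential overlap
# (or inconsistency) to investigate across regimes.
overlaps = {t: rs for t, rs in by_topic.items() if len(rs) > 1}
print(overlaps)  # e.g. transparency -> ICO and FCA; advertising -> FCA and ASA
```

Even this toy structure shows the traceability gain: once obligations carry shared, machine-readable tags, mapping data protection duties against financial conduct or advertising rules becomes a query rather than a manual reading exercise.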
Responsibility, Liability and Cascading Risk
The most difficult questions are about responsibility.
When agents act without direct human instruction, and when multiple systems chain decisions together, traditional accountability models strain. Cascading failure becomes a genuine concern: a flaw in one agent propagating through interconnected systems.
Several points resonated:
- Digital literacy matters. Does the end user really understand what their systems are doing, not just what they are designed to do?
- Graduated licensing or limited-permissions models may offer a structured way to manage risk proportionately.
- Engagement between regulator and regulated needs to be earlier and more iterative.
- As autonomy increases, transparency, staged testing and clear documentation become essential components of governance.
This is also where supervisory technology (SupTech) plays its part. If regulated firms are deploying increasingly autonomous systems, regulators must enhance their own capability to match. SupTech offers the potential to monitor behaviour, detect systemic risk and support proportionate, evidence-based oversight in real time, without stifling innovation.
Effective SupTech, however, depends on clarity and structure in the underlying regulatory obligations. If guard rails are ambiguous, supervisory tooling will struggle to interpret them consistently.
Evolution, Not Reinvention
The overriding message from the session was pragmatic.
The UK’s regulatory foundations are strong. Innovation will move faster if we focus on outcomes and real-world impact rather than rushing to create technology-specific rules that risk becoming obsolete.
Agentic AI is accelerating productivity. But productivity without trust is unsustainable.
The path forward is coordinated, risk-based and collaborative: clear principles, specific guard rails, structured governance, and open dialogue between regulators and industry.
The conversation is evolving, and it needs to continue.
Blog post written by Alan Blanchard, TSO Business Development Director.