This week I attended the 2026 Responsible AI Forum, hosted by the Digital Regulation Cooperation Forum (DRCF) at County Hall. The event brought together regulators, policymakers and industry experts to discuss one of the most pressing challenges of our time: how to govern AI responsibly while enabling innovation.
Across plenary discussions and breakout sessions, several themes emerged consistently. Together they highlight both the opportunities and the complexity facing regulators, public sector organisations and the wider economy as AI adoption accelerates.
AI governance must be both sectoral and cross-sectoral
AI is fundamentally a horizontal technology. It cuts across industries, markets and public services.
However, regulation has historically evolved in vertical silos: communications, competition, consumer protection and data protection are typically overseen by different regulatory bodies.
Discussions at the forum highlighted the growing need for integrated governance models that recognise this reality. Effective AI oversight requires coordination across regulatory domains, ensuring that rules remain coherent even as technologies operate across multiple sectors simultaneously.
This is precisely the challenge the Digital Regulation Cooperation Forum was created to address, bringing together regulators to coordinate approaches and reduce fragmentation.
Trust is central to AI adoption
Trust emerged repeatedly as the defining factor for AI adoption.
At present, public trust in AI remains fragile. Many people are wary of AI making significant decisions on their behalf, particularly in areas such as financial services, employment or public services.
In contrast, high-trust sectors such as pharmaceuticals or aviation demonstrate that new technologies can be widely accepted when governance frameworks are robust and transparent.
For AI systems, trust will depend on clear accountability, transparent processes and mechanisms that allow individuals to challenge decisions.
Questions raised during the forum included:
- How can AI decisions be contested?
- Should contestability become a regulated right?
- How can users understand when to trust an AI system and when verification is required?
These questions highlight the need for governance frameworks that place transparency and accountability at their core.
Human oversight remains essential
The concept of “human in the loop” was discussed throughout the forum, but with an important nuance.
Human oversight is not static. As AI systems evolve and organisations gain experience, the point at which human expertise adds the most value can shift.
In some contexts, the human role may move from direct oversight towards supervision, auditing or governance. In others, expert input may remain critical.
One panellist described this more accurately as an “expert in the loop” model, where domain expertise remains embedded in AI workflows.
This has important implications for organisations deploying AI technologies. Systems must be designed to integrate human judgement effectively, rather than attempting to remove it entirely.
Designing AI systems with safety in mind
Another recurring theme was the need to embed safety and governance into AI systems from the outset.
This “safety by design” approach recognises that harms from AI systems can extend beyond traditional consumer protection frameworks.
For example, a product such as smart glasses may be safe for the consumer using them, but could still cause harm to others if data is captured or processed without consent.
Forum participants suggested that regulators may need to move towards identifying categories of hazards rather than focusing solely on individual instances of harm.
Potential areas of concern include:
- misinformation
- fraud
- manipulation of users
- misuse of data
- poor system design
A systemic understanding of these hazards may help regulators develop more effective oversight mechanisms.
AI adoption in the public sector
A session on public sector AI adoption provided practical insights into how organisations are experimenting with the technology.
Representatives from Citizens Advice described building two AI agents using modern developer tools. Importantly, the development process involved subject matter experts and end users directly.
This collaborative approach increased trust in the final product and ensured that the tools addressed real operational needs.
Interestingly, the panel generally favoured a “build rather than buy” approach for certain use cases. Developing targeted internal tools can sometimes be more efficient than purchasing commercial products, particularly where organisations require highly specialised functionality.
However, this approach raises its own questions:
- How do organisations determine whether they are using the right tools?
- How do they ensure governance and oversight remain robust as AI capabilities expand?
The role of regulators
Across the discussions, there was broad agreement that regulation should focus on outcomes rather than specific technologies.
AI systems will continue to evolve rapidly. Attempting to regulate individual technologies may prove ineffective if regulatory frameworks cannot adapt quickly enough.
Instead, principles-based approaches may offer greater resilience.
The UK’s regulatory culture, which often emphasises principles and organisational responsibility, was highlighted as a potential advantage in this context.
However, this approach still requires strong coordination between regulators to ensure clarity for organisations navigating multiple regulatory frameworks.
Turning governance into practical systems
Many of the challenges discussed at the forum ultimately come down to how organisations create and manage authoritative information.
Regulation, guidance, standards and policies must be clear, accessible and consistently maintained if they are to support effective AI governance.
This is where infrastructure for managing and publishing regulatory information becomes critical.
At TSO, we work with regulators and public sector organisations to transform complex regulatory content into structured, authoritative information that can be published and maintained efficiently across multiple formats.
This includes:
- structured authoring environments that improve accuracy and consistency
- digital publishing platforms that make regulatory information easier to access and understand
- tools that support compliance, learning and engagement with regulatory content
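
To make the idea of structured, single-source content concrete, here is a minimal, hypothetical sketch in Python. The `Clause` class and rendering functions are illustrative assumptions for this post, not TSO's actual tooling; the point is simply that content authored once in a structured form can be rendered consistently into multiple formats.

```python
# Minimal, hypothetical sketch of single-source structured publishing.
# The names below (Clause, to_html, to_plain_text) are illustrative only.
from dataclasses import dataclass
from datetime import date


@dataclass
class Clause:
    """One addressable unit of regulatory content, authored once."""
    ref: str        # stable identifier, e.g. "REG-4.2"
    heading: str
    body: str
    version: int
    effective: date


def to_html(clause: Clause) -> str:
    """Render the same source content as HTML for a web publishing platform."""
    return (
        f'<section id="{clause.ref}" data-version="{clause.version}">\n'
        f"  <h2>{clause.ref} {clause.heading}</h2>\n"
        f"  <p>{clause.body}</p>\n"
        f"  <footer>Effective {clause.effective.isoformat()}</footer>\n"
        f"</section>"
    )


def to_plain_text(clause: Clause) -> str:
    """Render the same source content as plain text, e.g. for print or indexing."""
    return (
        f"{clause.ref} {clause.heading} (v{clause.version}, "
        f"effective {clause.effective.isoformat()})\n{clause.body}"
    )


if __name__ == "__main__":
    clause = Clause(
        ref="REG-4.2",
        heading="Contestability of automated decisions",
        body="A user affected by an automated decision must be offered a route to challenge it.",
        version=3,
        effective=date(2026, 4, 1),
    )
    print(to_html(clause))
    print()
    print(to_plain_text(clause))
```

Because each clause carries a stable reference, a version and an effective date, downstream systems, including AI ones, can cite and verify exactly which provision they are drawing on.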
As AI systems increasingly rely on authoritative information sources, the quality, structure and accessibility of regulatory content become even more important.
Responsible AI does not depend solely on algorithms or models. It also depends on the information ecosystems that support them.
Looking ahead
The forum highlighted both the promise and the complexity of AI governance.
AI has the potential to unlock innovation and economic growth. But realising that potential requires systems that people trust, governance frameworks that adapt to new risks and organisations that can manage authoritative information effectively.
As AI continues to evolve, collaboration between regulators, technology providers and organisations will be essential to ensure that innovation and accountability move forward together.
Blog post written by Alan Blanchard, TSO Business Development Director.