UK signals step change for regulators to strengthen AI leadership

  • Over £100 million to support regulators and advance research and innovation on AI, including hubs in healthcare and chemical discovery
  • Key regulators asked to publish plans by end of April for how they are responding to AI risks and opportunities
  • UK government makes case for introducing future targeted, binding requirements for most advanced general-purpose AI systems

The UK is on course for more agile AI regulation, backing regulators with the skills and tools they need to address the risks and opportunities of AI, as part of the government’s response to the AI Regulation White Paper consultation today (6 February).

It comes as £10 million is announced to prepare and upskill regulators to address the risks and harness the opportunities of this defining technology. The fund will help regulators develop cutting-edge research and practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education. For example, this might include new technical tools for examining AI systems.

Many regulators have already taken action. For example, the Information Commissioner’s Office has updated its guidance on how the UK’s strong data protection laws apply to AI systems that process personal data, including on fairness, and has continued to hold organizations to account, such as through the issuance of enforcement notices. However, the UK government wants to build on this by further equipping regulators for the age of AI as use of the technology ramps up. The UK’s agile regulatory system will allow regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK.

In a drive to increase transparency and provide confidence to British businesses and citizens, key regulators, including Ofcom and the Competition and Markets Authority, have been asked to publish their approach to managing the technology by 30 April. They will set out AI-related risks in their areas, detail their current skills and expertise to address them, and lay out a plan for how they will regulate AI over the coming year.

This forms part of the AI regulation white paper consultation response, published today, which carves out the UK’s own approach to regulation and will ensure it can quickly adapt to emerging issues while avoiding burdens on business that could stifle innovation. This approach to AI regulation will mean the UK can be more agile than competitors, while also leading on AI safety research and evaluation, charting a bold course for the UK to become a leader in safe, responsible AI innovation.

The technology is rapidly developing, and the risks and most appropriate mitigations are still not fully understood. The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective. Instead, the government’s context-based approach means existing regulators are empowered to address AI risks in a targeted way.

The UK government has for the first time, however, set out its initial thinking for future binding requirements which could be introduced for developers building the most advanced AI systems – to ensure they are accountable for making these technologies safe enough.

Secretary of State for Science, Innovation and Technology, Michelle Donelan said:

The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development.

I am personally driven by AI’s potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.

AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.

Meanwhile, nearly £90 million will go towards launching nine new research hubs across the UK and a partnership with the US on responsible AI. The hubs will support British AI expertise in harnessing the technology across areas including healthcare, chemistry, and mathematics.

£2 million of Arts and Humanities Research Council (AHRC) funding is also being announced today, which will support new research projects that will help to define what responsible AI looks like across sectors such as education, policing and the creative industries. These projects are part of the AHRC’s Bridging Responsible AI Divides (BRAID) programme.

£19 million will also go towards 21 projects to develop innovative trusted and responsible AI and machine learning solutions to accelerate deployment of these technologies and drive productivity. This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI Technology Missions Fund, and delivered by the Innovate UK BridgeAI programme.

The government will also launch a steering committee in the spring to support and guide the activities of a formal regulator coordination structure within government.

These measures sit alongside the £100 million invested by the government in the world’s first AI Safety Institute to evaluate the risks of new AI models, and the global leadership demonstrated by hosting the world’s first major summit on AI safety at Bletchley Park in November.

The groundbreaking International Scientific Report on Advanced AI Safety, which was unveiled at the summit, will also help to build a shared evidence-based understanding of frontier AI, while the work of the AI Safety Institute will see the UK working closely with international partners to boost our ability to evaluate and research AI models.

The UK further commits to this approach today with an investment of £9 million through the government’s International Science Partnerships Fund, bringing together researchers and innovators in the UK and the United States to focus on developing safe, responsible, and trustworthy AI.

The government’s response lays out a pro-innovation case for further targeted binding requirements on the small number of organizations that are currently developing highly capable general-purpose AI systems, to ensure that they are accountable for making these technologies safe enough. This would build on steps the UK’s expert regulators are already taking to respond to AI risks and opportunities in their domains.

Hugh Milward, Vice-President, External Affairs, Microsoft UK, said:

The decisions we take now will determine AI’s potential to grow our economy, revolutionize public services and tackle major societal challenges, and we welcome the government’s response to the AI White Paper.

Seizing this opportunity will require responsible and flexible regulation that supports the UK’s global leadership in the era of AI.

Aidan Gomez, Co-Founder and CEO of Cohere, said:

By reaffirming its commitment to an agile, principles- and context-based regulatory approach to keep pace with a rapidly advancing technology, the UK government is emerging as a global leader in AI policy.

The UK is building an AI governance framework that embraces the transformative benefits of AI while being able to address emerging risks.

Lila Ibrahim, Chief Operating Officer, Google DeepMind, said:

I welcome the UK government’s statement on the next steps for AI regulation, and the balance it strikes between supporting innovation and ensuring AI is used safely and responsibly.

The hub and spoke model will help the UK benefit from the domain expertise of regulators, as well as providing clarity to the AI ecosystem – and I’m particularly supportive of the commitment to support regulators with further resources.

AI represents an opportunity to drive progress for humanity, and we look forward to working with the government to ensure that the UK can continue to be a global leader in AI research and set the standard for good regulation.

Tommy Shaffer Shane, AI Policy Advisor at the Center for Long-Term Resilience, said:

We’re pleased to see this update to the government’s thinking on AI regulation, and especially the firm recognition that new legislation will be needed to address the risks posed by rapid developments in highly capable general-purpose systems.

Moving quickly here while thinking carefully about the details will be crucial to balancing innovation and risk mitigation, and to the UK’s international leadership in AI governance more broadly.

We look forward to seeing the government work through this challenge at pace, and to further updates on the approach to new legislation in the coming weeks and months.

Julian David, CEO of techUK, said:

techUK welcomes the government’s commitment to the pro-innovation and pro-safety approach set out in the AI White Paper. We now need to move forward at speed, delivering the additional funding for regulators and getting the Central Function up and running. Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system and assessing the immediate risks.

If we achieve this, the White Paper is well placed to provide the regulatory clarity needed to support innovation and the adoption of AI technologies that promise such vast potential for the UK.

Kate Jones, Chief Executive of the Digital Regulation Cooperation Forum (DRCF), said:

The DRCF member regulators are all keen to maximize the benefits of A.I for individuals, society and the economy, while managing its risks effectively and proportionately.

To that end, we are taking significant steps to implement the White Paper principles, and are collaborating closely on areas of shared interest including our upcoming AI and Digital Hub pilot service for innovators.

John Boumphrey, UK Country Manager of Amazon, said:

Amazon supports the UK’s efforts to establish guardrails for AI, while also allowing for continued innovation. As one of the world’s leading developers and deployers of AI tools and services, trust in our products is one of our core tenets and we welcome the overarching goal of the white paper.

We encourage policymakers to continue pursuing an innovation-friendly and internationally coordinated approach, and we are committed to collaborating with government and industry to support the safe, secure, and responsible development of A.I technology.

Markus Anderljung, Head of Policy, Center for the Governance of AI, said:

The UK’s approach to A.I regulation is evolving in a positive direction: it relies heavily on existing regulators, takes concrete steps to support them, while also investing in identifying and addressing gaps in the regulatory ecosystem.

I am particularly pleased that the response acknowledges the need to address one such gap that has become more apparent since the white paper’s publication: how the most impactful and compute-intensive AI systems are developed and deployed onto the market.

The consultation highlighted strong support for the five cross-sectoral principles that form the foundation of the UK’s approach, which include safety, transparency, fairness and accountability.

The publication of the AI Regulation White Paper last March laid the foundations for the UK’s approach to regulating AI by driving safe, responsible innovation. This common sense, pragmatic approach will now be further strengthened by robust regulatory expertise, allowing people across the country to safely harness the benefits of AI for years to come.

Notes to editors

Read the full government response to the AI White Paper consultation.