AI regulation in the UK

What is the UK’s approach to AI regulation?

After years of discussion and debate, significant developments in AI regulation took place in 2024, both in the UK and abroad. In February 2024, the UK government published its response to its earlier white paper, setting out its approach.

Unlike the EU, which has introduced detailed AI legislation, the UK government has avoided new legislation and adopted a cross-sector framework, underpinned by existing law and five core principles:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.

So what happens now?

  • Key sector regulators (such as the FCA, IPO, CMA and the Information Commissioner’s Office) will regulate AI by implementing the principles in their own sectors, applying existing laws and issuing supplementary regulatory guidance.
  • It is expected that targeted legislation will be needed in future to address gaps in the current framework, e.g. the risks posed by complex general-purpose AI (GPAI) systems.
  • Voluntary safety and transparency measures will apply to developers of highly capable AI models to supplement the framework.
  • Organisations must prepare now for increased AI regulatory activity over the coming 12 to 24 months.
  • For businesses with international markets, there will be a need to address regulatory divergence between jurisdictions, for example by complying with the more prescriptive EU AI Act.

Aims of the UK AI framework

The government’s aim is to implement AI regulation using the five core principles by:

  • Leveraging existing regulatory authorities;
  • Establishing a Central Function to carry out risk monitoring and ensure coordination between the different regulators;
  • Supporting innovation by piloting a multi-agency advisory service – the AI and Digital Hub.

Regulators will take a proportionate, context-based approach, applying existing laws and regulations alongside the five core principles. At present, the core principles are not legally binding on regulators, although the government is expected to introduce a statutory duty on regulators to give due consideration to them.

Definition of AI under the UK framework

Although there is no formal definition of AI, the regulatory framework focuses on two key characteristics:

  • Adaptivity – the ability of AI systems to see patterns and make decisions in ways which are not directly envisaged by their human programmers;
  • Autonomy – the capacity of AI systems to take actions and make decisions without the express intent or oversight of a human.

The lack of a formal definition allows UK regulators to interpret these characteristics and create sector-specific definitions. This is good for flexibility, but it raises the risk of divergent interpretations between regulators, leading to uncertainty for businesses.

Regulating advanced AI systems

The final framework adopted by the government now introduces definitions for three categories of the most powerful AI systems, all of which are subject to the core principles:

  • Highly capable general-purpose AI;
  • Highly capable narrow AI;
  • AI agents (agentic AI).

Large Language Models (LLMs) underpin many generative AI systems and are a good example of highly capable GPAI. Leading AI companies developing highly capable GPAI systems had already committed to voluntary safety and transparency measures ahead of the first global AI Safety Summit, held in November 2023.

The government has indicated that legislation is likely to be needed in some form to deal with the most powerful GPAI models, and that it is likely to cover areas such as transparency, data quality, accountability, corporate governance and misuse or unfair bias. The government’s official position is that legislation will only be introduced if existing legal powers and voluntary transparency and risk management codes are shown to be insufficient. It considers it too early to assess the impact of GPAI models and wants to build greater industry consensus on how they should be regulated.

Under the EU AI Act, GPAI developers are legally obliged to provide technical documentation to downstream providers and users, but there is no direct counterpart under the UK framework.

On the related question of copyright, it was not possible to create a code of practice on copyright and AI, as the working party convened by the IPO could not agree on one. Instead, the aim is for AI developers and copyright holders to work together on a voluntary basis to establish how copyright-protected content should be used.

Timeline for implementing the UK AI framework

Key milestones in 2024 and 2025 are:

  • 30 April 2024 – regulators published their high-level strategic approach to AI;
  • Spring 2024 – UK government to establish a steering committee for the Central Function, launch a consultation on an AI risk register and launch the AI and Digital Hub pilot scheme;
  • End of 2024 – UK government to publish an update on developers of highly capable GPAI systems;
  • By April 2025 – regulators to publish more detailed guidance on AI regulation.

Requirements for successful implementation of the UK AI framework

Given the UK government’s decision to rely on a mixture of the five core principles and existing law rather than new AI legislation, successful implementation will depend on factors such as:

  • The effectiveness of the government bodies dealing with AI issues in promoting cooperation between regulators;
  • Consistent interpretation of the core principles by regulators across sectors;
  • Clear regulatory expectations for developers and users of AI.

Action points for developers and users to ensure compliance with the UK AI framework

For those involved in developing or using AI tools, compliance can best be achieved by a combination of:

  • Ensuring that senior management takes AI governance seriously and keeps abreast of new guidance from the UK regulators relevant to their sector;
  • Establishing an AI risk register so that all high-risk issues are identified early and addressed;
  • Making use of the AI and Digital Hub to obtain guidance on legal and regulatory obligations before products are launched;
  • Keeping detailed documentary audit trails explaining how the AI system functions, what data it has been trained on, what testing has occurred, what human monitoring and control has taken place to minimise bias, and what risk and impact assessments have been carried out – those who fail to carry out sufficient assessments are likely to be in danger of breaching one or more of the principles;
  • Effective contracts for developers – as an AI system may not always operate as planned, developers will need to ensure that obligations in relation to performance and outputs are carefully managed through suitable warranties, limits on liability and, where appropriate, disclaimers;
  • Effective contracts for users – users will need to include terms that prohibit an AI system from producing unlawful outputs, infringing IP or breaching data protection law, and that provide suitable remedies, including indemnities, where such breaches occur;
  • Checking that current insurance cover extends to the use of AI and that the level of cover is adequate – at the same time, businesses should ensure that any AI suppliers have sufficient insurance cover in case of future legal claims.

How Moore Barlow can help

Not all commercial relationships run smoothly and not everyone will have the same high business standards as you do. To protect your business, you will need to ensure that you have appropriate commercial contracts in place with suppliers, buyers, sellers and any other organisation that you have a financial relationship with. That’s where we can help.

Contact the Commercial and technology law team if you have any questions.

