On 1 April 2024, the UK and US signed a memorandum of understanding on the science of AI safety. This partnership is the first of its kind and will see the two countries work together to assess risks and develop safety tests for the most advanced AI models.
Following their announcement of cooperation at the AI Safety Summit at Bletchley Park last November, the UK and US have formally agreed to align their scientific approaches to AI safety testing, with plans to perform at least one joint testing exercise on a publicly accessible model. The partnership takes effect immediately and is intended to tackle the safety risks posed by next-generation AI models. The agreement will facilitate collaboration between the UK AI Safety Institute (formed last November) and the US AI Safety Institute (which is still in its initial stages), including the sharing of vital information and research on the capabilities and risks of AI systems, together with the exchange of expertise through researcher secondments between the institutes.
According to the UK Government press release, the institutes intend to work closely to develop an interoperable programme of work and a common approach to safety research, in order to achieve their shared objectives on AI safety. Specifically, the institutes intend to:
- “develop a shared approach to model evaluations, including the underpinning methodologies, infrastructures and processes
- perform at least one joint testing exercise on a publicly accessible model
- collaborate on AI safety technical research, to advance international scientific knowledge of frontier AI models and to facilitate sociotechnical policy alignment on AI safety and security
- explore personnel exchanges between their respective institutes
- share information with one another across the breadth of their activities, in accordance with national laws and regulations, and contracts.”

The press release adds that the partners “remain committed, individually and jointly, to developing similar collaborations with other countries to promote AI safety and manage frontier AI risks and develop linkages between countries on AI safety” and that, to achieve this, they “intend to work with other governments on international standards for AI safety testing and other standards applicable to the development, deployment, and use of frontier AI models.”
As a memorandum of understanding, the agreement is not a legally binding instrument in the UK and therefore does not require publication by the UK Government. We will provide further details as developments emerge.
The memorandum of understanding has been agreed against the background of rapidly evolving regulatory approaches. To date, the UK has signalled a pro-innovation stance, proposing a non-statutory, cross-sectoral, outcomes-based approach to AI regulation, with five core principles for existing UK regulators to interpret and apply within their sector-based domains: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.
The UK Government recognises that binding requirements will be needed in the future, but takes the view that technological understanding and safety insight should precede legislative oversight.
In the US, in October 2023, the Biden administration issued Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“Executive Order”). This wide-ranging Executive Order includes obligations on AI developers to share safety test results with the US Government before releasing AI systems to the public. For more information, see our update on the Executive Order here.
In the European Union, the long-awaited AI Act was approved by the European Parliament on 13 March 2024 and is expected to be formally adopted by the Council of the European Union in April or May 2024. The AI Act will be the world’s first comprehensive regulation of AI and sets out a framework of obligations for parties across the entire AI supply chain. It takes a risk-based approach, with different degrees of obligation depending on the risk classification of a particular AI system, as well as a prohibition on uses of AI deemed to pose unacceptable risk (such as those which manipulate human behaviour or exploit a person’s vulnerabilities). The AI Act also sets out high-level principles to guide and inform the responsible development and use of AI: (i) human agency and oversight; (ii) technical robustness and safety; (iii) privacy and data governance; (iv) transparency; (v) diversity, non-discrimination and fairness; (vi) societal and environmental well-being; and (vii) accountability. For more information, see our article on the AI Act here.
Against this backdrop, the memorandum of understanding indicates that the UK and the US wish to keep pace with emerging AI models and the nascent global regulation of such models. It remains to be seen how this transatlantic partnership will influence the course of AI regulation in the two countries, and whether it will foster further collaboration within the international community.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.