Bletchley Park AI Safety Summit 2023


On 1 and 2 November 2023, world leaders, politicians, computer scientists and tech executives attended the global AI Safety Summit at Bletchley Park in the UK. Key political attendees included US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General António Guterres, and UK Prime Minister Rishi Sunak. Industry leaders also attended, including Elon Musk, Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, Amazon Web Services CEO Adam Selipsky, and Microsoft president Brad Smith.

Day 1: The Bletchley Declaration

On the first day of the summit, 28 countries and the EU signed the Bletchley Declaration (the “Declaration”). The Declaration establishes an internationally shared understanding of the risks and opportunities of AI and of the need for sustainable technological development to protect human rights and to foster public trust and confidence in AI systems. In addition to the EU, signatories include the UK, the US and, significantly, China. Nevertheless, there are notable absences, most obviously Russia.

Continue reading “Bletchley Park AI Safety Summit 2023”

The UK’s New AI Proposals


On 29 March 2023, the UK Government published its latest proposals on regulating Artificial Intelligence (“AI”). The White Paper follows on from an initial policy paper published in July 2022 (the “2022 Policy Paper”), which we discussed in detail in our previous blog post. The proposals set out in the White Paper have been informed by the feedback received as part of the UK Government’s consultation on the 2022 Policy Paper.

A central theme is that the regulatory framework in the UK must not stifle innovation, but rather harness AI’s ability to drive growth and prosperity, and increase public trust in its use and application.

Continue reading “The UK’s New AI Proposals”

Artificial Intelligence Briefing: NIST Releases AI Risk Management Framework and Playbook


Our latest briefing dives into the public launch of the NIST’s long-awaited AI Risk Management Framework, the EEOC’s new plan to tackle AI-based discrimination in recruitment and hiring, and the New York Department of Financial Services’ endeavor to better understand the potential benefits and risks of AI and machine learning in the life insurance industry.

Continue reading “Artificial Intelligence Briefing: NIST Releases AI Risk Management Framework and Playbook”

Update: AI Regulation in the U.K. — New Government Approach


In October 2022, the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) published its guidance, Software and AI as a Medical Device Change Programme – Roadmap, setting out how it will regulate software and AI medical devices in the U.K. while balancing patient protection with providing certainty to industry.

Background to the Reforms

The MHRA initially announced the Software as a Medical Device (SaMD) and Artificial Intelligence as a Medical Device (AIaMD) Change Programme in September 2021, designed to ensure that regulatory requirements for software and AI are clear and patients are kept safe. This builds on the broader reform of the medical device regulatory framework detailed in the Government response to consultation on the future regulation of medical devices in the United Kingdom, which recently saw its timetable for implementation extended by 12 months to July 2024.

Continue reading “Update: AI Regulation in the U.K. — New Government Approach”

Artificial Intelligence Briefing: FTC Holds Forum on Commercial Surveillance and Data Security


Our latest briefing explores the recent FTC commercial surveillance and data security forum (including discussion on widespread use of AI and algorithms in advertising), California’s inquiry into potentially discriminatory health care algorithms, and the recent California Department of Insurance workshop that could shape future rulemaking regarding the industry’s use of artificial intelligence, machine learning and algorithms.

Continue reading “Artificial Intelligence Briefing: FTC Holds Forum on Commercial Surveillance and Data Security”

NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment


The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.

NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to influence legislation. NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).

The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.

Continue reading “NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment”

©2025 Faegre Drinker Biddle & Reath LLP. All Rights Reserved. Attorney Advertising.