Chances are good that your organization uses algorithms or artificial intelligence to help make business decisions — and that regulatory efforts targeting these automated decision-making systems, including their potential to produce unintended bias, have caught your attention. In this episode of the Faegre Drinker on Law and Technology Podcast, host Jason G. Weiss sits down with Bennett Borden, Faegre Drinker’s chief data scientist and co-founder of the firm’s artificial intelligence and algorithmic decision-making (AI-X) team, to discuss algorithmic bias and what companies should know about the latest regulatory developments.
The U.S. in the AI Era: the National Security Commission on Artificial Intelligence Releases Report Detailing Policy Recommendations
On March 1, 2021, the National Security Commission on Artificial Intelligence (NSCAI) released its 700-page Final Report (the “Report”), which presents NSCAI’s recommendations for “winning the AI era” (The Report can be accessed here). This Report issues an urgent warning to President Biden and Congress: if the United States fails to significantly accelerate its understanding and use of AI technology, it will face unprecedented threats to its national security and economic stability. Specifically, the Report cautions that the United States “is not organizing or investing to win the technology competition against a committed competitor, nor is it prepared to defend against AI-enabled threats and rapidly adopt AI applications for national security purposes.”
In the Final Report, NSCAI makes a number of detailed policy recommendations “to advance the development of AI, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The Report’s findings and recommendations signal deep concern that the U.S. has underinvested in AI and must now play catch-up to safeguard its future.
New Executive Order on Maintaining American Leadership in Artificial Intelligence
On February 11, 2019, President Trump signed an Executive Order on “Maintaining American Leadership in Artificial Intelligence.” The Executive Order (EO) recognizes that the United States “is the world leader in AI research and development (R&D) and deployment,” and that “[c]ontinued American leadership in AI is of paramount importance. . . .”
FCC Announces its Agenda and Speakers for its AI and Machine Learning Forum
On November 7, the FCC, in a relatively terse Public Notice, announced that it would hold a Forum at its headquarters on November 30 focused on artificial intelligence (AI) and machine learning, at which experts in these fields will discuss the future of the technologies and their implications for the communications marketplace.
The FCC Wades into the Artificial Intelligence (AI), Machine Learning Pool
On November 7, Federal Communications Commission Chairman Ajit Pai issued a Public Notice announcing a first-ever FCC Forum focusing on artificial intelligence (AI) and machine learning. The Forum will convene at FCC headquarters on November 30 and will feature experts in AI and machine learning discussing the future of these technologies and their implications for the communications marketplace.
US FDA Approaches to Artificial Intelligence
Artificial Intelligence (AI) can be employed in a health care setting for a variety of tasks, from managing electronic health records at a hospital, to market research at a benefits management organization, to optimizing manufacturing operations at a pharmaceutical company. The level of regulatory scrutiny of such systems depends on their intended use and associated risks.
In the U.S., one of the key regulatory bodies for medical devices that use AI is the Food and Drug Administration (FDA), particularly its Center for Devices and Radiological Health (CDRH). CDRH has long followed a risk-based approach in its regulatory policies and has officially recognized ISO 14971, “Application of Risk Management to Medical Devices.” That standard is now more than 10 years old and is undergoing revisions, some of which are meant to address challenges posed by AI and other digital tools entering the medical device arena.