As the recent release of the ChatGPT artificial intelligence (“AI”) tool demonstrates, AI technologies have the potential to transform society rapidly. However, these technologies also pose unique risks. Because AI risk management is a key component of the responsible development and use of AI systems, the National Institute of Standards and Technology (“NIST”) last week released its voluntary AI Risk Management Framework, a helpful resource for businesses seeking to responsibly incorporate AI into their processes, products, and services.
NIST’s AI Framework is designed to equip AI actors with approaches that increase the trustworthiness of AI systems and to help foster their responsible design, development, deployment, and use. The Framework is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.
The Framework is divided into two parts:
- Part 1 discusses how organizations can frame AI-related risks and describes the Framework’s intended audience. It also analyzes AI risks and trustworthiness, outlining the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
- Part 2 comprises the “Core” of the Framework. It describes four specific functions to help organizations address the risks of AI systems in practice. These functions – GOVERN, MAP, MEASURE, and MANAGE – are broken down further into categories and subcategories. While GOVERN applies to all stages of organizations’ AI risk management processes and procedures, the MAP, MEASURE, and MANAGE functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle.
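For readers curious how the Core’s structure might translate into practice, the sketch below shows one hypothetical way an organization could model the four functions in a simple internal risk register. It is purely illustrative and not part of NIST’s publication: the function names come from the Framework itself, while the category labels, fields, and example entries are assumptions made for demonstration.

```python
from dataclasses import dataclass, field
from enum import Enum


class CoreFunction(Enum):
    """The four functions of the AI RMF Core."""
    GOVERN = "govern"    # applies across all stages of AI risk management
    MAP = "map"          # applied in AI system-specific contexts
    MEASURE = "measure"  # applied at specific stages of the AI lifecycle
    MANAGE = "manage"


@dataclass
class RiskAction:
    """One risk-management activity tracked against a Core function.

    The 'category' and 'lifecycle_stage' values are hypothetical labels
    an organization might choose; the AI RMF defines its own categories
    and subcategories under each function.
    """
    function: CoreFunction
    category: str
    description: str
    lifecycle_stage: str | None = None  # e.g., design, deployment
    owner: str | None = None


@dataclass
class AIRiskRegister:
    """Minimal register of actions mapped to the Core functions."""
    actions: list[RiskAction] = field(default_factory=list)

    def add(self, action: RiskAction) -> None:
        self.actions.append(action)

    def by_function(self, function: CoreFunction) -> list[RiskAction]:
        return [a for a in self.actions if a.function is function]


# Example usage with hypothetical entries
register = AIRiskRegister()
register.add(RiskAction(
    function=CoreFunction.GOVERN,
    category="policies-and-accountability",
    description="Assign executive ownership for AI risk management.",
))
register.add(RiskAction(
    function=CoreFunction.MAP,
    category="context",
    description="Document intended use and affected parties for the model.",
    lifecycle_stage="design",
))
print([a.description for a in register.by_function(CoreFunction.MAP)])
```

A structure along these lines simply mirrors the Framework’s organization; the substance of each entry would come from the categories and subcategories described in the AI RMF and its Playbook.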
Additional resources related to the Framework are included in the AI RMF Playbook, which is available via the NIST AI Framework website.