The United States National Institute of Standards and Technology (NIST), under the Department of Commerce, has taken a significant step towards fostering a safe and trustworthy environment for Artificial Intelligence (AI) with the creation of the Artificial Intelligence Safety Institute Consortium ("Consortium"). The Consortium's formation was announced in a notice published by NIST on November 2, 2023, and marks a collaborative effort to establish a new measurement science for identifying scalable, proven techniques and metrics. These metrics are aimed at advancing the development and responsible use of AI, particularly advanced AI systems such as the most capable foundation models.
Consortium Objective and Collaboration
The Consortium's core objective is to address the wide-ranging risks posed by AI technologies and to protect the public while encouraging AI innovation. NIST seeks to draw on the broader community's interests and capabilities to identify proven, scalable, and interoperable measurements and methodologies for the responsible development and use of trustworthy AI.
Key activities outlined for the Consortium include collaborative research and development (R&D), shared projects, and the evaluation of test systems and prototypes. The collective effort responds to the Executive Order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," dated October 30, 2023, which set out a broad range of priorities for AI safety and trust.
Call for Participation and Cooperation
To achieve these objectives, NIST has opened the doors for interested organizations to share their technical expertise.