
MIT researchers release robust AI governance tool to define, test and manage AI risks

AI-related risks are a concern for policymakers, researchers, and the general public. While extensive research has identified and categorized these risks, there is no unified framework that ensures consistent terminology and clarity. This lack of standardization makes it difficult for organizations to develop thorough risk mitigation strategies and for policymakers to enforce effective regulations, and the resulting differences in AI risk classification hinder efforts to integrate research, assess threats, and build the shared understanding needed for robust AI governance and regulation.

Researchers from MIT and the University of Queensland have developed an AI Risk Repository to address the need for a unified framework for AI risks. This repository aggregates 777 risks from 43 taxonomies into an accessible, customizable, and updatable online database. The repository is organized into two taxonomies: a high-level causal taxonomy that classifies risks according to their causes, and a mid-level domain taxonomy that categorizes risks into seven main domains and 23 subdomains. This resource provides a comprehensive, coordinated, and evolving framework to better understand and manage the various risks posed by AI systems.
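To make the two-taxonomy structure concrete, here is a minimal sketch of how a single repository entry could be modeled in code. The causal fields (entity, intent, timing) and the domain/subdomain split follow the description above; the class name, field names, and example values are illustrative assumptions, not the repository's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One risk extracted from a source taxonomy (hypothetical schema)."""
    description: str   # the risk as stated in the source document
    source: str        # the taxonomy/paper it was drawn from
    # Causal taxonomy: who or what causes the risk, whether it is
    # intentional, and whether it arises before or after deployment
    entity: str        # e.g. "Human" or "AI"
    intent: str        # e.g. "Intentional" or "Unintentional"
    timing: str        # e.g. "Pre-deployment" or "Post-deployment"
    # Domain taxonomy: one of seven domains and 23 subdomains
    domain: str        # e.g. "Privacy & security" (assumed label)
    subdomain: str     # e.g. "Compromise of privacy" (assumed label)

# Example entry; the values are invented for illustration only.
example = RiskEntry(
    description="Model leaks personal data present in its training set",
    source="Example taxonomy (2023)",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="Privacy & security",
    subdomain="Compromise of privacy",
)
```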

To build the AI risk database, the researchers conducted a comprehensive search combining systematic literature reviews, forward and backward citation tracking, and expert consultations. Two taxonomies were developed: a causal taxonomy, which categorizes risks by the responsible entity, intent, and timing, and a domain taxonomy, which groups risks into specific domains. The working definition of AI risk, aligned with the Society for Risk Analysis, covers potential negative consequences of AI development or deployment. The search strategy involved a systematic, expert-assisted literature review, followed by data extraction and synthesis using a best-fit framework approach to refine the taxonomies and capture all identified risks.

The study identified AI risk frameworks by searching academic databases, consulting experts, and tracking forward and backward citations, resulting in an AI risk database of 777 risks drawn from 43 documents. The risks were categorized using a “best-fit framework synthesis” method, producing two taxonomies: a causal taxonomy that classifies risks by entity, intent, and timing, and a domain taxonomy that groups risks into seven key areas. These taxonomies can be used to filter and analyze specific AI risks, helping policymakers, auditors, academics, and industry experts.
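The kind of filtering the taxonomies enable might look like the following sketch, assuming the repository has been exported to a CSV whose columns mirror the two taxonomies; the file name and column labels are hypothetical.

```python
import pandas as pd

# Load a hypothetical CSV export of the AI Risk Repository.
risks = pd.read_csv("ai_risk_repository.csv")

# Causal-taxonomy filter: unintentional risks that emerge after deployment.
post_deployment = risks[
    (risks["intent"] == "Unintentional")
    & (risks["timing"] == "Post-deployment")
]

# Domain-taxonomy filter: risks in one domain, counted by subdomain.
privacy_risks = risks[risks["domain"] == "Privacy & security"]
print(privacy_risks.groupby("subdomain").size().sort_values(ascending=False))
```

A stakeholder could combine both filters, for example to list only post-deployment privacy risks relevant to an audit.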

The review screened 17,288 articles and selected 43 relevant documents focusing on AI risks, including peer-reviewed articles, preprints, conference papers, and reports, mostly published after 2020. The study found divergent definitions and frameworks for AI risks, highlighting the need for more standardized approaches. The two taxonomies, causal and domain, were used to categorize the identified risks, surfacing issues related to the safety of AI systems, socioeconomic impacts, and ethical concerns such as privacy and discrimination. The results offer valuable insights for policymakers, auditors, and researchers and provide a structured basis for understanding and mitigating AI risks.

The study provides extensive resources, including a website and database, to help stakeholders understand and address AI-related risks. It offers a basis for discussion, research, and policy development without emphasizing the importance of any particular risk. The AI Risk Database categorizes risks under the high-level causal taxonomy and the mid-level domain taxonomy, supporting targeted mitigation actions. The repository is comprehensive and customizable, and is designed to support ongoing research and debate.


Check out the Paper and details. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. If you like our work, you will love our newsletter.

Don’t forget to join our 48k+ ML SubReddit

Find upcoming AI webinars here



Sana Hassan, a consulting intern at Marktechpost and a double major student at IIT Madras, is passionate about using technology and AI to solve real-world challenges. With his keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-world solutions.