With the rapid expansion of artificial intelligence, concerns about its associated risks have escalated. MIT’s FutureTech Research Project has responded by launching the AI Risk Repository, a detailed database cataloging over 700 AI-related risks. This initiative aims to demystify the potential hazards posed by AI for various stakeholders, from everyday users to policymakers.
In recent years, the discourse around AI risks has been largely fragmented, with various entities offering piecemeal assessments. MIT’s database stands out by integrating these scattered insights into a unified framework. Unlike prior efforts, which captured only a fraction of the potential risks, the repository aims to provide a broader, more complete picture of AI dangers.
Risk Categories and Sources
The repository categorizes risks into seven main domains: discrimination and toxicity, privacy and security, misinformation, malicious actors and misuse, human-computer interaction, socioeconomic and environmental harms, and AI system safety, failures, and limitations. These domains are further divided into 23 subdomains, covering areas like exposure to toxic content, system security vulnerabilities, and loss of human agency. Financial support for the project comes from organizations such as Open Philanthropy, the National Science Foundation, Accenture, IBM, and MIT.
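To make the two-level structure concrete, the sketch below shows one way the seven domains and a few of the 23 subdomains could be represented in Python. The subdomain-to-domain assignments shown are illustrative assumptions drawn from the examples in this article, not the repository's full taxonomy.

```python
# Minimal sketch of the repository's two-level taxonomy: seven top-level
# domains, each mapping to some of its subdomains. Only a few subdomains are
# filled in here, and their placement is an assumption for illustration.
RISK_TAXONOMY = {
    "Discrimination and toxicity": ["Exposure to toxic content"],
    "Privacy and security": ["System security vulnerabilities"],
    "Misinformation": [],
    "Malicious actors and misuse": [],
    "Human-computer interaction": ["Loss of human agency"],
    "Socioeconomic and environmental harms": [],
    "AI system safety, failures, and limitations": [],
}

def domain_of(subdomain):
    """Return the top-level domain that contains a given subdomain, if any."""
    for domain, subdomains in RISK_TAXONOMY.items():
        if subdomain in subdomains:
            return domain
    return None

print(domain_of("Loss of human agency"))  # -> "Human-computer interaction"
```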
Expert Contributions and Methodology
The compilation of the database involved a systematic review process that incorporated active learning, a machine learning technique, to sift through over 17,000 records. Experts from various fields were then consulted to validate and refine the identified risks. As a result, the repository is more user-friendly and comprehensive than previous efforts, providing a more holistic view of AI-related challenges.
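The article does not detail the screening pipeline, but the core idea of active learning for record screening can be illustrated with a short sketch: train a lightweight classifier on the records screened so far, then ask a human reviewer to label the record the model is least certain about. Everything below, from the toy records to the scikit-learn model, is an assumption for illustration rather than the project's actual method.

```python
# Sketch of uncertainty-sampling active learning for screening records.
# Toy data and model choices are illustrative, not the project's pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical record titles and a handful of seed labels
# (1 = relevant to AI risk, 0 = not relevant), keyed by record index.
records = [
    "Bias and discrimination in automated hiring systems",
    "Privacy leakage from large language model training data",
    "A survey of convolutional architectures for image classification",
    "Misinformation generated at scale by chatbots",
    "Benchmarking GPU throughput for matrix multiplication",
    "Loss of human oversight in autonomous decision systems",
]
labels = {0: 1, 2: 0}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(records)

for _ in range(3):  # a few active-learning rounds
    labeled_idx = list(labels)
    clf = LogisticRegression().fit(X[labeled_idx], [labels[i] for i in labeled_idx])

    # Score the unlabeled records and pick the one the model is least sure about.
    unlabeled_idx = [i for i in range(len(records)) if i not in labels]
    if not unlabeled_idx:
        break
    probs = clf.predict_proba(X[unlabeled_idx])[:, 1]
    query = unlabeled_idx[int(np.argmin(np.abs(probs - 0.5)))]

    print(f"Ask a reviewer to screen: {records[query]!r}")
    # In a real workflow an expert supplies this label; here we stub it in.
    labels[query] = 1  # placeholder label so the loop can continue
```

At scale, the same loop lets experts concentrate on the ambiguous records while the classifier handles the clear-cut ones, which is how a corpus of over 17,000 records becomes tractable for human review.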
“We want to understand how organizations respond to the risks of artificial intelligence,” said Peter Slattery, research lead and visiting researcher at FutureTech.
Neil Thompson, director of FutureTech, stated, “Establishing this gives us a much more unified view.”
The database is publicly accessible, but it is expected to be especially useful to policymakers, risk evaluators, academics, and industry professionals. FutureTech plans to continually update and expand the repository and, in the near future, to quantify AI risks and assess the riskiness of specific tools or models.
The AI Risk Repository represents a significant step toward understanding and mitigating the risks associated with artificial intelligence. By providing a comprehensive, searchable database, MIT’s FutureTech Research Project facilitates a more informed approach to AI governance and risk management. The repository’s ongoing development promises to keep pace with the fast-evolving landscape of AI, ensuring its relevance and utility for various stakeholders.