From the course: Responsible AI: Principles and Practical Applications

Risk and impact assessment models

- The State of Michigan's Unemployment Insurance Agency struggled for years to efficiently review unemployment claims and weed out those that were fraudulent, as this work had become a financial drain on the state. Over a four-year period, the state's auditor general estimated losses of tens of millions of dollars in overpayments and hundreds of millions of dollars in fraud penalties. To streamline its unemployment claims process, the Michigan state government contracted a third-party vendor to develop an AI-enabled system to determine unemployment eligibility, flag fraudulent benefits claims, and automatically intercept income tax refunds to recoup the misallocated funds. The system was rolled out across the state in 2014 and operated for nearly three years before a significant flaw was detected: it had wrongly flagged nearly 40,000 people for unemployment fraud. Many of these individuals' wages and tax refunds were automatically garnished for years, putting them in financial turmoil that they are still struggling to overcome.

What happened in Michigan is not an isolated occurrence. Unfortunately, governments, industry, and other organizations have been quick to turn to AI to solve a problem, often with little or no oversight, and once deployed, the AI system can actually make the problem worse. Implementation of AI risk and impact assessments is gaining momentum in both the public and private sectors. In this video, we'll see how these assessments are being used to mitigate unintended negative consequences of AI systems.

In the United States, the 2020 National AI Initiative Act directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework (AI RMF) that could be voluntarily implemented by developers. The final version of the AI RMF is expected to be published in early 2023. In the European Union, the Artificial Intelligence Act would require developers of high-risk AI systems, such as those used in biometric identification or judicial sentencing, to implement conformity assessments that would aid developers in identifying and mitigating risks before deployment. Companies are also voluntarily implementing AI risk and impact assessments to support the development of responsible and trustworthy AI systems, and as a mechanism to reduce legal risks associated with liability and negligence if an AI system causes harm. Companies may integrate an AI risk and impact assessment, fully or in part, into established risk management processes, such as those already in place to address cybersecurity or data privacy.

So what's in an AI risk and impact assessment? AI risk and impact assessments typically have three parts, which evaluate: first, data and design risks; second, the nature, scale, and likelihood of harmful impacts; and third, the technology and governance features intended to mitigate the identified data and design risks. While most AI risk and impact assessments contain these three parts, they often vary widely in their structure. For example, Canada's Algorithmic Impact Assessment Tool has 81 questions, whereas the Ethics and Algorithms Toolkit rolled out in San Francisco contains a few dozen questions.

When considering data and design risks, AI risk and impact assessments often include questions that address the following: first, the data's representativeness, provenance, and appropriateness for the application area; and second, whether the model design choice (for example, supervised versus unsupervised machine learning) is appropriate for the application area.
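To make that three-part structure concrete, here is a minimal sketch of how an assessment checklist might be represented in code. The class names, question text, and flagging logic are illustrative assumptions of my own, not the contents of any official framework.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Illustrative sketch only: the question text and part names are
    # hypothetical examples of the three-part structure described above.

    @dataclass
    class Question:
        text: str
        answer: Optional[bool] = None  # None = unanswered; False flags a risk

    @dataclass
    class RiskImpactAssessment:
        # Part 1: data and design risks
        data_and_design: List[Question] = field(default_factory=lambda: [
            Question("Is the data representative, of documented provenance, "
                     "and appropriate for the application area?"),
            Question("Is the model design choice (e.g., supervised vs. "
                     "unsupervised learning) appropriate for the application?"),
        ])
        # Part 2: nature, scale, and likelihood of harmful impacts
        harmful_impacts: List[Question] = field(default_factory=list)
        # Part 3: mitigating technology and governance features
        mitigations: List[Question] = field(default_factory=list)

        def flagged(self) -> List[Question]:
            """Return questions answered 'no', i.e., unresolved risks."""
            parts = self.data_and_design + self.harmful_impacts + self.mitigations
            return [q for q in parts if q.answer is False]

    assessment = RiskImpactAssessment()
    assessment.data_and_design[0].answer = False  # e.g., unrepresentative data
    print([q.text for q in assessment.flagged()])

Keeping each part as its own list mirrors how existing assessments group their questions, and makes it easy to report which part of the review surfaced a risk.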
Considering the nature, scale, and likelihood of harmful impacts of an AI system aids the development of effective design and implementation strategies. Questions to ask include what types of impacts the AI system will have (for example, potential security, discriminatory, or environmental impacts), who would be affected by these impacts, and whether those who are affected represent a vulnerable population (a simple scoring sketch follows at the end of this section). AI risk and impact assessments often also consider whether appropriate risk mitigation strategies have been implemented, such as steps taken to identify and address biased data, and continuous monitoring to identify and mitigate specific risks.

I encourage you to consider how you might implement an AI risk and impact assessment within your organization. What risks do you think the use of AI systems poses within your organization? Who would be affected by harmful impacts? What risk mitigation strategies can you implement?

Development and implementation of AI risk and impact assessments are still in their early stages. While these assessments promise to support identification and mitigation of harms before deployment, inadequate assessments (in other words, those that consider risks too narrowly) may inadvertently give a false sense of due diligence that allows non-trivial harms to occur.
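As a companion to the earlier checklist sketch, here is one hypothetical way to combine the nature, scale, and likelihood of an impact into a coarse impact level, loosely in the spirit of scored tools like Canada's Algorithmic Impact Assessment. The scales, weights, and thresholds below are invented for illustration and do not reproduce any real tool's scoring.

    # Hypothetical scoring sketch: the scales, weights, and thresholds are
    # illustrative assumptions, not those of any real assessment tool.

    SEVERITY = {"low": 1, "moderate": 2, "high": 3}       # nature of the harm
    SCALE = {"few": 1, "many": 2, "vulnerable": 3}        # who is affected
    LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}  # chance it occurs

    def impact_level(severity: str, scale: str, likelihood: str) -> str:
        """Combine nature, scale, and likelihood into a coarse impact level."""
        score = SEVERITY[severity] * SCALE[scale] * LIKELIHOOD[likelihood]
        if score >= 18:
            return "Level IV: unacceptable without substantial mitigation"
        if score >= 8:
            return "Level III: high risk; mitigation and monitoring required"
        if score >= 4:
            return "Level II: moderate risk; document mitigations"
        return "Level I: low risk"

    # Example: a moderately severe harm that is likely and affects a
    # vulnerable population scores 2 * 3 * 3 = 18, the highest impact level.
    print(impact_level("moderate", "vulnerable", "likely"))

The multiplication is one simple choice among many; the point is that an assessment needs an explicit, auditable rule for turning answers into a risk tier, rather than leaving that judgment implicit.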
