Assessing the Legal Risks in AI—And Opportunities for Risk Managers

This post first appeared on Risk Management Monitor.

Last year, Amazon made headlines for developing a human resources hiring tool fueled by machine learning and artificial intelligence. Unfortunately, the tool came to light not as another groundbreaking innovation from the company, but for the notable gender bias it had learned from its training data and amplified in the candidates it highlighted for hiring. As Reuters reported, the models detected patterns in a decade's worth of resumes submitted to the company and the resulting hiring decisions, and because the tech industry is disproportionately male, those decisions skewed toward men. The program, in turn, learned to favor male candidates.

As AI technology draws increasing attention and its applications proliferate, businesses that create or use such technology face a wide range of complex risks, from clear-cut reputation risk to rapidly evolving regulatory risk. At last week’s RIMS NeXtGen Forum 2019, litigators Todd J. Burke and Scarlett Trazo of Gowling WLG pointed to these ethical implications and evolving regulatory requirements as key opportunities for risk managers to get involved at every stage of AI development and deployment.

For example, Burke and Trazo noted that employees who will be interacting with AI will need to be trained to understand its application and outcomes. If AI is deployed improperly, a failure to train the employees involved, and to ensure best practices are followed in good faith, could create legal exposure for the company. Risk managers with technical savvy and a long view will be critical in spotting such liabilities for their employers, and potentially even in helping to shape the responsible use of emerging technology.

To help risk managers assess the risks of AI in application or help guide the process of developing and deploying AI in their enterprises, Burke and Trazo offered the following “Checklist for AI Risk”:

  • Understanding: You should understand what your organization is trying to achieve by implementing AI solutions.
  • Data Integrity and Ownership: Organizations should place an emphasis on the quality of data being used to train AI and determine the ownership of any improvements created by AI.
  • Monitoring Outcomes: You should monitor the outcomes of AI and implement control measures to avoid unintended outcomes.
  • Transparency: Algorithmic decision-making should shift from the “black box” to the “glass box.”
  • Bias and Discrimination: You should be proactive in ensuring the neutrality of outcomes to avoid bias and discrimination (one simple screening test is sketched after this list).
  • Ethical Review and Regulatory Compliance: You should ensure that your use of AI is in line with current and anticipated ethical and regulatory frameworks.
  • Safety and Security: You should ensure that AI is not only safe to use but also secure against cyberattacks. You should develop a contingency plan should AI malfunction or other mishaps occur.
  • Impact on the Workforce: You should determine how the implementation of AI will impact your workforce.
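
On the "Monitoring Outcomes" and "Bias and Discrimination" points, one concrete starting place is to periodically compare a tool's selection rates across demographic groups. The Python sketch below, using hypothetical data and function names of our own, applies the "four-fifths rule," a screening heuristic from the U.S. EEOC's Uniform Guidelines under which a group whose selection rate falls below 80% of the highest group's rate warrants closer review. It is an illustration of the kind of control measure Burke and Trazo describe, not a legal test of discrimination.

```python
# A minimal sketch of an outcome-monitoring check, assuming your organization
# can log each candidate's group and the tool's decision. The four-fifths rule
# is a common screening heuristic, not a determination of liability.

from collections import Counter

def selection_rates(records):
    """Compute the selection rate (selected / applicants) per group."""
    applicants = Counter(group for group, _ in records)
    selected = Counter(group for group, chosen in records if chosen)
    return {group: selected[group] / applicants[group] for group in applicants}

def four_fifths_check(records):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

# Hypothetical decision log: (group, was_selected)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(log))    # {'A': 0.75, 'B': 0.25}
print(four_fifths_check(log))  # {'A': True, 'B': False} -> group B flagged
```

A check like this is only as good as the data behind it and the cadence at which it runs; a flagged group should trigger human review of the model and its training data, not an automatic conclusion either way.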

