NIST risk framework sets standards for what responsible AI looks like

This post first appeared on Federal News Network.

The National Institute of Standards and Technology is rolling out new, voluntary rules of the road for the responsible use of artificial intelligence tools across many U.S. industries.

NIST on Thursday released its long-awaited AI Risk Management Framework (RMF). The framework gives public and private-sector organizations several criteria for maximizing the reliability and trustworthiness of the AI algorithms they develop or deploy.

Congress required NIST to create the AI Risk Management Framework as part of the National Artificial Intelligence Initiative Act of 2020.

The framework is non-binding and not specific to any particular industry. It’s the latest in a series of recent federal policies meant to regulate an emerging technology that’s been rapidly evolving, but fraught with challenges.

NIST Director and Under Secretary of Commerce for Standards and Technology Laurie Locascio said Thursday that the risk management framework reflects how AI algorithms are already driving economic growth and scientific advancements, but can undermine those goals if those systems are left unchecked.

“If we’re not careful — and sometimes even when we are — AI can exacerbate biases and inequalities that already exist in our society,” Locascio said at a launch event for the framework. “The good news is, understanding and managing the risks of AI systems will help to enhance their trustworthiness.”

The framework outlines several criteria for organizations to consider when determining the trustworthiness of AI algorithms. Those criteria include whether the AI algorithm is valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

Those criteria, however, will vary in importance on a case-by-case basis, since the voluntary AI framework is meant to cover a wide range of use cases across multiple industries — including health care, cybersecurity, human resources and transportation.

“Addressing these characteristics individually may not ensure AI system trustworthiness. Tradeoffs are always involved. Not all characteristics apply in every setting, and some will be more or less important in any given setting,” Locascio said.

House Science, Space and Technology Committee Chairman Frank Lucas (R-Okla.) said NIST’s framework will allow the U.S. to better mitigate the risks associated with AI technologies, while staying on top of a global competition to develop breakthroughs in AI.

“By having standards and evaluation methods in place, this framework is going to prove critical to our efforts to stay at the cutting edge of reliable and trustworthy AI technologies. I’m looking forward to seeing how organizations begin to adopt this voluntary guidance,” Lucas said.

Committee Ranking Member Zoe Lofgren (D-Calif.) said the NIST framework recognizes the opportunities of advancing and implementing AI tools, while considering all the ways AI tools can be used against the federal government and critical infrastructure.

“If AI can, with near perfection, diagnose vulnerabilities from a cybersecurity point of view, that means they can launch a cybersecurity attack as well,” Lofgren said. “We need to think about what structures we need to put in place, down the line, to protect from worst-case scenarios — what is not going to be permitted at all.”

Lofgren said the NIST framework helps set guardrails for the adoption of AI tools, while ensuring these tools uphold civil liberties and civil rights.

“AI has taken off. It’s not as if it’s in the future. It’s today … and so as we think about these voluntary standards, I think we need to think about some of the risks that we’re going to have to address, perhaps in a different way,” Lofgren said.

Alondra Nelson, the deputy director of the White House Office of Science and Technology Policy, said the NIST framework represents the latest piece of a bigger picture on federal AI policy.

Nelson said this week’s final report from the National AI Research Resource (NAIRR) Task Force and the Biden administration’s Blueprint for an AI Bill of Rights serve as “complementary frameworks” to the NIST document.

The administration, as part of its AI regulatory agenda, is also directing several agencies to crack down on the discriminatory use of AI.

The Equal Employment Opportunity Commission (EEOC) and the Justice Department issued guidance in May 2022 meant to bar employers and software vendors from using AI hiring tools that may screen out applicants with disabilities.

The Department of Health and Human Services, meanwhile, is taking steps to root out algorithmic bias and discrimination in health care, while the Department of Housing and Urban Development is looking at ways to protect renters and home buyers from automated systems that reinforce housing segregation.

“The work is too big, the technology evolving too quickly, and the potential outcomes are too important for anyone to stay on the sidelines,” Nelson said. “Now is the time for urgent action across all parts of our government and across all parts of society, using every tool at our disposal.”

Deputy Commerce Secretary Don Graves said the voluntary framework will allow organizations to develop and deploy more trustworthy AI, “while managing risks based on our democratic values.”

“It should help to accelerate AI innovation and growth, while advancing, rather than restricting or damaging civil rights, civil liberties and equity,” Graves said.

NIST is asking for feedback from organizations that adopt its framework, specifically whether the document helps them adopt AI tools while minimizing the risk of adverse effects.

“The key question is whether the framework, and its use, will lead to more trustworthy and responsible AI,” Graves said.
