OMB sets ‘binding requirements’ for agencies to vet AI tools before using them

This post first appeared on Federal News Network. Read the original article.

The Biden administration is calling on federal agencies to step up their use of artificial intelligence tools, but in a way that keeps risks of misuse in check.

The Office of Management and Budget on Thursday released its first governmentwide policy on how agencies should mitigate the risks of AI while harnessing its benefits.

Among its mandates, OMB will require agencies to publicly report on how they’re using AI, the risks involved and how they’re managing those risks.

Senior administration officials told reporters Wednesday that OMB’s guidance will give agency leaders, such as chief AI officers and AI governance boards, the information they need to independently assess their use of AI tools, identify flaws, prevent biased or discriminatory results and suggest improvements.

Vice President Kamala Harris told reporters in a call Wednesday that OMB’s guidance sets up several “binding requirements to promote the safe, secure and responsible use of AI by our federal government.”

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Harris said.

OMB’s guidance gives agencies until Dec. 1, 2024, to implement “concrete safeguards” that protect Americans’ rights or safety when agencies use AI tools.

“These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” OMB wrote in a fact sheet released Thursday.

By putting these safeguards in place, OMB says travelers in airports will be able to opt out of AI facial recognition tools used by the Transportation Security Administration “without any delay or losing their place in line.”

The Biden administration also expects that AI algorithms used in the federal health care system will have a human being overseeing the process to verify the AI algorithm’s results and avoid biased results.

“If the Veterans Administration wants to use AI in VA hospitals, to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses,” Harris said.

A senior administration official said OMB is providing overarching AI guidelines for the entire federal government, “as well as individual guidelines for specific agencies.”

“Each agency is in its own unique place in its technology and innovation journey related to AI. So we will make sure, based on the policy, that we will know how all government agencies are using AI, what steps agencies are taking to mitigate risks. We will be providing direct input on the government’s most useful impacts of AI. And we will make sure, based on the guidance that any member of the public is able to seek remedy when AI potentially leads to misinformation or false decisions about them.”

OMB’s first-of-its-kind guidance covers all federal use of AI, including projects developed internally by federal officials and those purchased from federal contractors.

Under OMB’s policy, agencies that don’t follow these steps “must cease using the AI system,” except in some limited cases where doing so would create an “unacceptable impediment to critical agency operations.”

OMB is requiring agencies to release expanded inventories of their AI use cases every year, including identifying use cases that impact rights or safety, and how the agency is addressing the relevant risks.

Agencies have already identified hundreds of AI use cases on AI.gov.

“The American people have a right to know when and how their government is using AI, that it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI,” Harris said.

OMB will also require agencies to release government-owned AI code, models and data — as long as it doesn’t pose a risk to the public or government operations.

OMB’s guidance requires agencies to designate chief AI officers — although many agencies have already done so since OMB released its draft guidance last May.

Those agency chief AI officers have already met with OMB and other White House officials as part of the recently launched Chief AI Officer Council.

OMB’s guidance also gives agencies until May 27 to establish AI governance boards that will be led by the deputy secretary or an equivalent executive.

The departments of Defense, Veterans Affairs, Housing and Urban Development and State have already created their AI governance boards.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said.

A senior administration official said the OMB guidance expects federal agency leadership, in many cases, such as an AI governance board, to assess whether AI tools adopted by the agency adhere to risk management standards and standards to protect the public.

OMB Director Shalanda Young said the finalized guidance “demonstrates that the federal government is leading by example in its own use of AI.”

“AI presents not only risks, but also a tremendous opportunity to improve public services,” Young said. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services, improve accuracy and expand access to essential public services.”

Young said OMB’s guidance will make it easier for agencies to share and collaborate across government, as well as with industry partners. She said it’ll also “remove unnecessary barriers to the responsible use of AI in government.”

Several agencies are already putting AI tools to work.

The Centers for Disease Control and Prevention is using AI to predict the spread of disease and detect the illicit use of opioids, while the Centers for Medicare & Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.

The Federal Aviation Administration is using AI to help manage air traffic in major metropolitan areas to improve travel time.

OMB’s guidance encourages agencies to “responsibly experiment” with generative AI, with adequate safeguards in place.

The administration notes that many agencies have already started this work, including by using AI chatbots to improve customer experience.

Young said the federal government is on track to hire at least 100 AI professionals into the federal workforce this summer, and will hold a career fair on April 18 to fill AI roles across the federal government.

Biden called for an “AI talent surge” across the government in his executive order last fall.

Later this year, OMB will take action to ensure that agencies’ AI contracts align with OMB policy and protect the rights and safety of the public from AI-related risks.

As federal agencies increasingly adopt AI, Young said agencies must also “not leave the existing federal workforce behind.”

OMB is calling on agencies to adopt the Labor Department’s upcoming principles on mitigating AI’s potential harm to employees. The White House says the Labor Department is leading by example, consulting with federal employees and labor unions both in the development of those principles and its own governance and use of AI.

OMB will be taking further action later this year to address federal procurement of AI, releasing a request for information to collect public input on that work. The public has until April 27 to respond to the RFI.

A senior administration official said OMB, as part of the RFI, is looking for feedback on how to “support a strong and diverse and competitive federal ecosystem of AI vendors,” as well as how to incorporate OMB’s new AI risk management requirements into federal contracts.

