The promise and peril of AI for federal contractors

This post first appeared on Federal News Network. Read the original article.

There is a lot for contractors to unpack in the lengthy Executive Order 14110, titled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It is one of the longest EOs I have encountered and prescribes dozens of tasks for agencies to undertake. The order imposes little direct change on contractors; instead, it outlines a path for the adoption of AI by agency customers that contractors should seek to understand and leverage, including substantial support for research and other efforts to spur agency adoption of AI.

The EO and the subsequent guidance being issued show promise for new acquisition opportunities and investments across the federal government. This starts with the requirement for each covered agency to appoint a chief artificial intelligence officer (CAIO) to advise agency leadership and help guide decisions about AI investment. Each agency is also required to create an AI advisory board composed of senior officials to guide investment decisions and use case selection. These roles give contractors new points of contact in customer agencies to better understand their plans for AI adoption.

Several agencies with specific tasks in the EO have already released their work products, and some of them deserve additional attention from contractors. The General Services Administration is charged with facilitating access to governmentwide acquisition contracts for AI services and products and should be expected to leverage its AI Center of Excellence for agency acquisition needs. The center released an AI Guide for Government to help agencies understand the technology and how to leverage their investments. The chief data and AI officer at the Defense Department also updated the Data, Analytics and Artificial Intelligence Adoption Strategy to provide a roadmap for adoption within the services and the department. The National Institute of Standards and Technology issued a video outlining its role in implementing the EO by developing standards and software development frameworks, and the Federal Risk and Authorization Management Program (FedRAMP) office is working to prioritize cloud-based AI products in the authorization pathway.

Since AI is software-based, contractors should also understand how customer agencies will assess risk in acquisition. These risk assessment requirements apply only to safety- or rights-impacting offerings using AI, not to widely available AI-enabled capabilities, like spellchecking text. The EO directs agency CIOs to leverage existing risk assessment mechanisms where possible, so contractors should understand those processes and prepare to obtain an authority to operate (ATO) or similar agency approval for their AI-enabled offerings. All these efforts aim to facilitate responsible agency adoption of AI, and contractors should expect their customers to accelerate incorporation of AI-enabled capabilities in their efforts to improve mission outcomes, enhance customer experience and drive government efficiency.

Another software-related consideration, and the peril in this order, is the intentional alignment of AI-specific risk with broader government efforts to shift liability from software end users to developers and their distributors and resellers. Earlier this year, the national cyber director announced that the government would begin to hold software developers and vendors accountable for using secure software development methodologies and practices in their offerings. To implement this intent, the Cybersecurity and Infrastructure Security Agency is finalizing a secure software development attestation form that will require the CEO or chief operating officer of any company offering software to the government to attest that their products are developed using NIST guidance for secure software development. The AI order and the subsequent guidance and instruction make clear that software enabling or operating AI goods and services must adhere to these requirements. CISA has even taken this concept global with the release of updated “secure-by-design” guidance co-signed by 17 other nations.

Contractors should be pleased that the government has begun to outline a path forward for agency adoption and use of AI-enabled products and services. In pursuing the new and enhanced opportunities that will follow, however, they should not overlook the new risks and liabilities that AI will bring to their business with the U.S. government.

Trey Hodgkins has worked with information and communication technologies product manufacturers and services providers to the public sector market for over two decades at tech trades associations, including ITAA, TechAmerica and ITI. He is now part of the team of government procurement experts at Phoenix Strategies and can be reached at trey@phoenixstrategies.co.
