
White House releases final guidelines for inventorying AI use cases in 2024

The Biden administration has finalized guidelines for federal agencies' 2024 artificial intelligence use case inventories, laying out a more comprehensive process than in previous years and setting a mid-December deadline.

The new document is dated August 14 but was posted publicly on AI.gov on Friday, according to a website tracking tool used by FedScoop. While much of the document is consistent with the draft, the final version includes several changes, such as narrowing the scope of excluded use cases and adding a process for agencies to request extensions for complying with risk management practices.

The guidelines also set a clear deadline for submitting the inventories to the White House Office of Management and Budget: December 16, 2024. Agencies are required to disclose information about their AI applications through a form administered by OMB and then publish a “machine-readable CSV file of all publicly available use cases” on their websites.

The White House did not immediately respond to a request for comment on the document.

The new guidance is the latest iteration of the process by which agencies outside the Department of Defense and the intelligence community collect and publish lists of their planned, new, and existing AI applications. The inventories were first created under a 2020 Trump-era executive order on AI, were later enshrined in law, and have now been expanded by the Biden administration.

In their first years, the annual AI inventories, which must also be made public, suffered from inconsistencies and outright errors. Disclosures also varied widely in the type of information included, their format, and the method of collection.

New procedure

Under the new guidelines, however, the inventories will include more standardized categories and multiple-choice responses for agencies.

For each individually inventoried use case, agencies must provide information such as the name of the application, its intended purpose, its outputs, and whether it is rights-impacting or safety-impacting, as defined in OMB's memo on AI governance.

Agencies must also provide more detailed information for a subset of AI use cases, based on their stage of development. That information includes categories such as whether an application involves personally identifiable information, whether the model is disclosed to the public, and whether custom-developed code is required.

In addition, agencies are not allowed to remove use cases from their inventories once those uses have been retired. Instead, they must mark them as no longer in use, which the Department of Homeland Security has already begun to do.

Other additions to the process this year include a requirement to report aggregate metrics for uses that don't need to be inventoried individually, as well as mechanisms for agencies to adjust the practices they must follow for specific uses. Agencies can waive one or more of the required risk management practices for rights-impacting or safety-impacting uses under OMB's AI memo, or determine that a use case presumed to fall into one of those categories doesn't actually meet the memo's definitions. All of these actions have a public reporting component.

Changes from the draft

Most notably, the final guidance clarifies which uses are excluded from reporting in the inventories. It excludes only two categories: research and development use cases, and AI used as part of a national security system or in the intelligence community.

The draft, by contrast, had also excluded the use of “an AI application to perform a stand-alone task once,” unless that task is performed repeatedly or used for related tasks. It had likewise excluded applications “that are implemented using commercially available or freely available AI products that are intended for government use without modification and are used to perform routine productivity tasks, such as word processing programs and map navigation systems.”

Instead, under the final guidance, agencies must now indicate whether each use case in their inventory “is implemented exclusively with commercially available or freely available AI products.”

The final guidance also gives agencies new leeway to remove information from their public inventories. While the document maintains the prohibition on removing retired use cases, a line has been added stating: “Agencies may remove use cases that no longer meet the inclusion criteria.”

Meanwhile, a new footnote explains in more detail what a “planned” use case is. The document now defines this as a use that has been “initiated by the allocation of funds or resources or by the approval of a formal development, procurement, or acquisition plan.”

The final guidance also removes the draft's references to certain information that would have been included in the public disclosures for aggregate metrics and exemptions, but it still notes that public reporting is required for both.

On aggregate metrics, the OMB memo requires agencies (and the Department of Defense) to report the number of rights-impacting and safety-impacting uses and their compliance with risk management practices. For exemptions, the guidance requires agencies to publish a summary and a justification.


Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, DC, covering government technology. Her reporting includes tracking the government's use of artificial intelligence and monitoring changes in federal contracting. Her broader interests include health, legal, and data issues. Before joining FedScoop, Madison was a reporter at Bloomberg Law, where she covered a variety of topics, including the federal judiciary, health policy, and employee benefits. A West Coaster at heart, Madison is originally from Seattle and a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.