The White House on Thursday announced a significant step toward integrating artificial intelligence (AI) across federal agencies, mandating the adoption of “concrete safeguards” by December 1. The move aims to protect Americans’ rights and ensure safety as the government expands its use of AI across a range of fields.
The directive, issued by the Office of Management and Budget (OMB), requires federal agencies to monitor, assess, and test AI’s impact on the public. It emphasizes the need to mitigate the risks of algorithmic discrimination and mandates public transparency about how the government uses AI. Agencies must also conduct thorough risk assessments and establish precise operational and governance metrics.
Central to the directive is the requirement that agencies implement robust safeguards whenever they deploy AI in ways that could affect Americans’ rights or safety. This includes making detailed public disclosures so the public knows how and when the government employs AI technologies.
To address concerns over AI systems that pose risks to national security, the economy, public health, or safety, President Joe Biden signed an executive order in October. That order, invoking the Defense Production Act, requires AI developers to disclose the results of safety tests to the U.S. government before public release.
The newly announced safeguards are designed to address public concerns, including provisions allowing air travelers to opt out of Transportation Security Administration facial recognition screening without experiencing delays. Additionally, in federal healthcare settings where AI supports diagnostic decisions, a human must verify the accuracy of the tools’ results.
The rise of generative AI, capable of producing text, images, and videos from open-ended prompts, has sparked both excitement and concern. There are fears that such technology could displace jobs, influence elections, and pose existential threats to humanity.
As part of its comprehensive approach, the White House is requiring agencies to publish inventories of their AI use cases and metrics on AI use, and to release government-owned AI code, models, and data where doing so does not compromise security.
Highlighting the administration’s commitment to responsible AI use, the White House pointed to the Federal Emergency Management Agency’s use of AI to assess hurricane damage, the Centers for Disease Control and Prevention’s use of AI to predict disease spread and detect opioid use, and the Federal Aviation Administration’s use of AI to deconflict air traffic and improve travel efficiency.
To underscore the initiative’s significance, the White House announced plans to hire 100 AI professionals to promote safe AI use and required federal agencies to appoint chief AI officers within 60 days.
This initiative builds on the Biden administration’s January proposal to impose “know your customer” rules on U.S. cloud companies, which would require them to determine whether foreign entities are accessing U.S. data centers to train AI models, reinforcing the government’s stance on safeguarding national security and public welfare in the era of AI.