In October 2023, the Biden Administration issued an Executive Order on artificial intelligence (AI) that calls for federal agencies to develop new guidance on the safe, secure, and trustworthy use of the technology.
“Artificial Intelligence must be safe and secure. Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use. It also requires addressing AI systems’ most pressing security risks — including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers — while navigating AI’s opacity and complexity.” — White House Executive Order on AI
Much of the order pertains to U.S. government requirements and to specific industries such as healthcare and financial services, as well as critical infrastructure owners and operators. However, it also reflects principles aligned with global regulatory efforts, such as those initiated by the European Union. These principles include:
- Transparency: Highlights new guidance and standards on disclosing to individuals and relevant regulatory agencies when and how AI is being used.
- Fairness: Calls for cabinet and regulatory agencies to set standards and guidance to minimize bias in areas such as hiring, housing, and healthcare.
- Human oversight: Recognizes the importance of human oversight of AI decisions, especially in higher-risk situations where automated decision making may introduce discrimination or unintentional bias.
- Risk management: Introduces new risk management practices specifically for AI, with a particular focus on third-party usage and monitoring. It also calls for updates to the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
- Privacy: Renews the call for federal privacy legislation and identifies specific privacy risks that AI may exacerbate.