AI Regulation Review: AI Legislation in the US

Artificial intelligence (AI) is rapidly transforming our world, raising complex questions about how it is developed and used. As the capabilities of AI systems continue to grow, the need for a comprehensive legislative framework to govern their safe and responsible implementation becomes increasingly pressing.

Harnessing the potential of AI presents a remarkable opportunity for the United States to enhance its governmental services. Across various sectors, including healthcare, public transportation, environmental management, and benefits distribution, the federal government is actively employing AI to better cater to the needs of the public. Moreover, stringent regulations and protocols are being implemented to guarantee the safe and ethical use of AI, prioritizing the protection of individuals’ rights and well-being.

This review examines the current state of AI legislation in the US, outlining proposed and implemented regulations, exploring their potential impact on various stakeholders, and looking at the ongoing efforts to navigate the challenges and opportunities presented by this transformative technology. Let’s delve into it!


Defining the essence of AI:

Creating clear and concise definitions of “AI” and its various subcategories is crucial for effective legislation. This can be challenging due to:

  • The Dynamic Nature of AI: The technology is constantly evolving, making it difficult to establish a static definition.
  • Varying Interpretations: Different stakeholders may have different understandings of what constitutes “AI,” leading to confusion and potential loopholes in legislation.

A flexible and adaptable definition that accurately reflects the evolving nature of AI while ensuring clarity and comprehensiveness is necessary.


The current landscape:

The United States currently lacks a single, overarching law governing AI comparable to the EU AI Act. Instead, the landscape is characterized by:

  • Sector-specific regulations: Existing regulations focus on specific sectors like healthcare, finance, and transportation, addressing issues like bias in algorithms and data protection.
  • Agency guidelines: The National Institute of Standards and Technology (NIST) has developed the “Artificial Intelligence Risk Management Framework” (AI RMF) to assist organizations in assessing and mitigating risks associated with AI systems.
  • Executive orders: The Biden administration has issued executive orders promoting responsible AI development and directing federal agencies to consider AI ethics in their decision-making.


Understanding the importance of AI legislation in the United States

The United States currently lacks comprehensive federal legislation specifically regulating AI. However, the landscape is evolving rapidly, with several key aspects to consider:

  1. Executive order:

In October 2023, President Biden issued Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This order emphasizes the need for responsible AI development, focusing on areas like safety, security, bias, and accountability. While not directly imposing regulations, it outlines guiding principles and directs federal agencies to develop specific plans for their responsible use of AI.

  2. State-level initiatives:

Several states have been more proactive in introducing and enacting AI-related legislation. As of December 2023, over 25 states have considered bills related to AI, and many have adopted resolutions or specific regulations. Examples include:

  • New York City’s Local Law 144 requires bias audits for AI-powered hiring tools.
  • California’s Algorithmic Accountability Bill (AB 2273) focuses on the right to explain AI-based decisions affecting individuals.
  • Maryland’s AI and Algorithmic Fairness Act establishes a commission to study and recommend policies on responsible AI development.

  3. Ongoing discussions:
  • The National Institute of Standards and Technology (NIST) is actively involved in developing standards for trustworthy AI systems, engaging with stakeholders across the public and private sectors.
  • Congress continues to debate and consider introducing various bills addressing specific AI-related concerns, such as deepfakes, autonomous weapons, and facial recognition usage.

While the U.S. lacks a single, overarching piece of AI legislation, it’s important to understand that several efforts are underway at different levels. These efforts aim to address the various concerns surrounding AI and pave the way for its responsible development and use.


Legislative proposals:

As society navigates the complexities of integrating artificial intelligence (AI) into various facets of daily life, policymakers are grappling with the need for robust governance frameworks. Several proposals currently under consideration by Congress aim to address various aspects of AI governance, with a focus on transparency, accountability, bias mitigation, and responsible development. These proposals include:

1- The Algorithmic Accountability Act

Spearheaded by a coalition of lawmakers, this bill represents a pivotal step in establishing transparency and accountability for algorithmic decision-making systems, particularly those employed by the government. The Act emphasizes the necessity for organizations to understand the inner workings of the algorithms they utilize and to mitigate any potential biases or discriminatory outcomes. By mandating transparency and accountability measures, such as audits and impact assessments, the Act aims to foster public trust in AI technologies while ensuring that they operate fairly and equitably.

2- The Algorithmic Justice League Act

Recognizing the pervasive nature of algorithmic bias and discrimination in critical areas such as employment, housing, and criminal justice, this proposal seeks to enact targeted measures to address these issues. By leveraging insights from diverse stakeholders, including civil rights organizations and technologists, the Act aims to develop comprehensive strategies for identifying and mitigating algorithmic bias. Additionally, it calls for the establishment of mechanisms to hold entities accountable for the adverse impacts of biased algorithms, thereby safeguarding against systemic injustices perpetuated by AI-driven systems.

3- The National AI Commission Act

In response to the growing recognition of AI’s transformative potential and its associated risks, this bill advocates for the creation of a dedicated commission tasked with studying the multifaceted implications of AI. By bringing together experts from academia, industry, and government, the commission would undertake a rigorous examination of AI’s societal, economic, and ethical dimensions.

Through comprehensive research and analysis, the commission would generate actionable recommendations aimed at guiding the responsible development, deployment, and regulation of AI technologies. By fostering collaboration and knowledge-sharing, the National AI Commission Act seeks to empower policymakers with the insights needed to navigate the complexities of the AI landscape and to ensure that AI innovation aligns with broader societal goals and values.

Collectively, these legislative proposals reflect a concerted effort to address the pressing challenges and opportunities associated with AI governance. By enacting thoughtful and forward-looking policies, policymakers can lay the groundwork for a future in which AI technologies contribute to inclusive growth, equitable outcomes, and the advancement of human welfare.


Deep dive into US AI legislation

The US approach to AI legislation differs from the EU’s centralized AI Act. It focuses on developing principles and frameworks alongside targeted legislation for specific issues. Let’s explore these further:

1. Broad principles and frameworks:

  • The White House’s “Blueprint for an AI Bill of Rights” (October 2022): This document outlines five key principles that should guide AI development and use:
    • Safe and Effective Systems: AI systems should be tested and monitored so they do not cause harm to people or property.
    • Algorithmic Discrimination Protections: AI systems should not discriminate against individuals or groups.
    • Data Privacy: AI systems should respect individual privacy rights and give people agency over how their data is used.
    • Notice and Explanation: People should know when an automated system is being used and understand how it affects them.
    • Human Alternatives, Consideration, and Fallback: People should be able to opt out in favor of a human alternative where appropriate.

This blueprint doesn’t have legal force, but it signifies the Biden administration’s commitment to ensuring ethical and responsible AI development in the US.

  • The National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF)

NIST released the AI Risk Management Framework (AI RMF 1.0), together with a companion playbook and supporting documents, on January 26, 2023. To aid in the framework’s adoption and international alignment, NIST then launched the Trustworthy and Responsible AI Resource Center on March 30, 2023.

This framework offers voluntary guidance for organizations developing and deploying AI systems. It helps them identify and manage potential risks associated with AI use, like bias, security vulnerabilities, and privacy concerns.

2. Targeted legislation addressing specific issues:

These legislative efforts focus on specific areas where AI raises concerns:

  • Deepfakes:
    • Several bills address the creation and spread of harmful deepfakes. These aim to:
      • Discourage creating deepfakes that could be used to damage someone’s reputation or influence elections.
      • Require labeling or disclaimers for deepfake content.
  • Algorithmic Bias:
    • Legislation targets potential bias in algorithms used for:
      • Hiring decisions
      • Credit scoring
      • Other areas impacting people’s lives
    • These efforts aim to ensure fairness and prevent discrimination in AI-powered decision-making.
  • National Security and Public Safety:
    • Legislation focuses on regulating the use of AI in:
      • Autonomous weapons systems to prevent unintended harm or escalation.
      • Government AI use in surveillance, ensuring transparency and oversight.


Challenges and considerations in navigating AI legislation in the USA

The United States faces several challenges and considerations when it comes to crafting effective legislation for Artificial Intelligence (AI). These complexities arise from the very nature of AI technology and its rapid evolution. Here are some key aspects to consider:


  • Fast-paced Development: AI is a rapidly evolving field with diverse subfields and applications. This makes it difficult for legislation to keep pace, potentially leading to regulations that are either too broad or too narrow, failing to address the specific risks of various AI applications.
  • Technical Complexity: Understanding the intricate workings of AI systems can be challenging for policymakers, potentially leading to poorly defined regulations that miss crucial aspects of AI technology.
  • Balancing Innovation and Control: There’s a concern that overly stringent regulations might stifle innovation in the field of AI. Finding the right balance between fostering responsible development and mitigating potential risks is crucial.
  • Globalized Landscape: AI is a global technology. Establishing effective regulations requires international cooperation to ensure consistency and prevent potential loopholes for companies operating across different regions with varying legal frameworks.
  • Enforcement Challenges: Enforcing regulations effectively can be challenging due to the complexity of AI systems and the potential for companies to find workarounds.


Several approaches can help address these challenges:

  • Risk-based Approach: Focusing regulations on specific risks associated with different AI applications might be more effective than a one-size-fits-all approach.
  • Stakeholder Involvement: Engaging all relevant stakeholders, including developers, researchers, ethicists, and the public, in the legislative process is crucial for creating comprehensive and well-informed regulations.
  • Adaptability and Flexibility: Considering the fast-paced nature of AI, regulations need to be adaptable and flexible enough to accommodate future advancements while maintaining their effectiveness.
  • Transparency and Explainability: Encouraging transparency in AI development and promoting explainable AI systems can help build public trust and facilitate responsible use of the technology.

By carefully considering these challenges and considerations, the United States can navigate the complexities of AI legislation and create a framework that fosters responsible development while mitigating potential risks associated with this powerful technology.


Unlocking the benefits: Advocating for AI legislation in the U.S.

The need for AI legislation in the U.S. stems from the potential risks and ethical concerns surrounding this rapidly developing technology. While AI offers tremendous benefits for various sectors, it also carries the potential for misuse and unintended consequences. Here are some key arguments for AI legislation:

1. Mitigating Risks:

  • Bias and Discrimination: AI algorithms can perpetuate existing societal biases if trained on biased datasets. Legislation can ensure fairness and non-discrimination in areas like hiring, loan approvals, and criminal justice.
  • Privacy and Security: AI systems often rely on vast amounts of personal data, raising concerns about data privacy and security. Legislation can establish safeguards for data collection, storage, and usage.
  • Safety and Reliability: AI systems used in critical areas like healthcare, finance, or transportation require robust safety and security measures. Legislation can set standards and conduct oversight to ensure their safe and reliable operation.

2. Protecting Human Values:

  • Complex Algorithms: Many AI algorithms are complex “black boxes,” making it difficult to understand how they reach decisions. Legislation can promote transparency and explainability, allowing individuals to understand how AI makes decisions about them.
  • Accountability: Assigning responsibility for AI-related harms is crucial. Legislation can establish frameworks for accountability, ensuring that developers, deployers, and users are responsible for their actions.
  • Human Control: AI systems should ultimately serve humanity, not replace it. Legislation can ensure that humans maintain control over critical decision-making processes and prevent AI from becoming autonomous in a way that threatens human values.

3. Fostering Responsible Development:

  • Clear Guidelines: Legislation can provide clear guidelines and standards for responsible AI development and deployment, encouraging innovation while mitigating risks.
  • Public Trust: Implementing safeguards and ethical considerations can build public trust in AI and encourage its wider adoption.
  • Global Leadership: The U.S. can set an example for other nations by creating a comprehensive framework for responsible AI development and use, fostering international cooperation and collaboration.

It’s important to note that the debate surrounding AI legislation is ongoing, and there are valid concerns about potential restrictions stifling innovation. However, the potential benefits of establishing clear guidelines and safeguards for AI development and use make it a crucial conversation for the future.


Guarding democracy and creativity: addressing the threat of deepfakes

With the advancement of AI, there’s a growing concern about “deepfakes” – these are AI-generated videos and images that mimic individuals’ appearances and voices without their consent. This is particularly problematic in areas like elections and creative endeavors. For instance, political campaigns or foreign entities might use AI to create misleading content to sway elections.

Recently, Senate Majority Leader Chuck Schumer highlighted the need for measures to safeguard democracy against such AI manipulations. He pledged to prioritize discussions on this issue in future AI Insight Forums. Already, there are legislative efforts underway:

  • Protect Elections from Deceptive AI Act (S. 2770), spearheaded by Senators Amy Klobuchar, Josh Hawley, Chris Coons, and Susan Collins. This bill aims to ban the spread of misleading AI-generated content in election-related advertisements. It also grants affected candidates the right to request content removal and seek compensation for damages.
  • The Required Exposure of AI-Led (REAL) Political Advertisements Act (S. 1596/H.R. 3044) was introduced by Senators Amy Klobuchar, Cory Booker, Michael Bennet, and Representative Yvette Clarke. This proposal mandates that all political ads containing AI-generated material carry a disclaimer indicating its AI origin.

Lawmakers are also concerned about AI’s impact on art and advertising, especially instances like the unauthorized use of celebrities’ images for product endorsements or AI-generated music using artists’ voices without consent. In response, Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis put forth a discussion draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act.

This draft proposes holding individuals or companies accountable for creating unauthorized digital replicas of performers. It also targets platforms hosting such content if they’re aware it’s unauthorized.


Why is there no single law?

There are several reasons why the US does not have a single law governing AI, unlike the EU’s newly passed AI Act:

  • Decentralized approach: The US regulatory system generally adopts a decentralized approach to emerging technologies, with different agencies overseeing specific aspects based on their existing mandates. This means issues like bias in AI used for lending might fall under the Consumer Financial Protection Bureau, while the National Highway Traffic Safety Administration might address safety in self-driving cars.
  • Legislative gridlock: The US legislative process is known for its complexity and gridlock, making it difficult to pass comprehensive laws, especially on multifaceted topics like AI. This is further complicated by the constantly evolving nature of the technology, making it challenging to write future-proof regulations.
  • Focus on Innovation: Some argue that the US prioritizes innovation and fears that strict regulations might stifle the development and deployment of beneficial AI applications. This is partly due to the strong presence of private companies in the US tech sector compared to the EU.
  • States taking the Initiative: In the absence of a single federal law, several US states have begun to take their own initiatives and propose AI-related legislation. This piecemeal approach addresses specific concerns but can create confusion and inconsistency for companies operating across different states.

It’s important to note that while the US lacks a single law, it’s not entirely without any regulations regarding AI. Existing laws and policies can be applied to specific aspects of AI development and deployment, and the federal government has issued non-binding guidelines on responsible AI development.


The path forward:

The United States is still in the early stages of developing a comprehensive AI governance framework. It’s a dynamic debate with diverse stakeholders, including policymakers, tech companies, civil society organizations, and the public. Ongoing discussions and collaboration are essential to establish a framework that fosters responsible AI development and utilization, ensuring its benefits reach everyone while mitigating potential risks.


Empowering compliance: Navigating AI regulations with Zartis

Zartis is here to help you navigate the world of AI rules and tech. We’ve got teams all over, ready to help you follow the ever-changing AI laws. No matter what field you’re in, we’ve got the know-how to keep you on the right side of the law.

Additionally, our dedicated development teams provide actionable solutions, ready to execute your compliant AI strategy seamlessly. Whether you need data experts, AI specialists, or project managers, we’ve got you covered. We make sure your AI projects follow all the rules, from keeping data safe to meeting industry standards, and if you need help putting your plans into action, our expert teams are ready.

We focus on clarity, accountability, and reducing risks. We’ll help you make the most of AI while staying on the right side of the law. If you’re ready to dive into AI with confidence, contact us right away and team up with Zartis. We’ll tailor our advice and support to fit your needs. Let’s make AI work for you without any legal headaches.


Staying updated:

Since the legislative landscape is constantly evolving, here are some resources to keep you informed about US AI legislation:

  • News Sources: Follow news outlets covering technology policy and legislation in the US, like:
    • The Washington Post
    • The New York Times
    • Politico
    • TechCrunch
  • Government Websites: Monitor websites of relevant government agencies like:
    • The National Institute of Standards and Technology (NIST)
    • The White House (Office of Science and Technology Policy)
    • The Federal Trade Commission (FTC)
By understanding both the broad principles and targeted legislation, you can get a better picture of how the US is addressing the challenges and opportunities of artificial intelligence.
