The Evolving Landscape of AI Regulations: Insights from the EU, UK, and US

In an era where Artificial Intelligence (AI) is reshaping every facet of our lives, understanding AI regulations and how they differ across jurisdictions is crucial for businesses and policymakers alike.

On November 17th 2023, we hosted the first Zartis AI Summit in Madrid, where leaders from many technology and services companies gathered to share their experiences and expectations in implementing AI across their businesses.

During the Summit, Dr. Florian Ostmann of the Alan Turing Institute, a leading policy contributor in AI today, held a keynote session on “AI Regulatory Landscape”. His insights into the complexities of AI regulations in different jurisdictions offer a valuable perspective for anyone navigating this evolving field.

Here are some of the highlights and key lessons derived from the session, merged with the latest developments in the AI regulatory landscape!


The EU’s Holistic Approach to AI Regulation


The European Union’s approach to AI regulation, particularly through the EU AI Act, is comprehensive. Overall, the Act represents an ambitious attempt to regulate AI, balancing the need to protect public interests and fundamental rights with the desire to foster innovation and economic growth in the AI sector.

The AI Act aims to achieve several key objectives:

  • Risk-Based Regulation of AI Systems

The Act introduces a risk-based approach to AI regulation, categorizing AI systems into different risk levels (unacceptable risk, high-risk, limited risk, and minimal risk). This approach is designed to ensure that stricter regulatory measures are applied to AI applications that pose significant risks to safety, fundamental rights, and public interests.

  • Protection of Fundamental Rights and Safety

One of the primary goals of the EU AI Act is to safeguard fundamental rights, including privacy, non-discrimination, and human dignity, against the potential risks posed by AI systems.

  • Transparency and Accountability

The Act mandates transparency in the use of AI systems, especially those that interact directly with individuals or make decisions that significantly impact individuals’ lives. This includes requirements for clear information about the operation and capabilities of AI systems and accountability measures for decisions made by AI.

  • Establishing Legal Certainty and Market Trust

By setting clear rules and standards for AI, the EU AI Act aims to create a trustworthy environment for AI innovation and deployment. Legal certainty is seen as crucial for fostering investment and innovation in AI technologies, as well as for building public trust in these technologies.


This Act, first proposed in 2021, aims to address AI across the board through a risk-based framework. It categorizes AI systems into risk tiers (unacceptable, high, and low risk), with special transparency requirements for certain limited-risk systems:


1. Unacceptable Risk AI Applications

Certain AI uses are deemed to pose an ‘unacceptable risk’ and are prohibited outright. This includes applications that could infringe on fundamental rights or cause societal harm. A few examples of AI applications that could fall into this category under the EU AI Act:

Live Facial Recognition: There’s significant debate and concern regarding the use of live facial recognition technology, particularly by law enforcement agencies. The EU’s position leans towards a potential prohibition of such technologies due to privacy and human rights concerns.

Manipulative and Exploitative AI Systems: AI applications that could manipulate human behavior or exploit vulnerable individuals are also under scrutiny. These could include systems designed to exploit psychological vulnerabilities or manipulate user behavior in unethical ways.

Social Scoring Systems: The AI Act is likely to prohibit systems similar to social scoring as practiced in some countries, where individuals are scored based on their social behavior and compliance, impacting their access to services and opportunities. The concern here is about the potential for misuse and discrimination.

These examples illustrate the types of AI applications that the EU AI Act aims to prohibit outright, reflecting the emphasis on protecting fundamental rights and preventing societal harm. Each of these cases presents significant ethical and societal implications, prompting the need for strict regulation and, in some cases, outright prohibition.


2. High-Risk AI Applications

High-risk AI applications as defined under the EU AI Act encompass a wide range of systems that could significantly impact safety, fundamental rights, or other critical areas. High-risk applications are subject to stringent regulatory requirements focusing on data governance, transparency, and human oversight, ensuring robustness and unbiased operation.

Some concrete examples of these high-risk AI applications include:

  • AI in Critical Infrastructure Management: This involves AI systems used in the management and operation of essential services like electricity, water supply, and telecommunications.
  • Educational and Vocational Training Applications: AI systems deployed in education and training settings, which could significantly influence the educational paths and career opportunities of individuals.
  • Employment and Worker Management Systems: AI used in the workplace for purposes such as employee monitoring, performance evaluation, or hiring decisions.
  • AI in Law Enforcement: This includes AI applications used in policing and criminal justice, such as predictive policing tools or systems used in evidence evaluation.
  • Healthcare AI Applications: AI systems used in medical devices, diagnostics, and patient care management fall under this category due to their direct impact on individual health outcomes.
  • AI in Migration, Asylum, and Border Control Management: These systems are used in making decisions regarding immigration, asylum applications, and border security.
  • AI for Legal Interpretation and Application: Systems that assist in legal interpretation, decision-making, or application of laws.
  • AI in Transport and Automotive Industry: This includes AI systems used in vehicles, such as advanced driver-assistance systems, as well as broader transportation management systems.
  • AI for Public Service Access and Delivery: Systems used to manage or deliver public services, including those that determine eligibility for welfare benefits.
  • Financial Services AI: AI applications used in the banking and insurance sectors, particularly those involved in risk assessments, credit scoring, or fraud detection.


3. Low-Risk AI Applications

Low-risk AI applications, as outlined in the EU AI Act, typically refer to AI systems that have a minimal risk of adversely impacting users’ rights or safety. Most AI systems fall into this category, facing only minimal regulatory obligations and adhering to voluntary codes of conduct. Examples of low-risk AI applications include:

  • AI Chatbots: These are used for customer service or information purposes. They interact with users in a conversational manner but do not make significant decisions that affect users’ rights or safety.
  • Spam Filters: AI-driven systems in email services that help sort and filter out spam emails. These have a minimal impact on user rights or public safety.
  • AI in Entertainment and Gaming: AI used for creating more engaging and interactive entertainment experiences, including video games and online platforms.
  • Retail Recommendation Systems: AI algorithms that suggest products to customers based on their browsing history or purchase patterns. These systems have a relatively low impact on fundamental rights or safety.


4. Special Transparency Requirements

Under the EU AI Act, AI systems that present limited risk, such as those used to generate or manipulate image, audio, or video content – including deepfakes – are subject to special transparency requirements. These requirements are designed to ensure that users are adequately informed and can make conscious decisions regarding their interactions with such AI-generated content. Here’s a summary of these transparency requirements:

  1. Disclosure Obligation: AI systems like deepfakes must clearly disclose that the content has been generated or manipulated by AI. This ensures that users are aware they are engaging with AI-generated content, which is crucial in preventing deception or misinformation.
  2. Content Identification: The regulation mandates that these AI systems should be designed in a way that makes it easy to recognize and identify AI-generated content. This might involve watermarks or other identifiable features that distinguish AI-generated content from authentic human-generated content.
  3. Informing Users: Companies must inform users when they are interacting with AI, especially in cases where the AI is generating or manipulating content. This includes chatbots and other interactive AI systems where users might otherwise assume they are communicating with a human.
  4. Prohibition of Misleading Practices: Companies are required to ensure that their AI systems do not engage in practices that could mislead users. This includes preventing the use of deepfakes or similar technology to impersonate real individuals in a deceptive manner.
  5. Ethical Considerations: While not strictly a legal requirement, there is an expectation that companies developing and deploying these technologies will consider the ethical implications of their use, particularly concerning privacy, consent, and potential harm.
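
In practice, the disclosure and content-identification obligations above translate into attaching provenance information to generated media. The sketch below shows one minimal way to do this; the field names and label format are our own illustration, not prescribed by the Act or any standard.

```python
import json
from datetime import datetime, timezone

def label_ai_content(content_id: str, generator: str) -> str:
    """Attach a machine-readable AI-disclosure record to a piece of content.

    Hypothetical sketch: the schema is illustrative, not mandated by the Act.
    """
    disclosure = {
        "content_id": content_id,
        "ai_generated": True,    # the explicit disclosure flag
        "generator": generator,  # which system produced or manipulated it
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was generated or manipulated by AI.",
    }
    return json.dumps(disclosure)

record = json.loads(label_ai_content("video-0042", "example-model"))
print(record["notice"])
```

Visible watermarks or standardized provenance metadata, rather than an ad-hoc JSON record, are likelier routes to compliance, but the principle is the same: disclose clearly and in a machine-readable way.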


These transparency requirements reflect the EU’s commitment to safeguarding fundamental rights and the integrity of information in the digital age. By imposing these obligations, the EU AI Act aims to foster an environment where AI technologies like deepfakes can be used responsibly and ethically, without compromising public trust or individual rights.


The UK’s Post-Brexit Regulatory Environment



At the time of writing, the UK’s post-Brexit approach to AI regulation differs significantly from the EU’s. The UK Government published an AI White Paper on 29 March 2023, sharing its plans for regulating the use of AI in the UK. The White Paper is a continuation of the AI Regulation Policy Paper, which introduced the UK Government’s vision for a future “pro-innovation” and “context-specific” AI regulatory regime in the United Kingdom.

The White Paper proposes a markedly different route from the EU’s AI Act. Instead of creating extensive new laws, the UK Government aims to set expectations for AI development and usage while giving more authority to existing regulators such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA). These bodies will provide guidance and oversee AI use in their respective areas.

This strategy delegates the responsibility of AI regulation to existing regulatory bodies, asking them to interpret and apply five broad principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. This approach suggests a more sector-specific regulation, allowing industries to navigate AI challenges with greater flexibility.

Here are some key aspects of the AI regulatory landscape in the UK:


1. Principle-Based Regulation

Instead of detailed legislation, the UK government has been inclined towards setting out broad principles for AI use. These principles are intended to guide AI development and usage in a way that is ethical, safe, and respects privacy and data protection laws.


2. Role of Existing Regulatory Bodies

Rather than establishing new regulatory frameworks specifically for AI, the UK approach leans on existing regulatory bodies. These bodies are expected to interpret and apply the established principles within their respective sectors, such as healthcare, finance, and transportation.


3. National AI Strategy

The UK government has been working on a National AI Strategy, aiming to promote the growth of AI in the country while addressing challenges such as ethical use, data governance, and public trust.


4. Focus on Innovation and Ethics

There is a significant emphasis on balancing innovation in AI with ethical considerations. The UK seeks to foster an environment where AI can thrive without compromising ethical standards and human rights.


5. Collaboration with NGOs and Academia

The UK’s approach involves collaboration with non-governmental organizations (NGOs), academia, and industry experts. This collaboration aims to ensure that a wide range of perspectives are considered in shaping AI policies.


6. Potential Future Legislation

While the UK has not yet enacted a dedicated AI law like the EU AI Act, the dynamic nature of the field means that the UK Government could consider more formal legislation in the future, especially as AI technologies continue to evolve and become more integrated into various sectors.


7. International Influence

The UK’s AI policies and regulations are also influenced by international standards and agreements, as the country seeks to align with global best practices while maintaining its unique approach.


This landscape reflects the UK’s commitment to nurturing AI as a key driver of economic growth and innovation, while also recognizing the importance of ethical considerations and public trust in AI technologies. It’s important to note that this regulatory environment is evolving and remains subject to change.


The US Perspective on AI Regulation


The US, meanwhile, has yet to establish a comprehensive legal framework akin to the EU AI Act. The October 2023 Executive Order on Safe, Secure, and Trustworthy AI is an important development, directing federal agencies to develop guidelines and standards for AI use. This approach indicates a preference for agency guidance, voluntary standards, and industry-led initiatives rather than binding horizontal legislation.

A notable non-legislative development is the NIST AI Risk Management Framework (AI RMF), which is intended for voluntary use and helps organizations incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). On March 30, 2023, NIST launched the Trustworthy and Responsible AI Resource Center, which will facilitate the implementation of, and international alignment with, the AI RMF.
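
The AI RMF organizes risk-management activity into four core functions: Govern, Map, Measure, and Manage. A team adopting it might keep an internal risk register structured along those lines. The sketch below assumes a hypothetical register format of our own; only the four function names come from the framework.

```python
from dataclasses import dataclass

# The four core functions of the NIST AI RMF; everything else here
# (field names, example entries) is our own hypothetical illustration.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class RiskEntry:
    system: str
    function: str  # which AI RMF core function the action falls under
    action: str
    owner: str = "unassigned"

    def __post_init__(self) -> None:
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

register = [
    RiskEntry("loan-scoring-model", "govern", "assign accountability for model decisions"),
    RiskEntry("loan-scoring-model", "map", "document intended use and affected groups"),
    RiskEntry("loan-scoring-model", "measure", "track fairness metrics on each release"),
    RiskEntry("loan-scoring-model", "manage", "define rollback procedure for drift incidents"),
]
print(len(register))  # 4
```

Structuring the register this way makes it easy to audit whether every deployed system has at least one action under each of the four functions.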


Here’s an overview of the AI regulatory landscape in the USA:


Executive Orders and Initiatives

The US federal government has issued executive orders and initiatives to guide AI development and policy. For instance, the American AI Initiative, launched in 2019, was aimed at promoting AI R&D, training the workforce, and setting governance standards. These initiatives typically focus on maintaining American leadership in AI and ensuring that AI development aligns with American values and interests.


Sector-Specific Regulations

Rather than a broad, overarching AI law, the US has seen more sector-specific guidance and regulations. Different federal agencies, such as the Food and Drug Administration (FDA) for healthcare, the Federal Aviation Administration (FAA) for aviation, and the Federal Trade Commission (FTC) for consumer protection, have issued guidelines or policies related to AI in their respective domains.


Ethical Guidelines and Principles

Several US agencies have developed ethical guidelines for AI. For example, the Defense Department has adopted ethical principles for the use of AI in military contexts, emphasizing responsibility, equitability, traceability, reliability, and governability.


State-Level Legislation

In the absence of comprehensive federal legislation, some US states have begun to introduce their own AI regulations. This state-level legislation often addresses specific concerns such as privacy, data protection, and the use of AI in decision-making processes.


Focus on Innovation and Economic Competitiveness

Similar to the UK, the US has emphasized maintaining technological leadership and fostering innovation in AI. Policies tend to encourage the development and adoption of AI technologies across various sectors of the economy.


Public-Private Partnerships

The US government often collaborates with the private sector, academia, and NGOs to advance AI technologies. This collaborative approach aims to leverage the strengths of each sector to drive AI innovation while addressing ethical, safety, and governance challenges.


Global Engagement

The US actively engages in international discussions and agreements related to AI, contributing to global standards and norms.


It’s important to note that the AI regulatory landscape in the US is evolving. As AI technologies continue to advance and their implications become more pronounced, there may be more significant moves towards comprehensive federal legislation or guidelines.


The Critical Role of International Standards

A common theme across all jurisdictions is the reliance on international standards to guide AI regulation. Organizations like ISO, IEEE, and CEN-CENELEC play crucial roles in developing these standards. For instance, the EU AI Act is complemented by standards developed at the European level, which will likely influence global AI practices. This trend towards harmonized standards hints at a future where AI regulations are more aligned globally, facilitating international collaboration and innovation.


Conclusion

Navigating the AI regulatory landscape requires an understanding of the nuances and differences across jurisdictions. The EU’s comprehensive, risk-based approach contrasts with the UK’s principle-based, flexible strategy and the US’s current focus on voluntary standards. As AI continues to evolve, staying informed about these regulatory frameworks becomes increasingly crucial for businesses and policymakers worldwide.

Feel free to reach out to us for further information on AI regulations or how to build compliant and competitive products!

Get in touch.

Find out how our experts can help you create tailored solutions for your software needs.