AI Regulations are coming. What do you need to know?

An overview of the current AI regulatory landscape, with a focus on the EU AI Act.

As AI starts driving more and more of our daily lives, regulating these systems has become a necessity. The main driving factor is the central role of data in the development of AI systems, and growing concerns over the privacy and security of that data.

In the US, President Biden has issued an AI Executive Order to establish new standards for AI safety and security. It has drawn strong reactions from both proponents and opponents of such regulation.

The EU has gone one step further with the EU AI Act, on which political agreement was reached in late 2023. It will be the first comprehensive, legally binding regulatory framework for AI. The EU has generally been the most forward-thinking economic bloc in terms of regulating big tech and protecting the fundamental rights of its citizens, so it is worth going through the main highlights of the Act to see where the future of AI is headed.

Defining AI

The first thing the Act does is define Artificial Intelligence (AI):

a software that is developed with one or more of the techniques and approaches listed below and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The listed techniques and approaches include:

  1. Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  2. Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  3. Statistical approaches, Bayesian estimation, search and optimization methods.

What I like about this definition is that it casts AI as broadly as possible, covering traditional symbolic AI, machine learning and any combination of the two. It also aims to be technology-neutral, so that it remains future-proof and covers techniques that have yet to be developed.

A Risk-based Approach

Another notable part of the Act is its acknowledgement that AI systems deployed in different sectors and scenarios carry different levels of risk. Each AI system is therefore categorized into one of four tiers based on its risk, as the list below shows (a toy code sketch of this triage follows the list).

Source: Summary presentation on the EU AI Act by the European Commission
  1. Unacceptable Risk: This tier includes systems that are outright banned because they contravene EU values. Examples include social scoring of citizens by public authorities, real-time biometric identification in publicly accessible spaces by law enforcement, and systems that exploit vulnerable groups such as children or people with disabilities.
  2. High Risk: This tier includes systems deployed in critical areas such as healthcare, recruitment, law enforcement and transportation. AI systems in this category are subject to highly stringent compliance and legal requirements.
  3. Limited Risk: This tier covers the most popular forms of AI at the moment, such as chatbots, generative AI and emotion detection. These systems are not subject to requirements as stringent as those of the tiers above, but they are under transparency obligations to
      1. notify humans that they are interacting with an AI system
      2. notify humans that emotional recognition or biometric categorization is being applied to them
      3. apply labels to deep fakes (unless necessary for the exercise of a fundamental right or freedom or for reasons of public interest)
  4. Minimal or No Risk: This tier consists of systems where the risk to a person's rights and safety is considered negligible. These systems are thus only subject to voluntary codes of conduct for AI with specific transparency requirements.
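
To make the tiers concrete, here is a minimal, hypothetical Python sketch of how a team might triage its own systems against the four tiers. The flags, sector names and rules are my own illustrative assumptions; the Act's actual legal criteria are far more detailed and nuanced.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under the Act
    HIGH = "high"                  # stringent compliance and legal requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# The flags and sector names below are illustrative assumptions,
# not the Act's actual legal criteria.
def classify(use_case: dict) -> RiskTier:
    if use_case.get("social_scoring") or use_case.get("exploits_vulnerable_groups"):
        return RiskTier.UNACCEPTABLE
    if use_case.get("sector") in {"healthcare", "recruitment", "law_enforcement", "transportation"}:
        return RiskTier.HIGH
    if use_case.get("interacts_with_humans"):  # e.g. chatbots, generative AI, emotion detection
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify({"sector": "recruitment"}))        # RiskTier.HIGH
print(classify({"interacts_with_humans": True}))  # RiskTier.LIMITED
```

In practice this categorization would be a legal assessment, not a lookup table; the sketch only mirrors the structure of the tiers.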

I think this tier-based categorization is a good approach that will let companies deploying AI systems know their regulatory obligations clearly.

High Risk Systems

Of the four tiers, high-risk systems are subject to the most rigorous compliance checks and standardization requirements in order to operate in the EU market.

CE Marking

The Act clearly states that all high-risk systems must bear the CE marking to indicate their conformity with the Act. To affix the CE marking to a high-risk AI system, a provider has to undertake the following steps, as summarized in the Commission's presentation:

  1. Develop the high-risk AI system.
  2. Undergo a conformity assessment and comply with the requirements on high-risk AI (for some systems, a notified body is involved).
  3. Register stand-alone AI systems in an EU database.
  4. Sign a declaration of conformity and affix the CE marking; the system can then be placed on the market.

If the system changes substantially during its lifecycle, it must return to the conformity assessment step.

Source: Summary presentation on the EU AI Act by the European Commission

Requirements

The Act also requires providers of high-risk systems to establish and implement risk management processes in light of the intended purpose of the AI system.

These include:

  1. Use high-quality training, validation and testing data (relevant, representative etc.)
  2. Establish documentation and design logging features (traceability & auditability; a logging sketch follows this list)
  3. Ensure an appropriate degree of transparency and provide users with information (on how to use the system)
  4. Ensure human oversight (measures built into the system and/or to be implemented by users)
  5. Ensure robustness, accuracy and cybersecurity
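
The logging requirement in point 2 is the most directly translatable into engineering practice. Below is a minimal Python sketch of structured audit logging for model decisions, assuming a hypothetical credit-scoring model; the field names and helper function are my own illustration, not anything prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit logger for model decisions. The schema below is
# illustrative; the Act asks for traceability, not specific fields.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_audit")

def log_prediction(model_version: str, input_id: str, output: str, confidence: float) -> None:
    """Record each prediction with enough context to reconstruct it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # ties the decision to a specific model build
        "input_id": input_id,            # reference to the stored input, not the raw data
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))

# Hypothetical usage for a credit-scoring system (a high-risk use case):
log_prediction("credit-scorer-v1.3", "application-8842", "approved", 0.91)
```

Append-only storage and retention policies for these records would be part of the same traceability story.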

In Conclusion

Recent advancements in AI have put the impetus on governments to ensure that the safety and fundamental rights of consumers and users of these systems are upheld. AI presents one of the biggest opportunities, and also one of the biggest challenges, of our time. I didn't go deep into regulations in other jurisdictions like China, Canada, the UK, Japan etc. I plan to delve deeper into those as they mature over the coming months.

Any company that has put, or is planning to put, AI-driven systems in front of its users needs to adapt to this rapidly evolving regulatory landscape. I'm excited to be part of this domain, helping to develop systems that are compliant and that enable other companies to do the same.