EU AI Act: The June 2023 Amendments

The Game Changer: Unraveling The Recent Amendments to the EU AI Act

Introduction: A Landmark in AI Governance

The amendments adopted by the European Parliament in June 2023 have reshaped the dynamics of AI governance. They have not only raised the bar for AI regulation but also drawn the attention of the global tech community. The Act is the most ambitious and comprehensive regulatory framework on artificial intelligence to date, reflecting the EU's commitment to leading the global conversation on AI.

Breaking Down the Objectives of the EU AI Act

The EU AI Act aims to ensure that AI systems are safe and aligned with fundamental rights and Union values. In addition, it intends to:

  1. Provide legal certainty to stimulate investments and innovation.
  2. Strengthen governance and ensure effective enforcement of laws.
  3. Develop a single market for lawful, safe, and trustworthy AI applications.
  4. Prevent market fragmentation.

The AI Risk Pyramid: Understanding the Categories

One of the Act's most significant features is the classification of AI applications into three tiers based on the risk they pose: unacceptable risk, high risk, and low risk.

Unacceptable Risk

Applications deemed to pose an unacceptable risk, such as government-run social scoring, are banned outright.

High-Risk

High-risk applications, such as resume screening tools, must comply with specific legal requirements.

Low-Risk

Applications that are neither banned nor high-risk fall into the low-risk category and face minimal regulation.

Generative AI Systems Under the Microscope

With the surge in deepfakes and AI-generated content, the June 2023 amendments introduce stricter requirements for generative AI systems such as ChatGPT. These systems must:

  1. Disclose that the content is AI-generated.
  2. Ensure safeguards against generating illegal content.
  3. Publish detailed summaries of the copyrighted data used for training.

Facial Recognition: The Double-Edged Sword

Despite a general ban on real-time facial recognition in public spaces, exceptions exist. Police may, for instance, analyse footage after the fact (with a delay) or use the technology to search for missing children. This dual stance reflects the complex nature of the technology.

The Rigid Nature of the Act

Critics argue that the Act's rigid structure can be problematic: if an AI application later proves dangerous, the law provides no mechanism to label it "high-risk" quickly.

The Economic Consequences: A Tightrope for Businesses

The Act comes with a hefty price tag for businesses. The Center for Data Innovation suggests that the EU AI Act will cost approximately €31 billion over five years, potentially reducing AI investments by almost 20%.

What This Means for AI Globally

The EU’s pioneering step is likely to inspire other countries to follow suit. This sets the stage for a domino effect, with countries like the United States and those in Asia potentially developing their own regulatory frameworks.

Drawing Parallels: The EU AI Act vs. GDPR

Just as the GDPR reshaped data protection laws globally, the EU AI Act aims to become the benchmark for AI regulation. But unlike the GDPR, the AI Act targets a far more dynamic and rapidly evolving technology, making it a trickier landscape to govern.

Conclusion: A Giant Leap, But Is It Enough?

The recent amendments to the EU AI Act signify a momentous development in the legal landscape of AI. Whether its rigid risk categories and compliance costs can keep pace with such a fast-moving technology remains the open question the Act must now answer.
