June 18, 2024

Europe Makes History with the Approval of the EU AI Act

Europe has approved groundbreaking AI legislation, first proposed in April 2021: the Artificial Intelligence (AI) Act (or the EU AI Act). In mid-March of this year, Members of the European Parliament (MEPs) approved the regulation a month earlier than expected, underscoring the urgency and importance of such legislation given the dramatic rise of AI.

The EU AI Act attempts to regulate a burgeoning technology that, until now, has not been subject to specific rules curtailing the risks of its far-reaching and sometimes questionable capabilities.

Europe hopes to reduce uncertainty with the new legislation, which aims to protect individuals, society, the environment, and the rule of law from the negative and potentially devastating impacts of high-risk AI. High-risk AI systems include those used in critical infrastructure, transportation, healthcare, recruitment, law enforcement, justice administration, and education. These systems will be subject to heightened requirements and obligations.

AI systems that use cognitive behavioral manipulation or social scoring fall into the category of unacceptable risk and will be banned under the new law. Predictive policing involving profiling and the use of biometric data to classify people based on religion, race, sexual orientation, or other protected characteristics are also outlawed.

In crafting the AI Act, Europe acknowledged the usefulness of AI and its contribution to continued innovation. The EU AI Act states: “As a pre-requisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.”

A Law in the Making 

Getting to this point, though, was a process. The legislation took its most significant step forward in late 2023, when lengthy negotiations produced several amendments to the initial proposal, including:

  • Bans on intrusive and discriminatory AI use 
  • Expanded classification of high-risk AI to include systems that harm people’s health, safety, fundamental rights, or the environment, as well as AI used to influence voters in political campaigns and recommender systems on social media
  • Obligations for foundation model providers 
  • Exemptions to rules for research activities and AI components provided under open-source licenses to support continued innovation efforts 
  • Exemptions for law enforcement to use biometric identification in certain situations 
  • Citizens’ right to file complaints and receive explanations  

Unified Front Against High-Risk AI 

Europe isn’t alone in advancing AI risk management. According to The Brookings Institution, a Washington, D.C.-based nonprofit that conducts research to improve policy and governance, it’s important that the U.S. and Europe align their approaches to mitigating the potential harms of high-risk AI. The organization argues alignment is necessary to “facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation,” with the latter being most critical: U.S. support is paramount to getting other countries on board.

Even though the U.S. has assessed and outlined AI-associated risks in several federal documents, it currently lacks consistent AI policies, per Brookings. This inconsistent approach to AI risk management can leave citizens vulnerable.

The National Artificial Intelligence Advisory Committee (NAIAC) reported its findings on the potential future risks of AI in an October 2023 document, dividing the risks into two broad categories: near-term and long-term. It noted that near-term risks are already visible in society, unfolding in real time. These risks include:

  • Algorithmic bias and discrimination (especially against marginalized and underrepresented groups) 
  • Misuse of surveillance 
  • Privacy infringements 
  • Potential job displacements in certain sectors  

Some argue that several perceived long-term AI risks may be existential. Because they haven’t yet materialized, their likelihood is difficult to assess. However, these risks might include:

  • Intensified power imbalances and control extending beyond predetermined limits 
  • Superintelligent AI acting in ways contrary to human values
  • Autonomous weapon systems having negative impacts on the nature of warfare  

According to Forbes, despite knowing the risks, the U.S. is struggling to keep pace with AI legislation. As the AI Act gets underway, the U.S. may look to Europe for a more established and cohesive framework.

Regulation vs. Innovation 

However, not everyone views AI regulation the same way. A group of 150 AI professors and PhD scholars signed an open letter stressing the importance of certain exemptions for AI research and open-source projects. Specifically, the letter asks that the AI Act exempt “research use and open-source releases aimed at research use from any requirements.”

The authors argue that the proposed regulation of foundation models is what researchers should find most troublesome. They assert that introducing several requirements for the producers of these large AI models, capable of performing a “wide range of distinctive tasks” such as generating video, images, text, and even computer code, “threatens to kill European frontier AI research.”

The authors believe that European researchers releasing their models would be at a disadvantage compared to non-EU researchers, having to “face a significant additional burden of complying with the requirements.” The letter alleges that these requirements, which include extensive documentation, external testing by certified “red teams,” and possibly imposed fees, stretch beyond the scope of research and “risk violating the freedom of research.”

However, balance is key. Regulation mitigates risks and protects individuals and society, but heavy-handed rules can also inhibit innovation. When the right balance is struck, regulated AI can instill trust and encourage responsible use without hindering progress. With the right market incentives, it can also promote more ethical R&D.

Looking Ahead: How and When Will the EU AI Act Be Enforced? 

Following the European Parliament’s vote of approval, the EU AI Act enters into force 20 days after its publication in the EU Official Journal. Entry into force sets the compliance timelines in motion: the law takes full effect two years later, though some provisions become applicable earlier and others later.
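
To make that phase-in concrete, here is a minimal sketch of how the compliance clock runs. The publication date is hypothetical, and the milestone offsets reflect the commonly reported phase-in schedule; actual dates depend on when publication occurs.

```python
from datetime import date, timedelta

# Hypothetical publication date in the EU Official Journal (illustration only).
publication = date(2024, 7, 1)

# The Act enters into force 20 days after publication.
entry_into_force = publication + timedelta(days=20)

def months_later(start: date, months: int) -> date:
    """Add whole calendar months to a date (day clamped to the 28th for simplicity)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, min(start.day, 28))

# Commonly reported milestones, counted from entry into force.
milestones = {
    "Entry into force": entry_into_force,
    "Prohibitions on unacceptable-risk AI (+6 months)": months_later(entry_into_force, 6),
    "General-purpose AI obligations (+12 months)": months_later(entry_into_force, 12),
    "Full application (+24 months)": months_later(entry_into_force, 24),
}

for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```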

According to Reuters, depending on the violation, companies can face fines ranging from 7.5 million euros or 1.5% of turnover to 35 million euros or 7% of global turnover.
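
As a rough illustration of how those penalty tiers scale with company size, consider the sketch below. It assumes the applicable cap is the higher of the fixed amount and the turnover percentage, as the tiers have been widely reported; the tier names and the sample turnover figure are illustrative, not taken from the Act.

```python
# Illustrative sketch of the tiered fine structure described above.
# Assumption: the applicable cap is the higher of a fixed amount and a
# percentage of global annual turnover, as reported for the violation tiers.

FINE_TIERS = {
    # tier name: (fixed amount in euros, share of global annual turnover)
    "prohibited_practices": (35_000_000, 0.07),  # top of the reported range
    "lesser_violations": (7_500_000, 0.015),     # bottom of the reported range
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# Example: a company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000
print(f"Prohibited practices: up to EUR {max_fine('prohibited_practices', turnover):,.0f}")
print(f"Lesser violations: up to EUR {max_fine('lesser_violations', turnover):,.0f}")
```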

However, organizations can start preparing now to adapt to the new guidelines and avoid potential penalties. Oxford can help you understand the AI currently in use within your industry. The more you know about the AI systems in your organization, the better you can respond to and comply with changing regulations. It’s helpful to categorize your organization’s AI applications using a risk-based approach, identifying the high-risk systems that require stricter compliance measures, as sketched below.
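
Here is a minimal sketch of what such a risk-based inventory might look like. The risk tiers mirror the Act’s broad categories, while the application names and their tier assignments are hypothetical examples.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned under the Act"
    HIGH = "stricter compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical inventory of an organization's AI applications.
ai_inventory = {
    "resume screening model": RiskTier.HIGH,       # recruitment is a high-risk area
    "customer support chatbot": RiskTier.LIMITED,  # must disclose that it is AI
    "spam filter": RiskTier.MINIMAL,
}

# Surface the systems that need stricter compliance measures first.
high_risk = [name for name, tier in ai_inventory.items() if tier is RiskTier.HIGH]
print("Review first:", high_risk)
```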

Developing and implementing robust governance structures is also critical, including establishing internal policies and procedures for data management, transparency, and accountability. Training staff on the requirements of the EU AI Act and fostering a culture of ethical AI development are also essential steps.   

Digital Transformation Practice Director and AI expert at Oxford Alie Doostdar provided feedback on the new AI legislation, stating: “The European Parliament’s approval of the EU AI Act demands that companies ensure their use of AI technologies aligns with the new legislation. Otherwise, organizations can be fined 1.5% to 7% of their revenue per violation, which can negatively impact the company. It is inevitable that the U.S. government will implement and enforce similar regulations following the EU AI Act. Therefore, it is important for companies to establish guiding principles prior to deploying any AI technologies.”

Doostdar continued: “Oxford’s AI Implementation Strategy takes a proactive approach by establishing a Responsible AI Council, the governing body that oversees AI technology deployment and use. Without responsible AI guiding principles, the outcomes of AI use can be extremely negative for both individuals and companies. Responsible AI guiding principles are centered on Privacy, Fairness, Security, Safety, Validity, Reliability, Explainability, Transparency, and Accountability. We employ a risk-based approach to categorize AI applications within organizations, identifying high-risk systems that require stricter compliance measures. Oxford consultants have the knowledge and expertise to create appropriate guiding principles and ensure AI technology deployment delivers positive business outcomes.”

By proactively addressing the AI hot spots within your organization, you can comply with the EU AI Act’s requirements and leverage compliance to enhance your AI practices and build trust with stakeholders.

Quality. Commitment. Trust.

Whether you want to advance your business or your career, Oxford is here to help. With nearly 40 years’ experience, we know that a great partnership is key to success. Start a conversation today.