Google and Meta criticized European AI regulations this week, warning that the rules could stifle innovation in the region.
Leaders from Meta, along with Spotify, SAP, Ericsson, and Klarna, signed an open letter to Europe, expressing concerns over “inconsistent regulatory decisions.” They argue that interventions by European Data Protection Authorities have created uncertainty about which data can be used for training AI models.
The signatories urge regulators to make clear, consistent, and timely decisions on data rules, in line with GDPR, that permit the use of European data in AI training.
The letter also warns that Europe risks missing out on the latest “open” AI models, which are made freely available, and on “multimodal” models, which handle inputs like text, images, speech, and video.
By limiting innovation in these areas, the letter argues, regulators are “denying Europeans access to technological advances” already available in the U.S., China, and India. Furthermore, without access to European data, AI models won’t accurately reflect local knowledge, culture, or languages.
“We want Europe to thrive in AI research and technology,” the letter concludes. “But inconsistent regulations are making Europe less competitive and innovative, putting it at risk of falling behind in the AI era.”
EU’s AI Regulations: A Balanced Approach That Protects Rights and Fosters Innovation, Says Expert
Some AI policy experts disagree that the E.U.’s AI policies are harmful. Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, shared a different perspective with TechRepublic. He explained that the European approach focuses on civil rights, offering a clear risk classification based on potential harm to consumers.
Ekbia emphasized that the regulations protect citizens’ rights by enforcing strict rules for “high-risk systems.” These include AI used in areas like education, employment, law enforcement, and border control.
He believes the E.U. law doesn’t hinder innovation. Instead, it promotes it through regulatory sandboxes, which provide a controlled environment for developing and testing AI systems. Additionally, the law offers legal clarity, especially for small and medium-sized businesses (SMEs). According to Ekbia, SMEs benefit more from clear regulations than from the absence of them. By creating a level playing field, the E.U. law supports smaller companies in competing effectively.
Google proposes allowing copyrighted data for training commercial models
Google has raised concerns about U.K. laws that prevent training AI models on copyrighted materials.
“If we don’t act now, we risk falling behind,” said Debbie Weinstein, Google’s U.K. managing director, in an interview with The Guardian.
She pointed out that the unresolved copyright issue is hindering development. From Google’s perspective, one solution is to return to the position proposed in 2023, which would have allowed text and data mining (TDM) for commercial use.
TDM, the process of copying copyrighted work, is currently permitted only for non-commercial purposes. In February, the U.K. government dropped plans to extend it to commercial use after backlash from the creative industries.
This week, Google also released a document titled “Unlocking the U.K.’s AI Potential.” It includes several policy recommendations, such as permitting commercial TDM, establishing publicly-funded computational resources, and launching a national AI skills service.