26 September 2024
By Greg Woolf, AI RegRisk Think Tank
The European Union’s strict approach to artificial intelligence regulation is starting to shake up the tech world, affecting market access and possibly the bottom lines of some major tech companies. While the EU’s AI Act aims to protect consumer rights and data privacy, it is creating headaches for AI-forward companies like Apple, X (formerly Twitter), and Meta.
Can AI be Creepy?
Imagine being approached by a salesperson who uses their phone to scan your face. In a split second, they have access to a trove of information about you: your name, age, family details, work history, leisure activities, and possibly even credit score and health information from the deep web. Then, an AI crafts a sales pitch so perfectly tailored to your profile that it’s nearly irresistible. Creepy, right? This is the kind of scenario that regulations aim to prevent, but at what cost to innovation and progress?
Apple’s iPhone Launches Without “Apple Intelligence”
Apple recently unveiled its latest iPhone, but if you were expecting groundbreaking AI features in Europe, you’re out of luck. The much-anticipated “Apple Intelligence” features have been delayed or limited in the region, reportedly due to the EU’s stringent rules on AI and data usage. Granted, a lot of this is driven by Europe’s Digital Markets Act (DMA) and General Data Protection Regulation (GDPR), both of which are central to concerns about AI adoption.
Let’s put some numbers to this. From Q1 to Q3 of 2024, Apple’s year-to-date revenue in the EU was around $76 billion out of a global total of $288 billion—that’s about 26%. This represents a significant chunk of change. By holding back innovative features in such a substantial market, Apple risks European consumers turning to competitors who are willing to push the regulatory envelope or innovate within the guidelines for safer AI products.
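As a quick back-of-the-envelope check, here is a minimal sketch in Python using the rounded figures cited above (the article’s numbers, not Apple’s exact reported segment data):

```python
# Back-of-the-envelope check of the EU revenue share cited above.
# Figures are the rounded totals from this article, in billions of USD.
eu_revenue_b = 76.0       # Apple revenue attributed to the EU, Q1-Q3 2024
global_revenue_b = 288.0  # Apple global revenue, Q1-Q3 2024

eu_share = eu_revenue_b / global_revenue_b
print(f"EU share of global revenue: {eu_share:.1%}")  # roughly 26.4%
```

Running this confirms the roughly 26% share quoted above, which is what makes the decision to hold back features in Europe so consequential.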
Ireland Sues Elon Musk’s X Over Data Usage
In another twist, Ireland’s Data Protection Commission is reportedly taking legal action against X for allegedly using consumer data to train its AI models without proper consent, a violation of GDPR. This move underscores the EU’s tough stance on data privacy but also raises concerns about stifling innovation. If companies are bogged down with lawsuits and compliance issues, will they still have the appetite—or the resources—to innovate?
Meta Halts AI Model Releases in Europe
Similarly, Meta, the parent company of Facebook and Instagram, has put the brakes on rolling out its latest AI models in Europe. The reason? You guessed it—regulatory concerns. By halting these releases, Meta is essentially depriving European users of advanced features that could enhance their experience. This matters because Meta’s latest open-source (i.e., free to use) large language models are on par with OpenAI’s GPT-4 series and have been downloaded 350 million times since their launch in April 2024. They provide foundational AI development capabilities, at no cost, to everyone from individual developers and startups to global enterprises.
Will the EU Fall Behind in Innovation?
While the EU is arguably taking a more conservative approach to AI by prioritizing social good and consumer rights over commercialization, there’s a flip side here. The rest of the world, particularly the US and China, is charging ahead with AI development less encumbered by stringent regulations. This could leave the EU lagging in AI innovation and economic growth. Are the regulations worth the potential risks of lower economic growth from falling behind in the AI revolution?
This Brings Us to the US
Back in the USA, California Governor Gavin Newsom is wrestling with the same dilemma: innovation versus consumer rights. It’s a tough call, and indications are that he will veto California Senate Bill 1047, which would impose stricter regulations on AI. He recently made a show of signing other AI-related legislation, protecting artists’ digital likenesses and curbing the spread of deepfakes, during an onstage interview with Salesforce CEO Marc Benioff at the Dreamforce conference. Could this be public grandstanding to soften the blow if he ultimately vetoes SB 1047?
The Stakes Are High
I don’t envy Governor Newsom; this isn’t a decision to take lightly. Regulations are notoriously hard to undo and can have long-lasting impacts for years or even generations. Let’s not forget that California is home to AI forerunners like OpenAI, Google, Meta, Apple, NVIDIA, and more. How this plays out could set the tone for the future of AI development, not just in California but across the entire United States.
Final Thoughts
We’re at a crossroads: the EU’s cautious approach prioritizes consumer protection but risks stifling innovation and economic growth, while the US seems to be leaning toward fostering innovation, potentially at the expense of consumer rights. That wouldn’t be a first; EU rules around data privacy are already far more restrictive than those in the US. But is there a “third door” alternative?
Perhaps industry-led initiatives for safe and responsible AI can provide the flexibility needed to innovate while safeguarding the public. In a world where technology moves faster than legislation, it’s crucial to find safe AI solutions that don’t leave anyone behind—or worse, create an AI backwater in regions that could have prospered.
Greg Woolf is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI Reg-Risk Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com