AI REGS & RISK: The Dark Side of AI, Part 2 – Open Foundation Models and Their Risks

By Greg Woolf, AI RegRisk Think Tank

Welcome back to the dark side of AI! In our last column, we discussed the high-level global risks posed by AI, showcasing the contrasting views of industry titans like Andrew Ng and Vinod Khosla. This week, we’re diving even deeper into the shadows, tackling a hot topic: do open foundation models enable bad actors to run amok without any monitoring? Buckle up as we explore this chilling possibility.

The Question: Do Open Foundation Models Enable Malicious Activities?

Open foundation models, often loosely called open-source AI models, are models whose weights are freely available for anyone to download. Unlike proprietary models that sit behind APIs and are monitored by the companies that control them, open foundation models can be downloaded and run on any machine. That means no API trail, no central oversight, and plenty of opportunity for misuse by those lurking in the dark corners of the internet.

What Exactly Are Open Foundation Models?

Think of open foundation models as the wild west of AI. They’re models whose code and weights are out in the open for anyone to use, tweak, and deploy. This openness is fantastic for innovation and collaboration, but it also opens the door to some pretty scary possibilities. Without the need for an API, these models can operate on any device, anywhere, completely under the radar. No central authority, no tracking—just pure, unmonitored AI power.
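To make the "no API trail" point concrete, here is a minimal sketch, assuming the Hugging Face transformers library and a purely hypothetical model ID, of how anyone can pull open weights onto their own hardware and generate text with them. Once the weights are downloaded, nothing in this loop touches a vendor's servers.

```python
# Minimal sketch: running an open-weights model entirely on local hardware.
# Assumes the Hugging Face "transformers" library; the model ID below is
# hypothetical and used only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-model-7b"  # hypothetical open-weights model

# The weights are downloaded once and cached locally; after that there are
# no API calls, no usage logs, and no central oversight.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Draft a short, urgent-sounding email from a CEO to the finance team."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The specific library doesn't matter; the point is that inference happens entirely on the user's machine, so there is no provider in the loop to refuse a request or keep a record of it.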

Real-World Horrors: Examples of Potential Misuse

Voice Cloning: Imagine getting a call from your boss, except it’s not your boss—it’s a voice clone created using an open foundation model. Scammers can use this tech to impersonate people convincingly, leading to all sorts of fraud. And because there’s no digital footprint, tracking down the perpetrators is next to impossible.

Cybersecurity Nightmares: Cybercriminals can use open foundation models to craft sophisticated phishing attacks or hunt for vulnerabilities in systems. They can do all this offline, making it incredibly hard for traditional security measures to catch them. It’s like giving a burglar the keys to your house and turning off the alarm.

Biosecurity Risks: Now for something really terrifying: the potential misuse of AI in biosecurity. Open foundation models could be used to design harmful biological agents or automate the creation of dangerous substances. With no one watching, the risks to global health and safety are staggering.

Non-Consensual Intimate Image (NCII) Generation: Deepfakes are already a huge problem, but open foundation models take it to another level. These models can create realistic but fake images or videos of people, leading to severe ethical and legal issues. The creation of NCII is not just harmful; it’s devastating. And without oversight, it’s a nightmare waiting to happen.

The Stanford Whitepaper: A Glimmer of Hope?

Enter the smart folks at Stanford’s Center for Research on Foundation Models (CRFM). They recently published a whitepaper that digs into the risks and benefits of open foundation models. The paper acknowledges the scary stuff, but it also argues that the marginal risk these models add, compared with what bad actors can already accomplish with closed models and existing technology, is small next to the benefits of transparency, innovation, and collaboration. Essentially, they’re saying it’s worth the risk, but with some caveats.

Managing the Madness and Real-World Applications

To handle these risks, Stanford’s research suggests a structured risk assessment framework, similar to cybersecurity protocols. This involves identifying and evaluating risks, developing mitigation strategies, and continuously monitoring and reviewing the model’s performance. Applying this framework to real-world examples like misinformation, cyber threats, and privacy violations shows that with proper risk management, the benefits of open foundation models can outweigh the risks. For a detailed breakdown, check out the full framework and its applications here.
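To make that framework a little more tangible, here is a minimal, hypothetical sketch in Python, not Stanford's actual methodology, of a risk register that follows the same loop: identify a risk, score its likelihood and impact, attach a mitigation, and keep reviewing the highest-scoring items first.

```python
# Minimal, hypothetical sketch of a risk register in the spirit of a
# structured risk assessment framework (not Stanford's actual methodology).
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, as in many cyber frameworks.
        return self.likelihood * self.impact

# Identify and evaluate risks, pairing each with a mitigation strategy.
register = [
    Risk("Voice-clone fraud", 4, 4, "Out-of-band verification for payment requests"),
    Risk("AI-written phishing", 5, 3, "Email filtering plus staff training"),
    Risk("NCII / deepfake abuse", 3, 5, "Content provenance checks and takedown procedures"),
]

# Continuous monitoring and review: revisit the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score} -> {risk.mitigation}")
```

Even a lightweight register like this forces the conversation the framework is after: which risks are actually being accepted, and what control stands behind each one.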

Aligning with the EU AI Act Governance Model

The European Union often leads the charge in governance and regulations, setting the stage for others to follow—just look at GDPR, which paved the way for the CCPA and other US data privacy laws. The EU AI Act, finally adopted on May 21, is a prime example of this forward-thinking approach. This groundbreaking legislation takes a ‘risk-based’ approach, slapping stricter regulations on AI systems that have a higher potential to harm society. The goal? To standardize AI rules and potentially set a global standard for AI regulation. The Act aims to promote the development and adoption of safe and trustworthy AI systems within the EU’s single market, ensuring they respect fundamental rights while fostering investment and innovation. Of course, the Act carves out exemptions for certain areas, such as military and defense applications and AI developed purely for research.

The Act enters into force 20 days after its publication in the EU’s Official Journal, with a two-year timeline before most provisions fully apply. However, certain high-risk AI uses will be banned in just six months. These banned uses include:

Manipulative and Deceptive Techniques
Exploiting Vulnerable Population Segments
Social Scoring and Profiling
Unauthorized Biometric and Facial Recognition
Workplace Monitoring of Emotions and Behavior

The clock is ticking, and AI developers need to align their practices with these new regulations soon. You can check out the details here.

Conclusion

The open foundation model debate is complex and fraught with potential pitfalls. While the risks are real and scary, the benefits of transparency and innovation are significant. By implementing a comprehensive risk assessment framework, we can strike a balance that mitigates dangers and aligns with governance models like the EU AI Act. This way, we can harness the power of AI responsibly, ensuring it serves the greater good without unleashing chaos.

Stay tuned for our next column, where we’ll tackle specific threats to wealth managers and their clients and offer practical advice on how to stay safe in the ever-evolving AI landscape.

Greg Woolf Bio

Greg is an accomplished innovator and AI strategist with over 20 years of experience in founding and leading AI and data analytics companies. Recognized for his visionary leadership, he has been honored as AI Global IT-CEO of the Year, received the FIMA FinTech Innovation Award, and was a winner of an FDIC Tech Sprint. Currently, he leads the AI RegRisk Think Tank, advising financial institutions, FinTech companies, and government regulators on leveraging AI within the financial services industry. https://airegrisk.com
