
Coded for Privileged Access: How Big Tech Undermined the EU's AI Code of Practice

The European Union's upcoming Code of Practice for general-purpose AI models, slated for release next month, is facing significant criticism. A new study, "Coded for Privileged Access," jointly published by Corporate Europe Observatory (CEO) and LobbyControl, reveals how Big Tech successfully lobbied to weaken the proposed rules, substantially reducing protections against potential AI risks. The report details a troubling imbalance of power during the code's development and its concerning implications for the future of AI regulation in the EU.

Unequal Access: A System Rigged in Favor of Big Tech

The study paints a stark picture of unequal access during the Code of Practice's development. Tech giants such as Google, Microsoft, Meta, and Amazon enjoyed far greater access than other stakeholders, including smaller companies and civil society organizations. A select group of large AI model providers, the very entities the code is meant to regulate, were invited to exclusive workshops with the working group chairs. This privileged access extended to fifteen US corporations, highlighting the significant international influence at play.

In stark contrast, other stakeholders, including crucial voices from civil society, faced severely limited participation. Their input was often relegated to an online platform where they could react to questions and comments only through emoji-based feedback. This sharply constrained their ability to engage with the process and raise substantive concerns. The limited and largely symbolic participation drove several organizations, including Reporters Without Borders, to withdraw altogether, citing the overwhelming influence of Big Tech.

This disparity in access highlights a fundamental flaw in the development process. A fair and effective regulatory framework requires inclusive participation from all relevant stakeholders, so that diverse perspectives inform the final outcome. The preferential access afforded to Big Tech created a significant power imbalance, skewing the process in its favor.

Conflicts of Interest and the Role of Consulting Firms

The EU Commission's AI Office engaged external consultants to assist with the Code of Practice's development. However, the study reveals concerning business relationships between these consulting firms and Big Tech, raising serious questions about potential conflicts of interest.

Wavestone, a major French consulting firm and the primary contractor for the project, is a prime example. Wavestone implements Microsoft's AI tool, Microsoft 365 Copilot, for French companies – effectively working on Microsoft's behalf. The close relationship is further underscored by Wavestone receiving a "Microsoft Partner of the Year" award in 2024. At the same time, Wavestone played a key role in advising the EU Commission on the development of the AI Code of Practice. This inherent conflict raises concerns about potential bias and the prioritization of Big Tech's interests over those of the broader public.

This situation exemplifies the need for greater transparency and stricter conflict-of-interest protocols in the EU's regulatory processes. Independent and unbiased expert consultation is essential to ensure the integrity and effectiveness of regulations designed to govern powerful industries.

Weakening the Definition of Risk: The Case of "Illegal Discrimination"

One of the most contentious issues centered on the risk taxonomy. An earlier draft of the code distinguished between "systemic risks," such as the loss of human oversight, and a less consequential category of "additional risks." The study reveals a concerted lobbying effort by Google and Microsoft that resulted in "illegal discrimination" being removed from the list of systemic risks. This move significantly weakened the overall risk assessment, diminishing the potential for meaningful regulation and oversight.

This strategic removal underscores how powerful lobbying can manipulate the regulatory process to minimize accountability. By downplaying illegal discrimination as a systemic risk, the code reduces the scope for interventions to mitigate this harm. The implications are significant: AI systems that perpetuate discriminatory outcomes could proliferate without sufficient regulatory oversight.

The Broader Context: A Deregulatory Wave in the EU?

The weakened AI Code of Practice appears to be part of a larger deregulatory trend within the EU. With key rules on AI, data protection, and privacy scheduled for review this year, there are concerns that Big Tech stands to benefit greatly: its substantial lobbying resources enable it to actively weaken the digital rules the EU has established in recent years. Furthermore, the influence of the US government, with its close ties to Silicon Valley's tech oligarchs, adds another layer of complexity and potential pressure on EU regulators.

This concerted lobbying effort raises concerns about the EU's commitment to meaningful regulation of powerful tech companies. The prioritization of "simplification" and "competitiveness," as highlighted by Bram Vranken from Corporate Europe Observatory, creates an environment conducive to aggressive lobbying tactics that undermine the effectiveness of regulatory efforts.

The Consequences of Undermining AI Regulation

The weakening of the AI Code of Practice has significant implications:

  • Increased Risk of Harm: Reduced regulations create a higher risk of AI systems causing harm, including discrimination, privacy violations, and the erosion of human oversight.
  • Erosion of Public Trust: A perceived bias towards Big Tech in the regulatory process erodes public trust in the EU's ability to effectively govern the AI landscape.
  • Unlevel Playing Field: The weakened regulations disproportionately benefit large tech companies, creating an uneven playing field for smaller businesses and startups.
  • Global Implications: The EU's regulatory approach significantly influences global AI governance. A weak regulatory framework sets a dangerous precedent for other regions.

A Call for Stronger Regulation

The "Coded for Privileged Access" study concludes with a powerful call for the EU Commission to resist the influence of tech monopolies and strengthen AI regulation. Both CEO and LobbyControl emphasize the critical need for a more inclusive and transparent regulatory process, ensuring that the voices of civil society and smaller companies are not drowned out by the disproportionate influence of Big Tech.

The study provides a compelling case for a more robust and effective approach to AI regulation. This includes:

  • Increased Transparency: Open and accessible information on the regulatory process, including stakeholder involvement, lobbying efforts, and decision-making rationale.
  • Independent Expertise: Utilizing independent experts without conflicts of interest to advise on the development of AI regulations.
  • Robust Enforcement Mechanisms: Establishing clear and effective mechanisms to ensure compliance with AI regulations and address violations.
  • Protecting Whistleblower Rights: Creating a safe environment for individuals to report unethical practices within the industry.
  • International Cooperation: Collaborating with other countries to develop global standards for AI governance.

The fight against the weakening of AI regulation requires a unified effort. Civil society organizations, smaller businesses, and concerned citizens must actively participate in the regulatory process, demanding accountability and pushing for stronger protections against the potential harms of unregulated AI. The EU's response to this challenge will be crucial not only for its own citizens but also for shaping the global landscape of AI governance. The future of responsible AI development hinges on a commitment to fair, inclusive, and robust regulations that resist the undue influence of powerful tech companies.

The study serves as a wake-up call. The EU must reject the current path of deregulation and prioritize the development of a comprehensive AI regulatory framework that safeguards the interests of its citizens and promotes responsible innovation. The alternative – a future shaped by the unchecked power of Big Tech – poses significant risks to democratic values, individual rights, and societal well-being. The fight for a fairer, safer, and more equitable AI future is far from over. The findings of this report highlight the critical need for ongoing vigilance and collective action to ensure that AI serves humanity, not the other way around.
