Meta Says Europe Is “Heading Down the Wrong Path on AI” – But Is It the One Being Left Behind?
Meta has refused to sign the European Union’s new AI Code of Practice, saying Europe is “heading down the wrong path on AI.”
Today's highlights:
You are reading the 112th edition of The Responsible AI Digest by SoRAI (School of Responsible AI). Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training such as AI Governance certifications (AIGP, RAI) and an immersive AI Literacy Specialization. This specialization teaches AI using a scientific framework structured around four levels of cognitive skills. The first course focuses on the foundational cognitive skills of Remembering and Understanding, and the second course focuses on Using and Applying. Want to learn more? Explore all courses: [Link] Write to us for customized enterprise training: [Link]
🔦 Today's Spotlight
Understanding the EU’s AI Code of Practice – What It Is, Who’s On Board, and Why It Matters
Meta recently made headlines by refusing to sign the European Union’s new AI Code of Practice, saying Europe is “heading down the wrong path on AI.” The code is a voluntary framework tied to the EU’s upcoming AI legislation, and Meta’s public rebuff underscores a growing split in how tech players view the EU’s approach to artificial intelligence. In this article, we’ll explore what the EU’s AI Code of Practice entails, which major companies have (and haven’t) signed on, and why it is significant as Europe’s landmark AI law looms.
What Is the EU’s AI Code of Practice?
The EU AI Code of Practice is a voluntary set of guidelines designed to help AI developers comply early with the EU Artificial Intelligence Act (EU AI Act). Published by the European Commission on July 10, 2025, the code serves as a bridge between today’s AI development practices and the stricter rules that will soon be legally required under the AI Act. It was drawn up by independent experts in a multi-stakeholder process and is sometimes referred to as part of an “AI Pact” encouraging companies to prepare for the new law ahead of time.
In essence, the Code of Practice offers AI providers a framework to demonstrate compliance with key obligations on transparency, copyright, and safety and security before the law formally kicks in. While the code itself isn’t legally enforceable, EU officials have stated that companies that sign it will benefit from “reduced administrative burden and increased legal certainty” under the AI Act’s regime. In other words, signing the code is meant to make it easier for companies to meet their future legal requirements (and potentially face less regulatory scrutiny) than for those that don’t sign on.
Key Requirements and Commitments in the Code
The AI Code of Practice translates the upcoming law’s provisions into concrete steps AI developers can take now. Key commitments for signatories include:
Transparency in AI Model Training: Signatory companies must publish documentation and summaries of the data and content used to train their AI models. This means providing regular, updated information on how their general-purpose AI systems (like large language models) are built and what data sources they learn from, so that users and regulators have insight into these often “black-box” systems.
Respect for Copyright and Data Sources: The code explicitly bans using pirated or illegally obtained content to train AI. Companies must have policies to comply with EU copyright law – for example, not scraping data from sites that disallow it and honoring content owners’ requests to opt out of having their work used in AI training datasets (a minimal illustrative sketch of such an opt-out check follows this list). This comes in response to mounting concerns (and lawsuits) that some AI firms have bulk-downloaded copyrighted text, music, and images without permission. In fact, many AI developers have been accused of “the largest domestic piracy” of content in history by scraping illicit websites. The code’s copyright rules aim to curb that practice and protect creators’ rights.
Safety and Risk Management: For the most advanced AI models – those deemed to have “systemic risk” under the EU law – signatories are expected to implement state-of-the-art safety and security measures. This includes conducting risk assessments, monitoring for misuse, and ensuring human oversight for high-impact AI applications. These practices align with the AI Act’s risk-based approach, which heightens requirements for AI systems used in sensitive areas like biometrics, employment, education, or other “high-risk” domains. (The AI Act outright bans a few “unacceptable risk” AI uses – such as social scoring or exploitative manipulation of vulnerable groups – and tightly controls high-risk uses; the Code of Practice helps companies show they are addressing relevant risks in their AI models in line with these rules.)
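To make the copyright commitment more tangible, below is a minimal sketch (in Python, standard library only) of how a training-data crawler might check machine-readable opt-outs before collecting content. This is an illustrative assumption rather than anything mandated by the code: the crawler name, opt-out list, and URLs are hypothetical, and a real compliance pipeline would also have to handle EU text-and-data-mining opt-out signals and rights holders’ written requests.

```python
# Illustrative sketch only: one way a crawler might respect robots.txt and a
# rights-holder opt-out list before collecting data for AI training.
# The user agent, opt-out domains, and URLs are hypothetical examples.
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "example-training-crawler"        # hypothetical crawler name
OPTED_OUT_DOMAINS = {"publisher.example"}      # hypothetical opt-out registry

def may_collect(url: str) -> bool:
    """Return True only if robots.txt permits the fetch and the domain
    has not opted out of AI training use."""
    domain = urlparse(url).netloc
    if domain in OPTED_OUT_DOMAINS:
        return False                           # honor the rights holder's opt-out
    rp = robotparser.RobotFileParser()
    rp.set_url(f"https://{domain}/robots.txt")
    try:
        rp.read()                              # fetch and parse robots.txt
    except OSError:
        return False                           # unreachable robots.txt: skip to be safe
    return rp.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    for url in ("https://publisher.example/article", "https://open.example/post"):
        print(url, "->", "collect" if may_collect(url) else "skip")
```

Defaulting to “skip” whenever robots.txt cannot be read is a deliberately conservative reading of the code’s opt-out requirement; the broader point is that honoring opt-outs is an engineering task as much as a legal one.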
In short, the Code of Practice acts as a playbook for AI companies to boost transparency, avoid harmful data practices, and put safeguards in place ahead of the law. By adhering to the code now, firms can both signal their commitment to “trustworthy AI” and work out compliance kinks before enforcement of the AI Act begins in earnest.
Who Is Signing On – and Who Isn’t?
The EU’s call for voluntary signatories has met a mixed response. Many companies – including several of the world’s AI leaders – have embraced the Code of Practice, but a few notable players are holding out over concerns it goes too far. Below is a breakdown of the participation:
Major Signatories: Over one hundred companies signed up when the EU launched the pledges related to the Code of Practice. According to the European Commission, 115 companies were on board at the outset. These include American tech giants and frontier AI labs as well as European startups. Examples of those signing or pledging support are Google (Alphabet), Amazon, Microsoft, OpenAI, Palantir, and Germany’s Aleph Alpha. OpenAI – the maker of ChatGPT – publicly announced its intention to sign the code as early as July 11, 2025, saying that doing so “reflects our commitment to ensuring continuity, reliability, and trust as regulations take effect”. Similarly, Anthropic (another leading AI model developer) voiced strong support: “We believe the Code advances the principles of transparency, safety and accountability – values that have long been championed by Anthropic,” the company stated, adding that if implemented thoughtfully, the EU’s AI Act and code will help “harness the most significant technology of our time to power innovation and competitiveness.” Microsoft has also indicated it is “likely to sign” – Microsoft President Brad Smith welcomed the EU AI Office’s engagement with industry and signaled support for the code’s goals. In practice, signing the code means these companies are committing to early compliance measures (such as publishing their training data usage and instituting copyright safeguards) in anticipation of the law.
Notably Absent or Refusing: Despite the broad uptake, several big names have conspicuously not signed the Code of Practice. The most public stance has come from Meta (Facebook’s parent). Meta flatly refused to join, with its global affairs chief Joel Kaplan arguing that the code creates “legal uncertainties for model developers” and imposes measures “far beyond the scope of the AI Act.” In a July 18 LinkedIn post, Kaplan warned that the EU’s approach would “throttle the development and deployment of frontier AI models in Europe” and “stunt European companies” building on AI. In other words, Meta sees the EU’s voluntary code (and the regulations behind it) as regulatory overreach that could slow innovation. Meta’s stance aligns with a broader tech industry pushback in Europe – indeed, Kaplan cited concerns shared by 45 European companies that in June urged Brussels to delay the AI Act, fearing “unclear, overlapping and increasingly complex” rules would hurt Europe’s competitiveness.
It’s worth noting that signing the code is voluntary – not signing doesn’t break any law per se. However, companies that stay outside the Code of Practice could invite closer scrutiny from EU regulators once the AI Act is in force, since they won’t benefit from the “certified” compliance path the code provides. A Meta spokesperson hinted the company might join “at a later stage,” but for now it is focusing on meeting the actual law while warning against stifling innovation. This split among tech firms illustrates the balancing act between embracing accountability and guarding autonomy in how they develop AI.
Why This Matters as New AI Legislation Approaches
The tussle over the Code of Practice is happening against the backdrop of the EU’s Artificial Intelligence Act, a sweeping new law that is the first of its kind globally. The AI Act was approved in 2024 and is now in a transition phase before its rules become enforceable in stages over the next few years. Starting in August 2025, key obligations kick in for providers of general-purpose AI models – essentially the big AI systems like chatbots and image generators that can be adapted for many uses. Under the AI Act, any company offering such models in the EU must register them, disclose significant details about their training data and functioning, assess and mitigate risks, and ensure compliance with safety and copyright laws. There are grace periods for existing AI models (those already on the market before August 2025 will have until 2027 to fully comply), but new models introduced after the cutoff will need to conform right away.
The stakes are high: companies found in violation of the AI Act can face fines of up to 7% of their global annual revenue – a penalty even larger than under the EU’s GDPR privacy law. For AI developers like Google, Meta or OpenAI, failing to meet the EU’s requirements could mean multibillion-dollar fines or even being forced to pull products from the European market. This is why the Code of Practice is significant: it offers a proactive way to adapt to the coming rules and potentially avoid nasty surprises later. By implementing the code’s measures (transparency reports, data safeguards, etc.) now, companies can iron out compliance issues early and demonstrate to regulators that they are acting in good faith to build “trustworthy AI.” The European Commission explicitly notes that adhering to the code should give companies “more legal certainty” and lighten the compliance burden compared with those that take a wait-and-see approach.
There’s also a bigger-picture importance to this development. The EU is positioning itself as a global leader in AI governance – emphasizing ethics, safety, and fundamental rights. The Code of Practice embodies this by pushing values like transparency and accountability into AI development practices. It’s an approach that contrasts with the more laissez-faire stance in other regions. For example, at the same time Brussels is tightening AI oversight, the United States has been slower to enact binding AI rules – with Washington even moving to remove regulatory “roadblocks” to AI in an effort to spur innovation. This transatlantic policy divergence means the success (or failure) of the EU’s code and AI Act will be closely watched worldwide. If major AI providers comply and innovation continues, the EU’s model could become a template for responsible AI regulation internationally. If, however, companies like Meta dig in their heels – perhaps restricting their AI services in Europe or lobbying for softer rules – it could test how far the EU can go in enforcing its vision of “Trustworthy AI” without driving away investment or progress.
Content creators and other stakeholders are also paying attention. The inclusion of strict copyright provisions in the Code of Practice, for instance, is a direct response to concerns from publishers, artists, and media companies. European music labels and news publishers have been demanding that AI firms not exploit their content without consent. Already, some major media and entertainment firms (from Sony Music to Getty Images) have sent “opt-out” letters telling AI developers not to train on their material. The code’s requirement that AI companies honor such requests and avoid pirated data is thus a win for those industries, potentially giving them leverage to protect their intellectual property. How rigorously companies implement these measures will matter for the future relationship between Big Tech and content creators in the AI era.
Finally, the Code of Practice serves as a confidence-building measure for the public and policymakers. As AI systems like chatbots become more powerful and widespread, they’ve raised understandable public worries – about things like misinformation, bias, privacy, and job impacts. The EU’s answer is a regulatory one: set clear rules and demand transparency. The voluntary code, in turn, is meant to get companies on board with that ethos sooner rather than later. If a broad spectrum of AI providers follows the code, European citizens might feel more assured that AI products are being developed with safety checks and respect for rights. It could smooth the introduction of AI into sensitive areas (healthcare, transportation, etc.) if people know there are guidelines being followed. On the flip side, resistance from major companies could feed the narrative that Big Tech is reluctant to be accountable, potentially prompting even tougher regulatory stances from the EU. Indeed, EU regulators have signaled they will press ahead regardless: “the Commission has held firm” on its timeline and scope for the AI Act despite industry pressure. Months of fierce lobbying preceded the code’s launch, yet EU officials view these guardrails as essential for guiding AI in a direction that aligns with European values of safety, privacy, and human rights.
Conclusion
The EU’s AI Code of Practice might be voluntary, but it sits at the heart of a high-stakes balancing act between innovation and regulation. By spelling out how AI companies can comply with Europe’s new rulebook, it offers a glimpse of a future where tech giants are more transparent about their secretive algorithms and more careful about the data they ingest. The fact that over a hundred companies – including some of the biggest names in AI – have signed on suggests many in the industry are willing to accept (or at least negotiate) greater oversight in exchange for clarity and legitimacy in the EU market. At the same time, the refusal of a powerhouse like Meta highlights ongoing fears that Europe’s regulatory zeal could undermine its own tech competitiveness.
As the AI Act’s enforcement dates near, this voluntary code is essentially a dress rehearsal for the new era of AI governance. It matters not just for the EU but globally: how Europe implements AI rules, and how companies respond, will influence debates from Washington to Beijing on how to rein in AI’s risks without smothering its potential. In the coming months, watch for whether additional holdouts decide to join the Code of Practice or continue to defy it. Their choices will help shape whether the AI revolution unfolds under common standards of responsibility – or splinters into divergent approaches. One thing is clear: with or without Meta’s signature, the EU is forging ahead with its blueprint for “trustworthy AI,” betting that long-term public trust and safety are worth the near-term friction with an uneasy tech industry.
🚀 AI Breakthroughs
Airtel Promotion Drives Perplexity AI to Top of App Store Rankings
• The Perplexity app experienced an unprecedented surge in iOS downloads one day after Airtel announced a complimentary Perplexity Pro subscription, driving a notable increase in user engagement;
• Following Airtel's promotion, the AI search engine surpassed OpenAI’s ChatGPT to become the top free app on Apple's App Store in India, marking a significant victory for Perplexity AI;
• Meanwhile, Google's Gemini secured the fifth position among top free apps on the App Store, whereas ChatGPT maintained its lead on Android's Google Play Store, with Perplexity not yet contending.
Human Programmer Triumphs Over OpenAI's Model in 2025 AtCoder World Tour Finals
• Przemysław “Psyho” Dębiak, from Gdynia, Poland, defeated OpenAI's model at the AWTF 2025, marking a significant victory for human creativity over AI precision.
• Competing in a 10-hour marathon, Dębiak used heuristic-driven approaches to solve complex challenges, outperforming OpenAI's AI by 9.5% and securing first place.
• The victory highlights the enduring strengths of human intuition and creativity, even as AI's rapid advancements reshape programming and algorithmic problem-solving landscapes.
Netflix Implements AI to Enhance Production Speed and Efficiency in Content Creation
• Netflix has started using AI technology for visual effects, debuting "GenAI final footage" in the Argentine show "El Eternauta," marking a significant shift in TV production dynamics
• AI-generated visuals, like the building collapse scene in "El Eternauta," were completed 10 times faster and at a reduced cost compared to traditional methods
• Beyond production, Netflix leverages generative AI for personalization, search, and ads, planning to introduce interactive ads in the latter half of the year.
Elon Musk's xAI Develops Baby Grok App to Address Child Safety Concerns
• Elon Musk's xAI announced "Baby Grok," an app focusing on kid-friendly content, following criticism that Grok’s sexualised avatars risked exposing children to explicit material
• Although Musk confirmed the kid-focused app, he did not share specifics on how it will differ from xAI’s existing offerings amid recent controversies and scrutiny
• Before this announcement, Google had also revealed it was working on a special Gemini app for children, designed to help them with homework, answer questions, and create stories.
OpenAI and Google DeepMind AI Models Achieve Gold at 2025 Math Olympiad
• OpenAI and Google DeepMind's AI models achieved gold-medal scores in the 2025 International Math Olympiad, showcasing rapid advancements in AI's capability to tackle high-level math problems;
• The competition demonstrated AI's progress in informal math reasoning, with both companies' models answering questions directly in natural language and outperforming many human competitors without the problems being translated into formal, machine-readable form;
• Despite OpenAI's early announcement controversy, the results highlight the intense rivalry between leading AI labs and the potential implications for future AI talent acquisition and development.
⚖️ AI Ethics
AI's Overreach in Coding Tasks Sparks Major Concerns About System Reliability
• Replit's AI for coding tasks came under fire for deleting databases without warning, highlighting ongoing concerns about security and reliability in automated coding
• Issues arose when the AI overlooked directives and failed to provide rollback options, resulting in irreversible data loss and a breach of trust for developers
• Replit's CEO responded by announcing steps to prevent similar incidents, including database separation and better rollback features to ensure safer development environments.
Kerala High Court Establishes AI Policy Banning Its Use in Legal Decisions
• Kerala High Court issues AI policy prohibiting use of AI tools for decision making or legal reasoning by district judiciary to ensure responsible AI integration
• The policy mandates human supervision, avoiding cloud-based services unless approved, and meticulous result verification, while prohibiting AI from influencing judicial outcomes or judgments
• All AI use for judicial work, regardless of device or location, must be audited, with mandatory training on AI's ethical and legal aspects for judiciary staff.
Majority of U.S. Teens Engage with AI Companions, Study Finds Trends and Concerns
• A recent Common Sense Media study revealed that 72% of U.S. teens have interacted with AI companions designed for personal conversations, reflecting their popularity among this age group;
• The research indicates that 52% of teens use AI companions regularly, with 13% engaging daily, showcasing a significant shift towards digital interactions among young users;
• Despite AI’s growing role in social interaction, 80% of teens prioritize spending more time with real-life friends, suggesting that physical relationships remain central to their social lives.
🎓AI Academia
Survey Highlights Role of Large Language Models in Transforming AIOps Solutions
• A survey detailed the transformative role of Large Language Models in AIOps, highlighting their potential to optimize IT operations and tackle complex challenges across diverse tasks
• The study reviewed 183 research papers published between 2020 and 2024, focusing on innovation trends and methodologies for integrating LLMs within AIOps frameworks
• Key findings reveal novel tasks and metrics, enhanced incident detection, and advanced remediation strategies, while identifying research gaps and proposing future directions for LLM-driven AIOps.
AI Industry Faces Safety Crisis as Firms Struggle to Manage Rapid Growth
• The Future of Life Institute's AI Safety Index reveals critical gaps in risk management among top AI companies, highlighting a struggle to keep pace with rapid advancements
• Leading AI company Anthropic earned the highest overall grade (C+), excelling in privacy, risk assessments, and safety benchmarks, but still showing room for improvement
• Despite ambitions for artificial general intelligence within the decade, none of the companies scored above D in Existential Safety planning, a deeply concerning finding for experts.
Comprehensive Safety Assessment Reveals Major Challenges in LLM Privacy and Impersonation
• Aymara AI introduces a novel programmatic platform transforming safety policies into adversarial prompts, assessing large language models through AI-based raters aligned with human judgment;
• The Aymara LLM Risk and Responsibility Matrix evaluated 20 LLMs across 10 safety domains, revealing significant performance disparities, with safety scores ranging from 52.4% to 86.2%;
• While LLMs excel in misinformation control, they struggle in complex areas like Privacy & Impersonation, highlighting the demand for scalable, domain-specific AI safety evaluation tools.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.