No, the EU AI Act itself has not been officially delayed yet!
++ White House drafts guidance to bypass Anthropic risk flag for new models; China blocks Meta’s $2B Manus deal and orders full unwind; U.S. judges weigh AI use in courts; studies show AI search results diverge from Google rankings, AI health summaries are preferred despite trust concerns, and AI text made up 35% of new websites by mid-2025; Musk testifies xAI partly trained Grok using OpenAI distillation; OpenAI adds stronger ChatGPT account security with new YubiKeys; Maharashtra approves AI Policy 2026 targeting Rs 10,000 crore investment and 1.5 lakh jobs; researchers publish TRUST decentralized AI auditing and V.O.I.C.E. synthetic voice risk taxonomy; new reports track agentic AI impacts on enterprise software and dev workflows, plus open problems in frontier AI risk management.
Today’s highlights:
On 28 April 2026, the European Parliament, the Council of the EU and the European Commission held their second political trilogue on the Digital Omnibus on AI, a proposal to amend the EU AI Act by introducing simplification measures and postponing some high-risk AI compliance deadlines. After about 12 hours of negotiations, the institutions failed to reach an agreement. Reuters reported that talks would resume the following month, with people familiar with the negotiations saying the next round was likely in about two weeks; DLA Piper later identified 13 May 2026 as the date of the next scheduled trilogue.
The disagreement was not reported as a debate over whether the EU should regulate AI at all. It was about how far the existing AI Act should be simplified or delayed, especially for high-risk AI systems and for AI systems already covered by sector-specific regulation. One key sticking point was whether sectors already subject to rules such as product safety regulation, including areas like medical devices and toys, should receive exemptions or different treatment under the AI Act.
At the School of Responsible AI (SoRAI), we help both individuals and organizations build practical, real-world AI literacy and Responsible AI capability through structured, engaging, and action-oriented programs. For individuals, this includes AI Literacy, globally relevant certification training such as AIGP, RAI, and AAIA, as well as career transition and advisory support for professionals moving into AI governance roles. For organizations, we offer customized enterprise AI literacy training, Responsible AI strategy and governance setup, and AI assurance support to help teams understand, operationalize, and validate AI responsibly. At the core of SoRAI is a progressive three-layer approach: first helping people understand AI, then build the right governance foundations, and finally validate readiness through assurance and audit-focused thinking. Want to learn more? Explore our AI Literacy programs, certification trainings, and career support offerings, or write to us for customized enterprise solutions.
⚖️ AI Ethics
White House Drafts Guidance to Let Agencies Bypass Anthropic Risk Flag for New AI Models
The White House is drafting guidance that could let federal agencies bypass Anthropic’s supply-chain risk designation and adopt new AI models, including Mythos, according to an Axios report cited by Reuters. The draft executive action is said to offer the Trump administration a way to ease its dispute with Anthropic, though Reuters said it could not independently verify the report. The issue follows tensions earlier this year between Anthropic and the Pentagon after the company refused to remove safeguards blocking use of its AI for autonomous weapons or domestic surveillance. The White House and Anthropic had not responded to requests for comment at the time of reporting.
China Blocks Meta’s $2 Billion Manus Acquisition, Orders Full Unwinding After Months-Long Probe
China’s top economic planner, the NDRC, has blocked Meta’s reported $2 billion acquisition of AI startup Manus after a months-long probe, ordering the deal to be fully unwound without giving a detailed explanation. The decision is a major intervention in a cross-border AI deal and could hurt Meta’s push into the fast-growing AI agents market. Manus, founded by Chinese engineers and later moved to Singapore, had already integrated closely with Meta, with about 100 employees reportedly working from Meta’s Singapore office and its founders taking executive roles. The case also highlights wider scrutiny around Chinese-linked AI firms, as the deal had already drawn attention in both Beijing and Washington over ownership, investment, and national security concerns.
South Africa Withdraws Draft AI Policy After AI-Generated Fake Citations Undermine Document Credibility
South Africa has withdrawn its draft national AI policy after officials found that several academic citations in the document were fabricated, likely generated by AI without proper verification. An investigation found at least six of the policy’s 67 references pointed to journal articles that do not exist, even though the journals themselves were real. The withdrawn draft had proposed new AI oversight bodies and incentives such as tax breaks, grants, and subsidies to grow the country’s AI sector. The government said the policy will be revised and reissued for public comment, while warning that the episode shows the risks of using generative AI without strong human oversight.
Elon Musk Testifies xAI Partly Trained Grok Using OpenAI Model Distillation Techniques
Elon Musk testified in a California federal court that xAI partly used distillation on OpenAI models to help train Grok, describing it as a common practice across AI companies. The statement came during Musk’s lawsuit against OpenAI, Sam Altman, and Greg Brockman over the company’s shift from its original nonprofit mission to a for-profit structure. Distillation, which involves training a model by querying another model or API, has become a flashpoint as OpenAI, Anthropic, and Google try to curb such efforts, especially those linked to Chinese firms. Musk also said xAI remains much smaller than its rivals and ranked Anthropic, OpenAI, and Google ahead of xAI in the current AI race.
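As the item above notes, distillation in this context means training one model on the outputs of another. A minimal, purely illustrative sketch of the data-collection step (the "teacher" here is a stand-in function; in practice it would be a call to another model's API, and the collected pairs would feed supervised fine-tuning):

```python
# Illustrative sketch of the distillation data-collection step.
# The teacher is a stand-in; real setups would query another model's API.

def teacher_model(prompt: str) -> str:
    """Stand-in for a call to a stronger model."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know.")

def build_distillation_set(prompts):
    """Query the teacher and collect (prompt, completion) pairs,
    which a student model would then be fine-tuned on."""
    return [(p, teacher_model(p)) for p in prompts]

dataset = build_distillation_set(["capital of France?", "2 + 2?"])
print(dataset)
```

This is why API providers can detect and restrict distillation attempts: the harvesting step looks like unusually systematic querying of the teacher model.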
OpenAI Adds Advanced ChatGPT Account Security as Yubico Partnership Brings New YubiKeys
OpenAI has rolled out Advanced Account Security, an optional set of protections for ChatGPT users, especially those at higher risk such as journalists, researchers, political dissidents, and public officials. As part of the effort, Yubico is partnering with the company to offer two co-branded security keys, the YubiKey C NFC and YubiKey C Nano, aimed at reducing phishing and unauthorized access to ChatGPT accounts. The move comes as chatbot users face growing threats from cybercriminals seeking sensitive personal and business information stored in AI conversations. While hardware security keys can provide stronger protection than standard login methods, losing the key could permanently lock users out because account recovery may not be possible.
Maharashtra Approves AI Policy 2026, Targets Rs 10,000 Crore Investment and 1.5 Lakh Jobs
Maharashtra’s cabinet has approved the Artificial Intelligence Policy 2026, aiming to attract Rs 10,000 crore in investment and generate 1.5 lakh jobs in the sector. The policy includes plans for six AI Excellence Centres, five AI Innovation Cities, 2,000 GPUs for computing infrastructure, and training for two lakh youth in AI skills. It also targets the development of 50 AI tools and financial support for 5,000 MSMEs, alongside a Rs 500 crore venture fund for AI startups. The policy further proposes incentives such as electricity and stamp duty concessions, support for Marathi and local language databases, and a separate ethical AI framework for the state.
Amazon Targets Mass Hiring With AI Agents and Human-Centric Software to Streamline Recruitment
Amazon has rolled out new AI software aimed at speeding up large-scale hiring by reducing the need for face-to-face interviews, a move likely to be used for seasonal recruitment such as the holiday rush. The system, called Connect Talent, can screen candidates, run AI-led interviews at any time, and prepare notes for recruiters, while Amazon said applicants will be told when AI is being used. The company also outlined a new AI design approach called “humorphism,” which it says is meant to make AI work in ways that better match human behavior. Alongside this, Amazon unveiled Connect Decisions, a tool that uses AI to analyze data for supply chain planning, as it pushes deeper into autonomous AI agents despite wider concerns about oversight, safety, and possible job losses.
EU Expands Big Tech Rules to Target Cloud Services and AI Under Digital Markets Act
The European Union said it will extend its Digital Markets Act focus to cloud and artificial intelligence services, after reporting early gains in making digital markets more competitive. Regulators said the law has already helped users switch services more easily and improved interoperability for device makers, and they now want cloud and AI markets to become fairer and more contestable. The Commission is examining whether some AI services should be classified as core platform services and is also investigating whether Amazon and Microsoft should be designated as gatekeepers for cloud computing. While Apple criticised the rules over privacy, security and innovation concerns, the EU said it does not plan to force interoperability between major social networks and believes the current DMA framework remains fit for purpose.
US Judges Debate AI Risks and Benefits as Technology Expands Into Judicial Work
U.S. judges are sharply divided over the use of generative AI in court work as the technology spreads through chambers and legal practice without clear system-wide rules. At a Maryland conference on judges and AI, some judges argued that core judicial tasks such as judgment, fact analysis, and writing should remain human-led, while others said AI use in the judiciary is inevitable and needs formal guardrails. The debate comes as lawyers have faced discipline for AI-generated false citations and at least two federal judges have withdrawn opinions containing AI-related errors. A recent study found about 60% of U.S. federal judges use at least one AI tool, while many others either ban or discourage its use.
🚀 AI Breakthroughs
Anthropic Expands Claude for Creative Work With New Connectors Across Adobe, Blender, Autodesk and More
Anthropic has released a new set of Claude connectors aimed at creative professionals, allowing the AI assistant to work directly with widely used tools including Adobe Creative Cloud, Ableton, Blender, Autodesk Fusion, SketchUp, Splice, Resolume, and Affinity. The company said the integrations are designed to help users with ideation, software guidance, coding, 3D modeling, asset management, and repetitive production tasks across creative workflows. It also unveiled Claude Design, an Anthropic Labs product for prototyping software experiences with export support starting with Canva. Separately, Anthropic said Blender’s MCP-based connector is now officially available and confirmed, in an update dated May 1, 2026, that its support for Blender will take the form of a one-time donation; the company is also partnering with art and design schools including RISD, Ringling, and Goldsmiths to test the tools in education.
Google Grants Pentagon Classified AI Access After Anthropic Refuses Unrestricted Defense Department Terms
Google has granted the U.S. Department of Defense access to its AI on classified networks under terms that reportedly allow all lawful uses, expanding the Pentagon’s options after Anthropic refused similar conditions. Anthropic had sought guardrails to block uses such as domestic mass surveillance and autonomous weapons, and is now suing the DoD after being labeled a “supply-chain risk,” a designation a judge has temporarily blocked. OpenAI and xAI also moved quickly to sign DoD deals after Anthropic’s refusal. Google’s contract reportedly says it does not intend for its AI to be used for domestic mass surveillance or autonomous weapons, but reports say it is unclear whether those limits are legally enforceable.
Google Begins Gemini Rollout to Millions of Cars With Google Built-In Across the U.S.
Google said it is starting to roll out its Gemini AI assistant to cars with Google built-in in the U.S., beginning with English support and expanding over the coming months. The upgrade replaces the current Google Assistant experience with more natural voice conversations, letting drivers ask complex questions, get route-based recommendations, control car functions, send hands-free replies, and access vehicle information. The move comes a day after General Motors said Gemini will reach about 4 million 2022-and-newer vehicles across Cadillac, Chevrolet, Buick, and GMC, though Google did not tie the broader rollout to any single automaker. Google also said compatible existing vehicles, not just new models, will get Gemini through software updates, with deeper connections to services like Gmail, Calendar, and Google Home planned later.
Gemini Now Lets Users Generate and Export PDFs, Word, Excel and Workspace Files
Google has rolled out a new Gemini app feature that lets users generate downloadable files directly from a chat prompt, removing the need to copy, paste and reformat content across apps. The tool can create Google Docs, Sheets and Slides, along with PDF, Microsoft Word (.docx), Excel (.xlsx), CSV, LaTeX, TXT, RTF and Markdown files. For most formats, files can be downloaded to a device or exported to Google Drive from within Gemini. The feature is now available globally to all Gemini app users, allowing them to turn ideas, drafts and summaries into ready-to-share documents more quickly.
Amazon Adds AI-Powered Audio Q&A Feature to Product Pages for Real-Time Shopping Assistance
Amazon has launched “Join the chat,” a new AI-powered feature in the Amazon Shopping app that lets customers ask product questions and receive real-time conversational audio answers on select product pages. The tool is part of its broader “Hear the highlights” experience, which provides short audio summaries using information drawn from product details, customer reviews, and other relevant insights. Amazon said the feature is designed to save shoppers time by reducing the need to read long descriptions and reviews, while allowing follow-up questions by text or voice. “Hear the highlights” began testing last May and is currently available in the U.S. on millions of product pages, though audio summaries are still limited to select products.
Microsoft Reports Over 20 Million Paid Copilot Users as Enterprise Adoption and Engagement Rise
Microsoft said its Microsoft 365 Copilot business has grown to more than 20 million paid enterprise seats, pushing back on claims that the AI assistant lacks real adoption. During its latest earnings call, the company said usage is rising, with queries per user up nearly 20% from the previous quarter and weekly engagement now matching Outlook levels. It also highlighted large customer deals, including Accenture’s 740,000-seat rollout and deployments of more than 90,000 seats at Bayer, Johnson & Johnson, Mercedes, and Roche. Microsoft added that Copilot is not tied to a single AI model and now supports multiple models and newly available agent features that can handle multistep tasks inside Word, Excel, and PowerPoint.
Microsoft Expands Copilot in Outlook With Agentic Inbox and Calendar Management Features
Microsoft said Copilot in Outlook is gaining new agentic capabilities that let it actively manage email and calendar work instead of only helping with single tasks like drafting messages or summarizing threads. The update allows Copilot to prioritize inbox items, identify unanswered emails, draft follow-ups, create inbox rules, and help users catch up after time away. On the calendar side, it can respond to invites, reschedule conflicting 1:1 meetings, rebook rooms, block focus time, and help draft meeting agendas or adjust schedules. These features will begin rolling out through Microsoft’s Frontier program on April 27, with inbox tools available across Outlook endpoints and calendar tools initially available on Outlook for Windows and the web.
Report Says OpenAI Is Developing an iPhone Rival Phone for a Potential 2028 Launch
OpenAI is reportedly planning its first smartphone, marking a shift from earlier indications that it was not building a phone. A new report says the device is targeting mass production in 2028, with work underway on smartphone processors alongside manufacturing and design partners. The phone is expected to be shaped around AI agents, suggesting a user experience that could differ significantly from today’s iPhone. The move adds to OpenAI’s broader hardware push, which has also been linked to other AI-focused devices such as a smart speaker, smart glasses, and a smart lamp.
🎓 AI Academia
Study Finds Generative AI Search Results Diverge Sharply From Google Organic Rankings and Sources
A SIGIR 2026 study found that Google’s AI Overviews appeared for 51.5% of 11,500 real-world queries and often showed up above standard search results, including for many controversial questions. The research said Google Search, AI Overviews, and Gemini Flash 2.5 pulled notably different source sets, with very low overlap, suggesting generative search is changing which websites users see. Traditional Google Search was more likely to surface popular government and education sites, while AI-driven results were more likely to cite Google-owned properties. The paper also said sites that block Google’s AI crawler were less likely to appear in AI Overviews, and that AI Overviews were less consistent across repeated runs and small query changes, raising questions about reliability, publisher visibility, and web traffic.
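The study's "very low overlap" finding is the kind of claim commonly quantified with a set-similarity measure such as the Jaccard index. A toy illustration (the domain names below are invented, and this is a generic measure, not necessarily the paper's exact metric):

```python
# Hypothetical illustration: quantifying how much two search surfaces'
# cited source sets overlap, using the Jaccard index. Domains are invented.

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 1.0 means identical sets, 0.0 means disjoint."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

google_organic = {"example.gov", "example.edu", "news-site.com"}
ai_overview = {"youtube.com", "blog-a.com", "news-site.com"}

overlap = jaccard(google_organic, ai_overview)
print(f"Jaccard overlap: {overlap:.2f}")
```

Values near zero across many queries would support the paper's conclusion that generative search surfaces a largely different slice of the web than organic rankings.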
Australian Survey Finds Consumers Prefer AI-Generated Health Summaries Despite Ongoing Trust and Safety Concerns
A new Australia-based mixed-methods survey of 275 people found that consumers are broadly open to AI in digital health, rating it as useful and easy to use, but still showing notable concern about accuracy, safety, and how their data is handled. The study also tested reactions to an AI-generated consultation summary against one written by a clinician. Participants largely preferred the AI version for quality, empathy, and overall usefulness, while most could not reliably tell which summary was written by AI. The findings suggest that public acceptance of healthcare AI may depend less on the technology alone and more on clear communication, strong oversight, and visible human supervision in clinical use.
Researchers Detail TRUST, a Decentralized AI Auditing Framework for Reliable and Private Multi-Agent Reasoning
A new arXiv paper describes TRUST, a decentralized framework for auditing AI systems in high-stakes settings such as large reasoning models and multi-agent systems. The study argues that centralized auditing can be fragile, hard to scale, opaque, and risky for privacy, and proposes a blockchain-linked design that breaks reasoning into smaller parts for parallel checking while tracing failures across interacting agents. In reported tests, TRUST reached 72.4% accuracy, outperforming baseline methods by 4% to 18%, stayed resilient with 20% corruption, and its attribution method achieved 70% root-cause accuracy with 60% token savings. The paper also says its consensus system can tolerate up to 30% adversarial participants and points to potential uses in decentralized auditing, tamper-proof AI leaderboards, trustless data annotation, and governance for autonomous agents.
Agentic AI Rewrites Enterprise Software Buy-or-Build Economics, but SaaS Disruption Remains Limited
A new preprint argues that agentic AI is changing the economics of the classic buy-or-build decision in enterprise software, but not enough to wipe out SaaS as some “SaaSocalypse” predictions suggest. The paper says AI coding agents can sharply lower the cost and speed up in-house software creation, especially for simple internal tools and custom apps that help companies stand out. But it also finds that regulated, high-risk, and mission-critical systems are still more likely to be bought than built because compliance, reliability, and governance remain hard problems. Its core conclusion is that AI is not simply reviving traditional in-house development; it is creating a hybrid model where companies may own the code but still depend heavily on outside AI infrastructure.
Agentic AI Reshapes Software Development Lifecycle With New Architecture, Evidence, and Engineering Impact
A new preprint argues that software engineering is moving from AI-assisted code completion to “agentic” systems that can plan, test, debug, and make changes across an entire codebase under human supervision. The paper says performance on the SWE-bench Verified benchmark rose from 1.96% in October 2023 to 78.4% by April 2026, while controlled studies cited in the review found developer time savings ranging from 13.6% to 55.8%. It also outlines a six-layer architecture for these systems and describes how they could reshape the software development lifecycle, shifting engineers toward oversight, review, and coordination. At the same time, the study warns that evaluation, governance, technical debt, changing skill demands, and the economics of attention remain major unresolved issues as agentic AI spreads through software work.
Study Identifies Open Problems in Frontier AI Risk Management Across Safety and Governance
A new research paper argues that current AI risk management methods are not well suited to frontier AI systems, which are general-purpose models capable of handling a wide range of tasks. The paper says these systems not only intensify known safety risks but also create new ones, while rapid technical change has made it hard to build stable scientific consensus. It reviews open problems across the full risk management process, including planning, identifying, analyzing, evaluating, and reducing risks. Rather than offering fixes, the report maps where the biggest gaps remain and points to the groups that may need to address them, including AI companies, regulators, standards bodies, researchers, and independent evaluators.
V.O.I.C.E. Risk Taxonomy Maps Synthetic Voice Generation Threats Using Large-Scale Empirical Incident Data
A new research paper proposes V.O.I.C.E., a risk taxonomy for synthetic voice generation that maps how harms from AI voice cloning emerge across privacy, security, identity, and governance. The framework is based on a large empirical review of 569 incidents from major databases and complaint sources, 1,067 direct reports from U.S. participants, and 2,221 Reddit discussions. The study says current threat models are too broad and fail to capture how risk changes based on factors such as a person’s public exposure, social visibility, and access to legal protections. It also highlights that groups with widely available voice recordings, including public figures, influencers, and voice actors, may face greater danger from fraud, impersonation, misuse of voice data, and non-consensual synthetic content.
Study Finds AI-Generated Text Accounted for 35% of New Websites by Mid-2025
A new study examining websites archived between 2022 and 2025 found that by mid-2025, about 35% of newly published websites were classified as containing AI-generated or AI-assisted text, up from effectively zero before ChatGPT’s late-2022 debut. The researchers said this rise was linked to lower semantic diversity and a greater prevalence of positive sentiment online. However, the study did not find statistically significant evidence that more AI text reduced factual accuracy or stylistic diversity, despite widespread public belief that it does. A related user survey found most U.S. adults believed AI text harms all four measured areas (semantic diversity, sentiment, factual accuracy, and stylistic diversity), with skepticism strongest among infrequent AI users and people with more negative views of the technology.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.



