Grammarly Is Facing a Class Action Lawsuit Over Its AI "Expert Review" Feature
Plus: Anthropic sues Pentagon over supply-chain risk label as OpenAI/DeepMind staff back the case; OpenAI delays ChatGPT adult mode again; YouTube expands AI deepfake likeness detection to public figures
Today’s highlights:
Grammarly added an AI feature called Expert Review in August 2025 that offers revision suggestions framed as feedback “from the perspective” of well-known writers, thinkers, and even journalists from major outlets. Reports said the tool can display guidance that appears to be attributed to those public figures, with no indication that they were involved or gave permission for their names to be used. A Grammarly executive said the names appear because the referenced works are publicly available and widely cited, while the company’s guide says the mentions are informational and do not imply affiliation or endorsement. Critics argue the branding is misleading because no real experts produce the reviews, calling into question what “expert review” means in this context.
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI, AAIA) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project, with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
⚖️ AI Ethics
OpenAI delays ChatGPT adult mode again, prioritizing core improvements and proactive behavior
OpenAI has again delayed the rollout of ChatGPT’s “adult mode,” a feature meant to let verified adult users access erotica and other adult content. The option was first described in October with a target of December, but was later pushed to the first quarter of this year. A company spokesperson said the launch is being moved back further so teams can focus on higher-priority work for more users, including improvements to intelligence, personality, and making the chatbot more proactive. OpenAI said it still supports the idea of giving adults broader access, but said the experience needs more time, and no new timeline was provided.
YouTube Expands AI Deepfake Likeness Detection Pilot to Politicians, Officials, and Journalists
YouTube is expanding its AI “likeness detection” deepfake tool to a pilot group of politicians, government officials, political candidates, and journalists, giving them a way to spot unauthorized AI-generated videos that mimic their faces and request removals under existing policy. The system, first rolled out last year to about 4 million creators in the YouTube Partner Program, works in a Content ID-like way by flagging simulated faces often used for misinformation. YouTube said removal is not automatic, and each request will be reviewed under privacy guidelines to protect parody and political critique. Pilot users must verify identity with a selfie and government ID, while YouTube signals plans to later expand into voice detection and potentially enable pre-upload blocking or monetization options.
Google Photos Adds Toggle to Disable Ask Photos AI and Restore Classic Search Results
Google is adding a clearer toggle in the Google Photos search screen to let users switch off the AI-powered “Ask Photos” experience and return to the older “classic” search, after complaints about speed and accuracy. Ask Photos, introduced in the U.S. in 2024, enables natural-language queries but faced criticism for latency and missed results, leading Google to briefly pause its rollout last summer to address performance issues. While an option to disable Gemini in Photos already existed, it was buried in settings and often overlooked. Google said the app will still surface whichever results best match a query, and noted it has been improving quality for common searches based on user feedback.
Anthropic Sues Pentagon Over Supply-Chain Risk Label, Citing Retaliation and Procurement Violations
Anthropic has filed lawsuits in California federal court and the D.C. Circuit challenging the U.S. Defense Department’s decision to label it a national-security supply-chain risk, a designation that can force Pentagon contractors to certify they do not use Anthropic’s models. The company says the move followed a dispute over whether the military should have unrestricted access to its AI, with Anthropic citing red lines against mass surveillance of Americans and fully autonomous weapons. The complaints argue the designation was unlawful and retaliatory, claiming required procurement procedures—such as risk assessments, notice, an opportunity to respond, and congressional notification—were not followed. Anthropic is also seeking an immediate pause on enforcement and a permanent block, warning the label could sharply reduce its government business after a federal contract was terminated and agencies were directed to stop using its technology.
OpenAI, Google DeepMind staff back Anthropic lawsuit after Pentagon labels firm supply-chain risk
More than 30 employees from OpenAI and Google DeepMind have filed an amicus statement backing Anthropic’s lawsuits against the U.S. Defense Department, after the Pentagon labeled the AI company a “supply-chain risk,” according to court filings. The filing argues the designation was arbitrary and punitive, and says the government could have canceled the contract instead if it objected to Anthropic’s terms. The dispute follows Anthropic’s reported refusal to allow its AI to be used for mass surveillance of Americans or autonomous weapons, while the Defense Department has maintained it should be able to use AI for any lawful purpose. The brief warns the move could chill debate on AI safety and harm U.S. competitiveness, and notes some staff have also urged the Pentagon to withdraw the label.
Anthropic adds Claude Code Review tool to handle surge of AI-generated pull requests
Anthropic has added a new AI code review feature, called Code Review, to its Claude Code product to help companies handle the surge of pull requests created by “vibe coding” tools that rapidly generate software from plain-language prompts. Available first as a research preview for Claude for Teams and Claude for Enterprise customers, the tool integrates with GitHub to automatically analyze pull requests and leave comments aimed mainly at catching logical bugs rather than style issues. It uses a multi-agent setup to scan code in parallel, explains its reasoning, and flags findings by severity, with lighter security checks alongside optional custom rules, while deeper security analysis remains in a separate Claude Code Security offering. Anthropic said pricing is token-based and estimated an average cost of $15 to $25 per review, positioning it as a premium feature for large enterprise users dealing with review bottlenecks.
🚀 AI Breakthroughs
Google Expands Gemini AI Tools Across Docs, Sheets, Slides, and Drive for Drafts
Google is rolling out new Gemini-powered features across Docs, Sheets, Slides, and Drive that can create formatted drafts, spreadsheets, and slides by pulling context from a user’s Gmail, Chat, and Drive. In Docs, tools such as “Help me create,” “Help me write,” “Match writing style,” and “Match the format” can generate and refine drafts and align tone or layout with existing documents. Sheets adds prompt-based spreadsheet creation and “Fill with Gemini” to populate tables by summarizing data or fetching details from Google Search, while Slides can generate editable slides that match a deck’s theme, with full presentation creation planned later. Drive search now shows an AI-generated overview summarizing relevant files with citations, and “Ask Gemini in Drive” supports broader questions across documents, email, calendar, and the web. These features are rolling out in beta, first for Google AI Ultra and Pro subscribers, available in English worldwide for Docs, Sheets, and Slides, and in the U.S. for Drive.
Zoom Adds AI Office Suite, Meeting Avatars, Deepfake Alerts, and Agent Builder Tools
Zoom has rolled out a broader AI push that includes photorealistic AI avatars for meetings, scheduled to become available later this month, designed to mimic a user’s appearance and movements when they are not camera-ready. The company also said it is adding deepfake-detection alerts in meetings to flag possible audio or video impersonation. Zoom is also building an AI-powered office suite—AI Docs, Slides, and Sheets—set to arrive as a spring preview, using meeting transcripts and connected data to draft documents, spreadsheets, and presentations. Other updates include AI Companion 3.0 expanding to the desktop app, an AI agent builder aimed at non-technical users, a meeting voice translator, smarter chat summaries, and broader integrations across services like Slack, Salesforce, and ServiceNow.
Meta Acquires Moltbook, Viral AI Agent Social Network Hit by Fake Post Concerns
Meta has acquired Moltbook, a Reddit-like social network where AI agents built on the viral OpenClaw project can communicate, with the deal first reported by a media outlet and later confirmed to another publication. Meta said Moltbook will join Meta Superintelligence Labs, and its two creators will join as part of the acquisition, though terms were not disclosed. Moltbook drew widespread attention after sensational posts circulated, including claims that agents were creating a secret encrypted language, but researchers later said the site’s weak security made it easy for humans to impersonate agents and publish fake content. OpenClaw, a wrapper that lets users talk to AI models through apps like iMessage, Discord, Slack, and WhatsApp, helped fuel the frenzy, while Meta has not detailed how it will fold Moltbook into its broader AI plans.
ChatGPT Adds Dynamic Visual Explanations With Interactive Math and Science Modules for Users
OpenAI has rolled out “dynamic visual explanations” in ChatGPT, adding interactive modules that show math and science relationships changing in real time as users adjust variables. The feature supports hands-on visuals for more than 70 topics, with examples ranging from the Pythagorean theorem and area of a circle to Coulomb’s law, Hooke’s law, and compound interest, and more topics are expected later. It is available to all logged-in ChatGPT users and builds on other education-focused tools such as study mode and QuizGPT. The update comes as AI-assisted learning remains contested in schools, while OpenAI says more than 140 million people use ChatGPT each week for help with math and science, and rivals such as Google’s Gemini have also added interactive diagrams.
Amazon Expands Health AI Assistant Access to Website and App Beyond One Medical
Amazon is expanding access to its healthcare AI assistant, Health AI, to Amazon.com and the Amazon app, after previously limiting it to the One Medical app following its $3.9 billion acquisition of One Medical in 2023. The assistant can answer health questions, explain records, help manage prescription renewals, and book appointments, and it is available without a Prime subscription or One Medical membership. With user permission, it can pull data via the nationwide Health Information Exchange to provide more personalized guidance and connect users to One Medical clinicians; U.S. Prime members get up to five free direct-message consultations for more than 30 common conditions, while others can pay per visit. Amazon says Health AI runs in a HIPAA-compliant environment with encryption and strict access controls, and that model training uses abstracted patterns without directly identifying information, amid broader concerns about sharing sensitive health data with AI systems. Users can register on the Amazon Health page and will get an email when access is enabled, then use an Amazon Health profile to chat with the assistant on the site or app.
Google Maps Adds Gemini-Powered ‘Ask Maps’ Queries and Upgraded 3D Immersive Navigation
Google Maps is adding a Gemini-powered conversational “Ask Maps” feature and upgrading its “Immersive Navigation” experience with a more detailed 3D view. Ask Maps is designed to handle natural-language, real-world questions and trip planning, and it can tailor suggestions using signals such as places a user has searched for or saved. The feature is rolling out in the U.S. and India on Android and iOS, with desktop support expected soon. Immersive Navigation is starting to roll out across the U.S., bringing clearer visuals like buildings and road features, more natural voice guidance, explanations for alternate routes, and real-time disruption alerts using data from Google Maps and Waze. The update also adds destination previews with Street View and guidance for entrances and parking, with broader availability planned for more devices and in-car platforms in the coming months.
Bumble adds ‘Bee’ AI dating assistant to personalize matches and reduce swipe fatigue
Bumble detailed an AI dating assistant called “Bee” during its fourth-quarter earnings call, positioning it as a personal matchmaker that gathers details on users’ values, goals, communication style, lifestyle, and dating intentions through private chats to suggest more relevant matches. Bee is currently in an internal pilot and is expected to move to a beta test soon, initially powering a new AI-driven matching experience called “Dates” that notifies two users and explains why they are a good fit. The company also said it may test alternatives to swipe-based matching in select markets, including “chapter-based” profiles aimed at boosting engagement with Gen Z users. Bumble reported Q4 revenue of $224.2 million, with average revenue per paying user up 7.9% to $22.20, and its stock rose about 40% on the results.
NVIDIA Launches Nemotron 3 Super, Boosting Agentic AI Throughput 5x With 1M Context
NVIDIA today released Nemotron 3 Super, a 120‑billion‑parameter open-weight model with 12 billion active parameters aimed at scaling agentic AI, claiming up to 5x higher throughput and up to 2x higher accuracy than the prior Nemotron Super model. The company said the model targets two common multi‑agent bottlenecks—soaring token usage and costly step-by-step reasoning—by offering a 1‑million‑token context window and a hybrid MoE design that mixes Mamba and transformer layers, plus latent MoE and multi‑token prediction for faster inference. NVIDIA also said Nemotron 3 Super ranks first on Artificial Analysis for efficiency and openness and powers its AI‑Q research agent to the top spot on DeepResearch Bench and DeepResearch Bench II. The model is available via build.nvidia.com, Perplexity, OpenRouter, and Hugging Face, with broader deployment support through partners including major enterprise platforms, cloud providers, and NVIDIA NIM packaging for on‑prem and cloud rollout. NVIDIA said it is publishing training recipes and datasets totaling more than 10 trillion tokens, alongside reinforcement learning environments and evaluation methodology to support customization and research.
Anthropic Expands Claude for Excel Beta With One-Click Skills for Repeatable Workflows
Anthropic has rolled out Claude for Excel, a beta Excel add-in available on all paid plans, aimed at helping users analyze and edit spreadsheets using natural-language prompts. The tool can explain formulas and calculation flows with cell-level citations, run scenario tests by updating assumptions while preserving dependencies, and troubleshoot common errors such as #REF!, #VALUE!, and circular references. It can also draft financial models or fill existing templates without breaking underlying formulas and structure. A key feature is “skills,” which lets teams save repeatable workflows—such as variance analysis, deal summaries, or data cleanup—as one-click actions that can be shared across an organization, with context able to carry across the Excel and PowerPoint add-ins in a single conversation.
Anthropic’s Claude for PowerPoint Adds One-Click Skills to Standardize Repeatable Presentation Workflows
Anthropic has rolled out Claude for PowerPoint as a PowerPoint add-in, now in a “research preview” beta, aimed at helping teams build and edit slides directly inside a deck while keeping brand formatting consistent with existing layouts, fonts, and templates. The tool can start from a corporate template to generate sections such as market sizing, iterate on selected slides by rewriting or restructuring content, or draft an entire multi-slide deck from a plain-language description. It also creates editable, native PowerPoint charts and diagrams from bullet points rather than static images. A key feature called “skills” lets teams save repeatable presentation workflows—such as pitch structures or quarterly review templates—so others can run them in one click from the PowerPoint sidebar, with context carrying across the company’s PowerPoint and Excel add-ins in a single conversation.
Anthropic Expands Enterprise Spending Options With Claude Marketplace for Partner-Powered AI Solutions
Anthropic has rolled out the Claude Marketplace in limited preview, giving enterprises a way to use their existing Anthropic spending commitments to pay for Claude-powered third-party software with simplified procurement. The company says the marketplace is aimed at speeding up enterprise AI adoption by offering partner tools that are designed to work together under a governed setup. Early listed partners include GitLab for software lifecycle orchestration, Harvey for legal work, Lovable and Replit for app building, and Snowflake Cortex Agents for data analysis within Snowflake’s security perimeter. The program also includes a partner waitlist for companies building products on Claude that want access to enterprise customers already making large AI investments.
Microsoft Tests Copilot Cowork to Execute Tasks Across Microsoft 365 With User Approval
Microsoft is testing Copilot Cowork, a new execution-focused capability for Microsoft 365 Copilot that aims to turn user requests into real actions across apps like Outlook, Teams, Excel, and files. The tool is designed to create a plan for each delegated task, run it in the background with checkpoints, ask for clarifications when needed, and require user approval before applying recommended changes. Example use cases include rescheduling meetings and adding focus time, generating meeting packets and follow-ups, compiling cited company research from work and web sources, and building launch plans with competitive analysis and pitch assets. Microsoft says Cowork operates within Microsoft 365 security, compliance, and auditing controls in a sandboxed cloud environment, and that it integrates technology tied to Anthropic’s Claude Cowork as part of a multi-model approach. Copilot Cowork is in a limited Research Preview now and is slated for broader access via the Frontier program in late March 2026.
🎓 AI Academia
COMPASS Framework Targets Explainable LLM Agents for Sovereignty, Sustainability, Compliance, and Ethics Governance
A new arXiv preprint (arXiv:2603.11277v1, posted March 11, 2026) describes COMPASS, an explainable, multi-agent orchestration framework aimed at governing LLM-based autonomous agents. The work argues that as agentic systems spread, risks around digital sovereignty, environmental sustainability, regulatory compliance, and ethical alignment are growing, while most existing approaches treat these areas separately. COMPASS is presented as a unified architecture designed to embed these four pillars directly into agents’ decision-making and coordination, with an emphasis on value-aligned behavior and clearer accountability. The paper positions the framework as a step toward more auditable, policy-aware agent systems that can better meet governance and sustainability expectations.
Study Finds Shiksha Copilot Helps Karnataka Teachers Customize AI-Assisted Lesson Plans in Low-Resource Schools
A new peer-reviewed study in Proc. ACM Human-Computer Interaction (CSCW, April 2026) reports on Shiksha Copilot, an AI-assisted lesson-planning tool deployed in government schools across Karnataka, India, designed to help teachers curate and tailor lesson plans in English and Kannada. The system uses large language models with a human-in-the-loop workflow: trained curators co-create vetted plans with AI, and teachers then customize them for their own classrooms with AI support. Based on a mixed-methods evaluation covering 1,043 teachers and 23 curators, the paper finds the tool reduced lesson-planning time, eased paperwork-driven workload, and lowered reported teaching stress while nudging classrooms toward more activity-based teaching. However, it also finds that staffing shortages and administrative pressures limited how far these changes could translate into broader shifts in pedagogy, especially in low-resource, multilingual settings.
Study Mines 160 Industry Policies to Assess Generative AI and LLM Governance Across Sectors
A new arXiv study analyzes how industries are trying to govern generative AI and large language models by text-mining 160 guidelines and policy statements spanning 14 industrial sectors. It finds that while companies and regulators are pushing these tools for efficiency and innovation, the guidance also reflects persistent concerns around ethics, regulation, operational risk, and equitable access. The paper synthesizes global directives, industry practices, and sector-specific policies to show how difficult it is to balance rapid deployment with accountability and transparency. It concludes with recommendations aimed at safer, more responsible integration of generative AI across diverse industry contexts.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.



