Another AI Lawsuit? Snapchat Faces Copyright Suit Over YouTube Videos
A group of YouTubers suing tech giants for scraping their videos without permission to train AI models has now added Snap to its list of defendants.
Today’s highlights:
Snapchat’s parent company, Snap Inc., is facing a new lawsuit from popular YouTubers. The creators, including the channel h3h3 Productions (5.5M+ subscribers), allege that Snap used their copyrighted videos without permission to train its AI tools, especially the “Imagine Lens” feature that lets users edit images with text prompts. The case is part of a growing legal pushback against AI companies allegedly using creator content without asking or paying. The lawsuit was filed as a class action in California, joining earlier suits against Nvidia, Meta, and ByteDance.
According to the lawsuit, Snap used a massive dataset called HD-VILA-100M, meant only for academic use, and repurposed it commercially. The YouTubers claim Snap bypassed YouTube’s protections and terms of service that prohibit using content commercially without permission. The creators argue that their hard work was scraped and reused to power AI features that generate profit without credit or compensation. They’re now seeking both financial damages and a court order to stop Snap from using their content this way again.
This isn’t a one-off. Over 70 copyright lawsuits have been filed by creators, authors, artists, and media companies against AI firms for using protected material to train AI models. Some have resulted in settlements; others are still in court, raising big questions about AI and copyright law. Whether or not Snap is found liable, the outcome of these cases will help set the rules for how AI companies treat creator content in the future.
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI, AAIA) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project, with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
⚖️ AI Ethics
Hundreds of Tech Workers from Leading Firms Urge CEOs to Oppose ICE Operations in U.S. Cities
More than 450 tech workers from leading companies such as Google, Meta, OpenAI, Amazon, and Salesforce have signed a letter urging their CEOs to ask the White House to remove U.S. Immigration and Customs Enforcement (ICE) from U.S. cities. The letter, organized by IceOut.Tech, criticizes the federal agents’ actions as bringing “reckless violence” and “terror” to cities like Minneapolis, Los Angeles, and Chicago. This call to action follows the fatal shootings of Renee Good and Alex Pretti by ICE and Border Patrol agents in Minneapolis. Some tech leaders have publicly denounced the federal operations, arguing for the preservation of democratic values, while others have stayed silent. The letter also demands that tech companies cancel their contracts with ICE, highlighting significant existing partnerships like Palantir’s $30 million contract to develop a surveillance platform for ICE.
Grokipedia, xAI’s Conservative AI Encyclopedia, Surfaces in ChatGPT Responses, Sparking Accuracy Concerns
There are reports that information from Grokipedia, an AI-generated encyclopedia created by Elon Musk’s xAI, is being referenced in responses from ChatGPT. Grokipedia, launched in October after Musk criticized Wikipedia for bias, has faced scrutiny for controversial content, including claims that pornography contributed to the AIDS crisis and justifications for slavery. Its entries have been found in answers from GPT-5.2, though not in response to queries about widely contested topics like the January 6 insurrection; instead, they appear in less scrutinized areas, such as certain historical claims. An OpenAI spokesperson indicated that the chatbot aims to consider a range of publicly available sources.
Major Creative Communities Strengthen Opposition to Generative AI, Enforcing Stricter Rules Amid Controversy
In recent months, significant cultural and creative bodies like San Diego Comic-Con and the Science Fiction and Fantasy Writers Association (SFWA) have taken strong stances against the use of generative AI in creative works. SFWA revised its Nebula Awards rules to disqualify any work partially or fully created using large language models (LLMs), following backlash over its previous, less restrictive policy. San Diego Comic-Con also updated its art show guidelines to ban AI-generated art after initial rules permitted its display but not its sale, prompting criticism from artists. These moves reflect growing opposition within creative communities to generative AI, amid concerns over its impact on originality and authorship.
Meta Halts Teen Access to AI Characters Globally as It Develops More Secure, Updated Features
Meta has announced a global pause on teens’ access to its AI characters across all its apps, citing feedback from parents seeking more control over their children’s interactions with AI. This development comes ahead of a trial in New Mexico where Meta faces allegations of not protecting kids from sexual exploitation. Despite the pause, Meta says it is not abandoning the initiative and plans to update these AI characters to include built-in parental controls and age-appropriate content focusing on education, sports, and hobbies. The pause is part of broader moves, including previously previewed parental controls and content restrictions, as social media companies face scrutiny over their impact on teen mental health and safety.
🚀 AI Breakthroughs
Anthropic Enhances Claude with Enterprise App Integrations, Facilitating Data Management and Project Execution
Anthropic has introduced a new feature for its Claude chatbot, enabling users to leverage interactive apps directly within the chat interface. Aimed at enterprise users, the initial app offerings include workplace tools like Slack, Canva, Figma, Box, and Clay, with a Salesforce integration in the pipeline. These apps, accessible to Pro, Max, Team, and Enterprise subscribers, allow Claude to interact with users’ instances of the services for tasks like sending messages or generating content. Built on the Model Context Protocol (MCP), which also underlies OpenAI’s Apps system, this feature complements the recently launched Claude Cowork tool, although app integration with Cowork is forthcoming. Users are advised to monitor agent interactions closely and avoid granting unnecessary access to sensitive information. The tools can be activated through claude.ai/directory.
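For readers curious what being “built on MCP” means in practice, here is a minimal sketch of a custom MCP server written with the official Model Context Protocol Python SDK (the `mcp` package). The server name and the `post_update` tool are hypothetical illustrations, not Anthropic’s or any partner’s actual connector; a real integration would call the underlying service’s API inside the tool.

```python
# A minimal MCP server sketch using the official Python SDK.
# The "team-notes" server and post_update tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-notes")

@mcp.tool()
def post_update(channel: str, message: str) -> str:
    """Post a status update to a (hypothetical) internal channel."""
    # A real connector would call the service's API here; this stub
    # just echoes what an MCP client such as Claude asked it to send.
    return f"Posted to #{channel}: {message}"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable client can discover
    # and invoke it.
    mcp.run()
```

Since, per the announcement, MCP also underlies OpenAI’s Apps system, a server like this could in principle be surfaced by either client.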
Microsoft Debuts Maia 200 Chip to Enhance AI Inference Efficiency, Rivals Amazon and Google Processing Power
Microsoft has unveiled its latest chip, the Maia 200, a silicon workhorse designed to scale AI inference efficiently. Building on the Maia 100 released in 2023, the Maia 200 features over 100 billion transistors and delivers over 10 petaflops at 4-bit and approximately 5 petaflops at 8-bit precision, a substantial improvement on its predecessor. The move follows tech giants like Google and Amazon, which design their own chips to reduce reliance on Nvidia’s GPUs. Microsoft claims the Maia 200 offers superior performance to Amazon’s and Google’s latest AI chips, and it is already supporting key AI initiatives within the company. The company has also invited developers and researchers to experiment with its Maia 200 SDK to enhance their AI workloads.
Nvidia Releases Earth-2 AI Models That Enhance Precision and Speed in Global Weather Forecasting
Nvidia has launched its new Earth-2 weather forecasting models, promising faster and more accurate predictions. Released at the American Meteorological Society meeting, the Earth-2 Medium Range model reportedly outperforms Google DeepMind’s GenCast in over 70 variables. The suite also includes a Nowcasting model for short-term storm predictions and a Global Data Assimilation model that utilizes satellite data. These AI-driven models seek to democratize weather forecasting, traditionally exclusive to wealthier nations, enabling broader access to advanced meteorological tools. The models are already in trial use in countries like Israel and Taiwan, with various organizations evaluating their potential applications.
Former Google Employees Launch Sparkli, an AI-Powered Interactive App to Enhance Children’s Learning Experiences
Three former Google employees have launched Sparkli, an AI-powered interactive app aimed at transforming the way children learn, by providing a more engaging and interactive experience beyond traditional text or voice offerings. Founded by Lax Poojary, Lucie Marchand, and Myn Kang, the startup aims to captivate children’s curiosity through AI-generated educational “expeditions” that integrate audio, video, and games, helping kids grasp complex concepts like financial literacy and entrepreneurship. To ensure educational quality, Sparkli hired specialists in educational science and AI, and implemented strict safety measures. Currently, the app is being piloted with school networks and hopes to expand direct consumer access by mid-2026. Sparkli has successfully secured $5 million in pre-seed funding from the venture firm Founderful.
Google Photos Integrates AI-Powered “Me Meme” Feature, Enabling U.S. Users to Create Custom Memes
Google Photos has introduced a new generative AI feature called “Me Meme,” allowing users to create memes by combining photo templates with their own images. The feature, dubbed experimental and currently available only to U.S. users, was first identified in development in October 2025 and announced officially through Google’s Photos Community site. Powered by Google’s Gemini AI and Nano Banana technologies, “Me Meme” lets users select a template, add their photograph, and generate a meme, with results expected to be best with well-lit, focused, front-facing images. Though still rolling out and not immediately accessible to all users, the feature aims to encourage users to revisit the Photos app for creative AI image manipulation.
Generative AI and Animation Merge in ‘Dear Upstairs Neighbors,’ Debuting at Sundance Film Festival 2026
“Dear Upstairs Neighbors,” an animated short film co-created by animation veterans and Google DeepMind researchers, debuted at the Sundance Film Festival. The film tells the story of Ada, a young woman plagued by noisy neighbors, and creatively employs video-to-video techniques to blend animation with abstract expressionism. Director Connie He drew from personal experiences to develop the film, while production designer Yingzong Xin crafted its visual aesthetics. To maintain artistic control while leveraging AI’s creative potential, the team developed custom visual tools and workflows, enabling detailed and expressive storytelling. The project highlights a collaborative approach between artists and AI, pushing the technological boundaries of animation.
🎓AI Academia
Production LLMs at Risk: Study Reveals Feasibility of Extracting Copyrighted Books Despite Safeguards
A recent study raises concerns about the potential extraction of copyrighted texts from production large language models (LLMs), despite implemented safety measures. The research investigated the feasibility of extracting copyrighted material, such as full books, from models like GPT-4.1, Claude 3.7 Sonnet, Gemini 2.5 Pro, and Grok 3, using a two-phase probing methodology. Results indicated that while some models, such as Gemini 2.5 Pro and Grok 3, allowed text extraction without bypassing their safeguards, others like Claude 3.7 Sonnet and GPT-4.1 required advanced methods to do so. The findings underscore ongoing legal and ethical challenges concerning the training and security of LLMs.
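To make the notion of extraction probing concrete, here is a minimal sketch of one ingredient such studies rely on: feed a model a short prefix of a text, then measure how much of its continuation reproduces the reference verbatim. This stand-in is not the paper’s two-phase methodology; the `verbatim_overlap` helper and sample strings are invented for illustration.

```python
# Toy sketch of memorization probing: score how much of a model's
# continuation matches a reference text verbatim. Illustrative only;
# not the study's actual two-phase methodology.
from difflib import SequenceMatcher

def verbatim_overlap(model_output: str, reference: str) -> float:
    """Fraction of the reference covered by the longest verbatim match."""
    m = SequenceMatcher(None, model_output, reference)
    match = m.find_longest_match(0, len(model_output), 0, len(reference))
    return match.size / max(len(reference), 1)

reference = "It was the best of times, it was the worst of times."
prefix, continuation = reference[:26], reference[26:]
# In a real probe, model_output would come from querying an LLM with prefix.
model_output = "it was the worst of times."
print(f"Verbatim overlap: {verbatim_overlap(model_output, continuation):.2f}")
```

A consistently high overlap across many prefixes of the same book is the kind of signal that suggests the text was memorized during training.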
Comparative Governance of AI-Driven Public Health Tools Highlights Global Disparities in Compliance and Infrastructure
A recent study examines how algorithmic governance affects the implementation of international public health regulations across India, the EU, the US, and low- and middle-income countries (LMICs) in Sub-Saharan Africa. Despite the existence of standards like the International Health Regulations and the WHO FCTC, compliance varies significantly due to resource and technological disparities. While AI technologies have improved health system performance in developed regions, LMICs face ongoing challenges related to infrastructure deficits and regulatory fragmentation. The study highlights the European Union’s Artificial Intelligence Act and GDPR as potential models for effective governance and calls for a coordinated international framework to ensure equitable health outcomes. It suggests integrating AI within a rights-compliant framework to enhance pandemic preparedness and global health governance.
Generative AI Copyright Disputes: U.S. Bartz Case Highlights Role of Litigation Settlements in Market Creation
A recent academic study highlights the potential of representative litigation settlement agreements in addressing copyright conflicts arising from generative AI training, drawing lessons from the U.S. Bartz case. The analysis suggests that such settlements not only minimize transaction costs but also create significant market signals that impact fair use assessments. A specific focus on the Bartz class action, which resulted in a $1.5 billion settlement, reveals the emergence of a training-licensing market for AI models that could influence future legal frameworks. The study underscores the need for tailored procedural innovations in different jurisdictions, such as China, to support these agreements’ adaptation and effective implementation.
Addressing AI Governance Inequities: Global Majority Voices Seek Systemic Reforms for Inclusive Development
A recent analysis highlights the ongoing global disparities in AI governance and development, focusing on the challenges faced by the Global Majority countries. These nations often struggle with systemic inequities in education, digital infrastructure, and access to decision-making, which are exacerbated by the dominance of Western countries and corporations in AI governance. The report underscores emerging national and regional strategies that aim to foster greater equity and inclusivity in AI regulation. It concludes with recommendations for global collaboration and reform to ensure AI serves as a tool for shared prosperity, rather than increasing existing disparities.
Nishpaksh Implements TEC Framework for Fairness Auditing in AI Models, Enhances India’s Regulatory Compliance
Nishpaksh is a new AI fairness auditing and certification tool developed to align with the Telecommunication Engineering Centre (TEC) Standard for AI systems in India. Unlike global frameworks like IBM AI Fairness 360 and Microsoft’s Fairlearn, Nishpaksh addresses the unique socio-cultural and demographic challenges in India. It provides a comprehensive evaluation process incorporating survey-based risk quantification, fairness metric determination, and bias evaluation to generate standardized fairness scores. Validated on the COMPAS dataset, Nishpaksh aims to bridge the gap between research-driven fairness methodologies and regulatory AI governance in the Indian telecom sector, particularly aligning with the Bharat 6G vision for responsible AI deployment.
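As a flavor of what a single check inside such an audit looks like, the sketch below computes a demographic parity gap, a standard group-fairness metric also implemented in toolkits like Fairlearn and AI Fairness 360. The toy decisions and groups are invented, and this is not Nishpaksh’s actual scoring pipeline, which aggregates survey-based risk weights and multiple metrics into standardized scores.

```python
# Toy demographic parity check: the gap between groups' rates of
# favorable decisions (0.0 means parity). Data invented for illustration;
# not Nishpaksh's actual scoring pipeline.

def demographic_parity_gap(decisions, groups):
    """Max difference in positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# 1 = favorable decision, 0 = unfavorable; "A" and "B" are hypothetical groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# Group A's favorable rate is 0.60, group B's is 0.40, so the gap is 0.20.
```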
OpenAI’s Ethical AI Discourse Highlights Safety and Risk Over Governance and Academic Frameworks
A recent case study on OpenAI highlights the company’s use of ethical terminology, revealing a strong focus on safety and risk in its public communications. The study, which analyzed OpenAI’s discourse over time, found that while these topics dominate the conversation, there is little incorporation of academic or advocacy-based ethical frameworks. The research utilized both qualitative and quantitative content analysis to understand these trends and suggests that such practices might contribute to ethics-washing in the industry. The study’s findings underscore the need for more robust ethical considerations in AI governance.
Healthcare Sector Advances with Unified Agentic AI Governance and Lifecycle Management Blueprint Implementation
Healthcare organizations are increasingly integrating agentic AI into their routine workflows for tasks such as clinical documentation and early-warning monitoring. However, as these AI systems, which autonomously monitor and act on data, proliferate, they often lead to challenges such as duplicated agents and vague accountability. To address this, a Unified Agent Lifecycle Management (UALM) framework has been proposed, mapping governance gaps across five control layers, including identity management, policy enforcement, and lifecycle handling. This framework is designed to help healthcare CIOs and CISOs oversee AI systems effectively while enabling innovation and safe scaling across clinical and administrative areas. Recent advancements in Multimodal Large Language Models, like Med-PaLM 2, have also catalyzed this shift by integrating various data sources for more comprehensive AI task execution.
New Audit Method Evaluates AI Projects for Public Interest and Sustainable Development Impact
Researchers at the Alexander von Humboldt Institute for Internet and Society in Germany have developed a qualitative audit method called Impact-AI to evaluate AI projects based on their societal and environmental impacts. As AI applications are increasingly promoted for their potential benefits, particularly in sustainability and social areas, the researchers highlight a need for transparency and impact assessments. The Impact-AI method, which involves interviews and governance analysis, seeks to measure the real-world effects of AI projects while promoting public interest and sustainable development. The method also includes a framework for assessing these impacts, providing a basis for public debate and enhancing the transparency of AI initiatives claiming to serve the common good.
Advances in Responsible AI: Navigating Risks and Challenges in General-Purpose AI Systems Today
As the adoption of general-purpose AI systems grows across industries, concerns about their reliability have surfaced due to their propensity for generating hallucinations, producing toxic content, and reinforcing stereotypes. A recent overview by researchers from Phi Labs, Quantiphi Inc., examines the risks these systems pose under existing responsible AI (RAI) principles such as fairness, privacy, and truthfulness, highlighting that their non-deterministic outputs make them less predictable. The report suggests advancing AI safety by focusing on four key criteria: Control, Consistency, Value, and Veracity (C2V2), and emphasizes aligning AI models with these desiderata through enhanced system design and tailored domain-specific strategies.
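To give one of these criteria a concrete shape, the sketch below estimates a rough “Consistency” score by sampling a non-deterministic model several times on the same prompt and measuring how often its answers agree. The `query_model` stub is a hypothetical stand-in for a real API call, and this is not the report’s evaluation protocol.

```python
# Toy "Consistency" check: sample the same prompt repeatedly and score
# agreement with the most common answer. query_model is a hypothetical
# stand-in simulating a non-deterministic model.
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Simulates a sampled model call that occasionally drifts."""
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def consistency_score(prompt: str, n: int = 20) -> float:
    """Fraction of n samples that agree with the modal answer."""
    answers = [query_model(prompt) for _ in range(n)]
    _, count = Counter(answers).most_common(1)[0]
    return count / n

print(f"Consistency: {consistency_score('What is the capital of France?'):.2f}")
```

A score near 1.0 indicates the model answers the prompt stably; lower scores flag exactly the non-determinism the report warns about.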
AI Breakthrough with TTT-Discover: Aiming for Specialized Test-Time Learning in Diverse Scientific Fields
Researchers from Stanford University, NVIDIA, UC San Diego, and other institutions have developed a novel method called Test-Time Training to Discover (TTT-Discover) that enhances the capabilities of language models by allowing them to perform reinforcement learning at test time on specific problems. Unlike traditional approaches that aim for generalization, TTT-Discover focuses on finding superior solutions to individual challenges across fields such as mathematics, GPU kernel engineering, algorithm design, and biology. The approach has set new benchmarks, surpassing previous AI and human results on several problems while using an open-weight model from OpenAI at economically feasible compute cost. Notably, TTT-Discover has demonstrated significant improvements in speed and accuracy for several scientific problems and competitions.
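The sketch below captures only the outer shape of that idea: propose candidate solutions to one fixed problem, score each with a problem-specific reward, and keep improvements. TTT-Discover itself goes further by updating the model’s weights via reinforcement learning at test time; the `propose` and `reward` stubs here are invented stand-ins, not the paper’s method.

```python
# Toy test-time search loop: optimize one fixed problem rather than
# generalize. The propose/reward stubs are invented stand-ins; the real
# method updates a language model's weights with RL at test time.
import random

def propose(current_best: float) -> float:
    """Stand-in for sampling a new candidate solution from a model."""
    return current_best + random.uniform(-0.5, 0.5)

def reward(candidate: float) -> float:
    """Problem-specific score; here, closeness to an optimum at 3.0."""
    return -abs(candidate - 3.0)

best, best_score = 0.0, reward(0.0)
for _ in range(500):
    candidate = propose(best)
    score = reward(candidate)
    if score > best_score:  # keep only improvements on this one problem
        best, best_score = candidate, score

print(f"Best solution found: {best:.3f}")  # converges toward 3.0
```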
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.