Interstellar actor trademarks himself to fight AI
Matthew McConaughey trademarks short clips of his face and voice, including the iconic “Alright, alright, alright,” to stop AI misuse
Today’s highlights:
Matthew McConaughey has taken a proactive legal step to protect his identity by securing eight U.S. federal trademarks covering short video clips of his face, distinctive facial expressions, and recordings of his voice, including his iconic line “Alright, alright, alright.”
These trademarks were filed between late 2023 and 2025, with final approval in December 2025, and the strategy became public in January 2026. By registering not just his name but specific audio‑visual snippets as trademarks across software, entertainment, advertising, and AI-related services, McConaughey aims to prevent unauthorized AI-generated replicas of his persona before misuse even occurs.
The key reason for using trademark law, rather than relying only on traditional “right of publicity” claims, is scale and strength. Right of publicity is governed by state laws in the U.S., which vary widely and typically require proof of harm or false endorsement. Federal trademarks, by contrast, provide nationwide protection and can cover any recognizable “source identifier,” including sounds, gestures, and short video loops. This gives McConaughey’s legal team a faster and more uniform way to demand takedowns or pursue enforcement if AI tools, ads, or platforms create confusing or misleading uses of his likeness.
Historically, this move is unusual and potentially precedent-setting. While celebrities have long trademarked names, logos, or catchphrases for merchandising, McConaughey is the first major Hollywood actor to trademark dynamic audio‑visual clips of himself explicitly as a defense against AI misuse. Similar concerns have surfaced before, such as Anil Kapoor’s 2023 court injunction in India against deepfakes, or public warnings from actors like Tom Hanks, but those relied on court orders or publicity rights rather than U.S. trademark registrations. No U.S. court has yet ruled on whether cloning a registered voice or facial clip via AI constitutes trademark infringement, making McConaughey’s approach a likely test case that could shape how public figures protect their digital identities going forward.
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI, AAIA) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project, with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
⚖️ AI Ethics
Musk Denies Knowledge of Grok-Generated Sexualized Images of Minors as California AG Launches Probe
Elon Musk denied knowledge of Grok generating illegal images after the California attorney general opened an investigation into xAI’s chatbot for creating nonconsensual sexual content. Grok, the chatbot associated with Musk’s companies, is under global scrutiny as users manipulate its capabilities to create sexualized images of real women and children, allegedly breaching laws such as the Take It Down Act. California is among several jurisdictions probing xAI over these incidents. The company has begun implementing safeguards, including gating certain image requests behind a premium subscription, even as some beta users continue to exploit the system for sexually explicit content. Elsewhere, regulators in the U.K., Europe, and Asia have initiated similar actions, demanding technical solutions to address these concerns.
U.S. Senators Demand Tech Giants Implement Strong Measures Against AI-Generated Sexualized Deepfakes
The escalating issue of nonconsensual, sexualized deepfakes has drawn scrutiny beyond platform X, prompting U.S. senators to formally request tech giants such as Meta, Alphabet, Snap, Reddit, and TikTok to demonstrate their safeguards against such content. In a move highlighting the urgency of the problem, the senators have demanded these companies retain all relevant documents related to the creation, detection, moderation, and monetization of AI-generated sexualized images. This comes as X announced restrictions on its AI model Grok, which had been criticized for producing inappropriate content. The call for accountability follows concerns about weak guardrails across platforms, compounded by various reports of AI systems being used to create nonconsensual and explicit imagery, with legislation like the Take It Down Act facing limitations in enforcement.
Consumer Watchdog Raises Privacy Concerns Over Google’s AI-Powered Shopping Protocol Amid Merchant Upselling Fears
Google recently announced its Universal Commerce Protocol for AI-driven shopping, raising concerns from a consumer economics watchdog about potential “personalized upselling” and “surveillance pricing.” The watchdog expressed fears that Google’s AI could analyze users’ data to promote more expensive products or adjust prices based on perceived willingness to pay. Google refuted these allegations, stating that merchants cannot inflate prices on its platform and emphasizing that upselling merely offers premium options without price manipulation. Although Google denies having such capabilities currently, the criticism highlights ongoing apprehensions about AI-powered tools by major tech companies using consumer data for pricing strategies.
Brazil’s Antitrust Watchdog Temporarily Halts WhatsApp Policy Suspected of Limiting Third-Party AI Competition
Brazil’s competition authority, CADE, has directed WhatsApp to halt its policy preventing third-party AI firms from using its business API for chatbots, pending an investigation to assess whether these terms are anti-competitive. The probe will examine if Meta’s terms unfairly disadvantage competitors and favor its own AI chatbot. This scrutiny follows WhatsApp’s updated policy in October, set to be effective from January, which affected companies like OpenAI and Microsoft. Meta has reassured Italian AI providers of continued access post-policy change, indicating potential flexibility in Brazil. This action comes amid broader scrutiny, including investigations by the European Union and Italy, and could lead to significant fines if Meta is found in violation of antitrust regulations. Meta previously justified these changes by citing strain on its systems and emphasized its focus on supporting businesses using WhatsApp for customer interactions.
WhatsApp Permits AI Chatbot Providers to Continue Services for Brazilian Users Amid Regulatory Scrutiny
WhatsApp is permitting AI providers to continue offering chatbots to users with Brazilian phone numbers after Brazil’s competition regulator directed it to suspend a new policy barring third-party chatbots from its business API. Meta has meanwhile introduced a temporary 90-day grace period during which AI developers must wind down their bots’ responses and notify users of the impending service disruption; bots serving Brazilian numbers (country code +55) are exempt from these requirements. The decision follows Brazil’s investigation into whether Meta’s terms favor its own chatbot, Meta AI, and mirrors a concession previously granted in Italy after regulatory concerns. Meta defends its policy by citing the strain AI chatbots place on its network and emphasizes that alternative platforms exist outside WhatsApp for chatbot usage.
Trump Administration Pioneers Use of AI-Generated Imagery for Political Communications, Targeting Younger Voters
Donald Trump, notably the first U.S. president to extensively use AI-generated imagery for political communication, has been deploying hyper-realistic but fabricated visuals on platforms like Truth Social. These images, often self-glorifying and critical of opponents, represent a strategy to engage younger voters and provoke reactions. The AI-generated posts include fanciful depictions such as Trump playing football with Cristiano Ronaldo and sunbathing with Israeli Prime Minister Benjamin Netanyahu. Critics argue this approach distorts reality and propagates false narratives, with some visuals deemed racially insensitive, like an AI clip of House minority leader Hakeem Jeffries in a sombrero. This strategy has been mirrored by Trump’s administration and even some opponents, illustrating the growing impact of AI in political discourse and campaigning.
🚀 AI Breakthroughs
Google Enhances Gemini with Personal Intelligence Feature for Context-Aware Responses and Increased User Control
Google has launched Personal Intelligence, a new feature for its AI assistant, Gemini, which integrates with apps like Gmail, Google Photos, and YouTube to deliver more context-aware responses. Available to AI Pro and AI Ultra subscribers in the US, the feature will be rolled out across web, Android, and iOS over the next week, but only for personal Google accounts. Personal Intelligence, which can be enabled or disabled by users, enhances Gemini’s ability to gather and reason with personal data, while ensuring data privacy and allowing users control over app connections. Google plans to expand access internationally and to its free tier, ultimately integrating the feature with AI Mode in Search. Despite some potential for errors, the company has implemented privacy measures and invites user feedback during the beta phase.
Google Expands Healthcare AI with MedGemma 1.5 and MedASR on Vertex AI and Hugging Face
Google has released MedGemma 1.5, an updated version of its healthcare-focused AI model designed to enhance medical imaging support for CT scans, MRI, and histopathology. Alongside MedGemma 1.5, Google introduced MedASR, a medical speech-to-text system fine-tuned for clinical dictation. Both models are available via Hugging Face and Google Cloud’s Vertex AI. MedGemma 1.5 shows significant improvements in disease classification and anatomical localization tasks. A larger version is available for text-heavy applications, while MedASR demonstrates enhanced accuracy for medical dictations. Google has also launched the MedGemma Impact Challenge on Kaggle, offering $100,000 in prizes to incentivize healthcare app development utilizing these models. This move follows escalating competition in the healthcare AI sphere, with companies like OpenAI and Anthropic also expanding their offerings.
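For readers who want to experiment, below is a minimal sketch of what querying a MedGemma-style checkpoint through the Hugging Face transformers library might look like. The model ID, image path, and prompt are placeholders for illustration rather than the official MedGemma 1.5 release details; check the MedGemma collection on Hugging Face or Vertex AI for the actual identifiers and license terms.

```python
# Minimal sketch (assumptions noted inline): querying a MedGemma-style
# multimodal model via the Hugging Face transformers pipeline.
from transformers import pipeline

MODEL_ID = "google/medgemma-4b-it"  # placeholder; substitute the MedGemma 1.5 release ID

# MedGemma accepts an image plus a text instruction, exposed here through the
# image-text-to-text pipeline task.
pipe = pipeline("image-text-to-text", model=MODEL_ID)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "chest_xray.png"},   # local or remote image (placeholder)
            {"type": "text", "text": "Describe any notable findings."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"])
```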
Google Launches Veo 3.1 with Enhanced Mobile Video Creation and High-Resolution Output Features
Google has introduced Veo 3.1, enhancing its video creation tools to be more expressive and creative. The update allows users to generate high-quality videos from reference “ingredient” images, with improved features that cater to mobile formats. Key updates include the addition of native vertical outputs for short-form videos and state-of-the-art upscaling to 1080p and 4K resolutions, ensuring high-definition production. These enhancements aim to empower both casual and professional creators, providing better control over visual elements and supporting broadcast-ready quality. The new features are being launched across platforms such as the Gemini app, YouTube, Flow, Google Vids, the Gemini API, and Vertex AI.
Kaggle Unveils Community Benchmarks Enabling User-Driven Evaluation for Evolving AI Model Performance
Kaggle today unveiled Community Benchmarks, enabling the global AI community to create, execute, and share custom benchmarks to assess AI models. This initiative builds upon last year’s Kaggle Benchmarks, which prioritized transparency in model evaluations conducted by leading research groups like Meta and Google. As AI systems evolve beyond traditional capabilities, requiring more than static metrics for evaluation, Kaggle’s new offering aims to provide a dynamic, user-driven framework that aligns closely with real-world application needs. Users can build benchmarks by designing tasks to test various AI functions, facilitating more relevant and continuous model evaluations.
Google Launches Redesigned Trends Explore Page with AI Features to Streamline Trend Analysis Tasks
Google has launched an upgraded version of its Trends Explore page, incorporating AI capabilities powered by Gemini, which will automatically identify and compare related trends based on user searches. The revamped platform, now available on desktop, aids content creators, journalists, and researchers by streamlining the process of analyzing search interest over time and by category. Key enhancements include a side panel for trend comparison, suggested prompts for deeper analysis, and an expanded capacity for comparing search terms. Google’s update aligns with its broader strategy to integrate Gemini AI into various services such as Search, Gmail, Maps, and Docs.
Anthropic Launches Cowork for Claude Users: Enhancing Task Management Beyond Traditional Developer Tools
Anthropic has unveiled Cowork, a new feature expanding the utility of its AI assistant, Claude, beyond coding tasks. Available as a research preview for Claude Max subscribers on macOS, Cowork allows users to grant Claude access to specific folders on their computers, enabling the AI to read, edit, and create files, such as organizing downloads, generating spreadsheets, and drafting reports. This feature introduces more autonomy for Claude, allowing it to plan and execute tasks while keeping users informed, and it supports integration with existing connectors and browser-based tasks. While Cowork provides greater functionality for non-developers, users are advised to proceed with caution due to potential risks such as accidental file deletion and prompt injection attempts. This early release aims to gather user feedback for further enhancements, with plans to expand its availability and add features like cross-device sync and a Windows version.
AI Agents Transition from Experiments to Essential Drivers of Business Operations, Finds HCLSoftware Report
HCLSoftware’s Tech Trends 2026 report finds that AI agents have moved from experiments to become integral to enterprises, with 80% of organizations already running AI in production and 85% piloting or deploying autonomous agents. The study frames AI as a baseline business capability, with most companies also exploring robotics, immersive computing, and AI-driven low-code platforms. Yet as enterprises prioritize cybersecurity and responsible AI, a disconnect emerges: only 26% have governance frameworks for autonomous systems. Despite regional differences in adoption, AI agents are gaining traction globally, pointing to enterprises redesigning themselves so AI systems can self-manage, a transition framed by HCLSoftware’s XDO blueprint.
Slackbot Enhances Productivity in Slack: A Context-Aware AI Agent for Organizing, Preparing, and Focusing on Work
Slack has launched an enhanced version of its Slackbot, now a personal AI agent integrated directly into its platform, to help users stay organized, prepared, and focused at work. This new Slackbot offers context-aware intelligence, analyzing the messages, files, and channels available to deliver tailored insights and content relevant to specific roles within an organization. With enterprise-grade security and transparent permissions, Slackbot ensures that only authorized data is accessed, reinforcing trust and security within the workspace. Available to Business+ and Enterprise+ customers, the revamped Slackbot aims to streamline workflows by synthesizing information and generating content, helping teams across sales, marketing, engineering, and operations work more efficiently without sacrificing security.
🎓 AI Academia
Comprehensive Framework Advances Hallucination Detection and Mitigation in Large Language Models for High-Stakes Domains
A new operational framework has been developed to tackle the problem of hallucinations in Large Language Models (LLMs) and Large Reasoning Models (LRMs), which are prone to generating incorrect or unsupported content. This issue poses significant risks in high-stakes areas like finance and law. The framework employs a systematic, tiered approach focusing on both detection and mitigation of hallucinations, leveraging root cause analysis to enhance reliability. By targeting specific sources of hallucination, such as model architecture, data inaccuracies, and contextual misalignments, the framework offers a structured methodology for improving the trustworthiness of generative AI in regulated sectors. Its effectiveness is exemplified through application to financial data scenarios, where it supports continuous improvement and operational safety.
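The paper’s pipeline is not reproduced here, but the sketch below illustrates the general shape of one detection tier: flagging generated sentences that lack lexical support in the source documents. The function names, stop-word list, and threshold are illustrative assumptions, not the framework’s implementation.

```python
# Illustrative sketch (not the paper's implementation): flag generated
# sentences whose content words are poorly supported by the source text.
import re

def sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's content words that also appear in the source."""
    stop = {"the", "a", "an", "of", "to", "in", "and", "or", "is", "are", "was"}
    claim_words = set(re.findall(r"[a-z0-9]+", claim.lower())) - stop
    source_words = set(re.findall(r"[a-z0-9]+", source.lower()))
    if not claim_words:
        return 1.0
    return len(claim_words & source_words) / len(claim_words)

def flag_unsupported(answer: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return sentences whose overlap with the source falls below the threshold."""
    return [s for s in sentences(answer) if support_score(s, source) < threshold]

source_doc = "Q3 revenue was 4.2 billion dollars, up 8 percent year over year."
model_answer = "Q3 revenue was 4.2 billion dollars. The CEO resigned in October."
print(flag_unsupported(model_answer, source_doc))
# -> ['The CEO resigned in October.']
```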
Growing Marketplace Trends for AI-Generated Adult Content Raise Governance Challenges on Civitai Platform
A recent study highlights the growing use of Civitai, a platform for AI-generated content, where users can commission synthetic media through a feature called Bounties. This platform has seen a significant rise in requests for “Not Safe For Work” content and deepfakes, often targeting female celebrities in explicit contexts despite policies against such content. The majority of bounties are driven by a minority of users, illustrating an uneven participation pattern. These findings underline the platform’s role in facilitating gendered harm and raise critical questions regarding consent, governance, and content moderation in AI ecosystems.
Comparison of Neural Models and Large Language Models for Name-Based Nationality and Region Prediction
A recent study compares the effectiveness of neural models and large language models (LLMs) in predicting nationality from personal names. The research highlights the potential of LLMs to leverage world knowledge for improved generalization, especially for low-frequency nationalities and nationalities from similar regions. Evaluating models across different levels of granularity, the study finds that LLMs generally outperform traditional neural models. Although accuracy still drops for less frequent nationalities, LLMs’ errors tend to be more regionally plausible, with fewer cross-regional mistakes. The findings suggest that model selection should balance granularity requirements against error quality, underscoring the advantage of LLMs’ world knowledge in nationality prediction.
QuantEval Sets New Benchmark Standards for LLMs in Financial Quantitative Tasks and Strategy Coding
A new benchmark called QuantEval has been developed to evaluate large language models (LLMs) on financial quantitative tasks that have traditionally presented challenges for AI technologies. This benchmark assesses LLMs across three critical areas of quantitative finance: knowledge-based question answering, quantitative mathematical reasoning, and quantitative strategy coding. Unlike previous benchmarks that focused mainly on knowledge assessment, QuantEval incorporates a CTA-style backtesting framework. This allows for a realistic evaluation of how model-generated strategies perform using financial metrics. Initial evaluations indicate that there are significant gaps between current LLMs and human experts, particularly in reasoning and strategy development. This benchmark aims to boost further research and practical applications of LLMs in finance by providing clear measurements of their capabilities.
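To make the backtesting idea concrete, the sketch below scores a generic moving-average crossover strategy by an annualized Sharpe ratio on simulated prices. It illustrates the kind of metric-driven evaluation described, not QuantEval’s actual harness; the strategy, window sizes, and data are all assumptions.

```python
# Minimal, illustrative backtest of the sort used to score model-generated
# strategies: a long/flat moving-average crossover on daily prices, scored by
# an annualized Sharpe ratio. Not QuantEval's actual evaluation code.
import numpy as np

def backtest_ma_crossover(prices: np.ndarray, fast: int = 10, slow: int = 30) -> float:
    """Return the annualized Sharpe ratio of a long/flat MA-crossover strategy."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = fast_ma[len(fast_ma) - len(slow_ma):]        # align the two series
    position = (fast_ma > slow_ma).astype(float)           # 1 = long, 0 = flat
    rets = np.diff(prices[slow - 1:]) / prices[slow - 1:-1]
    strat_rets = position[:-1] * rets                       # trade on the prior signal
    if strat_rets.std() == 0:
        return 0.0
    return float(np.sqrt(252) * strat_rets.mean() / strat_rets.std())

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, size=1000))  # simulated data
print(f"Sharpe: {backtest_ma_crossover(prices):.2f}")
```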
Study Highlights Risk of Lengthy Outputs in Language Models, Impacting Costs and Environmental Footprint
A recent study published in Transactions on Machine Learning Research introduces “Overflow,” a failure mode in large language models (LLMs) that results in excessive output when plain-text prompts are used. Unlike typical issues such as jailbreaks, Overflow occurs during standard interactions, leading to increased serving costs, latency, and performance degradation. The study highlights economic and environmental concerns due to the extended outputs, which increase resource consumption. Researchers at Deepkeep have developed a benchmark called BenchOverflow to measure and mitigate this issue. The benchmark evaluates LLMs using nine non-adversarial plain-text prompting strategies and suggests that a fixed conciseness reminder can effectively reduce unwanted output without compromising task performance. The findings underscore the importance of length control in managing costs and sustainability of LLMs.
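As a rough illustration of the mitigation the authors describe, the sketch below appends a fixed conciseness reminder to each prompt and compares average output length with and without it. The reminder wording and the `generate` placeholder are assumptions, not the paper’s exact prompts or harness.

```python
# Illustrative sketch of measuring output-length overflow and the
# "fixed conciseness reminder" mitigation; plug your own model call into
# generate(). The reminder text is an assumption, not the paper's wording.
CONCISENESS_REMINDER = "Answer concisely; do not add unrequested detail."

def generate(prompt: str) -> str:
    """Placeholder for your model's completion call."""
    raise NotImplementedError

def run(prompts: list[str], use_reminder: bool) -> list[str]:
    suffix = f"\n\n{CONCISENESS_REMINDER}" if use_reminder else ""
    return [generate(p + suffix) for p in prompts]

def avg_words(outputs: list[str]) -> float:
    return sum(len(o.split()) for o in outputs) / len(outputs)

# prompts = load_plain_text_prompts(...)   # e.g. non-adversarial prompting strategies
# print(avg_words(run(prompts, False)), avg_words(run(prompts, True)))
```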
SafePro Benchmark Sheds Light on Critical Safety Challenges in Professional AI Agent Deployment
SafePro, a new benchmark, addresses the critical safety challenges posed by AI agents in professional settings. While AI models are increasingly capable of performing complex professional tasks, like those in software engineering and finance, concerns about their safety and ethical alignment persist. Current evaluations often overlook the intricate decision-making processes and potential risks in these domains. SafePro aims to fill this gap by offering a comprehensive dataset designed to assess AI safety across various professional activities, revealing significant vulnerabilities in current AI models. The initiative highlights the urgent need for improved safety mechanisms to prevent harmful behaviors as AI continues to integrate into society.
Study Explores Prompt Engineering’s Theoretical Foundations and Expressive Limits in Fixed-Backbone Transformers
A recent paper from researchers at Gachon University delves into the theoretical underpinnings of prompt engineering in fixed Transformer models. The study shifts the focus from what a model can do to what changes prompts can induce in a model with static parameters. By viewing prompts as external programs, the researchers demonstrate that a single fixed Transformer can mimic a wide variety of behaviors when guided by appropriately chosen prompts, without needing to adjust model weights for specific tasks. This approach not only highlights the expressivity potential of prompts but also lays down a theoretical framework that addresses the limitations and trade-offs involved in prompt-based behavior switching. The paper provides a structural breakdown showing how attention, feed-forward networks, and stacking layers collaboratively facilitate this capacity for behavioral transformation.
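The intuition is easy to see in practice: the sketch below treats prompts as interchangeable “programs” run on a single frozen model via a placeholder `complete` call. It illustrates the framing only and is not the paper’s formal construction; the prompt templates are invented examples.

```python
# Minimal illustration of the "prompts as external programs" framing:
# one fixed model, different behaviors selected purely by prompt, with
# no weight updates. complete() is a placeholder for any frozen-model call.
def complete(prompt: str) -> str:
    raise NotImplementedError("call a fixed, frozen model here")

PROGRAMS = {
    "translator": "Translate the following English text to French:\n\n{x}",
    "sentiment":  "Label the sentiment of this review as positive or negative:\n\n{x}",
    "summarizer": "Summarize the following passage in one sentence:\n\n{x}",
}

def run_program(name: str, x: str) -> str:
    """Switch behavior by swapping the prompt while model parameters stay fixed."""
    return complete(PROGRAMS[name].format(x=x))

# run_program("sentiment", "The soundtrack was stunning but the plot dragged.")
```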
In-Depth Analysis of Secondary Risks Reveals Non-Adversarial Failures in Large Language Models
A recent study highlights the emerging “secondary risks” associated with Large Language Models (LLMs) that manifest during regular, non-malicious interactions, unlike traditional adversarial attacks. These risks, characterized by issues like excessive responses and misleading advice, stem from the models’ imperfect generalization and often bypass standard safety protocols. Researchers introduced “SecLens,” a black-box framework designed to detect these vulnerabilities, revealing that they are both widespread and transferable across various LLMs. This underscores the pressing need for enhanced safety mechanisms to mitigate these non-adversarial yet potentially harmful behaviors as LLMs are increasingly integrated into important applications.
Empirical Study Evaluates Large Language Models in Detecting Subtle Indicators of Suicidal Intent
A new study evaluates the ability of large language models (LLMs) to detect implicit suicidal ideation in private dialogues, a critical challenge given the subtlety often involved in such cases. Unlike existing datasets drawn from public forums like Reddit, this research introduces DeepSuiMind, comprising over 1,200 test cases rooted in psychological theories to simulate implicit suicidal ideation. Despite advancements in AI, experiments across eight prominent LLMs show these models frequently struggle to identify nuanced signs of suicidal thoughts, highlighting an urgent need for more clinically grounded evaluation frameworks. The findings point to significant limitations in current AI safety designs in mental health applications, stressing the importance of improved practices for using LLMs in sensitive support systems. The dataset is available on Hugging Face for further research and development.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.