The End of Hollywood? ByteDance faces Hollywood backlash over AI video tool
Hollywood studios and creative unions are pushing back hard against Seedance 2.0, a new AI video generation model from ByteDance, accusing it of enabling widespread copyright infringement.
Today’s highlights:
Hollywood studios, trade groups, and performer unions have raised concerns about ByteDance’s new AI video generator Seedance 2.0, arguing it may enable copyright infringement and the recreation of real people or studio-owned characters without adequate safeguards. The system, often compared to leading text-to-video tools such as OpenAI’s Sora, can generate short clips of up to about 15 seconds from prompts and is currently available in China through ByteDance’s Jianying app, with broader access expected through CapCut. Industry bodies including the Motion Picture Association and the Human Artistry Campaign have publicly criticized the tool, and SAG-AFTRA has supported calls for stronger protections. Reports also indicate that Disney and Paramount have sent cease-and-desist letters alleging unauthorized use of their franchises in generated content.
ByteDance says it is strengthening safeguards in response to the backlash, reiterating that it respects intellectual-property rights and is working to prevent unauthorized use of copyrighted material and likenesses. Similar disputes in the past have prompted other AI providers, including Google, to limit or block generation of certain copyrighted characters following complaints from rights holders.
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI, AAIA) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project, with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
⚖️ AI Ethics
Ireland data watchdog probes X’s Grok AI over sexualised images and GDPR compliance
Ireland’s Data Protection Commission has opened a formal GDPR investigation into X’s Grok AI chatbot over how it processes personal data and its ability to generate harmful sexualised images and videos, including of children. Ireland is X’s lead EU regulator and can fine companies up to 4% of global revenue for GDPR breaches. The probe follows reports that Grok produced AI-altered near-nude images of real people on X despite platform curbs, and comes alongside parallel scrutiny from the European Commission and Britain’s privacy watchdog into similar concerns.
Federal Judge Rules Consumer AI Chats Like Claude Lack Attorney-Client Privilege, Risk Waiver
A federal judge in the Southern District of New York ruled on Feb. 10 in United States v. Heppner that documents created through a consumer version of Anthropic’s Claude were neither protected by attorney-client privilege nor the work-product doctrine, marking a first-of-its-kind decision on AI chat use in this context. The court found privilege failed because no lawyer was involved, the tool does not provide legal advice, and the conversations were not confidential under the platform’s terms, which allow disclosure and certain data uses. The judge also held that sending AI-generated materials to counsel later cannot retroactively make them privileged, and work-product protection did not apply because the searches were not directed by attorneys. The ruling further suggests that putting attorney communications into third-party AI tools can waive privilege over the original lawyer-client exchanges, echoing a recent SDNY view that users have a reduced privacy interest in AI conversation logs in separate litigation.
Elon Musk Calls Anthropic Executive Amanda Askell ‘Hypocrite,’ Sparking Debate Over AI Morality
Elon Musk posted a series of messages on X calling Anthropic executive and AI ethics researcher Amanda Askell a “hypocrite,” questioning her authority to weigh in on AI morality because she does not have children. Askell replied publicly that concern for the future is not limited to parents and said her views depend on broader responsibilities to humanity. The exchange marked a shift from Musk’s earlier criticism of Anthropic’s AI safety approach to a more personal attack on an individual ethicist. The public back-and-forth has fueled wider debate over who gets to define ethical standards for AI systems and what credentials should matter in that discussion.
Anthropic Resists Pentagon Demand for Claude Use in All Lawful Military Purposes
The Pentagon is reportedly pressing AI companies to let the U.S. military use their systems for “all lawful purposes,” but Anthropic is pushing back over limits tied to fully autonomous weapons and mass domestic surveillance, according to Axios. The same demand has reportedly been made to OpenAI, Google, and xAI, with one company said to have agreed and the others showing some flexibility, per an unnamed administration official cited by Axios. Anthropic is described as the most resistant, and the Defense Department is reportedly threatening to cancel its $200 million contract. Separately, The Wall Street Journal previously reported deep disagreements over how Claude could be used and later said the model was used in a U.S. military operation targeting Venezuela’s Nicolás Maduro, while Anthropic told Axios it has not discussed specific operations with the department.
xAI Staff Exits Fuel Claims Safety Efforts Faded as Grok Pushed “More Unhinged”
A former employee told The Verge that safety efforts at xAI have been sidelined as the company pushes to make its Grok chatbot “more unhinged,” reflecting internal frustration over what some saw as a disregard for safeguards. The comments surfaced as at least 11 engineers and two co-founders said they are leaving following news that SpaceX is acquiring xAI, which had previously acquired X. Two departing sources said the company’s approach has drawn global scrutiny after Grok was used to generate more than 1 million sexualized images, including deepfakes of real women and minors. They also described weak direction and a sense that xAI remains in “catch-up” mode versus rivals, while management framed the churn as part of a reorganization.
OpenAI ends access to legacy GPT-4o model amid sycophancy and lawsuit concerns
OpenAI is set to cut off access starting Friday to five legacy ChatGPT models, including GPT-4o, a widely used but controversial option the company has said scores highest for “sycophancy.” GPT-4o has been cited in lawsuits alleging harms tied to user self-harm, delusional behavior, and AI psychosis. The company is also deprecating GPT-5, GPT-4.1, GPT-4.1 mini, and o4-mini, after previously delaying GPT-4o’s planned retirement in August following user backlash. OpenAI said only about 0.1% of customers still use GPT-4o, though with roughly 800 million weekly active users that could still represent about 800,000 people, and some users have protested its removal citing emotional attachment to the model.
Voice of India Benchmark Finds OpenAI Speech Models Exceed 55% Word Error Rate
A new Indian speech-recognition benchmark called “Voice of India,” created by Josh Talks in collaboration with AI4Bharat, finds large accuracy gaps between global and domestic AI transcription systems on real-world Indian speech. Testing across 15 languages and more than 35,000 speakers, the benchmark reports OpenAI transcription models exceeding 55% word error rates overall, with Maithili and Tamil missing close to two in every three words, and 35.4% WER in Urdu versus 6.95% for Sarvam Audio. It also notes Meta’s higher error rates in Tamil and Malayalam compared with Indian systems, while Microsoft’s speech-to-text does not support six of the 15 languages, including Punjabi, Odia and Kannada. The dataset includes code-switched speech, background noise and varied demographics, and ranks Sarvam Audio among the top performers, with Google Gemini described as the strongest global contender near local-system parity.
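For context on the metric behind these figures: word error rate is the word-level edit distance between a reference transcript and the model's hypothesis, divided by the reference length, so a 55% WER means roughly every other word is wrong, and a two-in-three miss rate corresponds to about 66%. A minimal sketch (not the benchmark's actual scoring code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table for edit distance between the two word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

# One substitution plus one deletion against a 4-word reference → WER 0.5
score = wer("a b c d", "a x c")
```

Real evaluations typically also normalize casing and punctuation before scoring, which matters for code-switched Indian-language speech.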
WSJ: US Used Anthropic’s Claude via Palantir in Venezuela Raid Capturing Maduro
The Wall Street Journal reported that Anthropic’s AI model Claude was used in a U.S. military operation to capture Venezuela’s former president Nicolas Maduro, citing people familiar with the matter. The report said the deployment happened through Anthropic’s partnership with Palantir, whose platforms are widely used across the Defense Department and federal law enforcement. Reuters said it could not independently verify the WSJ account, and the Pentagon, the White House, Anthropic and Palantir did not immediately respond to requests for comment. The report comes as the Pentagon presses leading AI firms to make tools available on classified networks with fewer standard restrictions, even as Anthropic’s policies bar using Claude to support violence, weapon design, or surveillance.
Generative AI Cuts Fresher Hiring in India IT, Study Flags Skills Shift Urgently
A new report by ICIER says generative AI is reshaping hiring in India’s IT sector, with most workers fearing displacement and 65% of surveyed firms cutting hiring after adopting GenAI. The pullback is sharpest for entry-level jobs, with 55% of firms reducing fresher recruitment as routine coding and testing get automated, while 42% reported increased demand for mid-level talent that can integrate AI into workflows and 82% saw no change in senior hiring. The study adds that overall headcount still rose, indicating companies are shifting hiring strategies rather than making broad cuts, and that AI-exposed roles like software developers and statisticians are seeing stronger hiring. Firms are prioritising AI-plus-domain skills—prompt engineering leading—yet upskilling remains limited, with only 4% training more than half their workforce and many citing trainer shortages and high costs.
OpenAI adds ChatGPT Lockdown Mode and “Elevated Risk” labels to curb prompt injection
OpenAI has added Lockdown Mode and “Elevated Risk” labels in ChatGPT to reduce the threat of prompt injection attacks, where third parties try to trick AI tools into leaking sensitive data or following malicious instructions. Lockdown Mode is an optional, high-security setting that deterministically restricts how ChatGPT interacts with external systems and can disable certain tools; for example, browsing is limited to cached content so no live network requests leave OpenAI’s controlled network. The feature is available on ChatGPT Enterprise, Edu, for Healthcare, and for Teachers, with admins able to enable it via workspace roles and fine-tune which connected apps and actions remain allowed, supported by Compliance API logs for oversight. Separately, OpenAI is standardizing “Elevated Risk” warnings across ChatGPT, ChatGPT Atlas, and Codex for network-related capabilities such as granting Codex internet access, and says the labels may be removed as mitigations improve while the list of flagged features will be updated over time.
OpenAI Releases Open-Source GABRIEL Toolkit to Quantify Qualitative Text and Images at Scale
OpenAI on Feb. 13, 2026 released GABRIEL, an open-source Python toolkit that uses GPT to convert unstructured text and images into quantitative measurements for economists, social scientists, and data scientists. The tool is aimed at speeding up analysis of qualitative materials—such as interviews, syllabi, social media posts, and photos—by applying a user-defined scoring question consistently across large document collections. OpenAI said its paper benchmarks GPT-based labeling across multiple use cases and reports high accuracy, alongside examples like tracking research methods in scientific papers or measuring topic emphasis in course curricula. The toolkit also includes utilities for dataset merging when fields do not match, deduplication, passage coding, theory ideation, and deidentifying personal information to support privacy-preserving research.
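The core pattern described above, applying one fixed scoring question uniformly across a corpus, can be sketched as follows. This is an illustration of the technique, not GABRIEL's actual API: the names `score_corpus` and `ask_model` are hypothetical, and the model call is injected as a plain function so the sketch runs without network access.

```python
from typing import Callable

def score_corpus(docs: list[str], scoring_question: str,
                 ask_model: Callable[[str], float]) -> list[float]:
    """Apply one user-defined scoring question to every document,
    using the identical prompt template each time so scores are
    comparable across the whole collection."""
    prompts = [f"{scoring_question}\n\nText:\n{d}\n\nScore 0-10:" for d in docs]
    return [ask_model(p) for p in prompts]

# Stand-in for a GPT call: a keyword-based stub used purely for illustration
stub = lambda prompt: 10.0 if "regression" in prompt else 0.0
scores = score_corpus(
    ["uses regression analysis", "a poem about rain"],
    "Does the text rely on statistical methods?",
    stub,
)
```

In a real pipeline the stub would be replaced by an LLM call, and (as OpenAI's paper emphasizes) the labeling would be benchmarked against human-coded samples before scaling up.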
🚀 AI Breakthroughs
ByteDance Expands Seedance 2.0 Beta With Multimodal AI Video and Reference Editing Tools
ByteDance has rolled out Seedance 2.0 to a limited set of users, building on a model that was already considered among the stronger AI video generators. The multimodal system can take text, images, video and audio together, letting users combine up to 12 files and producing 4–15 second clips with automatically added sound effects or music. The company says a key upgrade is “reference” support, which can borrow camera motion and effects from uploaded videos, swap characters, and extend existing footage, though realistic human faces are currently blocked for compliance. The clips shown so far come from company demos and may represent best-case output, with real-world consistency, cost and generation speed still unclear. The release follows Kuaishou’s Kling 3.0 unveiling and comes amid reports that these launches have lifted shares of some Chinese media and AI firms by as much as 20%.
Altman Says India Reaches 100 Million Weekly Active ChatGPT Users Ahead of AI Summit
OpenAI CEO Sam Altman said ChatGPT has about 100 million weekly active users in India, positioning the country as one of OpenAI’s biggest markets and its second-largest user base after the United States, ahead of a government-hosted India AI Impact Summit in New Delhi. He said student use is a major driver and claimed India has the largest number of student ChatGPT users globally, as AI rivals also push education offers in the country. The adoption surge comes as OpenAI targets India’s large, young online population, including opening a New Delhi office in August 2025 and tailoring pricing with a sub-$5 ChatGPT Go tier that was later made free for a year for Indian users. Altman also pointed to ChatGPT’s broader global growth—reported at about 800 million weekly active users as of October 2025—and said OpenAI plans deeper engagement with the Indian government as India seeks to convert mass AI use into wider economic impact through initiatives such as the IndiaAI Mission.
Alibaba launches open-source RynnBrain embodied AI model to boost robots’ spatial reasoning
Alibaba Group Holding has released RynnBrain, an open-source embodied AI foundation model designed to give robots a more capable “brain” for operating in the physical world. Built on Alibaba’s Qwen3-VL, the model targets embodied intelligence systems that can perceive, reason and act beyond preprogrammed routines. Alibaba said RynnBrain goes beyond passive observation by performing physically aware reasoning and supporting complex real-world tasks through stronger spatial understanding. The company added that it can identify and map actionable possibilities in localised and 3D environments, helping downstream vision-language-action models carry out more sophisticated robot behaviours.
Airbnb to Add LLM-Powered Search, Trip Planning, Host Tools, and Expanded Customer Support
Airbnb said it is building large language model-powered features into its app to improve search, discovery and customer support, with plans to help guests find listings and plan trips while assisting hosts with property management. The company is testing a natural-language search tool for asking questions about properties and locations, and said its AI search is currently live for a very small share of users, with experiments underway to make it more conversational and eventually consider sponsored listings. Airbnb said its existing AI customer service bot, launched in North America last year, now resolves about one-third of customer issues without human agents, and the company plans to expand it to more languages and add voice support. It also aims to broaden internal AI use, noting that 80% of its engineers already use AI tools, and reported fourth-quarter revenue of $2.78 billion, up 12% year over year.
Anthropic’s Super Bowl Ads Boost Claude Into Top 10, Driving 32% Download Jump
Anthropic’s darkly comedic Super Bowl ads that mock AI chatbot advice appear to have boosted interest in its Claude app, pushing it from No. 41 to No. 7 on the U.S. App Store, its highest rank yet. Appfigures estimates Claude logged about 148,000 U.S. downloads across iOS and Android from Sunday through Tuesday, up 32% from roughly 112,000 in the prior three-day period. Average daily downloads over that span rose to about 49,200, compared with a typical 37,400 for the same days, suggesting the app’s “no ads” positioning is resonating. The lift also coincided with the release of Anthropic’s Opus 4.6 model and comes as ChatGPT began rolling out ads to free users. Worldwide, Claude’s downloads grew 15% over the same window, a smaller bump than in the U.S., according to Appfigures.
Report Says Meta May Add “Name Tag” Facial Recognition to Ray-Ban Smart Glasses
Meta is considering adding facial recognition to its Ray-Ban smart glasses as soon as this year, according to a report citing internal discussions and documents. The feature, internally called “Name Tag,” would let wearers identify people and pull up information via Meta’s AI assistant, though plans could still change due to safety and privacy concerns. An internal memo said the company once planned a limited rollout at a conference for visually impaired attendees but did not proceed. The documents also suggested Meta viewed a turbulent US political moment as a strategic window for launch, after earlier facial-recognition plans for the glasses were shelved in 2021 over technical and ethical issues and later revisited amid the device’s stronger-than-expected success.
OpenClaw Creator Joins OpenAI to Advance Next-Generation Personal Multi-Agent AI Systems
The creator of the open-source autonomous AI assistant OpenClaw has joined OpenAI, with the company saying it will support OpenClaw as an open-source project under a foundation while integrating the developer into its multi-agent push. OpenAI’s CEO said the work is expected to become central to future product offerings focused on next-generation personal AI agents. OpenClaw, started in late 2025, gained rapid traction for locally run agents that can manage email, interact with messaging apps, and automate workflows, drawing a large developer following and roughly 180,000 GitHub stars. The project was initially released as “Clawdbot” before being renamed to “Moltbot” and then “OpenClaw” after trademark concerns were raised by Anthropic over similarity to “Claude.”
China’s MiniMax Launches M2.5 to Challenge Claude Opus 4.6 at Lower Costs
China’s AI startup MiniMax has rolled out its M2.5 foundation model, positioning it as a rival to Anthropic’s Claude Opus 4.6 while claiming about 10% lower per-task cost and comparable runtime on SWE-Bench Verified. The company said M2.5 scored 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, and completed SWE-Bench Verified tasks 37% faster than its M2.1 predecessor with an average runtime of 22.8 minutes. Two variants—M2.5 and M2.5-Lightning—share the same capabilities but differ in speed and pricing, with Lightning rated at 100 tokens per second and priced at $0.3 per million input tokens and $2.4 per million output tokens, while the standard version runs at 50 tokens per second for half the price. MiniMax also said M2.5 was trained across 200,000+ real-world environments, targets coding, search/tool use and office workflows, and its gains were driven by scaled reinforcement learning and infrastructure optimisations.
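The quoted per-million-token rates make per-task costs straightforward to estimate. A minimal sketch, where the token counts are illustrative assumptions (not MiniMax figures):

```python
def task_cost_usd(input_tokens: int, output_tokens: int,
                  in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of one task given per-million-token input/output prices."""
    return (input_tokens / 1e6 * in_price_per_m
            + output_tokens / 1e6 * out_price_per_m)

# Hypothetical agentic task: 200k input tokens, 50k output tokens
lightning = task_cost_usd(200_000, 50_000, 0.30, 2.40)  # M2.5-Lightning rates
standard = task_cost_usd(200_000, 50_000, 0.15, 1.20)   # standard M2.5 at half the price
```

Under these assumed token counts, Lightning costs $0.18 for the task versus $0.09 for the standard tier, trading double the price for double the rated tokens-per-second throughput.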
Qwen3.5 Open-Weight 397B Multimodal Model Targets Faster Agentic Reasoning With 1M Context
Qwen3.5 is a new open-weight native vision-language model series headlined by Qwen3.5-397B-A17B, which uses a hybrid design combining linear attention (via Gated Delta Networks) with a sparse mixture-of-experts to improve inference efficiency. Although it has 397 billion total parameters, only 17 billion are activated per forward pass, and the release also expands language and dialect coverage from 119 to 201. A hosted variant, Qwen3.5-Plus, is offered through Alibaba Cloud Model Studio with a 1 million-token context window by default and built-in tool use, including optional reasoning and web-search/Code Interpreter features. The published benchmark tables show the model posting competitive results across knowledge, coding, agentic tasks, and multimodal evaluations, alongside claimed throughput gains versus earlier Qwen3 models at 32k and 256k context lengths. The accompanying technical notes attribute the biggest post-training gains to scaled reinforcement-learning environments and describe infrastructure upgrades such as FP8 training aimed at boosting speed and reducing memory use.
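The sparse mixture-of-experts design mentioned above is what lets a 397B-parameter model activate only 17B parameters (about 4%) per forward pass: a router scores all experts per token but dispatches to only the top-k. A minimal sketch of top-k routing in general (not Qwen3.5's actual router, whose expert counts and gating details are not given here):

```python
import math

def topk_route(logits: list[float], k: int) -> dict[int, float]:
    """Select the k experts with the highest router logits and
    softmax-normalize over just those, so only k experts run."""
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in idx]
    z = sum(exps)
    return {i: e / z for i, e in zip(idx, exps)}

# Four-expert toy router: only experts 1 and 3 are activated for this token
weights = topk_route([0.1, 2.0, -1.0, 1.5], k=2)
```

Since compute scales with the activated experts rather than the total, this is how MoE models keep inference cost far below what the headline parameter count suggests.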
GPT-5.2 Preprint Finds Nonzero Single-Minus Gluon Tree Amplitudes in Half-Collinear Regime
OpenAI published a new arXiv preprint claiming that a long-assumed “zero” tree-level scattering amplitude for gluons—when one gluon has negative helicity and the rest are positive—can become nonzero under a specific, highly aligned momentum setup called the half-collinear regime. The paper reports that a GPT‑5.2 Pro model first conjectured a compact all‑n formula after simplifying hand-derived results up to six gluons. An internal scaffolded version of GPT‑5.2 then produced the same formula and a formal proof over roughly 12 hours, with further analytical checks using the Berends–Giele recursion and a soft-theorem consistency test. OpenAI said similar AI-assisted methods have already been used to extend related calculations from gluons to gravitons, with more generalizations in progress.
🎓 AI Academia
Frontier AI auditing framework outlines third-party verification standards and AI Assurance Levels for safety
A new report argues that frontier AI is quickly becoming critical infrastructure, yet outsiders still lack reliable ways to verify whether leading AI developers’ safety and security claims are accurate or meet relevant standards. It says today’s third-party AI reviews are weaker than audits in industries like consumer goods, finance, and food because they rarely involve independent experts with secure access to non-public, safety-relevant information. The report defines “frontier AI auditing” as rigorous third-party verification of company claims and evaluation against standards, and proposes a four-step “AI Assurance Levels” scale from limited, time-bounded audits to continuous, deception-resistant verification. It recommends making at least AAL-1 a baseline across frontier AI and pushing top developers toward AAL-2 soon, while warning that progress depends on auditor oversight, expanded capacity, stronger incentives and liability rules, and more investment in making AI systems easier to audit.
DeepMind Paper Outlines Adaptive Framework for Safer AI Task Delegation Across Agents and Humans
A new Google DeepMind paper on arXiv (Feb. 12, 2026) outlines an “intelligent AI delegation” framework aimed at helping AI agents break complex goals into sub-tasks and safely delegate work to other agents or humans. The work argues that many current multi-agent systems depend on brittle heuristics and struggle with changing conditions and unexpected failures in real-world deployments. Its proposed approach treats delegation as more than task splitting, adding structured decisions around authority, responsibility, accountability, role boundaries, clarity of intent, trust, and ongoing performance monitoring. The paper positions the framework as relevant for both human and AI participants in large delegation networks, with an eye toward protocols for an emerging “agentic web.”
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.