It is confirmed—GPT-5 is coming! Are you ready for what’s next?
Today's highlights:
🚀 AI Breakthroughs
OpenAI's Sam Altman Announces Paid Subscription Tiers for GPT-5 Intelligence Levels
• OpenAI CEO Sam Altman shares that GPT-4.5, internally known as Orion, will be the last non-chain-of-thought model before the more advanced GPT-5 is launched.
• GPT-5 will be available to free users with certain limitations, while paid subscribers can access higher intelligence levels, with added features like voice and deep research.
• OpenAI plans to combine its o-series and GPT-series models into a unified system that simplifies AI use across a wide range of tasks, moving away from requiring users to pick between distinct models.
ByteDance AI Models Transform Video Generation with "OmniHuman"
• ByteDance, owner of TikTok, unveiled OmniHuman-1, a cutting-edge multimodal video generation AI that produces high-quality videos with nearly flawless lip-syncing from a single image and an audio file.
• The company's second model, Goku, targets the advertising sector with impressive text-to-video capabilities from just 8 billion parameters, reflecting ByteDance's strategic use of its extensive media library.
• Industry experts foresee AI's disruptive impact on video and movie production, projecting its increasing role in creating digital characters and reshaping content creation landscapes.
Adobe Debuts Firefly App with First Commercially Safe Generative AI Video Model
• Adobe unveils the Firefly app, featuring a groundbreaking generative AI video model that lets creative professionals move smoothly from ideation to production within the Adobe Creative Cloud ecosystem.
• Firefly Standard and Pro plans offer creators extensive access to premium video and audio features, supporting up to 70 five-second video generations per month, with pricing starting at US$9.99.
• Renowned brands like Deloitte Digital and PepsiCo/Gatorade embrace Adobe's Firefly Video Model, commending its IP-friendly and commercially safe capabilities for large-scale content production.
Saudi Arabia Invests $1.5 Billion in Groq for AI Infrastructure Expansion
• Groq has secured a $1.5 billion investment from Saudi Arabia to expand its AI inference infrastructure, aligning with Saudi Vision 2030's goal of an AI-driven economy;
• Groq showcased its advanced AI capabilities, including reasoning LLMs and multilingual text-to-speech models, at LEAP 2025, reinforcing its leadership in AI computing infrastructure;
• GroqCloud™ provides high-speed AI inference solutions from a Dammam data center that supports global demand and was brought online in just eight days, a milestone in rapid infrastructure deployment.
EU Launches InvestAI Initiative to Mobilize €200 Billion of Investment in Artificial Intelligence
• The EU launched the InvestAI initiative at the AI Action Summit in Paris to mobilize €200 billion for AI investment, including a €20 billion European fund for AI gigafactories.
• The initiative aims to develop large-scale AI infrastructure to support open and collaborative AI model development, reinforcing Europe's position as a leader in AI.
• Commission President Ursula von der Leyen emphasized that InvestAI is a public-private partnership designed to accelerate AI advancements, ensuring that European scientists and companies, not just the biggest players, can build cutting-edge AI models.
Elon Musk’s Grok 3 Nears Launch as AI Competition Intensifies with OpenAI
• Elon Musk announced that Grok 3, his ChatGPT competitor, is nearing completion and expected to launch within one to two weeks, showcasing powerful reasoning abilities;
• Aiming to counter OpenAI's developments, Musk's investor group proposed a $97.4 billion offer for OpenAI's nonprofit assets amid ongoing legal tensions over the company's transition to a for-profit structure;
• Speaking at the World Governments Summit in Dubai, Musk also suggested potential U.S. government spending cuts of $1 trillion, projecting economic growth and contained inflation in 2025-2026.
Apple Collaborates with Alibaba to Introduce AI Features for Chinese iPhone Users
• Apple partners with Alibaba to introduce AI features for iPhone users in China, aiming to strengthen its position against domestic competitors like Huawei.
• Apple's decision was influenced by Alibaba's vast data holdings, which are pivotal for training AI models to deliver personalized services.
• The collaboration comes as Apple faces a reported dip in iPhone sales, with the company expecting AI enhancements to revive demand in the Chinese market.
⚖️ AI Ethics
Delhi High Court Labels AI Tools Like DeepSeek Dangerous and Demands Government Review
• The Delhi High Court labeled artificial intelligence as a "dangerous tool," urging caution regardless of whether it's controlled by Chinese or American entities, during a hearing on banning DeepSeek;
• The petitioner cited various vulnerabilities discovered within a month of DeepSeek's launch, raising concerns over leaks of personal and government data and prompting a plea for swift action;
• The court granted time for the Centre’s counsel to gather instructions, while the petitioner called for urgent measures to protect privacy amid fears of cyber attacks and data breaches.
Condé Nast, Other Publishers Accuse Cohere of Copyright Infringement in AI Training Suit
• Condé Nast and other publishers filed a lawsuit against Cohere, alleging copyright and trademark infringement in its AI training datasets and outputs, which purportedly compete with publisher offerings;
• The lawsuit claims Cohere used over 4,000 articles without permission and produced inaccurate "hallucinations" falsely attributed to legitimate publishers, misleading the public and damaging brand reputations;
• Cohere, valued at over $5 billion, defends its AI training practices, dismissing the lawsuit as frivolous while stating its commitment to intellectual property rights and offering indemnification to enterprise customers.
Eric Schmidt: AI Misuse by Rogue Nations Poses "Extreme Risk" Without Strong Oversight
• Speaking at the AI Action Summit, Eric Schmidt, former CEO of Google, warns of the "extreme risk" posed by AI misuse, highlighting potential harm from rogue nations such as North Korea, Iran, and Russia.
• Schmidt emphasizes the need for balanced oversight to prevent AI's weaponization, while cautioning against heavy regulation that could stifle innovation in the fast-evolving field.
• Despite a global push for inclusive AI development, Schmidt notes differing international approaches, with the EU's stringent policies potentially limiting its pioneering role in the AI revolution.
Statement from CEO of Anthropic on the Paris AI Action Summit
• At the Paris AI Action Summit, Anthropic stressed the importance of democratic leadership in AI, highlighting concerns about authoritarian misuse of AI for global military dominance.
• The company spotlighted the need to address AI's security risks, citing dangers such as misuse by non-state actors and the challenges posed by increasingly autonomous advanced AI systems.
• Anthropic introduced the Anthropic Economic Index to monitor AI's impact on the labor market, urging governments to measure and enact policies to equitably distribute AI-generated economic benefits.
EU Shelves AI Liability Directive Amid Industry Pressure and Calls for Deregulation
• The European Commission has decided to withdraw the proposed AI Liability Directive, citing no foreseeable agreement on liability rules as industry lobbyists push for simpler regulations;
• Critics argue the EU's move to abandon the directive undermines potential accountability for AI-caused harms, questioning its impact on victim recourse across member states and different national regulations;
• The decision marks a shift towards reducing regulatory complexity, as the European Commission focuses on streamlining compliance and fostering growth, aligning with broader criticisms of overregulation.
🎓 AI Academia
Goku Foundation Models Set New Benchmarks in Image and Video Generation
• Goku leverages rectified flow Transformers to excel in joint image and video generation, setting new benchmarks in text-to-image and text-to-video tasks with impressive evaluation scores;
• The Goku models benefit from meticulous data curation, innovative model design, enhanced flow formulation, and a robust infrastructure, enabling high-quality, efficient large-scale training;
• With transformative applications in media, advertising, and gaming, Goku's advances offer valuable insights and practical progress for the generative model research community (a generic rectified-flow sketch follows below).
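For readers new to the underlying technique, here is a minimal, generic sketch of the rectified flow objective that Goku-style models build on; this is textbook notation, not Goku's exact training recipe, and the paper's own formulation may differ in its details.

```latex
% Generic rectified flow (not Goku-specific): interpolate linearly between
% noise x_0 and data x_1, and train v_theta to predict the constant velocity.
\[
x_t = (1 - t)\,x_0 + t\,x_1, \qquad t \in [0, 1]
\]
\[
\mathcal{L}(\theta) =
\mathbb{E}_{x_0 \sim \mathcal{N}(0, I),\; x_1 \sim p_{\mathrm{data}},\; t \sim \mathcal{U}[0,1]}
\big\| v_\theta(x_t, t) - (x_1 - x_0) \big\|^2
\]
```

Because the learned velocity field follows near-straight paths from noise to data, sampling needs relatively few integration steps, which is part of what makes rectified flow attractive for large-scale image and video generation.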
Survey Reveals Evolution of AI-Generated Media Detection with MLLM Integration
• The rapid expansion of AI-generated media has heightened the need for reliable detection methods, as challenges around misinformation and eroding trust in content authenticity become more prominent.
• Detection methods for AI-generated content are divided into domain-specific Non-MLLM-based approaches and versatile MLLM-based strategies capable of integrating multimodal information.
• MLLM-based methods provide enhanced scalability and adaptability, supporting tasks like real-time misinformation monitoring and complex multimodal forgery detection amid growing regulatory concerns.
ICLR 2025 Paper Investigates Conformity in Large Language Model Multi-Agent Systems
• A new study highlights the conformity of large language models in multi-agent systems, drawing parallels to human cognitive biases like groupthink and conformity bias.
• The research introduces BENCHFORM, a benchmark designed to assess the impact of conformity on LLMs through reasoning-intensive tasks and diverse interaction protocols.
• Mitigation strategies, including enhanced personas and reflection mechanisms, are proposed to address conformity, aiming to improve ethical alignment and reliability in AI systems.
New Multi-Agent Framework Aims to Mitigate Bias in Large Language Models
• Researchers propose a multi-objective approach called MOMA to address social bias in large language models (LLMs) without significantly degrading their performance.
• MOMA uses multiple agents to perform causal interventions on bias-related content, reducing bias scores by up to 87.7% while preserving accuracy.
• In experiments, MOMA demonstrated significant improvements in multi-objective metrics, enhancing performance in the StereoSet dataset by up to 58.1% with only slight performance degradation in other datasets.
Survey Analyzes Claim Verification Strategies Using Large Language Models
• The survey highlights the shift towards using Large Language Models (LLMs) for automated claim verification, noting their superior performance and integration of novel methods like Retrieval Augmented Generation (RAG);
• A detailed examination of claim verification pipelines is provided, focusing on components such as retrieval, prompting, and fine-tuning, reflecting the growing sophistication of LLM-based frameworks;
• Despite their capabilities, LLMs are prone to generating misinformation and hallucinations, a risk rooted in their pre-training on vast, unvetted text corpora, which complicates accurate claim verification (see the pipeline sketch below).
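To make the survey's retrieve-then-verify idea concrete, here is a minimal sketch of such a pipeline. It is not code from the survey; `search_evidence` and `llm_complete` are hypothetical placeholders you would back with your own retrieval index and model API.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # "supported", "refuted", or "not enough evidence"
    rationale: str  # model-written justification citing the evidence

def search_evidence(claim: str, k: int = 5) -> list[str]:
    """Hypothetical retriever: return the k passages most relevant to the claim."""
    raise NotImplementedError("back this with your own search index")

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call: return the model's completion for the prompt."""
    raise NotImplementedError("back this with your model provider's API")

def verify_claim(claim: str) -> Verdict:
    # Retrieve evidence so the verdict is grounded in sources,
    # not in the model's parametric memory alone (the RAG step).
    passages = search_evidence(claim)
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    # Prompt the model to judge the claim strictly against the retrieved passages.
    prompt = (
        "Decide whether the claim is supported, refuted, or has not enough "
        "evidence, using ONLY the numbered passages below. "
        "Answer as 'label: rationale'.\n\n"
        f"Claim: {claim}\n\nEvidence:\n{evidence}"
    )
    label, _, rationale = llm_complete(prompt).partition(":")
    return Verdict(label.strip().lower(), rationale.strip())
```

Grounding the verdict in retrieved passages is what RAG contributes here, though it only helps if the retriever surfaces trustworthy sources and the prompt forces the model to stay within them.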
Researchers Investigate Prompt Leakage Threats and Defense in Customized Language Models
• Prompt leakage in customized large language models poses a major threat to intellectual property, allowing malicious users to replicate services from leaked prompt data.
• Research identifies model size, prompt length, and text type as key factors in vulnerability to prompt leakage, challenging the adequacy of current LLM alignment safeguards.
• Even advanced LLMs with safety measures, like GPT-4, remain susceptible to straightforward prompt extraction attacks, necessitating new defensive strategies for robust protection.
Expanding Abstention in Large Language Models: A Framework for Safer AI Interaction
• Abstention in large language models can significantly reduce hallucinations and unsafe outputs by enabling the models to refuse answers in uncertain situations.
• A structured framework was developed to examine abstention in LLMs through the lenses of query characteristics, model capabilities, and alignment with human values.
• Future research opportunities include optimizing abstention strategies as a cross-task meta-capability, enhancing LLM safety and reliability across domains and contexts (a toy abstention wrapper is sketched below).
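As a concrete illustration of one simple abstention strategy, here is a minimal sketch that samples the model several times and refuses to answer when the samples disagree. This is an assumed self-consistency heuristic, not the framework's prescribed method; `llm_sample` is a hypothetical stand-in for a model API.

```python
from collections import Counter

ABSTAIN = "I'm not confident enough to answer that."

def llm_sample(question: str) -> str:
    """Hypothetical LLM call: return one sampled answer to the question."""
    raise NotImplementedError("back this with your model provider's API")

def answer_or_abstain(question: str, n_samples: int = 5,
                      min_agreement: float = 0.6) -> str:
    # Self-consistency as a cheap confidence proxy: ask the same question
    # several times and check how often the most common answer appears.
    answers = [llm_sample(question).strip().lower() for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    if count / n_samples < min_agreement:
        return ABSTAIN  # disagreement suggests uncertainty, so refuse
    return top_answer
```

Raising `min_agreement` trades coverage for safety: the model refuses more often but is less likely to answer wrongly, which is the core tension abstention research tries to optimize.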
Public Views on AI Art Copyright Highlight Role of Ego and Competition
• Recent studies reveal that public opinion varies regarding copyright for AI-generated art, with debates focusing on the author and rights-holder amidst rising claims from AI users and original artists.
• Participants in an AI art competition believed creativity and effort are essential to producing AI-generated art, while skill was deemed less significant in the process.
• Evidence points to egocentric bias among participants, who rated their own creations higher in quality, creativity, and effort, affecting monetary awards in the study.
Study Reveals Language Bias in Multilingual AI Solutions, Favoring Dominant Cultures
• A study explores the information disparity in multilingual large language models (LLMs), revealing a systemic bias towards preferring information in the query's language during document retrieval and answer generation;
• Findings suggest LLMs tend to default to high-resource languages when no information exists in the query’s language, reinforcing dominant cultural perspectives and potentially marginalizing low-resource languages;
• The research highlights a critical issue where LLMs’ multilingual capabilities may inadvertently strengthen language-specific information bubbles, posing challenges to achieving true information parity across diverse linguistic contexts.
ICLR 2025 Study Reveals Ineffectiveness of Adversarial Protections for Artists Against AI
• A recent study presented at ICLR 2025 reveals that adversarial perturbations fail to effectively protect artists from style mimicry by generative AI models;
• Researchers found that simple techniques like image upscaling, combined with existing image processing methods, can bypass these protective measures, making them ineffective against determined attempts;
• The study warns that current protection methods based on adversarial perturbations offer only a false sense of security, urging the development of alternative solutions to safeguard artists' styles.
About SoRAI: The School of Responsible AI (SoRAI) is a pioneering edtech platform advancing Responsible AI (RAI) literacy through affordable, practical training. Its flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.