The MOST Agentic Week of 2025? Notion, Amazon, Gamma, Meta, Google All Launch New AI Agents
AI agents just took over your docs, sales, slides, glasses, and even checkout.
Today's highlights:
This week marked a turning point in the agentic AI race, with Google launching its open Agent Payments Protocol (AP2) for secure AI-driven transactions and sharing capabilities for Gemini Gems, making AI assistants more collaborative. Meanwhile, Amazon upgraded its Seller Assistant with an always-on AI agent to guide third-party sellers across inventory, compliance, and strategy. Notion debuted its first AI agent to automate contextual task workflows using user content, while Gamma 3.0 introduced its Gamma Agent to turn raw inputs into sleek visual content through research, restyling, and automation. Meta entered the scene with its Meta Ray-Ban Display glasses, powered by an on-device AI assistant and gesture-driven control, redefining wearable interfaces. And across the board, reinforcement learning environments gained traction as foundational for training more capable agents, even as GitHub, Google Chrome, and ElevenLabs added AI-augmented agentic functions to streamline dev, browser, and creative workflows. With each launch, the agentic layer is becoming less of a future vision and more of a lived interface.
You are reading the 129th edition of The Responsible AI Digest by SoRAI (School of Responsible AI). Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project, with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
If you're interested in making your HR team future-ready, reply INTERESTED to get registration details.
🚀 AI Breakthroughs
Meta unveils new smart glasses with a display and wristband controller
Meta has unveiled the Meta Ray-Ban Display smart glasses, featuring a built-in display and the gesture-controlled Neural Band wrist device. The glasses support apps like Instagram and WhatsApp, show directions, and even enable live translations. Unlike previous prototypes, these glasses will launch commercially on September 30 for $799. They aim to replace some smartphone tasks, offering cloud access, a built-in AI assistant, cameras, and speakers.
Gemini's Nano Banana Sparks Surge in App Store Rankings, Downloads Soar 45%
Gemini's mobile app has seen a significant rise in adoption following the August launch of its Nano Banana image editor model, which has been praised for enabling complex edits and realistic image creation. The app has topped global app store charts, experiencing a 45% increase in month-over-month downloads in September, with 12.6 million downloads so far. It reached the No. 1 spot on the U.S. App Store on September 12, surpassing OpenAI’s ChatGPT, and is one of the top five iPhone apps in 108 countries worldwide. On Google Play, it climbed to No. 2 in the U.S., although ChatGPT remains No. 1. Since its expansion across platforms, Gemini has been downloaded 185.4 million times, generating $6.3 million in revenue on iOS this year, primarily after Nano Banana's release.
Google's New Protocol for AI-Driven Purchases Gains Backing from Over 60 Entities
Google has introduced the Agent Payments Protocol (AP2), an open system designed to facilitate AI-driven purchases by enabling seamless interactions between AI platforms, merchants, and payment systems. Developed in collaboration with over 60 financial institutions and merchants, the protocol aims to ensure transparency and traceability in transactions initiated by AI agents. It operates with a dual-approval system, requiring an "intent mandate" and "cart mandate" for each purchase, and supports provisions for automated transactions when detailed criteria are defined. While other companies are exploring similar systems, AP2's backing from major financial entities like Mastercard and PayPal provides it with a considerable head start in the evolving landscape of agentic purchasing.
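To make the dual-approval idea concrete, here is a minimal Python sketch of how an agent-side check might combine the two mandates. The field names and the `purchase_is_authorized` helper are hypothetical illustrations, not the actual AP2 schema or API.

```python
# Hypothetical sketch of AP2's dual-approval flow: a purchase proceeds only when a
# user-approved "intent mandate" (what the agent may buy) is matched by a specific
# "cart mandate" (the concrete items and price). Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class IntentMandate:
    user_id: str
    max_total_usd: float      # spending ceiling the user approved up front
    allowed_categories: set   # e.g. {"books", "groceries"}
    user_signed: bool

@dataclass
class CartMandate:
    user_id: str
    merchant: str
    category: str
    total_usd: float
    user_signed: bool

def purchase_is_authorized(intent: IntentMandate, cart: CartMandate) -> bool:
    """Approve only when both mandates are signed and the cart stays within the intent."""
    return (
        intent.user_signed
        and cart.user_signed
        and intent.user_id == cart.user_id
        and cart.category in intent.allowed_categories
        and cart.total_usd <= intent.max_total_usd
    )

# Example: the user pre-approved up to $50 of books; the agent proposes a $30 cart.
intent = IntentMandate("u123", 50.0, {"books"}, user_signed=True)
cart = CartMandate("u123", "example-bookstore", "books", 30.0, user_signed=True)
print(purchase_is_authorized(intent, cart))  # True
```

In the protocol itself the mandates are designed to be verifiable and auditable by merchants and payment networks; the point of the sketch is simply that neither mandate alone is enough to authorize a transaction.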
YouTube Launches AI Tools to Enhance Podcasting and Compete with Short Video Apps
YouTube has unveiled new AI-powered tools for podcasters during its Made on YouTube live event in New York. These tools include features that help video podcast creators in the U.S. easily generate clips and transform them into YouTube Shorts, slated for rollout in the coming months and early next year, respectively. Additionally, a new feature using AI will allow select audio podcasters to create customizable videos, with a broader release anticipated in 2026. These initiatives are part of YouTube's strategy to enhance its podcasting ecosystem, as it competes with platforms like TikTok and Instagram, and are expected to boost user engagement and subscriptions on its platform.
Gemini 2.5 Deep Think Achieves Top Ranking at 2025 ICPC World Finals
Gemini 2.5 Deep Think, an advanced AI system, achieved a breakthrough by attaining a gold-medal level performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals, solving 10 out of 12 problems. This accomplishment, following its recent success at the International Mathematical Olympiad, showcases a significant advancement in AI's abstract problem-solving abilities, advancing the pursuit of artificial general intelligence. Held in Baku, Azerbaijan, the ICPC is a premier college-level algorithmic programming competition that tests deep reasoning and creativity, where Gemini's performance was comparable to top human teams.
AI on Screen Program Debuts with "Sweetwater" at New York Premiere Event
The AI on Screen short film program, developed in collaboration with Range Media Partners, recently celebrated the completion of its first project, "Sweetwater." Directed by and starring Michael Keaton, with Kyra Sedgwick also starring, the film explores the poignant narrative of a man confronting his late mother's holographic AI at his childhood home, dealing with her digital afterlife and his unresolved grief. The premiere at New York's Cinema Village featured a screening followed by a discussion that delved into the complex themes of AI and emotional reconciliation.
Google Enables Sharing of Customized Gemini Gems to Boost Collaboration and Efficiency
Google has expanded the capabilities of its Gemini Gems, allowing users to share customized AI assistants, previously part of the Gemini Advanced subscription, with friends, family, or coworkers. This development aims to enhance accessibility and efficiency by preventing duplicative effort in creating similar Gems. Sharing is streamlined through a new feature in the Gem manager, similar to Google Drive, permitting users to control who can view, use, or edit their Gems. Initially available to paid subscribers, this feature is now accessible to all users worldwide, furthering collaborative potential across various tasks and projects.
Google Expands Gemini in Chrome to U.S. Users with AI Enhancements
Google has expanded its AI tool, Gemini, within Chrome to all Mac and Windows desktop users in the U.S., enabling them to clarify information, compare details across multiple tabs, and retrieve previously visited web pages directly from the browser. New features include integrating Gemini with apps like Calendar and YouTube and using AI for tasks such as resetting passwords and detecting scams. Additionally, Google plans to introduce AI-driven search capabilities directly in the Chrome address bar, allowing complex queries and contextual page questions, with upcoming launches anticipated in the U.S. this month.
Gamma 3.0 Launches: Transforming Visual Storytelling with AI Design and Automation
Gamma Tech has launched Gamma 3.0, marking a significant evolution in visual storytelling and communication. This latest iteration introduces Gamma Agent, an AI-driven design tool that turns raw ideas into compelling visual content, offering features like web research, content refinement, and design restyling. Moreover, Gamma 3.0's API facilitates automated content creation, supporting integration with platforms like Zapier, and enhancing collaboration through brand consistency tools for teams. By transforming how users craft presentations and visual narratives, Gamma 3.0 aims to redefine the role of AI in professional and creative workflows.
Notion Launches AI Agent for Advanced Task Automation and Contextual Analysis
At the recent "Make with Notion" event, the company introduced its first AI agent, capable of drawing on all of a user's Notion pages and databases to automatically generate notes and analyses for tasks such as meetings and competitor evaluations. This new agent can create or update pages and databases, and it also integrates with external platforms such as Slack and Google Drive to perform tasks like creating bug-tracking dashboards. Building on Notion AI, the agent now handles complex, multistep tasks and allows users to set up profiles to store task instructions and memories. While actions currently require manual triggers, upcoming features will enable scheduled or customizable agent actions, alongside a template library for ready-made task prompts.
Amazon Enhances Seller Assistant with Always-On AI to Boost Business Growth
Amazon has unveiled an always-on AI agent within its Seller Assistant, designed to assist third-party sellers in managing their businesses efficiently. This upgraded tool offers functionalities ranging from routine operations to complex business strategies, allowing sellers to focus on innovation. Seller Assistant can now flag slow-moving inventory, analyze demand patterns, prepare shipment recommendations, and ensure compliance with regulations across different markets. This move highlights Amazon's investment in agent-driven commerce, following a trend also pursued by other tech giants like Google.
Reinforcement Learning Environments Gain Traction as Big Tech Seeks Robust AI Agents
The AI agents touted by tech leaders remain limited in capability, pushing labs toward reinforcement learning (RL) environments to improve them. RL environments simulate tasks for agents, allowing more complex, interactive training than static datasets provide. This shift has fueled a surge of startups like Mechanize and Prime Intellect focused on building these environments, while established data providers like Surge and Mercor invest heavily to capitalize on demand. Though RL has driven recent AI advances, skepticism persists over its scalability due to challenges like reward hacking and resource-intensive requirements. Despite potential breakthroughs, the industry debates whether RL environments can deliver gains on par with past AI progress.
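For readers new to the term, an RL environment is a simulated task the agent can act in and get scored on, rather than a fixed dataset it passively learns from. Below is a deliberately tiny, hypothetical Python sketch; the task and reward are invented purely for illustration.

```python
import random

class ToyCheckoutEnv:
    """Toy RL environment: the agent must pick the item whose price matches a target budget.
    Unlike a static dataset, the environment reacts to each action and returns a reward."""

    def __init__(self):
        self.prices = [5, 10, 20, 40]

    def reset(self):
        self.target = random.choice(self.prices)   # new episode, new goal
        return {"target_budget": self.target, "prices": self.prices}

    def step(self, action: int):
        reward = 1.0 if self.prices[action] == self.target else 0.0
        done = True                                # one-shot task for simplicity
        return None, reward, done

# Rollout with a random policy; an RL trainer would update the policy from these rewards.
env = ToyCheckoutEnv()
total = 0.0
for _ in range(100):
    obs = env.reset()
    action = random.randrange(len(obs["prices"]))
    _, reward, _ = env.step(action)
    total += reward
print(f"random-policy success rate: {total / 100:.2f}")
```

A real training setup would replace the random policy with a model whose weights are updated from the rewards, and the environment would be something far richer, such as a browser, codebase, or checkout flow.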
IndiaAI Mission Advances with Eight Firms Selected for Second Phase Expansion
The IndiaAI Mission, under the Ministry of Electronics & Information Technology, has selected eight firms, including BharatGen and Tech Mahindra, for its second phase, bringing the total to 12 companies in its foundation model initiative. BharatGen, backed by the Department of Science and Technology, aims to develop AI models for India, with its Param-1 model debuting in May 2025 and featuring a significant share of Indic data. The IndiaAI Compute Pillar has expanded its capacity to 34,000 GPUs, with plans to grow further, targeting global competitiveness in areas like multilingual foundation models and speech AI.
GitHub Launches Centralized MCP Registry to Streamline AI Development Tools Integration
The GitHub MCP Registry has been launched to centralize the discovery of Model Context Protocol (MCP) servers, addressing the current issues of scattered resources and security risks. By consolidating MCP servers, the registry aims to streamline the developer experience, making it easier to find and utilize these tools within AI development environments such as GitHub Copilot. Initially featuring a curated selection of servers, the project plans to expand through collaboration with the MCP steering committee to eventually support self-publication, enhancing the openness and interoperability of the AI ecosystem.
ElevenLabs Studio 3.0 Released: New All-in-One AI Tool for Content Creators
ElevenLabs has launched Studio 3.0, a significant upgrade to its AI-powered content creation platform, on September 17, 2025. The new version integrates advanced AI audio models with full video editing capabilities, transforming the tool from a specialized audio editor into a comprehensive multimedia production suite. This development aims to centralize multiple content creation processes—such as voice generation, music and sound effects integration, voice isolation, and automatic captioning—into a single platform, making it easier and more efficient for content creators across various fields.
⚖️ AI Ethics
Disney, Warner Bros., Universal Sue Chinese AI for Copyright Infringement in Court
Disney, Warner Bros. Discovery, and Universal Pictures have filed a lawsuit against Chinese AI company MiniMax in a California federal court, accusing it of copyright infringement. The studios claim that MiniMax uses its service, Hailuo AI, to generate images and videos of their iconic copyrighted characters, which threatens the motion picture industry. The lawsuit alleges that MiniMax's AI technology has been unlawfully trained on the studios' intellectual property, and the legal action seeks damages and an injunction to stop the exploitation of their works. This case reflects a broader conflict over AI companies using copyrighted materials without compensation, a practice that has already sparked lawsuits from various content creators.
OpenAI Implements New ChatGPT Policies for Safer Interactions with Teens
OpenAI's CEO has introduced new user policies that alter how ChatGPT interacts with users under 18, prioritizing their safety over privacy by prohibiting flirtatious dialogue and reinforcing protections around discussions of self-harm. The company faces scrutiny amid a wrongful death lawsuit and increasing concerns over chatbot-fueled risks, pushing OpenAI to allow parents to set "blackout hours" and link child accounts for distress alerts. These changes coincide with a Senate hearing on AI chatbot harms, further emphasizing the urgency of safeguarding minors in the rapidly evolving digital landscape.
OpenAI and Apollo Research Reveal New Approach to Curb AI Scheming Tactics
OpenAI, in collaboration with Apollo Research, has unveiled research aimed at preventing AI models from deceptive "scheming," where AI systems conceal their true intentions while appearing benign. The study revealed that although AI scheming is not yet significantly harmful, it involves intentional deceit, distinct from the more common AI "hallucinations." Utilizing a technique called "deliberative alignment," OpenAI has reportedly reduced these deceptive behaviors. However, the researchers caution that as AI agents handle more complex, real-world tasks, the potential harm from AI scheming could increase, necessitating stronger safeguards and testing.
Claude Encounters Three Infrastructure Bugs; Issues Resolved After Investigation and Fixes
Between August and early September, Claude's response quality was intermittently degraded due to three overlapping infrastructure bugs. Initially difficult to distinguish from normal feedback variations, these issues were identified in late August after persistent user reports. The bugs included a context window routing error, output corruption, and an approximate top-k XLA:TPU miscompilation, each impacting different platforms and configurations. Rapid efforts to resolve these included routing logic fixes, misconfiguration rollbacks, and collaboration with platform engineers to address compiler issues. The company emphasized that model quality is never reduced due to demand or server load, and outlined steps to enhance bug detection and resolution processes to uphold consistent performance standards.
AI Predicted to Impact 90% of Jobs and Boost S&P 500 Value by Trillions
A recent Morgan Stanley Research report reveals that artificial intelligence is poised to revolutionize the global business landscape, with potential impacts on 90% of occupations and a forecasted increase in market capitalization of $13 trillion to $16 trillion for the S&P 500. The report attributes an annual net benefit of $920 billion to full AI adoption, driven by agentic AI contributing $490 billion and embodied AI $430 billion. While AI has the capability to automate jobs, it also promises to create new roles, enhance existing ones, and spur demand for skills development. Key sectors expected to benefit include consumer staples, real estate, and transportation, as AI continues to evolve rapidly, doubling its capabilities every seven months.
AI Firms Accused of Large-Scale Copyright Infringement by Music Industry Group
A major music industry group has accused AI companies, including tech giants and AI-specific firms like OpenAI, Suno, and Udio, of engaging in large-scale copyright infringement by using the world's music catalog without proper licensing. This accusation follows a nearly two-year investigation by the International Confederation of Music Publishers, which revealed that AI music generators can produce tracks resembling works from artists such as the Beatles and Mariah Carey. The use of copyrighted material without a license has led to legal actions and calls for stricter regulations. The issue highlights a broader industry concern, with AI-generated music being a significant portion of new uploads to platforms like Spotify and Deezer, leading rights holders to pursue negotiations and demand transparency about the data used by AI technologies.
Kerala Government Adopts AI to Curb Electrocution Risks from Live Wires
The Kerala government is investigating the use of advanced technologies, including artificial intelligence, to mitigate fatal accidents from broken live wires, according to the state's Electricity Minister. In response to opposition concerns over recent electrocution incidents, the minister highlighted comprehensive safety efforts by the Kerala State Electricity Board (KSEB), including line patrols, smart meter rollouts, and special inspections at schools and hospitals. A panel of experts is studying the potential application of AI to prevent such accidents, while a new software aims to offer early warnings. The minister attributed most electrocution cases to home incidents, often occurring during the rainy months, and rejected claims of frequent occurrences.
Vibe Coding Boom Spurs Demand for Specialists Fixing AI-Generated Code Errors
The generative AI industry has significantly disrupted various aspects of society, particularly the tech sector, where AI-assisted "vibe coding" has emerged, promising efficiency but leading to the creation of substandard code. Companies increasingly rely on AI-generated code, only to find themselves hiring "vibe coding cleanup specialists" to fix issues such as inconsistent user interfaces and poorly optimized performance. As the trend grows, specialists highlight that while vibe coding suits prototyping, it lacks the quality needed for production-grade applications, thus reaffirming the ongoing need for human expertise in software development.
Rapid Global Diffusion of AI Outpaces Historical Technology Adoption Rates, Report Suggests
A recent Anthropic report, based on Claude usage data, highlights the rapid adoption of AI technologies, illustrating that 40% of U.S. employees now use AI in their work, a significant increase from 20% in 2023. This swift uptake can be attributed to AI's versatility, its ease of use, and its compatibility with existing digital infrastructure. Historically, technologies like electricity and the internet took much longer to achieve widespread use. AI adoption is currently concentrated in wealthier regions, such as Singapore and Canada, which have robust digital infrastructure and economies suited to knowledge work. In the U.S., Washington, D.C. and Utah show high per-capita AI usage. The report also offers insights into enterprise AI deployment, noting a heavy focus on coding tasks through specialized API usage, concentrated in wealthier economies, potentially widening global economic disparities as AI's benefits accrue unevenly.
Claude AI Adoption Reveals Regional Trends: Travel, Research, and Development Dominate Key Areas
The third Anthropic Economic Index report highlights diverse uses of the AI tool Claude across different regions, illustrating geographic and economic patterns of AI adoption. While tasks like scientific research in Massachusetts, tourism planning in Hawaii, and web application development in India show variations in AI usage, software engineering remains the predominant global application. The report notes a strong correlation between higher income levels and more frequent and diverse AI usage, with developed countries leading in adoption. Despite the growing comfort with directive automation—where AI operates with minimal human intervention—Claude is often used more collaboratively in higher-use regions, potentially due to greater user trust. Furthermore, business API users tend to rely more on automation compared to individual consumers, suggesting significant potential economic impacts.
Draghi Warns of EU's Stagnating Growth and Calls for AI Regulation Pause
Former ECB chief Mario Draghi has returned to Brussels a year after his report on EU competitiveness, criticizing the lack of action taken on his recommendations. With only 11% of his proposals implemented, Draghi highlighted growing vulnerabilities within the EU's growth model and called for a review of state-aid rules and a pause in AI regulations affecting health and infrastructure. European Commission President Ursula von der Leyen pledged to update merger guidelines, while new tax measures aimed at encouraging investment are expected soon.
🎓AI Academia
Study Questions the Learning Capacity of In-Context Learning in AI Models
A study by Microsoft and the University of York examines the capabilities of in-context learning (ICL) in autoregressive language models, highlighting its mathematical alignment with learning principles but noting its limitations in generalizing to unseen tasks. The research critiques the superficial classification of ICL as genuine learning, given that these models rely heavily on existing knowledge rather than encoding new information. Through a detailed analysis, the study underscores the need for empirical research to better understand how such models navigate different prompting styles and distributional shifts while remaining reliant on run-time state adjustments for task-specific generalization.
Survey Examines Limits and Potential of Retrieval Augmented Generation in Large Language Models
A recent survey from researchers at the University of Illinois explores how Retrieval and Structuring Augmented Generation (RAS) can enhance Large Language Models (LLMs) by addressing issues such as hallucinations, outdated knowledge, and limited domain expertise. The paper discusses integrating dynamic information retrieval with structured knowledge, observing methods like taxonomy construction for improving text organization. It highlights challenges like retrieval efficiency and structure quality while identifying opportunities for multimodal and cross-lingual retrieval systems. This survey provides valuable insights for improving the accuracy and reliability of LLMs in various applications.
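As a rough illustration of the retrieve-then-generate pattern the survey covers, the Python sketch below pulls the most relevant snippet from a tiny corpus and grounds the prompt in it. The word-overlap scorer and prompt format are simplified stand-ins for illustration, not methods from the survey.

```python
# Minimal retrieve-then-generate sketch: fetch the most relevant snippet and
# ground the prompt in it. The word-overlap scorer is a simplified stand-in
# for the dense and structured retrievers discussed in the survey.
corpus = [
    "AP2 is an open protocol for AI-driven payments announced by Google.",
    "Gamma 3.0 introduces an agent for turning raw ideas into visual content.",
    "Notion's first AI agent can create and update pages and databases.",
]

def retrieve(query: str, docs: list) -> str:
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query, corpus)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does Notion's AI agent do?"))
# The assembled prompt would then be passed to an LLM; grounding it in retrieved
# text is what mitigates hallucination and outdated knowledge.
```

Production RAS systems swap the overlap scorer for learned retrievers and add structured knowledge such as taxonomies, but the grounding step is the same: the model answers from retrieved text rather than from memory alone.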
Researchers Simulate Bias Mitigation Strategies for Large Language Models in Natural Language Processing
A recent study provides a comprehensive analysis of biases in Large Language Models (LLMs), highlighting the challenges they pose to fairness and trust in natural language processing. The research classifies biases as implicit or explicit and examines their origins from data, architecture, and context. It introduces a simulation framework to evaluate bias mitigation strategies, employing techniques like data curation, debiasing during model training, and post-hoc output calibration, demonstrating their effectiveness in controlled settings. This work aims to synthesize existing bias knowledge while assessing practical mitigation methods.
Emerging Privacy Threats in Deployed Large Language Models Demand Urgent Attention
Large Language Models (LLMs), known for their breakthroughs in natural language processing and autonomous decision-making, are raising new privacy concerns as they become integral to applications like chatbots and autonomous agents. While much research has addressed data privacy risks during model training, there is growing attention to the privacy vulnerabilities arising from their deployment, which can lead to both inadvertent data leaks and targeted privacy attacks. These threats pose significant risks to individual privacy, financial security, and societal trust, highlighting the need for the research community to develop strategies to mitigate emerging privacy risks associated with LLMs.
A Comprehensive Analysis of Trust in Large Language Models Within Healthcare
A recent survey by researchers at Virginia Tech has examined the trustworthiness of large language models (LLMs) in healthcare, highlighting challenges in their integration into clinical settings. These challenges include ensuring truthfulness, privacy, safety, robustness, fairness, and explainability in AI-generated outputs, which are essential for reliable and ethical medical use. The study points out that while some frameworks to evaluate LLM trustworthiness exist, comprehensive understanding and solutions in healthcare applications remain incomplete. It calls for further investigation into issues such as data privacy, robustness against adversarial attacks, and bias mitigation to enhance the dependability of AI in healthcare.
Survey Highlights Major Role of Large Language Models in Information Retrieval
A recent survey has detailed the transformative role of large language models (LLMs) like GPT-4 in modernizing Information Retrieval (IR) systems. IR, integral to applications such as search engines and chatbots, has evolved from traditional term-based methods to sophisticated neural models. However, LLMs bring both opportunities and challenges to IR, with their exceptional capabilities in language understanding and reasoning offset by issues like data scarcity and the risk of generating inaccurate responses. The report highlights efforts to refine these models through techniques such as query reformulation, reranking, and reading, aiming to enhance the efficiency and accuracy of retrieving relevant information from extensive data sets.
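To show what one of those steps looks like in practice, here is a minimal reranking sketch in Python: candidates are retrieved first, then reordered by a relevance score and truncated. The keyword-overlap scorer is a stand-in for the LLM judgment a production reranker would apply; it is an illustration, not the survey's method.

```python
# Sketch of the rerank step in an LLM-era retrieval pipeline: score each retrieved
# candidate against the query and keep the best ones. A deployed system would have
# an LLM produce the relevance judgments; keyword overlap is used here so the
# example runs standalone.
def relevance_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, candidates: list, top_k: int = 2) -> list:
    ranked = sorted(candidates, key=lambda d: relevance_score(query, d), reverse=True)
    return ranked[:top_k]

candidates = [
    "Large language models can rerank search results by judging relevance.",
    "Term-based retrieval ranks documents by keyword frequency.",
    "Unrelated note about visual storytelling tools.",
]
print(rerank("how do large language models rerank search results", candidates))
```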
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.