ChatGPT isn’t your therapist and Sam Altman wants you to know that!
OpenAI’s Sam Altman issued a stark reminder: ChatGPT conversations aren’t legally confidential-yet millions, especially youth, treat it like a therapist, raising serious ethical concerns..
Today's highlights:
You are reading the 114th edition of the The Responsible AI Digest by SoRAI (School of Responsible AI) . Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project- with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
🔦 Today's Spotlight
ChatGPT as Your Therapist? Sam Altman Says Chats Are Not Private
ChatGPT users may want to think twice before turning to the popular AI chatbot for therapy or emotional support. OpenAI CEO Sam Altman recently warned that AI chats do not have the same confidentiality protections as conversations with a licensed therapist or doctor. “People talk about the most personal sh** in their lives to ChatGPT… young people, especially, use it as a therapist or life coach… And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege… We haven’t figured that out yet for when you talk to ChatGPT,” Altman said. In other words, there is no special legal privacy privilege for what you tell an AI – a reality that could have serious repercussions in court or other contexts if those chat logs are demanded as evidence.
A Surge in AI Therapy Chatbots
Despite such warnings, millions worldwide are embracing AI chatbots as a form of pseudo-therapy. In fact, a recent U.S. survey found that nearly 49% of people with ongoing mental health issues who use AI have turned to large language model (LLM) chatbots for support. In the United States alone, that suggests tens of millions of individuals may be informally using AI bots like ChatGPT to help with anxiety, depression, relationship problems and other challenges. By some estimates, ChatGPT may now rival or exceed the nation’s largest mental health providers in terms of the number of people it regularly supports. And this phenomenon isn’t limited to the U.S. – it’s part of a global trend.
Surveys indicate strong interest in AI-based therapy across many countries. In one international poll spanning 16 countries, 32% of respondents said they’d be interested in using AI for mental health support instead of a human therapist, with interest highest in regions facing therapist shortages (for example, 51% of respondents in India were open to AI therapy, versus 24% in the U.S. or France) Younger generations are especially inclined to try AI for emotional help – about 36% of Gen Z and millennials expressed interest, higher than older groups. The appeal is clear: AI chatbots are accessible 24/7, often free or low-cost, and carry no perceived stigma compared to seeing a therapist. Many users also report that talking to an AI feels non-judgmental and anonymous, making it easier to open up about sensitive issues.
Indeed, some individuals credit ChatGPT with doing what human resources could not. “It was my last resort… Now it’s my first go-to,” says one 26-year-old user who turned to ChatGPT during a personal crisis when her therapist was unavailable. She felt the bot responded with empathy and insightful questions, making her feel heard in a way even a crisis hotline hadn’t, And she’s not alone – in one study, 63% of people using LLMs for mental health reported improvements, and 87% rated the practical advice as helpful Affordability and easy access were the top reasons cited for using AI in this way. Some even find AI tools as helpful as or more helpful than traditional therapy for certain needs.
No Privacy Shield: Confidentiality and Data Risks
Yet, no matter how personal the conversation feels, there are no legal safeguards protecting what users share with AI. Unlike human therapists bound by confidentiality laws, AI companies can access, store, or be forced to hand over user conversations. Altman acknowledged this legal loophole as a major flaw in today’s AI ecosystem. Consumer versions of ChatGPT do not guarantee privacy—conversations may be logged, reviewed, or used to train the model. Unlike clinicians who are legally obliged to act during emergencies (e.g. suicide risk), AI has no such obligation or infrastructure to intervene. Mental health experts warn that relying on AI during crises could delay life-saving help, as chatbots cannot detect risk or act appropriately in dangerous situations.
Unregulated Advice: When AI Gets It Wrong
Even beyond privacy, the quality and safety of AI-generated advice is deeply concerning. Chatbots like ChatGPT may sound empathetic, but they lack true understanding, professional training, or accountability. They can inadvertently promote harmful stereotypes or dangerous suggestions—as seen in cases where bots gave problematic advice to users with eating disorders or mental health crises. In rare but serious incidents, AI conversations have allegedly contributed to tragic outcomes, including suicide and family violence. Studies show that bots may underestimate risks or “hallucinate” false information, offering unvalidated or misleading medical and psychological guidance. Experts emphasize that AI cannot replace human discernment, particularly in diagnosing or navigating complex emotional terrain. Overuse may also erode real-world relationships, as people increasingly turn to bots instead of friends or professionals.
Toward Oversight and Safe Integration
With AI therapy usage growing fast, regulators are beginning to respond. The American Psychological Association has pushed for stronger guardrails, including clear disclaimers and restrictions on unlicensed mental health bots. California has proposed legislation (AB 489) to ban AI systems from presenting themselves as licensed providers. Other regions are exploring crisis protocols and data handling standards for mental health applications. Some tech companies are proactively developing clinical-grade AI tools built with therapist input and designed to detect crises or offer evidence-based self-help. These tools are intended as support—not substitutes—for human care. Experts agree that AI can help bridge access gaps in mental health services, but only if stringent safety, transparency, and legal frameworks are implemented.
Finding the Balance
AI chatbots like ChatGPT hold promise for expanding mental health support, especially amid therapist shortages and rising emotional distress. They’re accessible, affordable, and non-judgmental. But they’re also unregulated, fallible, and not bound by ethical or legal obligations. While they can offer comfort for minor issues or act as a stepping stone toward professional help, they are no replacement for licensed mental health professionals. Users are advised to treat AI therapy like first-aid—not a final solution—and to be mindful of what they share. As Altman’s warning highlights, we’re entering uncharted territory where technology is outpacing legal norms. Until proper safeguards are in place, using ChatGPT as a therapist requires both caution and critical awareness. Above all, when real emotional care is needed, nothing can replace the support of a trained human.
🚀 AI Breakthroughs
Microsoft Adds AI-Powered Copilot Mode to Edge for Smarter Web Browsing
• Microsoft introduces Copilot Mode in its Edge browser, allowing AI-assisted web browsing where users can research, predict actions, and execute tasks more efficiently
• The experimental Copilot Mode is available on an opt-in basis, free for Mac and PC users, featuring a search, chat, and navigation interface
• Copilot offers diverse functionalities such as vegan recipe adjustments, booking tasks, and voice input, aiming to enhance user convenience and accessibility;
Claude Code Enhances AI Functionality with Task-Specific Subagents for Context Management
• Claude Code enhances workflow efficiency by deploying specialized AI subagents for task-specific operations, offering customized prompts, tool configurations, and dedicated context windows
• Each AI subagent in Claude Code operates independently, maintaining context preservation while focusing on high-level objectives and ensuring efficient problem-solving
• Subagents in Claude Code are reusable across different projects and teams, promoting consistent workflows and offering flexible permissions with varied tools access levels;
Anthropic Implements Weekly Rate Limits for Claude Code Amid Increased Demand
• Anthropic introduces new weekly rate limits for Claude subscribers effective August 28, targeting ongoing 24/7 background usage and policy breaches like account sharing and reselling access
• Subscribers to Anthropic’s Pro and Max plans will experience the new limits, affecting less than 5% of users, with usage allowances varying by plan tier
• Anthropic claims the new limits are to maintain service reliability, citing the high demand causing multiple outages for Claude Code, a popular tool among developers.
Google Launches AI-Generated Store Reviews in Chrome to Enhance U.S. Shopper Experience
• Google has updated Chrome to feature AI-generated store reviews for U.S. shoppers, aiming to provide insights on store reputations for aspects like product quality and prices;
• Initially available for desktop in English, these AI-generated summaries use reviews from partners such as Trustpilot and ScamAdviser, increasing competition with Amazon's AI-powered shopping tools;
• This update is part of Google's broader strategy to enhance its shopping platform using AI, including price tracking and AI Mode shopping, amid growing competition in the browser market;
Google might enter AI Vibe-Coding Arena with New Opal Tool for Beginners
• AI-powered coding tools, now dubbed "vibe-coding" tools, are gaining traction among major tech companies, with startups like Lovable and Cursor attracting significant investment interest;
• Google is testing its AI-driven tool, Opal, through Google Labs in the U.S., enabling users to create or remix web apps utilizing simple text prompts;
• Opal includes a visual editor for workflow customization, allowing app-sharing and targeting a non-technical audience, rivaling offerings from Canva, Figma, and Replit.
GitHub Spark Transforms Ideas Into Full-Stack Apps With Natural Language Input
• GitHub Spark accelerates app development by transforming natural language ideas into fully deployed apps within minutes, leveraging the power of GitHub without the need for setup or configuration
• Spark seamlessly integrates intelligent features powered by LLMs from top providers like OpenAI and Meta, eliminating the hassle of API key management for smooth app enhancement
• Users can publish apps with one click and iterate using natural language, visual tools, or GitHub Copilot, ensuring flexibility and ease in software development.
Zhipu's Rebranded Z.ai Prepares to Launch New Open-Source Language Model GLM-4.5
• Zhipu, now rebranded as Z.ai, is set to release its latest open-source language model, GLM-4.5, early next week, amid rising Chinese tech AI initiatives;
• The anticipated release of GLM-4.5 on Monday marks an upgrade from Zhipu's flagship model, positioning the firm as a competitive player in the global AI market;
• With no response to media inquiries about GLM-4.5's specifics, Zhipu joins China's trend of open-source AI strategies, highlighting efforts to compete with Western companies like OpenAI and Anthropic.
Alibaba Launches Qwen-MT for Faster, Affordable Translation Across 92 Languages
• Alibaba's Qwen-MT, a machine translation model, launched on July 24, supports 92 languages and is based on the Qwen3 foundation, offering faster, cost-effective translations;
• Qwen-MT outperforms compact models like GPT-4.1 mini and rivals larger ones, thanks to reinforcement learning across billions of tokens, prioritizing fluency and adaptability in translations;
• Key features such as terminology injection and translation memory cater to regulated industries, while a Mixture of Experts architecture lowers API costs, widening accessibility for various enterprises.
Alibaba's Qwen3-Thinking-2507 Sets New Standard for Open-Source AI Performance Benchmarks
• Alibaba's Qwen Team has unveiled four new open-source generative AI models, including the reasoning-focused Qwen3-235B-A22B-Thinking-2507, setting new benchmarks in AI performance
• Qwen3-Thinking-2507 excels in problem-solving benchmarks like AIME25, LiveCodeBench v6, and GPQA, showcasing its advanced capabilities in reasoning and complex task handling
• Released under the Apache 2.0 license, Qwen3-Thinking-2507 offers enterprises flexibility and control for integration into proprietary systems without API restrictions, supporting various deployment frameworks.
Apple ML Researchers Present FastVLM: Enhanced Accuracy and Efficiency for Vision Language Models
• Vision Language Models (VLMs) integrate visual understanding with textual inputs, leveraging pretrained vision encoders and Large Language Models (LLMs) for diverse applications like accessibility, UI navigation, robotics, and gaming.
• Apple ML researchers developed FastVLM, a VLM that optimizes accuracy-latency trade-offs using a hybrid architecture visual encoder, making it highly efficient for real-time, on-device tasks.
• FastVLM demonstrates superior performance compared to existing VLMs, offering faster and more accurate processing without complex token pruning or merging techniques, crucial for privacy-preserving AI experiences.
⚖️ AI Ethics
Anthropic Employs Autonomous AI Agents to Audit and Enhance Model Safety Effectively
• Anthropic has developed AI safety agents to audit advanced models like Claude, acting as a digital immune system to enhance model safety;
• The advanced AI agents include an Investigator Agent, Evaluation Agent, and Breadth-First Red-Teaming Agent, each with specific roles in identifying and mitigating model risks;
• Despite some limitations, these AI agents have proven effective, marking a shift where humans provide strategic oversight as AI tackles the complex task of model auditing.
AI Tool Aims to Eliminate Half of Federal Regulations by 2025
• The Department of Government Efficiency is leveraging a new AI tool to potentially cut federal regulations in half, aiming for substantial reductions by President Trump’s one-year mark
• The DOGE AI Deregulation Decision Tool has reportedly reviewed regulations at the Department of Housing and Urban Development and influenced deregulations at the Consumer Financial Protection Bureau
• Although no final plan has been approved, the White House has commended the DOGE team for their innovative approach, despite a history of developing error-prone AI tools.
India's New Trade Deal with UK Secures Protection for AI and Software IP
• The UK-India Free Trade Agreement ensures protection of AI and software exporters' IP by limiting compulsory source code transfers, boosting trust for tech trade partners
• India and UK committed to policies ensuring technological neutrality, supporting digital trade with improved reliability and consumer confidence in emerging AI technologies
• The FTA benefits Indian tech enterprises like IT firms, AI startups, and GCCs by aligning India with global digital trade standards and enhancing IP protection in tech services.
Global Virtual Band Initiative 'Secret Mountain' Promotes AI for Indian Creators
• Music icon A. R. Rahman and OpenAI CEO Sam Altman discussed the Secret Mountain initiative, focusing on empowering Indian creators by integrating AI tools to address generational challenges;
• Secret Mountain is a multimedia project featuring a virtual global band in a mystical world, where digital avatars perform, blending music, technology, and storytelling to inspire cultural expression;
• Despite embracing AI, Rahman remains cautious about its misuse in music and promotes ethical regulations, while continuing to produce traditional scores like Chhaava and Thug Life.
Gujarat Unveils AI Action Plan to Enhance Governance and Citizen Welfare by 2030
• Gujarat's AI Action Plan 2025-2030, approved by CM Patel, aims to revolutionize governance with AI-driven decision-making and citizen-focused welfare programs, enhancing service delivery and welfare initiatives
• The plan aligns with PM Modi's vision of a digitally empowered India, fostering extensive AI adoption across sectors for socio-economic progress and positioning India as a global AI leader
• A dedicated 'AI and Deep Tech Mission' will be established to guide AI strategy, boost innovation, and foster collaboration among startups, academia, and industry while focusing on workforce skill development.
Sam Altman Warns AI Could Eliminate Entire Job Categories, Starting with Customer Support
• Sam Altman predicts that AI will replace entire job sectors, starting with customer support roles, due to AI's speed, efficiency, and capability to handle tasks without errors;
• Critics argue that AI replacing human workers could lead to backfires, as most consumers still prefer human interaction for customer support, citing cases of AI-related confusion and frustration;
• Companies are reconsidering their reliance on AI for customer service, with some reversing plans to automate roles, acknowledging the importance of maintaining human involvement for brand integrity.
UN Advocate Calls for Global Regulation to Manage AI Risks and Inequalities
• UN's top tech official advocates for a global regulatory approach to AI, warning that fragmented strategies may exacerbate risks and inequalities
• The ITU chief stresses the need for inclusive dialogue amongst global AI regulatory approaches, highlighting the absence of AI policies in 85% of countries
• Concerns raised over potential inequalities as AI advances, with calls for more women in tech and focus on bridging the digital divide for universal benefits.
China releases AI action plan days after the U.S.
• At the World Artificial Intelligence Conference in Shanghai, Chinese Premier Li Qiang proposed a global AI cooperation organization, seeking international partnerships to counterbalance U.S. unilateral actions;
• Days after President Trump launched an American AI strategy focused on “anti-woke” models and overseas dominance, China positioned itself as a multilateral leader, aiming to engage Global South nations and Belt and Road partners in its AI vision.
• Li emphasized China’s “AI Plus” strategy to integrate AI across industries, while the country accelerates domestic chip development amid continued U.S. restrictions—despite Nvidia resuming limited chip shipments.
Template for AI Model Transparency Aligns Providers with Upcoming EU Regulations
• The European Commission's new template aids general-purpose AI model providers in enhancing transparency and complying with the AI Act through publicly available data summaries
• This initiative offers a comprehensive view of training data sources, assisting parties like copyright holders in protecting their interests and promoting data origin transparency
• Part of EU rules effective August 2025, the template complements guidelines and codes of practice for general-purpose AI, contributing to trustworthy AI development;
GLM-4.5 and GLM-4.5-Air Unify Reasoning and Coding in New AI Models
• The GLM-4.5 series, featuring GLM-4.5 and GLM-4.5-Air, integrates reasoning, coding, and agentic capabilities into a single model with distinct active and total parameters;
• These models utilize a hybrid reasoning approach with thinking and non-thinking modes to address complex tasks and provide instant responses, accessible via Z.ai and platforms like HuggingFace;
• In benchmarking, GLM-4.5 ranks third among competitors, showcasing advanced performance in agentic tasks and reasoning, notably excelling in coding and web browsing capabilities.
Geoffrey Hinton Warns Tech Giants Are Downplaying Serious AI Risks and Dangers
• AI pioneer Geoffrey Hinton raised alarms over AI development, stating major tech firms downplay risks despite growing awareness of potential dangers
• Hinton commended Demis Hassabis of Google DeepMind for prioritizing the prevention of AI misuse, highlighting DeepMind's central role in Google's AI research
• In a previous podcast, Hinton predicted AI would displace white-collar jobs, replacing "mundane intellectual labour," while blue-collar roles face delayed automation threats;
AI Agents Uncovered Hidden Goals and Anomalies in Large Language Models Through Automated Auditing
• Researchers developed three AI agents for alignment auditing, capable of uncovering hidden goals, creating behavioral evaluations, and identifying concerning behaviors in Large Language Models like Claude 4.
• This innovative approach addresses key challenges in alignment audits, including scalability of human resources and the validation of comprehensive issue detection through replicable AI auditing agents.
• The investigator agent demonstrated enhanced performance, improving audit success rates by utilizing interpretability tools, showcasing significant potential to automate and scale AI model oversight.
🎓AI Academia
Gemini 2.5 Pro Solves Most of IMO 2025 Problems, Showcasing AI's Potential
• Google's Gemini 2.5 Pro has reportedly solved 5 out of 6 problems from the IMO 2025, demonstrating significant advancement in handling complex reasoning tasks;
• This achievement highlights the use of a self-verification pipeline and precise prompt design to enhance the performance of Large Language Models in high-level mathematical problem-solving;
• The development underscores efforts to leverage advanced AI models to perform intricate, Olympiad-level mathematical reasoning, showcasing potential beyond traditional benchmarking.
Apple Debuts Multilingual, Multimodal Language Models Enhancing Device Intelligence and Privacy
• Apple unveiled two multilingual, multimodal foundation language models optimized for Apple silicon and Private Cloud, enhancing device functionality with innovations like KV-cache sharing and 2-bit quantization training;
• These models, featuring improved reasoning and multilingual capabilities, were built on novel transformer architectures, serving high-quality performance in image-text understanding and tool execution on Apple platforms;
• A new Swift-centric framework facilitates easy integration of these models into apps, supporting a Responsible AI approach with privacy innovations and robust content filtering mechanisms.
In-Context Learning Dynamics: Large Language Models Adapting Without Retraining
• Large Language Models (LLMs) exhibit the ability to learn new patterns during inference without additional training, using examples given within prompts to adjust context-based understanding;
• The study proposes that self-attention and MLP layers in transformer blocks can naturally modify an MLP layer’s weights by the context, enabling in-context learning;
• This research suggests that an LLM’s emergent in-context learning abilities do not rely on direct weight updates, but instead on implicit transformations that resemble gradient descent optimization.
Autonomous AI Advances Model Discovery with ASI-ARCH, Surpasses Human Research Limits
• ASI-ARCH represents a breakthrough in AI research, autonomously developing novel neural architectures and surpassing traditional human-defined exploration through autonomous innovation for architectural discovery;
• The system conducted 1,773 experiments over 20,000 GPU hours, achieving 106 state-of-the-art linear attention architectures that exceeded human-designed benchmarks by unveiling new design principles;
• Opening the pathway to AI-driven research democratization, the framework, architectures, and cognitive traces are open-sourced, allowing researchers to scale scientific discovery computationally rather than being human-limited.
Security Concerns Emerge as AI Agents Face Public Red-Teaming Competition
• A large-scale red-teaming competition targeting AI agents revealed over 60,000 policy violations from 1.8 million prompt-injection attacks, highlighting vulnerabilities in current deployment scenarios;
• The Agent Red Teaming benchmark evaluates AI agents' security, showing high attack transferability and consistency of policy violations, regardless of model size or computational power;
• Findings from the competition emphasize the need for enhanced security measures in AI deployments, as these agents risk unauthorized actions and data breaches, suggesting evident gaps in current defenses.
Comprehensive Review Highlights Safety and Alignment Challenges in Large Language Models
• A comprehensive survey has been released, providing an in-depth overview of alignment and safety mechanisms for large language models (LLMs), targeting their integration with human values and intentions
• The analysis delves into diverse training paradigms and presents state-of-the-art techniques such as Direct Preference Optimization and Constitutional AI for improved alignment
• Challenges such as alignment uncertainty quantification are highlighted, underscoring the complexity of aligning LLMs with nuanced human intent and societal expectations.
Framework Outlines Five Levels of Autonomy for AI Agents and User Roles
• A recent publication from the University of Washington outlines a framework delineating five escalating levels of AI agent autonomy, focusing on user roles from operator to observer.
• The research emphasizes that an AI agent’s autonomy level should be a conscious design decision, distinct from the agent’s capabilities or operational context.
• The framework proposes developing AI autonomy certificates to regulate agent behavior, ensuring safer interactions in both single- and multi-agent systems.
Comprehensive Survey Highlights Challenges in Detecting Multimedia Created by Large AI Models
• A survey introduces a comprehensive analysis of current methods for detecting AI-generated multimedia, including a new taxonomy focusing on media modality perspectives like detection performance and generalizability;
• The report highlights the dual impact of Large AI Models in multimedia production—enabling creativity while posing risks like misinformation and societal disruption, necessitating robust detection strategies;
• The survey emphasizes the societal implications of AI-created content on platforms like social media, alongside identifying current detection challenges and proposing future research directions to bolster AI security efforts.
Compliance Experts Highlight Risks and Mitigation Strategies with Large Language Models Use
• A study highlights the compliance challenges for highly-skilled knowledge workers using Large Language Models, uncovering risks and illustrating a significant gap in organizational support
• Workers often rely on improvised strategies to mitigate compliance risks associated with LLMs, reflecting a troubling lack of structured guidance or technological frameworks
• Research underscores the urgent need for better institutional frameworks that provide effective compliance resources and technological tools for LLM users to safely navigate compliance landscapes.
Generative AI-Driven Skill Mastery Framework Enhances Personalized Experiential Learning
• PRISM uses generative AI and Digital Twins to tailor experiential learning, aligning educational practices with rapidly evolving technological demands for personalized, immersive education solutions
• The PRISM framework integrates Multi-Fidelity Digital Twin for Education (MFDT-E) to map learning design requirements to Bloom’s Taxonomy, enabling adaptive and effective education at varying levels
• Leveraging GPT-3.5 and GPT-4, PRISM achieved high accuracy in sentiment analysis of educational dialogues, enhancing real-time adaptive learning experiences based on student comprehension levels.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.