Three Jurisdictions Now Have National AI Laws: The EU, South Korea, and Japan!
Global momentum around AI regulation continues to surge, with Japan opting for soft, innovation-first governance.
Today's highlights:
You are reading the 99th edition of The Responsible AI Digest by SoRAI (School of Responsible AI). Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training such as AI Governance certifications (AIGP, RAI) and an immersive AI Literacy Specialization. This specialization teaches AI using a scientific framework structured around four levels of cognitive skills. Our first course is now live and focuses on the foundational cognitive skills of Remembering and Understanding. Want to learn more? Explore all courses: [Link] Write to us for customized enterprise training: [Link]
🔦 Today's Spotlight
Governments worldwide have begun enacting national-level AI laws. However, most of these laws are not yet fully enforced, with only a few jurisdictions having active, binding provisions as of June 2025. Here's a snapshot of the four leading jurisdictions:
European Union: The AI Act (Regulation 2024/1689), effective from August 2024, introduces a comprehensive, risk-based regulatory regime. It bans unacceptable AI uses, imposes strict controls on high-risk systems (e.g. in healthcare, law enforcement), and mandates transparency for limited-risk AI (like chatbots or deepfakes). Non-compliance can result in fines up to €35 million or 7% of global turnover, with enforcement by national regulators and the new European AI Office.
Japan: The AI Promotion Act (formally, the Bill on Promoting Research, Development and Utilization of Artificial Intelligence-Related Technologies), passed in May 2025, is a principles-based framework. It aims to foster innovation and ethical AI use without creating risk categories or new penalties. It establishes a Cabinet-level AI Strategy Headquarters, encourages voluntary compliance, and focuses on international cooperation and trust-building. However, it has no enforcement teeth or mandatory obligations, making it a policy guide rather than a regulatory tool.
China: The Interim Measures for Generative AI Services (2023), effective August 2023, are the world’s first binding national rules focused on generative AI. They require providers to label AI-generated content, uphold Chinese legal/political norms, and protect user rights (under PIPL). Though no new penalties are defined, existing data and cyber laws are used for enforcement. The Cyberspace Administration of China (CAC) leads compliance efforts.
South Korea: The AI Basic Act (2024), enacted in December 2024 and effective January 2026, introduces a risk-based, rights-sensitive framework. It mandates impact assessments for high-risk systems and generative AI labeling, and requires foreign providers to designate local representatives. Fines (up to ₩30 million) and even criminal penalties may apply. It establishes a National AI Committee and AI Safety Institute, under the Ministry of Science and ICT (MSIT).
Comparative Analysis
The EU and South Korea represent the most structured, risk-based AI regimes, with clear rules, high-risk categories, and enforcement mechanisms. The EU leads in regulatory strictness, including extraterritorial scope and steep penalties. South Korea's approach mirrors the EU but with relatively softer penalties, emphasizing AI safety and national innovation.
Japan, by contrast, promotes a non-binding, cooperation-first model, focusing on innovation leadership without creating legal obligations or penalties. It uses existing laws to address AI harms and prioritizes long-term governance structures.
China stands out for its content-centric regulation. Rather than categorizing AI by risk or sector, it controls generative outputs through value-based rules and existing legal infrastructure, targeting alignment with national ideology and public order. Notably, the Interim Measures are often excluded from lists of national AI laws because they are not a formal statute passed by the legislature but an administrative regulation issued by government agencies, placing them lower in China's legal hierarchy. The Measures also apply only to public-facing generative AI services, focusing on content control, labeling, and political alignment, unlike the laws of the EU, South Korea, and Japan, which establish broader governance frameworks covering all AI systems.
Together, these laws illustrate divergent regulatory philosophies: the EU/Korea model of structured accountability, Japan’s soft-law coordination, and China’s control-through-content strategy. All four stress transparency, oversight, and responsibility—but vary in enforcement intensity, legal scope, and the balance between promoting AI and mitigating harm.
🚀 AI Breakthroughs
Perplexity Labs Launches for Pro Users, Transforming Ideas into Action in Minutes
• Perplexity Labs is now available, offering Pro subscribers tools to expedite project development with features like deep web browsing, code execution, and image creation
• Labs transforms ideas into actionable projects, handling tasks such as crafting dashboards, spreadsheets, and simple web apps, enhancing productivity in just 10 minutes
• Labs complements existing Perplexity features by enabling a more thorough creation process, while Deep Research remains essential for fast, comprehensive analysis;
ChatGPT reportedly hits 1 billion searches a day and is 5.5 times quicker than Google
• ChatGPT has grown eightfold in 2.5 years to 800 million monthly users, a remarkably rapid trajectory for any new technology platform.
• While Google took nine years to reach the benchmark of 365 billion searches, ChatGPT hit it in just two years. ChatGPT is said to have reached 1 billion searches a day, attaining the milestone roughly 5.5 times faster than Google did.
• Compared to 21 months ago, ChatGPT subscribers reportedly spend 3x more time on the app, pointing to the chatbot's growing popularity and usefulness among users across age groups.
• As a result, ChatGPT's paid subscriber base has grown by a whopping 153 per cent year over year. ChatGPT now contributes $3.7 billion to OpenAI's revenue, a tenfold increase in a year.
• The study also broke down ChatGPT's popularity by region: India topped the chart with a 13.5 per cent share, followed by the USA at 8.9 per cent and Indonesia at 5.7 per cent.
Conversational AI 2.0 Enhances Natural Interaction Flow With Advanced Turn-Taking Models
• Conversational AI 2.0 introduces a revolutionary natural turn-taking model, enhancing user experience with fluid dialogues by analyzing real-time conversational cues like “um” and “ah”
• Seamless multilingual capabilities are now enabled through automatic language detection, allowing businesses to engage in smooth cross-lingual interactions without manual settings, enhancing global customer service
• ElevenLabs’ AI platform supports multimodality, allowing agents to interact via text and voice concurrently, reducing workload and enhancing operational efficiency for enterprises;
Google DeepMind's New SignGemma Model Bridges Gap Between Sign Language and Text
• Google DeepMind's SignGemma aims to enhance sign language accessibility by translating various sign languages into spoken text, with a primary focus on American Sign Language and English;
• SignGemma expands DeepMind's Gemma model family, reinforcing the company's commitment to creating inclusive AI technologies by supporting diverse communication needs and breaking down language barriers;
• DeepMind's advancements in generative AI, including DolphinGemma and MedGemma, demonstrate the potential of AI to drive innovation across fields like animal communication analysis and medical applications.
Google Releases App to Run Hugging Face AI Models Offline on Phones
• Google released the Google AI Edge Gallery app, which lets users run AI models from Hugging Face on their phones offline, starting with Android and soon for iOS
• The app allows users to run locally downloaded AI models for tasks like generating images and answering questions, avoiding cloud-based data concerns and connection dependencies
• Available as an experimental Alpha on GitHub, Google AI Edge Gallery includes features like "Prompt Lab" with task templates and invites developer feedback under an Apache 2.0 license;
Hugging Face Launches Open-Source Humanoids HopeJR and Reachy Mini for Affordable Robotics
• Hugging Face expands into robotics with the release of two open-source humanoid robots, HopeJR and Reachy Mini, building on its acquisition of Pollen Robotics in April
• HopeJR, a full-sized humanoid robot, boasts 66 actuated degrees of freedom, while Reachy Mini serves as a desktop unit for AI application testing, both emphasizing open-source and affordability
• The company aims to ship the initial units by year's end, pricing HopeJR around $3,000 and Reachy Mini at $300, fostering accessibility and innovation in robotics;
India's AI Sector to Require One Million Professionals by 2026, Report Indicates
• A report by India's Ministry of Electronics & IT projects demand for AI professionals will reach 1 million by 2026, underscoring the tech sector's growing talent needs.
• Indian engineering education is evolving, with a 16% increase in B.Tech seats and over 50% growth in AI-related fields, reflecting surging industry demand.
• India's AI industry is set to reach $28.8 billion by 2025, with a 45% CAGR, making it one of the fastest-growing AI talent hubs globally.
Microsoft Layoffs Impact Software Engineers; AI Sparks Debate Over Coding Future
• Microsoft's layoff of 6,000 employees raised industry concerns, particularly because over 40% of affected roles were software engineers in Washington state, Bloomberg reported;
• Microsoft's $80 billion investment in AI infrastructure aligns with its vision to leverage AI, which is already responsible for up to 30% of code in some projects, reshaping developer roles
• Despite AI's rise, Microsoft's CPO argues that coding remains essential, predicting a shift toward roles like "software operators," balancing AI-generated code with technical expertise.
IBM and Roche Develop AI-Powered App to Enhance Diabetes Management and Collaboration
• IBM and Roche have launched the AI-enabled Accu-Chek SmartGuide Predict app, designed to improve daily diabetes management by predicting potential glycemic events using real-time glucose data
• The collaboration features predictive functions like Glucose Predict and Night Low Predict, providing proactive support for diabetes patients, thus emphasizing the power of cross-industry partnerships in healthcare
• IBM and Roche developed a tool to enhance clinical study analysis, showcasing how AI can expedite data processing and link continuous glucose monitoring data with patient activities efficiently.
Samsung Eyes Perplexity AI Integration as Major Mobile Partnership Expands Worldwide
• Samsung Electronics is nearing a significant investment deal with Perplexity AI to integrate its technology into Samsung devices, potentially reshaping user experiences
• The deal may involve preloading Perplexity's AI features on Samsung's upcoming S26 phones, and incorporating technologies into the Samsung web browser and Bixby assistant
• Samsung could emerge as a major investor in Perplexity’s new funding round, signaling a shift away from Google's search engine towards a more diverse AI partnership approach;
Anthropic Releases Open-Source Tools to Decipher Language Model Thought Processes
• Anthropic has open-sourced a method to trace the reasoning of large language models by creating attribution graphs, enhancing transparency and understanding of model outputs;
• Neuronpedia hosts an interactive frontend where users can generate, visualize, and share attribution graphs, facilitating research on model behaviors like multi-step reasoning and multilingual understanding;
• This initiative aims to improve AI interpretability, inviting the community to explore and enhance these tools, ensuring safer and more comprehensible AI development while encouraging collaborative advancements.
Bing Launches Free AI Video Creator for Mobile and Desktop Users Worldwide
• Bing introduces Bing Video Creator, allowing users to transform text prompts into 5-second videos through the Bing Mobile App and soon on desktop, for free;
• Users can generate videos by typing descriptive prompts and accessing tools within the app, as Bing Video Creator aims to democratize AI video generation with simplicity and accessibility;
• The platform includes responsible AI measures to prevent harmful content, ensuring a safe user experience while empowering creativity through dynamic video creation.
Meta and Anduril Develop High-Tech AI Helmet for U.S. Military Enhancement
• Meta is partnering with defense tech start-up Anduril to develop a high-tech headset known as EagleEye for the U.S. military, blending AI and augmented reality.
• The collaboration marks a significant shift for Meta, transitioning from consumer-centric services to military tech solutions, reflecting a new direction under recent U.S. political dynamics.
• EagleEye aims to enhance soldiers' capabilities by integrating advanced AI, improving battlefield situational awareness, and leveraging Anduril's past work on augmented reality military applications.
Character.AI Expands with Multimedia Features Amid Concerns Over Potential Misuse
• Character.AI introduces AvatarFX, Scenes, and Streams, enabling users to create and share videos of AI-generated characters on a new community-focused social feed
• Users can create up to five AvatarFX videos daily, uploading photos, choosing voices, and scripting dialogues for their characters; however, some audio features remain in testing
• Security concerns persist as Character.AI expands multimedia options; allegations of past platform abuse heighten worries over potential misuse of the evolving features.
⚖️ AI Ethics
A.I. Replacing Entry-Level Jobs, Recent Graduates Face Unemployment Challenges and Uncertainty
• Recent graduates face higher unemployment rates as companies opt for A.I. solutions over entry-level workers, signaling a shift towards automation in industries.
• A.I. advancements in technical fields like software engineering are leading firms to automate entry-level roles, impacting job availability for fresh college graduates.
• The accelerated use of A.I. to replace junior roles may reduce investment in job training and mentorship, potentially affecting career development for new workers.
DeepSeek's New R1-0528 AI Model Excels in Tests Yet Faces Censorship Issues
• DeepSeek’s latest AI model, R1-0528, delivers remarkable performance on benchmarks for coding, math, and general knowledge, nearly matching OpenAI’s o3;
• The R1-0528 model displays increased censorship, particularly around subjects controversial to the Chinese government, limiting responses to sensitive topics like criticisms of China;
• Stringent Chinese regulations necessitate AI models like R1-0528 to avoid generating content that threatens national unity, leading to restricted discussion on contentious political narratives.
OpenAI Plans to Transform ChatGPT into an Intuitive Super Assistant by 2025
• OpenAI plans to transform ChatGPT into a "super assistant" using smarter models and tools like multimodal interaction, aiming for a 2025 launch, despite competition from companies like Meta and Google;
• The "super assistant" version of ChatGPT will possess T-shaped skills, combining deep expertise and a broad understanding across various tasks, from coding to managing everyday activities;
• OpenAI stresses the importance of providing users with choices in AI assistants and search engines, advocating against tech giants promoting only their own AI solutions without fair alternatives.
Major Labels in Talks with AI Startups to License Music and Settle Lawsuits
• Universal, Warner, and Sony Music are negotiating with AI startups Udio and Suno to license music for AI training, potentially settling high-stakes copyright lawsuits
• Talks involve majors receiving fees and equity, echoing Spotify's past deals, potentially influencing future music industry-AI partnerships and legal frameworks
• Despite ongoing legal disputes, these discussions may indicate a shift towards collaboration between AI firms and creative industries, amid broader industry concerns over copyright use in AI training;
Japan Enacts AI Law to Foster Development While Managing Technological Risks
• A new AI-focused law was enacted in Japan to advance AI development while mitigating associated risks, with wide support from multiple political parties, signifying a bipartisan approach;
• The law empowers the government to publicly identify malicious businesses using AI for criminal activities, addressing concerns over false information and crime linked to AI misuse;
• Without imposing penalties that might hinder AI innovation, the government will apply existing laws and establish a strategic AI team to draft a national AI policy and combat deepfakes;
Sadhguru Wins Court Order Protecting Personality Rights Against Rogue Websites
• The Court recognized Sadhguru as a global spiritual figure and emphasized that misusing his likeness damages his reputation and public trust
• An interim injunction was issued against online platforms infringing Sadhguru’s personality rights through URL-redirection and identity masking for unlawful gains
• Authorities were instructed to ensure ISP and social media compliance with the Court's order, while rogue websites received summons for response by October 14.
Chinese-Linked Hackers Exploit Google Calendar to Steal Sensitive Data from Individuals
• Google’s Threat Intelligence Group revealed a cyber campaign by Chinese hacker group APT41, exploiting Google Calendar for data theft through spear phishing tactics;
• The attack involved emails with links to a ZIP file on a compromised site, triggering malware disguised as PDFs and image files on victims' devices;
• Google Calendar events were utilized to send encrypted commands and extract stolen data, leading to malware operations and data theft until Google dismantled the hackers' infrastructure in October 2024;
Shiv Sena-UBT Utilizes AI for Civic Polls, Launches AI News Anchor in Saamana
• Shiv Sena-UBT leverages AI to recreate Bal Thackeray's voice, engaging tech-savvy youth ahead of Maharashtra's civic polls, amid political challenges and a split in the party;
• The party launches 'Tejasvi AI', Marathi media's first AI news anchor on Saamana's YouTube channel, aiming to enhance its digital outreach and modernize communication strategies;
• Shiv Sena-UBT leaders, including MP Anil Desai, actively integrate AI for data analysis, with Aaditya Thackeray championing its use in the party's operational activities.
🎓AI Academia
User Reviews Highlight Key Gaps and Needs in Current AI Governance Frameworks
• A study analyzing over 100,000 user reviews of AI products reveals governance-relevant themes through a bottom-up approach, highlighting practical concerns across organizational and operational settings;
• The analysis uncovers topics related to AI governance, including organizational processes, deployment infrastructure, data handling, and analytics, reflecting both technical and non-technical domains;
• Findings show overlap with existing AI governance frameworks on issues like privacy but also spotlight overlooked areas such as project management and customer interaction, advocating for user-centered governance approaches.
Watermarking as AI Governance Tool Falls Short of Effective Oversight Standards
• A forthcoming paper argues that current AI watermarking methods are more symbolic than substantial, failing to provide effective oversight as regulatory demands surge
• Researchers highlight a discrepancy between regulatory expectations and the technical shortcomings of existing AI watermarking schemes, which lack robustness and transparency
• A proposed framework aims to strengthen watermarking's role in AI governance through standardized techniques, audit systems, and enforcement, tackling the issue of widespread misinformation.
Investigating the Role of Generative AI in Enhancing Cybercrime Activities Worldwide
• Recent investigative research highlights the increasing use of generative AI in cybercriminal activities, illustrating its capacity to automate large-scale phishing attacks and fraudulent schemes
• The study from a U.S. university underscores the challenges faced by cybersecurity professionals in combating AI-driven threats, urging enhanced collaboration and robust AI defenses
• Experts warn that generative AI tools are being exploited on the dark web, significantly lowering barriers for cybercriminals and accelerating the spread of sophisticated cyber attacks.
Tertiary Review Identifies Gaps in AI Governance Frameworks and Stakeholder Engagement
• An AI governance review emphasizes the EU AI Act and NIST RMF but highlights a gap in translating these frameworks into actionable governance and stakeholder mechanisms;
• The review identifies transparency and accountability as predominant AI governance principles, underscoring a consensus on foundational responsible AI tenets such as fairness and privacy;
• The study calls attention to the need for empirical validation and inclusion in AI governance, offering insights for academia and organizations on refining effective governance strategies.
Security Challenges and Threats Identified in Survey on Large Language Models
• A recent survey has identified major security threats in large language models, including prompt injection, adversarial attacks, misuse by malicious actors, and risks from autonomous agents;
• Advancements in large language models like GPT-4, Gemini, and Claude, while revolutionary, are increasingly susceptible to threats due to uncurated datasets and open-ended user interactions;
• Experts stress the need for robust, multi-layered security strategies to manage emergent risks, emphasizing issues like goal misalignment, strategic deception, and malicious behavior in autonomous agents.
Large Language Models Enhance DNA, RNA, and Protein Analysis in Bioinformatics
• A recent survey highlights how Large Language Models (LLMs) are transforming bioinformatics, particularly in DNA, RNA, protein analysis, and single-cell data processing
• Key challenges identified for LLMs in bioinformatics include data scarcity, computational complexity, and the integration of cross-omics, with calls for innovative solutions
• Future directions for LLMs in bioinformatics emphasize multimodal learning, hybrid AI models, and potential clinical applications, underscoring their pivotal role in precision medicine advancements.
A Survey Highlights Machine Unlearning Approaches for Handling Issues in Large Language Models
• A recent survey investigates machine unlearning methods in large language models (LLMs), detailing challenges in erasing undesirable data while maintaining overall utility without full retraining
• The study categorizes existing LLM unlearning approaches, assessing their strengths and limitations, and reviews evaluation methods to offer a comprehensive framework for the field's current state
• Future research directions are highlighted, focusing on overcoming technical hurdles like high retraining costs and ensuring compliance with privacy laws like the GDPR and the EU AI Act.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.