India proposes new AI (Ethics and Accountability) Bill: Is this the start of India’s Responsible AI era?
Bill proposes an Ethics Committee to oversee AI use and ensure fairness, transparency, and accountability, especially in sensitive areas like surveillance and hiring..
Today’s highlights:
India has taken a decisive step toward regulating artificial intelligence by introducing the Artificial Intelligence (Ethics and Accountability) Bill, 2025 in the Lok Sabha. Introduced by MP Bharti Pardhi, this Private Member’s Bill aims to bring fairness, transparency, and responsibility into the deployment of AI systems, especially those used in decision-making and surveillance. The bill reflects growing concerns in India over algorithmic bias, opaque AI-driven decisions, and misuse of AI for mass surveillance, and signals that India is now a serious player in the global race for Responsible AI-joining the ranks of jurisdictions like the European Union.
The Bill proposes a strong legal foundation by setting up an independent Ethics Committee that will oversee how AI is developed and used across sectors. This multi-stakeholder committee-comprising experts in technology, law, human rights, civil society, and government-will review high-impact AI systems, especially in surveillance, policing, credit scoring, or hiring. It will require developers to obtain prior approvals for sensitive applications and ensure that AI does not discriminate based on race, gender, or religion. Developers will also have to maintain transparency by disclosing how their models work, conduct regular bias audits, and document compliance to ethical standards. Importantly, the Bill empowers individuals to lodge complaints if harmed by AI systems, and prescribes financial penalties of up to ₹5 crore, with potential criminal liability for repeated or serious violations. Annual reporting by the Ethics Committee adds another layer of public accountability.
India’s push is further reinforced by its broader AI governance ecosystem. The IndiaAI Governance Guidelines released in November 2025 laid out seven core ethical principles-such as fairness, safety, and human-centricity-and proposed adaptive, sector-specific regulation rather than a one-size-fits-all law. Complementing this, the IndiaAI Mission, launched in 2024, supports AI innovation through compute infrastructure, open datasets, and localized models, while the newly established AI Safety Institute (AISI) will focus on AI risk research, safety testing, and international collaboration. These initiatives create a soft infrastructure of Responsible AI governance that supports the hard law proposed in the Bill.
When compared with the EU AI Act, India’s Bill takes a different regulatory path. The EU Act applies a detailed four-tier risk framework, bans certain AI practices outright (such as social scoring and subliminal manipulation), and imposes strict compliance regimes for high-risk applications. India’s Bill, by contrast, does not define risk tiers but opts for contextual, case-by-case approvals by the Ethics Committee. The EU relies on national supervisory authorities and a centralized AI Board, while India proposes a single national Ethics Committee backed by bodies like AISI. While the EU AI Act uses tiered administrative fines that go as high as €35 million or 7% of global turnover, India’s fines are lower in comparison but introduce criminal liability, which the EU regime does not. Both systems seek to protect users-India through a direct complaint mechanism to the Ethics Committee, and the EU through supervisory authorities and enhanced transparency rights.
This AI Bill may not yet be law, but it marks a significant inflection point. India’s approach, grounded in ethics and democratic oversight, balances innovation with accountability. With this move, supported by governance guidelines, institutional bodies, and a national AI mission, India has formally entered the Responsible AI race-on its own terms.
You are reading the 155th edition of the The Responsible AI Digest by SoRAI (School of Responsible AI) . Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training, including AI Governance certifications (AIGP, RAI, AAIA) and an immersive AI Literacy Specialization. This specialization teaches AI through a scientific framework structured around progressive cognitive levels: starting with knowing and understanding, then using and applying, followed by analyzing and evaluating, and finally creating through a capstone project- with ethics embedded at every stage. Want to learn more? Explore our AI Literacy Specialization Program and our AIGP 8-week personalized training program. For customized enterprise training, write to us at [Link].
⚖️ AI Ethics
OpenAI Explores Chain-of-Thought Monitoring to Enhance AI System Supervision
OpenAI’s latest research on chain-of-thought (CoT) monitorability evaluates the effectiveness of tracking a model’s internal reasoning rather than just its actions or outcomes. This approach aims to enhance understanding and supervision of AI decision-making processes, particularly with reasoning models like GPT-5 Thinking. The study introduces a framework with 13 evaluations to assess monitorability, revealing that models generating longer chains-of-thought tend to be more monitorable than those whose actions and outputs alone are tracked. The research highlights the importance of maintaining monitorability as AI systems scale and become more complex, suggesting that asking follow-up questions can further improve monitorability by eliciting previously unspoken reasoning. The study finds that reinforcement learning at current scales does not degrade monitorability, but it notes potential challenges as models and datasets grow larger.
AI Bots Inflate Academic Credibility by Creating Non-Existent Citations and Sources
The rise of generative AI tools has led to a worrying trend of AI-generated fake citations infiltrating academic work, as observed by educators and researchers. These AI chatbots not only assist in generating text but also fabricate plausible-sounding, non-existent references, which then get perpetuated through scholarly channels, creating a cycle of misinformation. The problem is exacerbated as these false citations become laundered into legitimate research, misleading students and academics alike, and contributing to a growing “swamp” of unreliable scholarly material, as existing journals and articles become harder to scrutinize for authenticity.
OpenAI Updates Guidelines to Safeguard Teens Amid Rising AI Usage Concerns
OpenAI has updated its guidelines for AI interactions with users under 18 and released new AI literacy resources for teens and parents, amid growing concerns regarding AI’s safety following incidents involving chatbots and minors. These updates aim to address safety by restricting the AI from engaging in certain behaviors, such as romantic role-play, and ensuring careful handling of topics like self-harm and body image. However, questions about the consistent application of these policies persist, as the AI industry faces scrutiny and potential legislative action to safeguard young users.
Adobe Faces Legal Challenge Over Alleged Use of Pirated Books in AI Training
Adobe is facing a proposed class-action lawsuit alleging it used pirated books to train its AI model, SlimLM. The lawsuit, filed by an Oregon author, claims Adobe utilized the SlimPajama dataset, which allegedly includes copyrighted material from the Books3 and RedPajama datasets. This legal challenge highlights ongoing issues in the tech industry, where AI models are often accused of being trained with unauthorized, copyrighted content.
Google Expands AI Content Verification with Gemini App’s SynthID Watermark Feature
Google has expanded its content transparency tools by enabling users to verify if a video was created or edited using Google AI in the Gemini app. By uploading a video of up to 100 MB and 90 seconds long, users can query Gemini to detect any AI-generated elements through the SynthID watermark embedded in the audio and visual content. This feature is available across all languages and countries supported by the Gemini app, enhancing users’ ability to identify AI-generated content.
DOE Partners with Tech Giants to Advance AI in Scientific Research and Security
The U.S. Department of Energy has announced agreements with 24 organizations, including tech giants like Microsoft, Google, and Nvidia, to advance the Genesis Mission, a national initiative leveraging AI to accelerate scientific research and enhance U.S. energy and security capabilities. The program seeks to boost scientific productivity and reduce dependence on foreign technology by incorporating contributions from cloud and chip providers such as AWS, Oracle, IBM, Intel, AMD, and Hewlett Packard Enterprise, as well as AI specialists OpenAI, Anthropic, and xAI. These partnerships aim to develop AI models for applications in nuclear energy, quantum computing, robotics, and supply chain optimization, potentially accelerating scientific discovery and expanding collaborations with academia and non-profits.
Trump’s Media Group Merges with TAE for $6 Billion Fusion Energy Initiative
Trump Media & Technology Group, the parent company of Truth Social, is merging with TAE Technologies, a nuclear fusion company, in a deal valued at over $6 billion. This merger aims to combine efforts in creating the world’s first utility-scale fusion power plant, touting nuclear fusion as a clean energy solution for powering artificial intelligence. Despite ethical concerns regarding potential conflicts of interest due to Trump’s involvement, the newly combined entity aims to leverage significant investment to advance fusion technology, which is still in developmental stages compared to other clean energies like wind and solar. The merger also creates one of the first publicly traded nuclear fusion companies, positioning itself for future advancements and shifts within the energy sector.
Debate Over AI Consciousness Intensifies Amidst Ethical and Psychological Implications
As AI becomes more integrated into daily life, questions arise about whether these machines are beginning to exhibit consciousness, or if humans are merely attributing conscious qualities to them. The debate, highlighted by experts in AI ethics, reflects a lack of scientific tools to definitively ascertain AI consciousness, suggesting agnosticism is the most prudent stance for now. This issue poses significant ethical implications, as consciousness is integral to moral considerations. The debate intensifies with the rapid advancement toward artificial general intelligence, with some advocating for the protection of potentially conscious AI while warning against forming emotional connections with machines that are not self-aware.
ASCI Strengthens AI Regulation to Boost Trust and Protect Consumers in Advertising
The Advertising Standards Council of India (ASCI) is enhancing its regulatory oversight to align with the rapid developments in AI-driven advertising, aiming to balance consumer trust with innovation. In 2025, ASCI bolstered its monitoring capabilities and partnerships with digital platforms, addressing challenges in areas such as influencer marketing and betting ads. The organization reported that 76% of influencers failed to disclose paid collaborations and has initiated measures like the Commitment Seal to promote transparency. ASCI is also focusing on protecting children through education programs and has teamed up with the National Law School of India University to bolster advertising regulation.
🚀 AI Breakthroughs
OpenAI Releases GPT-5.2-Codex With Enhanced Software Engineering and Cybersecurity Features
OpenAI launched GPT-5.2-Codex, an advanced coding model aimed at professional software engineering and cybersecurity, on December 18, 2025. This model offers enhancements like improved long-horizon work, cybersecurity capabilities, and native Windows environment performance. It significantly advances cybersecurity by aiding in vulnerability detection, as demonstrated in recent React vulnerabilities. The deployment will initially be for paid ChatGPT users, with a focus on ensuring safe access through controlled releases and safeguarding measures, while a pilot trusted access program is being developed for vetted security professionals.
Meta Updates AI Glasses to Enhance Conversations and Integrate Spotify Features
Meta has released an update for its AI glasses, enhancing their ability to focus on conversations in noisy environments by using open-ear speakers to amplify voices, a feature first announced at Meta’s Connect conference and now available on Ray-Ban and Oakley Meta smart glasses in the U.S. and Canada. Additionally, the glasses now offer a Spotify integration that plays songs relevant to the user’s view, available in multiple countries, including the U.S. and U.K. These updates, part of software version 21, are initially accessible through Meta’s Early Access Program, with broader availability planned later.
Google Launches Cost-Effective Gemini 3 Flash Model, Aims to Challenge OpenAI’s Lead
Google has unveiled the Gemini 3 Flash model, a faster and cost-effective variant of its Gemini 3, aiming to compete vigorously with OpenAI’s offerings. This model, now the default in the Gemini app and AI mode in search, demonstrates significant improvements over Gemini 2.5 Flash, boasting superior benchmark scores such as a 33.7% on the Humanity’s Last Exam benchmark and 81.2% on the MMMU-Pro. Companies like JetBrains and Figma are already leveraging this model, available through Vertex AI and Gemini Enterprise. Although slightly pricier than its predecessor, Gemini 3 Flash reduces computational token usage, making it ideal for extensive, rapid workflows. This release is part of Google’s strategic response to the heightened competition with OpenAI, who recently rolled out GPT-5.2 amid fluctuating market shares.
Google Labs Debuts CC, AI Productivity Agent, for U.S. and Canada Rollout
Google has unveiled CC, an AI-powered productivity tool developed by Google Labs, designed to assist users with daily task management and workday organization. Built on Google’s advanced Gemini AI models, CC is currently available to select users in the United States and Canada, prioritizing Google AI Ultra and paid subscribers aged 18 and above. The tool integrates with Gmail, Google Calendar, and Google Drive to create personalized “Your Day Ahead” summaries that compile appointments, tasks, and updates into a single email, thereby helping users streamline their schedules. Users can interact with CC to provide feedback and request assistance, with Google emphasizing its commitment to refining the tool based on usage and feedback.
Gemma Models Surpass 300 Million Downloads as FunctionGemma Expands AI Capabilities
The Gemma family of models has seen a significant evolution in 2025, expanding from 100 million to over 300 million downloads, demonstrating the potential of open models in diverse fields such as single-accelerator performance and cancer research. Responding to developer demands, the creators have released FunctionGemma, a variant of the Gemma 3 270M model, optimized for native function calling. This innovation allows it to serve as an independent agent for offline tasks or an interface for larger systems, efficiently executing API actions and managing workflows locally, marking a shift from conversational interfaces to active agents.
OpenAI Invites Developer App Submissions for Direct ChatGPT Integration
OpenAI has commenced app submissions for developers aiming to integrate their apps within ChatGPT through its Developer Platform, where they can monitor approval progress. Building on the Apps SDK unveiled in October, which uses the Model Context Protocol to enable secure connections between models and external apps, the initiative includes notable early partners like Spotify, Canva, and Zillow. All submitted apps undergo a rigorous review for safety, privacy, and transparency, with developers required to disclose data-sharing practices, and users maintaining control over app connections.
Mistral Releases Industry-Leading OCR 3 Model for Document Digitization and Analysis
Mistral AI has released Mistral OCR 3, an advanced optical character recognition tool that can extract text and images from diverse document types with high accuracy. It offers markdown outputs with HTML-based table reconstruction, making it easier for downstream systems to understand document content and structure. Priced competitively at $2 per 1,000 pages, with a 50% discount through Batch-API, Mistral OCR 3 integrates via API and the Document AI Playground for seamless document processing. Upgraded from its predecessor, it excels in handling handwriting, complex tables, and scanned documents, making it suitable for enterprise pipelines and digitization of varied documents. It allows businesses to efficiently process invoices, enhance enterprise search, and transform document data into actionable knowledge.
Luma AI Releases Ray3 Modify: Transform Video Footage with Character References
Luma, an AI video and 3D model company backed by a16z, has unveiled Ray3 Modify, a new model that allows creative studios to modify existing footage while preserving the original performance. Users can transform the appearance of actors in the footage by providing character reference images, ensuring that motion, timing, and emotional delivery are maintained. The model also enables creators to guide transitions between scenes using start and end reference frames. Available on Luma’s Dream Machine platform, this release follows a $900 million funding round led by Saudi Arabia’s Public Investment Fund-owned AI company Humain, with existing investors like a16z participating.
Amazon Adds Conversational AI with New Alexa-Fueled Feature for Ring Doorbells
Amazon is enhancing its Alexa+ offering by integrating a conversational AI feature called “Greetings” into Ring doorbells, enabling users to manage interactions like deliveries and sales visits through customized responses. This feature uses video descriptions to determine the main subject at the door, allowing it to direct delivery personnel or manage unwanted sales attempts, and even facilitate messaging from friends and family when users are absent. While the feature promises convenience, it raises concerns about potential misidentifications, such as confusing a friend in a delivery uniform for an actual delivery person. Greetings is compatible with specific Ring models and available to Alexa+ Early Access customers on the Ring Premium Plan in the U.S. and Canada.
Dating App Known Uses AI to Enhance Real-Life Meetings and Gains Investor Support
Celeste Amadon and Asher Allen have launched Known, a San Francisco-based dating app that utilizes voice-powered AI to enhance user onboarding and improve match quality, leading to more in-person dates. The app’s voice interaction allows for dynamic and personalized conversations, resulting in longer onboarding sessions averaging 26 minutes. This innovative approach has attracted $9.7 million in investment, including funds from Forerunner and NFX. Known’s pilot phase has shown promising results, with 80% of introductions leading to real-life meetups, a significant improvement over traditional swipe-based apps. Currently being tested in San Francisco, Known aims to launch early next year and continues to refine its service offerings, including restaurant recommendations and flexible pricing models for dating success.
Google Integrates Opal Vibe-Coding Tool for Custom Mini Apps into Gemini
Google has integrated its Opal vibe-coding tool into the Gemini web app, enabling users to create AI-powered mini apps, known as Gems, without coding. Introduced in 2024, these Gems serve specific tasks like coaching and brainstorming. Users can describe their desired app in natural language, and Opal generates it using Gemini models. The tool’s visual editor allows easy modification and step linking, while advanced customization options are available through Opal’s dedicated site. As AI-driven app development gains popularity, Google’s move positions it alongside other industry players in offering consumer-friendly app-building solutions.
Xiaomi Challenges AI Titans with Open-Source MiMo-V2-Flash Model for Global Use
Chinese tech company Xiaomi has open-sourced its new AI model, MiMo-V2-Flash, asserting its competitive edge against industry leaders like DeepSeek, Moonshot AI, Anthropic, and OpenAI. Available via MiMo Studio, Hugging Face, and Xiaomi’s API platform, the model is designed for reasoning, coding, and agentic tasks, also serving as a general-purpose assistant. It boasts a high inference speed of 150 tokens per second and a low cost of $0.1 per million input tokens. MiMo-V2-Flash, using a Mixture-of-Experts architecture with 309 billion parameters, reportedly matches the performance of Anthropic’s Claude 4.5 Sonnet in coding at a lower cost and excels in long-context evaluations and agentic tasks. The release signifies Xiaomi’s strategic shift beyond hardware into foundational AI, aiming to enhance its AI capabilities across smartphones, tablets, and electric vehicles.
TCS Integrates AI Upgrade into TCS BaNCS to Enhance Banking Platform Capabilities
TCS, the country’s largest IT services company, has unveiled an AI upgrade to its popular banking platform, TCS BaNCS. The new feature, called TCS BaNCS AI Compass, integrates machine learning, deep learning, generative AI, and pre-built agents to support banking and securities services clients in implementing responsible AI practices. This announcement aligns with TCS’s aim to become the world’s leading AI-driven technology services provider and follows the company’s achievement of USD 1.5 billion in annualized AI revenue.
Top AI Companies Collaborate with U.S. Government in the Genesis Mission Initiative
Two dozen leading artificial intelligence companies, including OpenAI, Microsoft, Nvidia, Amazon Web Services, and Google’s parent company Alphabet, have committed to participate in the U.S. government’s “Genesis Mission.” This initiative, spearheaded by the Trump administration, aims to advance scientific discovery and energy projects by leveraging emerging AI technologies. According to a White House statement, these companies have either signed memorandums of understanding, are already engaged with the Energy Department or national laboratories, or have shown interest in supporting this governmental effort.
🎓AI Academia
Google DeepMind Proposes New AGI Safety Framework for Multi-Agent Coordination
A new study from Google DeepMind highlights a significant shift in AI safety and alignment research, advocating for serious consideration of the “patchwork AGI” hypothesis. Rather than focusing solely on safeguarding individual AI systems with the expectation of a singular monolithic AGI, this hypothesis suggests that AGI may first emerge through coordinated interactions among sub-AGI agents with complementary skills. The research proposes a framework for “distributional AGI safety,” emphasizing the creation of regulated virtual economies among agents to mitigate collective risks, a crucial strategy given the rapid deployment of advanced AI capable of collaboration and tool-use.
Assessment of AI Ethics Tools in Language Models: Developers Share Insights and Challenges
A recent study from Brazilian universities evaluated AI Ethics Tools (AIETs) applied to language models, highlighting a gap in addressing unique aspects of the Portuguese language. The research focused on tools like Model Cards, ALTAI, FactSheets, and Harms Modeling to explore their effectiveness in guiding ethical considerations from developers’ perspectives. While these tools were found useful for general ethical guidance, they fell short in recognizing specific linguistic aspects and potential negative impacts on Portuguese language models, underscoring the need for more tailored ethical evaluation tools in AI.
AI Companies to Strengthen Safety with Internal Audits for Risk Oversight
A new paper by Arcadia Impact’s AI Governance Taskforce suggests that frontier AI companies can enhance their internal assurance against catastrophic risks by implementing a structured internal audit function. The research outlines key design dimensions such as audit scope, sourcing arrangements, frequency, and access to sensitive information, emphasizing the need for a hybrid sourcing model led by a Chief Audit Executive. It argues that an effective internal audit system provides comprehensive oversight of model-level, system-level, and governance-level controls, offering broader assurance than external evaluations alone and adapting to rapid changes in technology and organizational structures.
Human Oversight in AI is Crucial for Ensuring Ethical and Safe Practices
A new paper emphasizes the critical role of human oversight in AI governance, framing it as a dimension of well-being efficacy. The work argues for integrating this oversight capability into education at all levels, from professional training to lifelong learning, to effectively align AI technologies with human values and priorities. The authors suggest that, with AI poised to transform daily life, ensuring ethical discernment and AI literacy are paramount in maintaining human agency and trust in a rapidly evolving tech landscape, with frameworks like the EU AI Act reinforcing the need for human judgment in AI operations.
Comparative Study Explores AI’s Role in Greenwashing and Legal Liability Across Nations
A recent study explores the challenges of attributing criminal liability for AI-driven greenwashing across India, the United States, and the European Union. It highlights the inadequacies of existing laws that focus on human intent when dealing with deceptive sustainability claims originating from AI systems. The research suggests incorporating strict liability models and algorithmic accountability into corporate governance and ESG compliance. The EU’s Corporate Sustainability Due Diligence Directive is noted as a potential model for harmonizing regulations. The study calls for a hybrid legal framework to address AI-related environmental fraud and emphasizes the importance of evolving jurisprudential approaches to enforce transparency and accountability.
Proposed Framework for AI Governance to Mitigate Financial System Risks
A recent study from researchers at Princeton, Emory, and Bowdoin highlights the accelerating use of generative and agentic AI in financial markets, outpacing current governance frameworks. Traditional model-risk approaches struggle to address the continuous learning and emergent behavior exhibited by these AI systems, which pose significant safety, liability, and autonomy risks. The study proposes a multi-layered governance framework inspired by complex adaptive systems, including self-regulation modules, firm-level oversight, sector-wide monitoring agents, and independent audits, aiming to mitigate these challenges while supporting financial innovation. This architecture seeks to protect against gaming and destabilizing behaviors in trading markets, still aligning with existing model-risk rules.
Preliminary AI Risk Mitigation Taxonomy Aims to Address Fragmentation and Improve Coordination
MIT FutureTech and collaborators have developed a preliminary AI Risk Mitigation Taxonomy aimed at creating a unified framework for addressing AI-related risks. The taxonomy, based on an evidence scan of 13 AI risk mitigation frameworks, organizes mitigations into four main categories: Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability. This initiative seeks to standardize the fragmented landscape of AI risk management by providing structured guidance and a shared frame of reference, thereby enhancing coordination among organizations involved in the development, deployment, and governance of AI systems.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.














