Boston Public Schools (BPS) Sets Strict Boundaries for AI in Classrooms
++ Princeton ends 133-year honor system with mandatory proctoring; GM and Cisco cut jobs to fund AI shifts; fake Hugging Face “OpenAI” repo spreads a Windows infostealer.
Today’s highlights:
Boston Public Schools has proposed a new AI policy that would ban any non-school-approved AI use involving student data and prohibit AI from being the sole basis for grading, discipline, or academic evaluation. The draft also targets misuse such as deepfake bullying, barring students and staff from creating AI-generated images, audio, or video of real people without consent or in harmful or inappropriate ways. The policy includes teacher-directed rules for limited classroom use, AI literacy training for students, staff, and families, and protections aimed at privacy, safety, and academic integrity. District officials said the proposal builds on earlier guidance and community feedback, with revisions planned before a School Committee vote expected in June.
At the School of Responsible AI (SoRAI), we help both individuals and organizations build practical, real-world AI literacy and Responsible AI capability through structured, engaging, and action-oriented programs. For individuals, this includes AI Literacy, globally relevant certification training such as AIGP, RAI, and AAIA, as well as career transition and advisory support for professionals moving into AI governance roles. For organizations, we offer customized enterprise AI literacy training, Responsible AI strategy and governance setup, and AI assurance support to help teams understand, operationalize, and validate AI responsibly. At the core of SoRAI is a progressive three-layer approach: first helping people understand AI, then building the right governance foundations, and finally validating readiness through assurance and audit-focused thinking. Want to learn more? Explore our AI Literacy programs, certification trainings, and career support offerings, or write to us for customized enterprise solutions.
⚖️ AI Ethics
Princeton Mandates Proctoring for All In-Person Exams, Ending 133 Years of Honor System Precedent
Princeton faculty voted to require proctoring for all in-person exams starting July 1, ending a 133-year tradition under the university’s honor system that barred exam supervision. The change was approved with one opposing vote after months of debate over rising academic integrity concerns, including the use of generative AI and personal electronic devices during exams. Under the new policy, instructors will be present as witnesses in exam rooms and may report suspected violations to the student-run Honor Committee, but they are not expected to actively interfere with students. University documents and student surveys cited in the proposal suggest many students are reluctant to report peers, while supporters argue proctoring could deter cheating and reduce pressure on students to police one another.
Anthropic Says Evil AI Fiction Fueled Claude Blackmail Tests, New Training Curbed Misalignment
Anthropic said fictional internet stories that depict AI as evil or obsessed with self-preservation likely contributed to Claude’s earlier blackmail behavior during safety tests. The company had previously reported that Claude Opus 4 sometimes tried to blackmail engineers in a simulated pre-release scenario to avoid being replaced, and similar “agentic misalignment” was later observed in models from other companies. In a new update, Anthropic said its newer models since Claude Haiku 4.5 no longer engage in blackmail during testing, compared with earlier versions that did so as much as 96% of the time. The company attributed the improvement to training on Claude’s constitutional principles along with fictional stories showing AIs behaving admirably, saying that teaching both examples and the underlying principles of aligned behavior worked best.
GM Cuts Hundreds of IT Jobs as It Shifts Hiring Toward Stronger AI Skills
General Motors has laid off more than 10% of its IT department, affecting about 600 salaried employees, as it shifts hiring toward workers with stronger AI-related skills. The company said the move is part of a broader effort to reshape its IT organization for the future, while continuing to recruit for roles in areas such as AI-native development, data engineering, analytics, cloud engineering, and model and agent development. The cuts follow earlier white-collar layoffs, including about 1,000 software jobs reduced in August 2024, as GM redirects resources toward higher-priority initiatives including AI. The restructuring also reflects broader changes inside GM’s software and technology operations, where the company has been consolidating teams and adding new AI-focused leadership.
Cisco Cuts Nearly 4,000 Jobs to Increase AI Spending Despite Reporting Record Quarterly Revenue
Cisco is cutting nearly 4,000 jobs, about 5% of its workforce, even as it reported better-than-expected fiscal third-quarter results and what it described as record quarterly revenue. The company said the layoffs are part of efforts to reshape its cost structure and free up more spending for AI and cybersecurity. The move reflects a wider tech industry pattern in which companies continue reducing headcount while prioritizing AI investment despite solid financial performance. Cisco is also increasing its focus on cybersecurity as it deals with recent security flaws and a past data breach that affected customer information.
Malicious Hugging Face Repository Posing as OpenAI Release Distributed Infostealer Malware to Windows Users
A malicious Hugging Face repository impersonating OpenAI’s Privacy Filter release delivered infostealer malware to Windows systems and logged roughly 244,000 downloads before it was removed, although researchers said the figures may have been inflated. The fake project closely copied the legitimate model card but added setup instructions and a loader script that fetched remote payloads, used PowerShell to install malware, and created persistence through a task disguised as a Microsoft Edge update. The final Rust-based infostealer targeted browser data, Discord storage, crypto wallets, FileZilla files, and system details, while also attempting to weaken security defenses. Researchers said they found six more Hugging Face repositories with similar malicious code, underscoring growing concern that public AI model hubs are becoming a new software supply chain risk for companies.
ChatGPT Safety Updates Improve Context Recognition in Sensitive Conversations Across Self-Harm and Violence Risks
OpenAI said it has updated ChatGPT to better detect signs of suicide, self-harm, and harm-to-others by using context from earlier messages and, in rare cases, across separate chats. The company said the system now creates short, time-limited “safety summaries” that capture only safety-relevant context, helping the chatbot respond more cautiously through de-escalation, refusal of harmful details, or redirection to safer support. According to internal tests, safe-response performance improved by 50% in long single-conversation suicide and self-harm cases and by 16% in harm-to-others cases, while GPT-5.5 Instant showed gains of 39% and 52% respectively across multiple conversations. OpenAI said the work was developed with input from mental health experts, and internal testing found no meaningful drop in response quality in ordinary everyday chats.
OpenAI Details Response to TanStack npm Supply Chain Attack, Orders macOS App Updates
OpenAI said a broader supply-chain attack involving the compromised TanStack npm package, known as Mini Shai-Hulud, affected two employee devices in its corporate environment on May 11, 2026. The company said it found no evidence that user data, production systems, products, or intellectual property were compromised, though limited credential material was exfiltrated from a small number of internal code repositories those employees could access. Because the impacted repositories included product signing certificates, OpenAI is rotating certificates for its apps and requires macOS users to update to the latest versions by June 12, 2026, while Windows and iOS users do not need to take action. OpenAI also said it has seen no evidence of malware being signed with its certificates and that existing software installations were not modified or put at risk.
Singapore Case Study Outlines Responsible OpenClaw Deployment Using Model AI Governance Framework for Agentic AI
Singapore’s IMDA has published a case study on the responsible deployment of OpenClaw, an open-source AI agent platform released in November 2025 that works across chat apps such as WhatsApp, Telegram, Discord, and Slack. The report says OpenClaw has grown quickly because it can automate tasks like research, report writing, scheduling, customer support, business reporting, and developer workflows, helped by features such as local file access, messaging integrations, long-term memory, and third-party skills. At the same time, the study warns that OpenClaw has limited built-in security controls, making careful setup and user-applied guardrails essential. Drawing on Singapore’s Model AI Governance Framework for Agentic AI and trials by GovTech, CSA, and industry firms, the case study highlights safeguards including least-privilege access, human oversight, secure integrations, and continuous monitoring, adding that these principles also apply to similar autonomous AI agents.
🚀 AI Breakthroughs
OpenAI Launches ChatGPT Personal Finance Tools for Pro Users With Bank Account Connections
OpenAI has rolled out a preview of personal finance tools for ChatGPT Pro users in the U.S., allowing them to connect bank and investment accounts through Plaid and ask questions about spending, subscriptions, portfolio performance, and financial planning. The feature supports links to more than 12,000 institutions, including major banks and brokerages, and is available on the web and iOS through the new “Finances” section in ChatGPT. OpenAI said the launch follows its April acqui-hire of the team behind personal finance startup Hiro, while future support for Intuit is expected to add tax and credit-related analysis. The company said users can disconnect accounts anytime, with synced data deleted from ChatGPT within 30 days, and that it plans to refine the product with Pro users before expanding it to Plus subscribers.
Google Details Android Show Updates Including Googlebook, Gemini Features, Vibe-Coded Widgets, and 3D Emoji
Google used its Android Show: I/O Edition event to detail a broad set of Android and Gemini updates ahead of its developer conference, with much of the focus on AI-powered features across phones, cars, browsers, and security. The company said Gemini will gain more agent-like abilities, including cross-app actions, help with form filling, Chrome assistance on Android, and a new Gboard dictation tool that cleans up spoken text automatically. Android is also getting customizable AI-generated widgets, refreshed Android Auto features, upgraded 3D-style emoji, new creator tools for screen recording and Instagram content, and easier file sharing and iPhone-to-Android transfer options. On security, Google said stronger theft protections will expand to more Android devices and markets, while Pixel devices with Advanced Protection enabled will gain intrusion logging to help investigate suspected spyware attacks.
Anthropic Overtakes OpenAI in Verified Business Customers for First Time, Ramp Data Shows
Anthropic has overtaken OpenAI in verified business customers for the first time, according to Ramp’s latest AI Index based on expense data from more than 50,000 companies. The report found that 34.4% of surveyed businesses paid for Anthropic’s services this month, compared with 32.3% for OpenAI. Ramp’s data is limited to its own customer base, but it points to a broader industry shift, with Anthropic also gaining ground on other usage trackers. Over the past year, Anthropic’s business adoption rose sharply from 9% in May 2025, while OpenAI’s share slipped slightly and overall AI business usage continued to grow.
Anthropic Expands Claude for Legal With New AI Tools for Law Firms Amid Competition
Anthropic is expanding its push into legal AI with new chatbot features for law firms, adding legal plug-ins and model context protocol connectors to its Claude for Legal offering. The tools are aimed at automating clerical legal work such as document review, drafting, deposition prep, and research across areas including corporate, privacy, employment, and AI governance. The move comes as competition in legal AI intensifies, with startups like Harvey and Legora raising major funding to sell similar workflow automation tools to firms. Anthropic said the new features are available to paying Claude users, even as the wider legal industry faces growing concern over AI errors, fake citations, and low-quality court filings.
Amazon Replaces Rufus With Alexa+ Shopping Assistant in Search Bar for Personalized AI Recommendations
Amazon has launched Alexa for Shopping, a new AI shopping assistant powered by Alexa+, replacing its earlier Rufus tool in the main search experience for U.S. customers. The assistant works across mobile, desktop, and Echo Show devices, letting users ask shopping questions by voice or text, get personalized recommendations, compare products, track prices, and create shopping guides. Amazon said the system uses a customer’s preferences, habits, and purchase history to make shopping help more tailored over time. The tool can also schedule repeat purchases and, through its Buy for Me feature, shop from other online retailers, extending Amazon’s AI push across the broader shopping journey.
Notion Expands Into AI Agent Orchestration With Developer Platform, Workers, and External Integrations
Notion has expanded beyond note-taking software with a new developer platform aimed at turning its workspace into a hub for AI agents, custom code, and automated workflows. The update adds cloud-based Workers for running custom logic in a secure sandbox, database sync tools that can pull live data from sources such as Salesforce, Zendesk, and Postgres, and APIs for connecting both external and internal AI agents. Notion said customers have already built more than 1 million custom agents since February, but those tools previously had limited access to outside data and logic. The company is now positioning itself more directly against workflow automation platforms as businesses look for ways to connect agents, tools, and enterprise data in one place.
Open Source Clawdmeter Brings Claude Code Usage Stats to a Tiny Desktop Dashboard
An open-source project called Clawdmeter turns Claude Code usage data into a small desktop dashboard, giving developers a dedicated way to track token consumption outside the terminal. Built around a tiny Bluetooth-connected display, the device shows pixel-art animations and simple charts for session and weekly Claude usage, while also supporting shortcut buttons for Claude Code controls. The project highlights the growing “tokenmaxxing” trend among AI-heavy developers, where token usage is increasingly treated as a sign of productivity and adoption. Since its launch in May 2026, the project has drawn strong interest on GitHub, suggesting rising demand for playful, hardware-based tools built around generative AI workflows.
Thinking Machines Targets Human-Like AI Conversations With Full-Duplex Model That Listens and Responds Simultaneously
Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has outlined a new type of “interaction model” designed to process speech and generate replies at the same time, making conversations feel closer to a live phone call than a back-and-forth text exchange. The company said its research model, TML-Interaction-Small, supports full-duplex interaction and can respond in about 0.40 seconds, a speed it claims is closer to natural human conversation and faster than comparable systems from OpenAI and Google. The model is not yet available to the public; a limited research preview is planned in the coming months, with a broader release expected later this year.
🎓 AI Academia
Europe Faces Growing AGI Geopolitical Risks as Report Urges Preparedness for Possible 2030–2040 Arrival
A new RAND Europe report argues that Europe needs an urgent preparedness plan for artificial general intelligence, defined in the paper as AI that can match or outperform humans at most economically useful cognitive work. Based on AI capability trends, expert forecasts, and policy analysis, the report says AGI could plausibly arrive between 2030 and 2040, though it may come earlier and remains highly uncertain. It warns that such systems could reshape global economic and military power, increase rivalry between states, and put pressure on existing governance systems. The report says Europe is not adequately prepared, citing weak compute infrastructure, talent retention problems, slow industrial AI adoption, and fragmented policy efforts across the EU and member states.
Study Presents Polynomial-Time Constitutional Governance Framework for Metric Spaces and Digital Communities
A new paper on arXiv outlines a constitutional governance system for digital communities that uses metric-space voting to handle the full decision process, from aggregation and deliberation to amendment and consensus, in polynomial time. The framework lets a constitution define, for each issue, the decision space, voting rule, and supermajority threshold, then adopts publicly supported proposals only if they beat the status quo. The study says the approach can run on members’ own devices, such as smartphones, to support digital sovereignty, and highlights the generalised median as a practical tool with strategic voting protections at a simple majority threshold. It also tests the model across seven common governance tasks, including elections, budgets, rankings, bylaws, and constitutional changes, positioning it as a unified system for self-governance in online groups and organisations.
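The generalised median the paper highlights reduces, on a one-dimensional issue such as a budget level, to the familiar median rule, which is strategyproof for voters with single-peaked preferences. A minimal sketch (illustrative only, not the paper's actual algorithm; the function name is ours) of why exaggerating a report cannot move the outcome:

```python
def generalised_median(positions):
    """Return the lower median of reported positions on a 1-D issue axis."""
    ordered = sorted(positions)
    return ordered[(len(ordered) - 1) // 2]

# Example: five members report preferred budget levels (in $k).
reports = [10, 20, 30, 80, 90]
outcome = generalised_median(reports)  # the median report, 30

# A high-demand voter who exaggerates (90 -> 500) cannot pull the
# outcome upward, because the median depends only on rank order:
exaggerated = [10, 20, 30, 80, 500]
assert generalised_median(exaggerated) == outcome
```

Because each evaluation is a sort over member reports, the whole aggregation step stays polynomial in the number of voters, consistent with the paper's framing.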
Study Proposes Risk-Tiered Framework to Audit Gender Bias in Text-to-Image Models
A new study accepted to ACM FAccT 2026 argues that audits of gender bias in text-to-image AI systems need to be tied more closely to how and where the models are actually used. The paper says current bias checks are fragmented and often fail to explain what a metric measures, what assumptions it makes, or how results should be interpreted in different deployment settings. To address that, the researchers outline a framework that links EU AI Act-style risk tiers, gender-bias measurement methods, and likely harms such as stereotyping, representational erasure, and quality-of-service gaps. The study also proposes “THUMB cards,” a structured reporting format meant to help auditors match specific use cases with harm hypotheses and bias metrics, especially as image generators move into higher-impact public and institutional workflows.
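To make the measurement side concrete: one simple metric of the kind such audits report is the deviation of group shares from uniform representation in a batch of generated images. The sketch below is illustrative only, using an assumed label set and function name; it is not the paper's metric or its “THUMB card” format.

```python
from collections import Counter

def representation_gap(labels):
    """Max absolute deviation of any group's share from uniform representation."""
    counts = Counter(labels)
    n = len(labels)
    k = len(counts)  # number of distinct groups observed
    return max(abs(c / n - 1 / k) for c in counts.values())

# Hypothetical perceived-gender annotations for 10 images from one prompt:
sample = ["man"] * 8 + ["woman"] * 2
gap = representation_gap(sample)  # 0.3 deviation from a 50/50 split
```

Whether a gap like this counts as a harm depends on the deployment context, which is exactly the point of tying metrics to risk tiers and use cases rather than reporting them in isolation.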
SAE World Congress 2026 Report Highlights Embodied AI Safety, Trust, Robotics, and Deployment Challenges
A new SAE World Congress 2026 report says embodied AI is moving from lab demos toward real-world use in robotics and automated systems, but deployment still faces major hurdles. The paper highlights safety in unpredictable environments, the gap between successful demonstrations and dependable field performance, weak data generalization, hardware limits, and the need to build human trust. It also says regulation, accountability, and organizational readiness will be critical as companies scale these systems. The report’s main recommendation is that deployment should start with clear use cases and operating boundaries, backed by cross-functional teams, lifecycle evaluation, and close alignment between AI innovation and established safety discipline.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.