These 6 Roles Are Quietly Being Replaced by AI! How Do You Stay Relevant?
As of mid-2025, over 93,000 tech workers have lost their jobs, roughly 500 every day, as companies accelerate their adoption of AI tools and systems.
Today's highlights:
You are reading the 108th edition of The Responsible AI Digest by SoRAI (School of Responsible AI). Subscribe today for regular updates!
At the School of Responsible AI (SoRAI), we empower individuals and organizations to become AI-literate through comprehensive, practical, and engaging programs. For individuals, we offer specialized training such as AI Governance certifications (AIGP, RAI) and an immersive AI Literacy Specialization. The specialization teaches AI using a scientific framework structured around four levels of cognitive skills: the first course covers the foundational skills of Remembering and Understanding, and the second covers Using and Applying. Want to learn more? Explore all courses: [Link] Write to us for customized enterprise training: [Link]
🔦 Today's Spotlight
AI Replaces 93,000 Tech Workers: These 6 Roles Are Hit Hardest
As of mid-2025, over 93,000 tech workers have lost their jobs, roughly 500 every day, as companies accelerate their adoption of AI tools and systems. [Source]
Top 6 Roles Disappearing Due to AI:
1. Software Engineering
Tech companies are streamlining software engineering teams as AI takes over routine coding tasks. Microsoft’s recent layoffs hit developers hardest – over 40% of the 2,000 layoffs in Washington were software engineers. This coincided with CEO Satya Nadella noting that up to 30% of the company’s code is now written by AI assistants. In effect, AI coding tools (like Cursor, GitHub Copilot) can handle many duties once assigned to junior programmers, reducing the need for as many entry-level developer roles.
2. Human Resources (HR)
Major firms have begun replacing HR staff with AI-driven systems to handle repetitive administrative work. IBM, for example, cut roughly 8,000 jobs in 2025 – the majority from its HR department – as part of an effort to automate back-office functions. The company’s “AskHR” AI chatbot now handles 94% of routine HR queries, and IBM’s CEO confirmed hundreds of HR roles were eliminated and not refilled due to these AI tools. In fact, IBM has paused hiring for many HR and other back-office positions that it believes AI can perform, indicating those jobs won’t be rehired in the foreseeable future.
3. Customer Support
Customer service and support roles are being declared nonessential as companies turn to chatbots and AI assistants. Online education firm Chegg, for instance, announced a 22% workforce layoff after observing that students increasingly prefer using AI (like ChatGPT) over its human-provided homework help services. This trend reflects a broader shift toward automated customer support. Klarna’s CEO even noted last year that the company’s AI chatbot could do the work of 700 customer service agents, allowing them to institute a hiring freeze and fill gaps with automation. Such examples show how AI-driven self-service tools are replacing human support staff in many organizations.
4. Content Creation
Generative AI is rapidly encroaching on jobs in writing, marketing, and content production. Surveys indicate that over 80% of marketing professionals now use AI tools to produce written content. Many marketing leaders find that AI-generated writing is “good enough” for blogs, ads, and social media – in some cases, consumers even preferred AI-written copy over human-written text in A/B tests. With AI systems capable of generating articles, product descriptions, and ad copy at scale, companies are cutting back on content writers and copywriters, relying on automation to handle much of the creative workload.
5. Data Analysis
AI’s superior speed and scale in analyzing large datasets are displacing roles in data and financial analysis. Advanced machine learning models can sift through financial reports or business data far faster than human analysts. A recent University of Chicago study, for example, found that GPT-4 outperformed human equity analysts at predicting companies’ earnings – achieving about 60% accuracy versus roughly 55% for humans. The AI could quickly parse huge volumes of financial statements and detect patterns that humans might miss. As firms adopt such AI systems to crunch numbers and generate insights, they require fewer human analysts for tasks like reviewing financial statements, risk reports, or big datasets.
6. Middle Management
Even middle managers are being thinned out as organizations use technology to flatten hierarchies. Companies like Intel have announced plans to eliminate multiple layers of management in large layoffs, creating a leaner structure. In 2025, Intel’s new CEO planned cuts of over 20,000 jobs primarily by targeting middle management roles – major product groups were reorganized to report directly to senior executives, streamlining coordination. The rationale is that modern analytics and project-management tools can track team performance and workflows, reducing the need for extra managerial intermediaries. By leveraging automation for oversight and communication, firms believe they can maintain efficiency with fewer managers, making many mid-level supervisory positions redundant. [Source]
How Do You Stay Relevant?
To stay relevant in 2025, professionals must evolve from performing siloed tasks to designing AI-driven workflows. Across roles, from software engineering to HR, support, marketing, analysis, and management, the shift is toward automation architecture. Tools like Cursor and Replit can now generate full apps or refactor code across files. n8n and Make let you spin up GDPR-compliant flows or schedule social campaigns without writing a line of code. Zapier’s chat-based bots, Gamma’s instant slides, and Lovable’s microsites empower solo creators, while data analysts can build ETL pipelines and dashboards with no-code or low-code stacks in minutes. The value lies not in execution, but in orchestrating intelligent systems that scale.
Key principles to stay ahead:
1️⃣ Prompting is your new programming language – your ability to speak in structured prompts determines how well AI understands and executes your goals.
2️⃣ Productivity isn’t about doing more – it’s about doing less manually. Tools like n8n and Make let even non-engineers build AI-automated workflows.
3️⃣ You don’t need a team or funding to ship your first product – with Gamma, Lovable, and Replit, solo creators can build MVPs without backend code.
4️⃣ Developers: vibe coding is the next leap – Cursor lets you co-build with AI like a senior pair programmer, rewriting how software gets shipped.
5️⃣ Prompting gives you output. Context engineering gives you control – it's about crafting the full environment an AI agent operates in: memory, tool use, and goals.
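The last principle is easiest to see in code. Below is a minimal sketch (all names are hypothetical; no real agent framework is assumed) contrasting a bare prompt with an engineered context that bundles memory, available tools, and an explicit goal into the environment the agent operates in:

```python
# Hypothetical helper illustrating "context engineering" vs. bare prompting.
def build_agent_context(goal, memory, tools):
    """Assemble a structured prompt for an AI agent.

    goal   -- what the agent should accomplish
    memory -- prior facts or conversation summaries to carry forward
    tools  -- mapping of tool name -> description the agent may call
    """
    memory_lines = "\n".join(f"- {fact}" for fact in memory)
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        f"GOAL:\n{goal}\n\n"
        f"MEMORY:\n{memory_lines}\n\n"
        f"TOOLS:\n{tool_lines}\n\n"
        "Respond with the next action, citing a tool when needed."
    )

# Bare prompting: one sentence, no control over the agent's environment.
bare_prompt = "Summarize this week's support tickets."

# Context engineering: the same request, with memory and tool access made explicit.
context = build_agent_context(
    goal="Summarize this week's support tickets.",
    memory=[
        "Last week's top issue was login failures.",
        "The team ships release notes on Fridays.",
    ],
    tools={
        "search_tickets": "Query the ticket database by date range",
        "send_report": "Email a summary to the support channel",
    },
)
print(context)
```

The point is not this particular format but the habit: instead of hoping a one-line prompt is enough, you deliberately design what the model can see and do before it acts.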
As AI quietly changes how many jobs work, School of Responsible AI (SoRAI)'s AI Literacy Specialization helps you stay useful and future-ready. Whether you're a developer, HR professional, content creator, or analyst, the course gives you hands-on practice to go from just using AI to actually building with it. You’ll work on 12 real projects using tools like Replit, n8n, Lovable, Cursor, Gamma, and LangChain, and learn how to apply traditional, generative, and agentic AI in your daily work, in a clear, practical, and responsible way.
🚀 AI Breakthroughs
AI-Powered Humanoid Robots Compete in Groundbreaking Soccer Matches in Beijing
• In a groundbreaking event, humanoid robots powered entirely by AI faced off in autonomous 3-on-3 soccer matches in Beijing, marking a first in China and previewing the upcoming World Humanoid Robot Games.
• Booster Robotics supplied hardware for all teams, while universities developed AI algorithms for strategies and formations, combining advanced visual sensors and autonomous decision-making to navigate and play autonomously
• The Tsinghua University team triumphed in the RoBoLeague competition, defeating China Agricultural University with a 5-3 score, amid growing efforts in China to advance AI-powered humanoid robotics through sports.
Canva and NCERT Partner to Offer Free Digital Training for Indian Educators
• Canva and NCERT have partnered to offer free teacher training programs on the DIKSHA platform, focusing on enhancing digital literacy, creativity, and AI proficiency in line with NEP 2020
• The partnership provides Indian educators with free access to Canva's platform, enabling them to create engaging lesson plans and utilize AI tools, with content aligned to the national curriculum
• With over 100 million global users, Canva's platform aims to boost teacher creativity and digital fluency, as AI tool usage has significantly increased among teachers and students over the past year.
AI System From USF Improves PTSD Diagnosis by Analyzing Children's Facial Cues
• A University of South Florida research team developed an AI that diagnoses PTSD in children by analyzing facial expressions, addressing the limitations of traditional subjective methods;
• The AI system preserves privacy by focusing on non-identifiable facial metrics like eye gaze and mouth curvature, while also considering conversational context to enhance emotional assessment accuracy;
• Initial findings show that children display more emotional expressiveness with clinicians than parents, with over 185,000 frames analyzed across therapy sessions, highlighting AI's potential as a diagnostic aid.
BRICS Nations Push for Global AI Governance to Address Risks and Ethics
• Leaders from the BRICS coalition emphasized the need for safeguards against unauthorized AI usage to prevent data overreach and implement fair compensation mechanisms, as highlighted in a draft statement;
• Major tech firms, largely from affluent countries, are resisting calls to pay copyright fees for content utilized in the training of AI systems;
• The BRICS Joint Declaration recognized AI's potential for progress but stressed the importance of global governance to address related risks and cater to the Global South's needs.
OpenAI Tests 'Study Together' Feature in ChatGPT to Enhance Educational Use
• Some ChatGPT subscribers report seeing a “Study Together” option in the tools list; the feature aims to enhance educational use by prompting users with questions instead of giving direct answers
• Speculation surrounds whether "Study Together" will support collaborative study groups, as OpenAI has not confirmed its availability or whether it will require a ChatGPT Plus subscription
• The feature could address educational integrity issues by promoting learning over "cheating," given ChatGPT's mixed impact on education from lesson planning to potential academic misconduct.
⚖️ AI Ethics
Anysphere CEO Apologizes for Confusing Cursor Pro Pricing Changes and Promises Refunds
• The CEO of Anysphere apologized for a poorly communicated pricing change to Cursor’s $20 Pro plan, which surprised users with unanticipated costs.
• Anysphere explained that changes were due to increased token usage costs from advanced AI models, leading to users quickly exhausting their requests and incurring extra charges.
• To address user frustration, Anysphere plans to refund unexpected charges and emphasized clearer future communications on pricing, maintaining crucial partnerships with AI model developers.
Players Criticize AI Line Judges at Wimbledon for Inaccurate Calls and Failures
• Wimbledon replaced human line judges with Electronic Line Calling (ELC) for the first time, sparking criticism from players due to its inaccuracy, The Telegraph reports
• Notable incidents include Emma Raducanu highlighting a missed call that replays showed was incorrect, and Jack Draper questioning the system's precision in his matches
• Players also faced system malfunctions in dim light, inaudible speakers, and a tech glitch that required a point to be replayed, leading to mixed responses at Wimbledon.
Elon Musk Announces Significant Improvements for xAI's Grok Amidst Controversial Responses
• Elon Musk announced significant improvements to the Grok chatbot, developed by xAI, encouraging users to notice enhancements when interacting with it after recent changes
• Grok has sparked controversy with responses suggesting bias and influence by Jewish executives in Hollywood, contributing to debates about its role and the responsibility of generative AI
• Following updates, Grok continues to make politically charged statements, such as questioning Democratic policies and responding critically about its owner, raising questions about its moderation and content guidelines.
Independent Publisher Alliance Accuses Google of Antitrust Violations Over AI Summaries
• The Independent Publisher Alliance filed an antitrust complaint with the European Commission, accusing Google of using web content without consent in its AI Overviews in Search
• The complaint alleges Google’s AI Overviews continue to harm publishers by causing traffic, readership, and revenue losses, with no option to completely opt out
• Google argues its AI Search features create new opportunities and claims traffic changes are due to multiple factors, not solely the AI implementations.
Academics Embed Hidden AI Prompts in Papers for Favorable Peer Reviews
• Academics are reportedly using hidden AI prompts to influence peer reviews, instructing AI tools to generate positive feedback on their research papers through covert methods.
• The examination of papers on arXiv revealed the presence of hidden prompts in 17 papers from 14 global institutions, including those in the U.S. and Asia.
• Prompts in computer science papers used white text or tiny fonts to instruct AI reviewers to praise aspects like "impactful contributions" and "methodological rigor."
Artificial Super-Intelligence: The Urgent Need for Society’s Preparation and Response
• Former Google CEO Eric Schmidt foresees AI potentially replacing most programming jobs within a year, as AI's self-improvement capabilities accelerate exponentially
• The "San Francisco Consensus" predicts Artificial General Intelligence (AGI) arriving in three to five years, with Artificial Superintelligence (ASI) soon after, surpassing collective human intellect
• Concerns raised over society's unpreparedness for ASI, highlighting inadequate legal, ethical, and governance frameworks to manage this potential technological upheaval.
European Union Rejects Tech Giants' Appeal to Delay AI Legislation Timeline
• The European Union reaffirmed its AI Act timeline despite tech giants’ push to delay, aiming for full implementation by mid-2026
• Major tech firms like Alphabet and Meta claim the AI Act could hinder Europe’s competitiveness in the global AI sector
• The AI Act categorizes AI applications into risk levels, banning high-risk uses like social scoring, while imposing lighter obligations on limited-risk apps like chatbots.
AI Chatbots Pose New Phishing Risks With Incorrect Links and Hallucinations
• A recent Netcraft report highlights phishing risks from AI chatbot hallucinations: the GPT-4.1 model returned incorrect URLs in 34% of test cases
• Over 17,000 AI-generated GitBook phishing pages are exploiting users by posing as legitimate crypto documentation, with designs appealing to both humans and AI
• Attackers exploit AI vulnerabilities to deceive developers, creating fake APIs that mimic Solana's, leading to unauthorized transaction rerouting in projects using AI coding assistants.
AI Reshaping Tech Workforce: Over 94,000 Job Cuts in First Half of 2025
• The tech industry has seen nearly 94,000 layoffs in the first half of 2025, with companies restructuring to prioritize AI strategies over traditional roles;
• Notable companies like Microsoft, Google, and IBM are investing heavily in AI, leading to substantial workforce reductions in departments such as gaming, HR, and content creation;
• As AI gains prominence, roles in software engineering, human resources, and customer support are most affected, with companies seeking efficiency and reduced costs through automation.
Soham Parekh Accused of Moonlighting and Fraud by Playground AI Founder
• Suhail Doshi accused Soham Parekh of defrauding multiple US startups by working at several simultaneously, sparking controversy on the tech scene.
• Soham Parekh admitted to moonlighting due to financial necessity, expressing regret for misleading employers during a YouTube interview.
• Claims arose that Parekh faked 90% of his resume, which included credentials from the University of Mumbai and Georgia Tech.
🎓AI Academia
AI Agents Enhance Machine Learning Performance, Boosting Success in Kaggle Competitions
• AI research agents now automate the design and training of machine learning models, significantly boosting performance on MLE-bench by increasing success rates on Kaggle competitions.
• The pairing of advanced search strategies and operator sets plays a crucial role in achieving state-of-the-art results, elevating Kaggle medal success rates from 39.6% to 47.7%.
• By utilizing the AIRA-dojo environment, AI agents successfully optimize search policies and operators, driving performance improvements over previous benchmarks and enhancing the automated scientific discovery process.
ChestGPT Framework Enhances Radiology Efficiency with Integrated AI Models for Chest X-Rays
• ChestGPT employs an innovative approach by combining a Large Language Model and Vision Transformers to enhance disease detection and localization in chest X-ray images.
• The system showcases strong performance on the VinDr-CXR dataset, achieving an F1 score of 0.76 while identifying diseases and marking regions of interest effectively.
• The framework alleviates radiologists' workloads by providing automated preliminary findings and detailed localization, facilitating quicker and more accurate diagnostic processes.
Study Explores Differentiating Oversight and Control in Ensuring Safe AI Supervision
• A recent study from researchers in Israel and the UK on AI supervision differentiates between oversight and control, stressing their importance in maintaining AI system accountability and reliability;
• The paper introduces a new framework highlighting the roles and limitations of oversight and control, based on an analysis of existing literature and risk management principles;
• The study discusses a maturity model for AI supervision, revealing current gaps in supervision practices and suggesting areas for technical and conceptual improvements in AI deployment.
New Study Analyzes Data Protection Challenges in the Generative AI Era
• A new arXiv paper proposes a four-level taxonomy for data protection in AI, focusing on non-usability, privacy preservation, traceability, and deletability for improved safeguarding.
• The study reveals regulatory blind spots in AI, urging a comprehensive data protection framework to secure elements such as training datasets, model weights, and AI-generated content.
• The paper emphasizes the importance of aligning future AI technologies and governance with trustworthy data practices, offering critical guidance for developers, researchers, and regulators.
GAF-Guard Framework Introduced for Risk Management in Large Language Models Governance
• IBM Research introduced GAF-Guard, an agentic framework for managing risks and governing large language models (LLMs) by focusing on user-centric approaches and specific use-case requirements;
• GAF-Guard aims to mitigate risks by deploying autonomous agents to monitor, detect, and report on potential issues associated with LLM applications across various domains;
• The initiative underscores the importance of context in risk assessment, illustrated through examples in sectors like healthcare and law enforcement, enhancing the safe deployment of LLMs.
New RAG Framework ARAG Enhances AI's Personalized Recommendation by 42% in Tests
• ARAG, a multi-agent framework, advances Retrieval-Augmented Generation (RAG) for better personalization by integrating agents that comprehend user preferences and refine recommendation accuracy.
• ARAG's innovative framework significantly outperformed traditional RAG methods, achieving a 42.1% improvement in NDCG@5 and a 35.5% increase in Hit@5 across three datasets.
• By incorporating specialized agents, ARAG enhances the recommendation systems' ability to handle nuanced user preferences and dynamic scenarios in real-time environments.
About SoRAI: SoRAI is committed to advancing AI literacy through practical, accessible, and high-quality education. Our programs emphasize responsible AI use, equipping learners with the skills to anticipate and mitigate risks effectively. Our flagship AIGP certification courses, built on real-world experience, drive AI governance education with innovative, human-centric approaches, laying the foundation for quantifying AI governance literacy. Subscribe to our free newsletter to stay ahead of the AI Governance curve.