"Godfathers of AI"- Geoffrey Hinton & John Hopfield receive "Nobel Prize 2024" in Physics
Today's highlights:
🚀 AI Breakthroughs
Nobel Physics Prize 2024 Awarded for Breakthroughs in AI and Neural Networks
• Geoffrey E. Hinton and John J. Hopfield awarded the 2024 Nobel Prize in Physics for breakthroughs that shaped today's machine learning landscape
• Hinton's 'Boltzmann machine' and its influence pivotal for current AI technologies in speech and image recognition
• Hopfield's network, by utilizing physics principles, revolutionizes AI’s ability to recreate and recognize patterns from distorted inputs.
Meta AI's Movie Gen Model Sets New Standards in Video and Audio Generation
• Meta AI's Movie Gen, a 13B parameter model, is creating buzz with abilities like video generation from text and adding sound effects
• Mark Zuckerberg demonstrated Movie Gen on Instagram, hinting at a potential 2025 feature integration for the platform
• Competition heats up as Runway partners with Lionsgate and Luma's Dream Machine becomes freely available, amidst new advancements from Kling in China.
OpenAI Rolls Out New API Features Amidst Executive Departures and Competitive Pressure
• OpenAI introduced Realtime API (beta), vision fine-tuning, and efficiency tools like prompt caching and model distillation at its Dev Day event
• Despite no new models being presented, developers will find the added API features beneficial for creating advanced applications
• OpenAI's unveiling competes with cheaper, open-source alternatives, amid recent high-profile departures including CTO Mira Murati.
OpenAI Releases Swarm: A New Open Source Framework for Multi-Agent System Development
• OpenAI has released an open-source framework called Swarm for orchestrating multi-agent AI systems
• Swarm framework facilitates lightweight and controllable agent coordination, still in experimental phase, not for production
• Alongside Swarm, OpenAI also launched MLE-bench, a benchmark for evaluating AI agents on machine learning tasks.
China Telecom Develops Large Language Models Using Domestic Chips Amid US Restrictions
• China Telecom has developed two large language models, TeleChat2-115B and an unnamed model, utilizing domestically-produced chips
• These AI models, with TeleChat2-115B featuring 100 billion parameters, underline China's push for technological self-sufficiency amid US export restrictions
• Partnerships with local firms like Huawei and Cambricon show China's efforts to forge a robust, independent AI ecosystem free from reliance on foreign technologies.
OpenAI Receives Nvidia's Latest DGX B200 for Enhanced AI Training and Inference Speeds
• OpenAI receives Nvidia's DGX B200 engineering builds, boasting up to three times faster training speeds and fifteen times greater inference efficiency
• Shift in protocol as OpenAI's technical team, not its founders, receives Nvidiaâs first GPU chip delivery amid leadership changes suggesting a new company era
• Microsoft reveals Azure as the first cloud service utilizing Nvidiaâs Blackwell system, enhancing AI capabilities and deepening the longstanding Nvidia partnership.
OpenAI Launches Canvas: A New Interface for Enhanced Writing and Coding Collaboration
• OpenAI introduces Canvas, enhancing ChatGPT with capabilities for side-by-side writing and coding projects, offering edits and suggestions in real time
• Canvas is currently available to ChatGPT Plus and Team users, with plans to extend access to Enterprise, Edu, and Free users post-beta phase
• Features include user-controlled editing, ability to focus on specific sections for feedback, code debugging, and adapting code across multiple programming languages.
OpenAI Releases Whisper V3 Turbo for Faster, Efficient Speech Transcription
• OpenAI's new Whisper V3 Turbo significantly boosts transcription speed, being 8x faster than the previous large-v3 model
• Despite its smaller size, roughly half of its predecessor, Whisper V3 Turbo maintains high accuracy, simplifying deployment across multiple platforms
• Available via OpenAI's GitHub, the Whisper V3 Turbo supports transcription in over 99 languages, offering robust performance in diverse linguistic environments.
⚖️ AI Ethics
Adobe Launches Free Web App to Safeguard Digital Content with Content Credentials
• Adobe unveils new Content Authenticity web app, enabling creators to apply protective Content Credentials to their digital works
• The web app supports integration with Adobe Creative Cloud tools like Photoshop and Lightroom, enhancing digital content security
• Adobe's initiative supports transparency, allowing creators to specify if their content can be used to train AI models.
European Commission Launches AI Code of Practice Development Project
• The European Commission initiates the General-Purpose AI Code of Practice, aligning with the EU AI Act's stringent regulations
• Nearly 1,000 experts including industry leaders and academics collaborate to draft AI guidelines focused on transparency and risk management
• The final Code slated for April 2025 aims to influence global AI safety and ethics standards, positioning EU as a potential leader in AI regulation.
California AI Companies Relieved as Governor Vetoes Controversial SB 1047 Safety Bill
• Governor Gavin Newsom vetoed California's SB 1047 AI safety bill, citing concerns over its broad regulatory scope and potential to stifle innovation
• Newsom argues that the bill's focus on large-scale AI models might overlook risks from smaller, specialized systems and promote a false sense of security
• The veto sparked varied reactions, with significant disappointment from Senator Scott Weiner and expressions of gratitude from tech leaders like Yann LeCun and Marc Andreesen.
German Court Rules in Favor of LAION in Copyright Infringement Case
• German court rules in favor of LAION in copyright infringement case involving dataset usage for AI research
• The decision highlights that dataset creation for AI training falls under scientific research, covered by section 60d of the German Copyright Act
• Discussion about whether text and data mining for training AI models should be covered under the DSM Directive, with implications for future cases.
Harvard Students Hack Meta's Smart Glasses for Instant Personal Data Access
• New podcast launched discussing seamless app development with AI, featuring insights from Sherry Jiang, CEO of Peak, available on major streaming platforms
• Harvard students enhance Meta's Ray-Ban smart glasses with AI to recognize people and fetch personal information, raising privacy concerns
• Invention of 'I-XRAY' platform by students showcases future risks, revealing personal details through smart wearable technology, intended to ignite security discussions.
OpenAI Releases MLE-bench to Evaluate AI in Machine Learning Engineering
• OpenAI unveiled MLE-bench, a new benchmark aimed at evaluating AI agents' abilities in machine learning engineering tasks sourced from 75 Kaggle competitions
• The benchmark features competitions testing key ML engineering skills, with OpenAI's o1-preview paired with AIDE scaffolding outperforming other models in 16.9% of cases
• OpenAI has made the MLE-bench code publicly available, encouraging further research into AI capabilities in machine learning engineering.
🎓AI Academia
New Study Analyzes Copyright Risks with Generative AI Using Probabilistic Methods
• The paper introduces a probabilistic framework to analyze copyright infringement disputes in the context of generative AI and evolving legal standards
• It critically examines the "inverse ratio rule" and challenges its effectiveness, presenting a mathematically grounded argument supporting a redefined approach
• The study assesses the Near Access-Free (NAF) condition as a strategy to mitigate copyright infringement risks inherent to generative AI technologies.
Assessing Security of ML-Based Digital Watermarking Against Copy and Removal Attacks
• The study critically evaluates security concerns in ML-based digital watermarking, specifically focusing on copy and removal attacks
• Experiments demonstrate vulnerabilities in foundation models' latent space watermarking, revealing potential areas for future enhancement
• Findings push for broader research on secure digital watermarking techniques in the era of AI-generated content and deep learning advancements.
Study Reveals Inefficacy of Reporting Non-Consensual Deepfake Content on Twitter
• The audit study revealed that using copyright infringement claims was far more effective, with a 100% success rate in removing non-consensual intimate media (NCIM) within 25 hours
• Reports filed under non-consensual nudity did not result in any takedown of reported content even after three weeks, highlighting significant gaps in platform enforcement
• The findings underscore the urgency for legislative changes to ensure faster and more reliable removal processes for non-consensual intimate content on social platforms.
Assessing AI: Society's Threats, Countermeasures, and Needed Governance
• A+AI paper emphasizes the dual nature of technology: its potential benefits and inherent threats to society
• Proposed measures include mandatory user verification on social media and stringent labeling requirements for AI-enhanced products
• The document advocates for significant government-led initiatives, including funding for threat mitigation research and public awareness campaigns.
Microsoft Releases PyRIT: Enhancing Security in Generative AI with Red Teaming Tools
• Microsoft releases PyRIT, an open-source toolkit for enhancing security red teaming in Generative AI systems
• PyRIT, designed to be model- and platform-agnostic, targets identification of novel risks in multimodal GenAI environments
• The toolkit has been utilized in over 100 red team operations, proving its efficacy in real-world AI security scenarios.
Comprehensive Guide on Red Teaming Challenges in Generative AI Released by IBM Research
• Attack Atlas introduces a practitioner-focused framework for assessing adversarial threats to generative AI, emphasizing practical security strategies
• The study highlights the increasing role of red-teaming in identifying and mitigating new vulnerabilities in AI systems due to expanded attack surfaces
• Despite academic focus on adversarial risks, there remains a significant gap in real-world applications, which this research aims to address by providing actionable insights.
COMPL-AI Framework Analyzes EU AI Act Compliance, Showcases LLM Benchmarking Challenges
• COMPL-AI Framework developed to interpret the EU's Artificial Intelligence Act specifically for large language models (LLMs)
• Researchers introduce an open-source, Act-centered benchmarking suite to evaluate LLMs against EU AI Act compliance standards
• Framework assessment reveals prevalent issues in LLMs including lack of robustness, diversity, safety, and fairness.
Strategic AI Governance Examined Across Leading Nations with EPIC Framework
• Dian W. Tjondronegoro's research at Griffith University maps out key AI governance strategies from leading nations using the EPIC framework
• The study highlights the importance of education, infrastructure, partnerships, and community involvement in AI deployment for public good
• Findings stress the need for future AI governance to include special considerations for developing countries and specific sectors.
Strategies for Integrating AI's Carbon Footprint into Banking Risk Management Frameworks
• Nataliya Tkachenko discusses how banking sector can integrate AI's carbon footprint into risk management, complying with new sustainability directives like EU AI Act
• Paper highlights energy-efficient AI models like OLMoE and Agentic RAG framework to reduce environmental impact without sacrificing performance
• Emphasizes cross-departmental collaboration and tools for carbon accounting and AI fairness, aligning with global sustainability standards like IFRS and ESRS.
Assessing Measurement Challenges in AI Catastrophic Risk Governance and Safety Frameworks
• Safety frameworks developed by major AI companies aim to address catastrophic risks by guiding scaling decisions in AI development
• Measurement challenges identified hinder the effectiveness of these frameworks, impacting their reliability and validity in real-world applications
• Three policy recommendations were proposed to enhance the measurement standards of AI safety frameworks, aiming to strengthen governance structures.
University of California, Berkeley Highlights Limits of Model-Focused AI Governance Policies
• Current AI governance focuses on "frontier" and "foundation" models but lacks clear, consistent definitions, which undermines effective regulations
• Despite policy emphasis on large AI models' parameter and compute capacities, smaller models can equally pose risks if trained on specific datasets
• The study advocates for regulations that include quantitative evaluations of AI capabilities to ensure a balanced, effective governance framework.
AI Incident Database Study Highlights Challenges and Solutions in AI Incident Reporting Practices
• The AI Incident Database reviews over 750 AI incidents to improve monitoring and analysis of AI-related harms
• New editorial challenges arise due to the increasing legal requirements for AI incident reporting
• Lessons from cybersecurity incident practices are applicable to AI incident reporting and cataloging.
About ABCP: We are dedicated to reducing Generative AI anxiety among tech enthusiasts by providing timely, well-structured, and concise updates on the latest developments in Generative AI through our AI-driven news platform, ABCP - Anybody Can Prompt!