Generative AI Weekly Research Highlights | Apr'24 Part 1
Research Paper Summaries and Links:
Green AI: Exploring Carbon Footprints, Mitigation Strategies, and Trade Offs in Large Language Model Training [https://arxiv.org/pdf/2404.01157.pdf]
Evalverse: Unified and Accessible Library for Large Language Model Evaluation [https://arxiv.org/pdf/2404.00943.pdf]
The Butterfly Effect of Altering Prompts: How Small Changes and Jailbreaks Affect Large Language Model Performance [https://arxiv.org/pdf/2401.03729.pdf]
Unleashing the Potential of Large Language Models for Predictive Tabular Tasks in Data Science [https://arxiv.org/pdf/2403.20208.pdf]
How Interpretable are Reasoning Explanations from Prompting Large Language Models? [https://arxiv.org/pdf/2402.11863.pdf]
Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications [https://arxiv.org/pdf/2310.14607.pdf]
AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight [https://arxiv.org/pdf/2404.00600.pdf]
Developing Safe and Responsible Large Language Models - A Comprehensive Framework [https://arxiv.org/pdf/2404.01399.pdf]
Disclaimer: The content in this video is AI-generated and adheres to YouTube's guidelines. Each video undergoes manual review and curation before publishing to ensure accuracy and quality.
Timestamps:
00:00 Intro
00:19 How AI Impacts the Environment
00:45 How to Evaluate LLMs Easily
01:03 Small Changes, Big LLM Differences
01:27 LLMs Meet Table Data
01:52 Making AI Explanations Clearer
02:12 Fairness in AI: A Closer Look
02:33 LLMs and Privacy Concerns
02:55 Building Safer AI Models
03:14 End