Generative AI Weekly Research Highlights | Oct'23 Part 3
Disclaimer: The content in this video is AI-generated and adheres to YouTube's guidelines. Each video is manually reviewed and curated before publishing to ensure accuracy and quality.
Title: The Foundation Model Transparency Index
Link: https://arxiv.org/pdf/2310.12941.pdf
Title: Sociotechnical Safety Evaluation of Generative AI Systems
Link: https://arxiv.org/pdf/2310.11986.pdf
Title: Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Link: https://arxiv.org/pdf/2310.10844.pdf
Title: A Survey on Video Diffusion Models
Link: https://arxiv.org/pdf/2310.10647.pdf
Title: Privacy in Large Language Models: Attacks, Defenses and Future Directions
Link: https://arxiv.org/pdf/2310.10383.pdf
Title: Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
Link: https://arxiv.org/pdf/2310.07521.pdf
Title: GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants
Link: https://arxiv.org/pdf/2309.05138.pdf
Title: RALLE: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models
Link: https://arxiv.org/pdf/2308.10633.pdf
00:00 Intro
00:22 Foundation Model Transparency Index
00:53 Sociotechnical Safety of Generative AI
01:09 Adversarial Attacks on LLMs
01:30 Video Diffusion Models
01:47 Privacy in LLMs
02:03 Enhancing LLM Factuality
02:22 GenAIPABench: Privacy Policy Assistance
02:42 RALLE Framework for Factual QA
03:07 End
#generativeai #promptengineering #largelanguagemodels #openai #chatgpt #gpt4 #ai #abcp #prompt