Generative AI Weekly Research Highlights | Feb'24 Part 4
Disclaimer: The content in this video is AI-generated and adheres to YouTube's guidelines. Each video undergoes manual review and curation before publishing to ensure accuracy and quality.
Summary of Research Papers
1. Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources https://arxiv.org/pdf/2305.13269.pdf
2. Chain of Logic: Rule-Based Reasoning with Large Language Models https://arxiv.org/pdf/2402.10400.pdf
3. Chain-of-Specificity: An Iteratively Refining Method for Eliciting Knowledge from Large Language Models https://arxiv.org/pdf/2402.15526.pdf
4. The (R)Evolution of Multimodal Large Language Models: A Survey https://arxiv.org/pdf/2402.12451.pdf
5. Safety of Multimodal Large Language Models on Images and Text https://arxiv.org/pdf/2402.00357.pdf
6. Large Language Models are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content https://arxiv.org/pdf/2402.13926.pdf
7. Round Trip Translation Defence against Large Language Model Jailbreaking Attacks https://arxiv.org/pdf/2402.13517.pdf
8. Can AI-Generated Text be Reliably Detected? https://arxiv.org/pdf/2303.11156.pdf
Chapters:
00:00 Intro
00:16 Chain of knowledge
00:38 Chain of Logic
00:57 Chain-of-Specificity
01:17 The (R)Evolution of Multimodal Large Language Models
01:36 Safety of Multimodal Large Language Models on Images and Text
01:53 LLMs are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content
02:08 Round Trip Translation Defence
02:25 Can AI-Generated Text be Reliably Detected?
02:43 End
#generativeai #promptengineering #largelanguagemodels #openai #chatgpt #gpt4 #ai #abcp #prompt #responsibleai