This video covers techniques for integrating domain-specific data and knowledge into Large Language Models (LLMs). It focuses on Retrieval Augmented Generation (RAG), the handling of unstructured data, and the associated storage requirements. It then demonstrates how to use Azure Blob Storage for RAG, both through built-in Azure AI services integrations and with popular application components such as LangChain and Pinecone.
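For reference, below is a minimal sketch of the kind of pipeline the video walks through: loading unstructured documents from an Azure Blob Storage container with LangChain, chunking and embedding them, and indexing them in Pinecone for retrieval. The connection string, container name, index name, and the choice of OpenAIEmbeddings are placeholder assumptions for illustration, not the exact configuration shown in the demo.

```python
# Minimal RAG indexing/retrieval sketch: load documents from an Azure Blob
# Storage container, chunk them, embed the chunks, and store them in a
# Pinecone index for retrieval. All names below are placeholders.
from langchain_community.document_loaders import AzureBlobStorageContainerLoader
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load unstructured documents straight from a Blob Storage container.
loader = AzureBlobStorageContainerLoader(
    conn_str="<your-storage-connection-string>",  # placeholder
    container="docs",                             # placeholder container name
)
documents = loader.load()

# 2. Split documents into overlapping chunks sized for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)

# 3. Embed the chunks and upsert them into a Pinecone index
#    (reads OPENAI_API_KEY and PINECONE_API_KEY from the environment).
vectorstore = PineconeVectorStore.from_documents(
    documents=chunks,
    embedding=OpenAIEmbeddings(),
    index_name="rag-demo",  # placeholder index name
)

# 4. At query time, retrieve the most relevant chunks to ground the LLM prompt.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
relevant_chunks = retriever.invoke("How do I configure lifecycle management?")
```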
Chapter 1: Introduction and Agenda 00:00 – 00:51
Chapter 2: Bringing domain knowledge to LLMs 00:51 – 02:03
Chapter 3: RAG and storage requirements 02:03 – 04:16
Chapter 4: RAG with Azure Blob Storage 04:16 – 05:56
Chapter 5: Demo 05:56 – 11:05
Chapter 6: RAG with Azure Blob, LangChain, and Pinecone 11:05 – 17:54
Chapter 7: Summary and More Resources 17:54 – 19:33
More information:
https://msft.it/6056oD7tr
https://msft.it/6058oD7tp
https://msft.it/6050oD7tn
https://msft.it/6051oD7tX
#Microsoft #MicrosoftAzure