
Build and deploy LLMs | Retrieval Augmented Generation | Data & AI Masters | Intel | Microsoft Azure

Retrieval-augmented generation (RAG) offers a new way to maximise the capabilities of large language models (LLMs), producing more accurate, context-aware, and informative responses. Join Akash Shankaran (Intel), Ron Abellera (Microsoft), and Juan Pablo Norena (Canonical) for a tutorial on how RAG can enhance your LLMs.

We will also explore optimising LLMs with RAG using Charmed OpenSearch, which can serve multiple roles in the pipeline: data ingestion, model ingestion, vector database, retrieval and ranking, and LLM connector. We will then walk through the architecture of our RAG deployment on Microsoft® Azure Cloud. In addition, RAG's vector search uses Intel® AVX acceleration, enabling faster, higher-throughput retrieval and generation. Learn more at https://canonical.com/solutions/ai and https://canonical.com/data
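To make the retrieval-and-augmentation step concrete, here is a minimal Python sketch of the pattern the session covers: embed the user question, query a vector index for the nearest document chunks, and prepend them to the LLM prompt. The index layout, field names, and the `embed()` stub are illustrative assumptions, not details from the talk; a real deployment would use an OpenSearch k-NN index and a proper embedding model.

```python
# Sketch of the retrieval step in a RAG pipeline. All names (the "embedding"
# field, embed() stub, chunk texts) are hypothetical placeholders.

def embed(text: str) -> list[float]:
    # Placeholder embedding; a real pipeline calls an embedding model here.
    return [float(ord(c) % 7) for c in text[:8]]

def build_knn_query(question: str, k: int = 3) -> dict:
    # Body of an OpenSearch k-NN query: fetch the k chunks whose stored
    # "embedding" vectors are nearest to the question's embedding.
    return {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embed(question), "k": k}}},
    }

def build_prompt(question: str, chunks: list[str]) -> str:
    # Augmentation: ground the LLM by prepending the retrieved context.
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

query = build_knn_query("How does RAG improve LLM accuracy?")
prompt = build_prompt(
    "How does RAG improve LLM accuracy?",
    ["RAG grounds answers in documents retrieved at query time."],
)
```

In a live deployment the query body would be sent to the cluster (for example via the `opensearch-py` client's `search()` call) and the assembled prompt passed to the LLM connector.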

Date: November 27, 2024