HPE GreenLake for File Storage can address the biggest challenges many enterprises face today in their IT infrastructure when supporting AI workloads. The video explains how a Large Language Model (LLM) with Retrieval-Augmented Generation (RAG) works and demonstrates a private chatbot instance built on LLM+RAG, with its inferencing workload served by HPE GreenLake for File Storage over RDMA and GPUDirect.
Date: June 7, 2024
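
For orientation, here is a minimal sketch of the LLM+RAG flow the video walks through, assuming a toy in-memory document set and a placeholder generate() call standing in for the private LLM endpoint; the document store, embedding, and inference pieces are illustrative stand-ins, not HPE or demo APIs.

```python
# Minimal RAG sketch: retrieve relevant documents, then condition the LLM on them.
from collections import Counter
import math

# Hypothetical knowledge base; in the demo the corpus would live on
# HPE GreenLake for File Storage and be read over RDMA/GPUDirect.
DOCUMENTS = [
    "HPE GreenLake for File Storage provides high-throughput file access for AI workloads.",
    "Retrieval-Augmented Generation grounds LLM answers in documents retrieved at query time.",
    "GPUDirect Storage moves data from storage to GPU memory without a CPU bounce buffer.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for the chatbot's LLM inference call (hypothetical)."""
    return f"[LLM response conditioned on a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    """RAG: augment the prompt with retrieved context before inference."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does GPUDirect help AI storage workloads?"))
```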