At OpenNebula, we have been exploring AI integration from different perspectives: enhancing cloud operations using AI-driven approaches and leveraging OpenNebula clouds for AI training and inference.
This webinar introduces how OpenNebula implements an “AI as a Service” model for LLMs, providing users with a robust, vendor-neutral foundation for deploying and using the latest LLMs efficiently and simply.
A demo will showcase the OpenNebula appliance for Managed Inference, which integrates Ray Serve, a scalable, high-performance model-serving framework. The appliance simplifies the deployment of AI workloads by leveraging Ray Serve’s capabilities for efficient inference, enabling organizations to deploy and scale AI models within OpenNebula cloud environments.