In this latest video, Chris Chase demonstrates how Red Hat OpenShift AI's capabilities have expanded since we last saw him and his dog in action.
See how Red Hat OpenShift AI enables enterprises to build intelligent applications with AI/ML. For operations engineers, Chase shows how to achieve fast, efficient, and scalable GenAI inference by deploying and monitoring models at scale, using Red Hat validated and open-source models with built-in guardrails. He highlights customizable model deployments on NVIDIA, AMD, and Intel accelerators with flexible storage options. For AI/ML engineers and data scientists, he showcases workbenches (Jupyter, VS Code) for fine-tuning models with custom data, such as personalizing an image generator with pictures of his dog, Teddy. He also illustrates AI pipelines for repeatable training, and serving models as APIs for seamless application integration, exemplified by a chatbot that generates custom images.
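At its simplest, "serving models as APIs" means an application makes an HTTP call to the deployed model's inference endpoint. Here is a minimal, hypothetical sketch of that integration, assuming the model is served behind an OpenAI-compatible chat endpoint (common for vLLM-based runtimes); the URL, token, and model name are placeholders, not values from the video.

```python
# Minimal sketch of calling a model served as an API.
# Assumptions (not from the video): an OpenAI-compatible endpoint,
# plus placeholder URL, token, and model name for your own deployment.
import requests

ENDPOINT = "https://my-model.apps.example.com/v1/chat/completions"  # hypothetical route
API_TOKEN = "replace-with-your-service-account-token"               # hypothetical token

payload = {
    "model": "my-validated-model",  # hypothetical served-model name
    "messages": [
        {"role": "user", "content": "Describe a corgi named Teddy in one sentence."}
    ],
    "max_tokens": 64,
}

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# Print the model's reply from the standard chat-completion response shape.
print(response.json()["choices"][0]["message"]["content"])
```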
0:00 – Introduction to OpenShift AI
0:21 – Key improvements in GenAI inference
0:36 – Red Hat validated and open-source model deployment
0:56 – GenAI model deployment & accessing models from storage
2:10 – Workbenches
2:43 – Demonstration: adding image generation to a chatbot
2:58 – Accessing the initial image model and fine-tuning with personal data
4:14 – AI pipelines
4:35 – Serving models as APIs
5:15 – Generating custom images from the fine-tuned model
For more info, visit: red.ht/AI