
Fine-tuning open LLMs on GKE: The implementation gap


While Large Language Models (LLMs) offer impressive general capabilities, they often lack the domain expertise required for enterprise use cases. In this video, Senior Developer Advocates Ayo Adedeji and Mofi Rahman break down the "implementation gap": the challenge of moving from prototype to production.

Watch along and learn how to build a production-ready multimodal fine-tuning pipeline. The duo discusses the three main barriers to entry: infrastructure complexity, data preparation hurdles, and training workflow management. Learn how to leverage Google Kubernetes Engine (GKE) Autopilot and open source frameworks like Axolotl to fine-tune models like Gemma, Llama, and Mistral on your own data.
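For a feel of what the Axolotl side of such a pipeline looks like, here is a minimal config sketch. The model name, dataset path, and hyperparameters are illustrative placeholders, not values from the video; see the linked blog post and GitHub repo for the actual pipeline.

```yaml
# Hypothetical minimal Axolotl config for a LoRA-style fine-tune.
# All paths and hyperparameters below are placeholder assumptions.
base_model: google/gemma-2-9b
load_in_4bit: true              # 4-bit quantization (QLoRA-style)
adapter: qlora
lora_r: 16
lora_alpha: 32
datasets:
  - path: ./data/train.jsonl    # placeholder dataset path
    type: alpaca                # instruction/input/output record format
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
output_dir: ./outputs/gemma-finetune
```

A config like this is typically launched with Axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml`.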

Chapters:
0:00 – The challenge with general foundation models
0:48 – Why fine-tuning matters (The Specialist vs. Generalist)
1:48 – The future is Multimodal
2:20 – The "Implementation Gap"
2:40 – Challenge 1: Infrastructure Complexity
3:08 – Challenge 2: Data Preparation
3:40 – Challenge 3: Workflow Management
4:10 – The Solution: Google Cloud Enterprise Infrastructure
5:10 – The Framework: Axolotl and Open Source
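On the infrastructure side (the 4:10 chapter), a rough sketch of running a fine-tuning job on GKE Autopilot is a GPU-requesting Kubernetes Job. Everything here is a hedged example, not the video's exact manifest: the Job name, container image tag, and the `axolotl-config` ConfigMap are hypothetical.

```yaml
# Hypothetical GKE Autopilot Job: Autopilot provisions a GPU node to
# satisfy the nodeSelector and nvidia.com/gpu resource request.
apiVersion: batch/v1
kind: Job
metadata:
  name: axolotl-finetune            # placeholder name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-l4   # Autopilot GPU class
      containers:
      - name: trainer
        image: axolotlai/axolotl:main-latest          # placeholder image tag
        command: ["accelerate", "launch", "-m", "axolotl.cli.train", "/config/config.yaml"]
        resources:
          limits:
            nvidia.com/gpu: "1"
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: axolotl-config      # assumed to hold the training config
```

The appeal of Autopilot for this workflow, as the video's "Infrastructure Complexity" chapter suggests, is that the GPU node pool is provisioned on demand rather than managed by hand.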

Resources:
Check out the blog post → https://goo.gle/building-a-production-multimodal-tuning-pipeline
GitHub repo → https://goo.gle/building-a-production-multimodal-tuning-pipeline-code

Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech

Speakers: Ayo Adedeji, Mofi Rahman
Products Mentioned: Google Kubernetes Engine (GKE), Gemma

Date: November 25, 2025