Fine-tune Gemma models in Keras using LoRA → https://goo.gle/407Kise
Learn how to fine-tune large language models using LoRA (Low-Rank Adaptation) with Gemma, an open model from Google. Watch along as Googler Paige Bailey uses Google Colab and the Databricks Dolly 15k dataset to demonstrate this fine-tuning technique.
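For reference, here is a minimal sketch of the LoRA fine-tuning flow covered in the demo. It assumes KerasNLP with the "gemma_2b_en" preset and Kaggle credentials already configured in Colab; the `train_data` variable is a hypothetical stand-in for instruction/response strings formatted from the Dolly 15k dataset, not the exact notebook code from the video.

import keras
import keras_nlp

# Load a pretrained Gemma causal language model from its KerasNLP preset.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable LoRA: freezes the base weights and adds small rank-4 adapter
# matrices, greatly reducing the number of trainable parameters.
gemma_lm.backbone.enable_lora(rank=4)

# Keep sequences short so the demo fits in Colab memory.
gemma_lm.preprocessor.sequence_length = 256

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# train_data: hypothetical list of prompt strings built from Dolly 15k.
gemma_lm.fit(train_data, epochs=1, batch_size=1)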
Chapters:
0:00 – What is Gemma?
0:44 – What is Low-Rank Adaptation (LoRA)?
1:20 – [Demo] Setting up your AI environment
2:45 – [Demo] Fine-tuning with LoRA
3:33 – Conclusion
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
#GoogleCloud #DevelopersAI
Speaker: Paige Bailey
Products Mentioned: Gemma, Google Colab, Gemini