Tutorial: Fine-tune Gemma 2 with Hugging Face TRL on GKE → https://goo.gle/3ZcLQiL
Hugging Face Deep Learning containers → https://goo.gle/3Otahn3
Docs: Hugging Face TRL → https://goo.gle/3ZgsyJs
Supervised fine-tuning teaches a pre-trained language model to become better at a specific task by showing it many examples of inputs and their desired outputs. A smaller, fine-tuned model can actually outperform a much larger, general-purpose model on those specific tasks. Join Googler Wietse Venema as he dives into fine-tuning open models with Hugging Face Transformer Reinforcement Learning (TRL) on Google Kubernetes Engine (GKE).
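To give a rough idea of what the tutorial walks through, a supervised fine-tuning run with TRL's SFTTrainer can look like the sketch below. The dataset name, LoRA settings, and hyperparameters are illustrative assumptions, not the exact values used in the video.

```python
# Minimal sketch of supervised fine-tuning with Hugging Face TRL (assumes trl,
# peft, and datasets are installed and the Gemma license is accepted on the Hub).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Example instruction dataset; replace with your own input/output pairs.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-2b",                    # pre-trained model to fine-tune
    train_dataset=dataset,                        # examples of inputs and desired outputs
    peft_config=LoraConfig(r=8, lora_alpha=16),   # parameter-efficient tuning (LoRA)
    args=SFTConfig(
        output_dir="gemma-2-sft",                 # where checkpoints are written
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
)
trainer.train()
```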
More resources:
Learn more about Parameter Efficient Tuning (PEFT) → https://goo.gle/4eUb6js
Watch more Google Cloud: Building with Hugging Face → https://goo.gle/BuildWithHuggingFace
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
#GoogleCloud #HuggingFace
Speaker: Wietse Venema
Products Mentioned: Gemma, Google Kubernetes Engine (GKE)