
Training Hub: continuous learning for LLMs

Post-training is crucial for making your LLM useful beyond basic next-token prediction. Mustafa Eyceoz, Principal Research Scientist at Red Hat, walks through the options for model customization, from initial pre-training to continual learning methods like Orthogonal Subspace Fine-Tuning (OSFT), featured in the open-source Training Hub library.
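
To give a flavor of the OSFT idea discussed in the video (11:11): constrain fine-tuning updates to directions orthogonal to the subspace that carries most of the pretrained weight matrix, so new learning disturbs prior knowledge as little as possible. The sketch below is a conceptual illustration only, not Training Hub's actual implementation; the matrix sizes, the rank split k, the learning rate, and the osft_style_update helper are all invented for the example.

```python
# Conceptual sketch of Orthogonal Subspace Fine-Tuning (OSFT):
# freeze the top-k singular directions of a pretrained weight matrix
# (where most prior knowledge lives) and apply gradient updates only
# in the orthogonal complement. Illustration only; NOT Training Hub's API.

import torch

def osft_style_update(W: torch.Tensor, grad: torch.Tensor,
                      k: int, lr: float = 1e-3) -> torch.Tensor:
    """Take a gradient step on W, projected away from its top-k singular directions."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_top = U[:, :k]                          # directions holding prior knowledge
    grad_orth = grad - U_top @ (U_top.T @ grad)  # remove components along U_top
    return W - lr * grad_orth

W = torch.randn(512, 512)      # stand-in for a pretrained weight matrix
grad = torch.randn(512, 512)   # stand-in for a fine-tuning gradient
W_new = osft_style_update(W, grad, k=128)

# The update is (numerically) zero along the frozen top-k directions:
U, _, _ = torch.linalg.svd(W, full_matrices=False)
print(torch.norm(U[:, :128].T @ (W_new - W)))  # ~0
```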

00:00 Introduction
00:44 Basics of Language Model Training
01:13 Pre-Training Explained
01:57 How Pre-Training Works
04:40 Post-Training Goals
05:02 Supervised Fine-Tuning
09:55 Continual Learning
11:11 Orthogonal Subspace Fine-Tuning (OSFT) for Continual Learning
14:57 Parameter-Efficient Methods for Fine-Tuning
15:50 Popular Methods of PEFT
16:53 Low-Rank Decomposition
19:20 Reinforcement Learning
21:48 Rules-Based Verifiers and Group-Relative Policy Optimization (GRPO)
25:14 Other RL and Alignment Techniques
26:42 Training Hub Overview

🔗Explore more advanced LLM post-training methods: https://developers.redhat.com/articles/2025/11/04/post-training-methods-language-models

#RedHat #AI #LLM #modelcustomization #continuallearning

Date: December 1, 2025