Welcome to Random Samples — a weekly AI seminar series that bridges the gap between cutting-edge research and real-world application. Designed for AI developers, data scientists, and researchers, each episode explores the latest advancements in AI and how they’re being used in production today.
This week’s topic: Continual Post-Training
As large language models (LLMs) transition from static systems to dynamic components in real-world applications, a major challenge emerges: how can we teach them new tasks without making them forget what they’ve already learned? In this talk, we’ll introduce a practical and theoretically grounded method for continual learning in the post-training stage that enables full-model fine-tuning without increasing model size or compromising general capabilities. The key insight lies in constraining updates to carefully selected low-rank subspaces, allowing models to adapt flexibly while preserving past knowledge.
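To give a flavor of the idea, here is a minimal, hypothetical sketch of subspace-constrained updates in PyTorch: gradients are projected so they stay orthogonal to a low-rank basis of directions important to previously learned tasks. The function names, shapes, and the SVD-based basis construction are illustrative assumptions, not the exact procedure from the paper; see the linked paper and code below for the real method.

```python
# Illustrative sketch only: project gradient updates onto the complement of a
# protected low-rank subspace so new-task training does not overwrite
# directions important to old tasks. Not the paper's exact algorithm.
import torch

def build_task_basis(activations: torch.Tensor, rank: int) -> torch.Tensor:
    """Return an orthonormal basis (columns) for the dominant input directions
    used by a finished task, via truncated SVD of collected activations."""
    # activations: (num_samples, feature_dim) gathered on the old task
    _, _, vh = torch.linalg.svd(activations, full_matrices=False)
    return vh[:rank].T  # (feature_dim, rank), orthonormal columns

def project_out(grad: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Remove the gradient component lying inside the protected subspace,
    keeping the update (approximately) orthogonal to past-task directions."""
    # grad: (out_dim, feature_dim) weight gradient; basis: (feature_dim, rank)
    return grad - (grad @ basis) @ basis.T

# Hypothetical usage inside a training step, with per-layer bases `task_bases`:
# for name, p in model.named_parameters():
#     if p.grad is not None and name in task_bases:
#         p.grad = project_out(p.grad, task_bases[name])
# optimizer.step()
```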
Paper link – https://arxiv.org/abs/2504.07097
Blog post – https://ai-innovation.team/blog/orthogonal-subspace-learning
Code – https://github.com/Red-Hat-AI-Innovation-Team/orthogonal-subspace-learning
Subscribe to stay ahead of the curve with weekly deep dives into AI! New episodes drop every week.