
EP8: Training Models at Scale | AWS for AI Podcast

Join us for an enlightening conversation with Anton Alexander, AWS’s Senior Specialist for Worldwide Foundation Models, as we delve into the complexities of training and scaling large foundation models. Anton brings his unique expertise from working with the world’s top model builders, along with his fascinating journey from Trinidad and Tobago to becoming a leading AI infrastructure expert.

Discover practical insights on managing massive GPU clusters, optimizing distributed training, and handling the critical challenges of model development at scale. Learn about cutting-edge solutions in GPU failure detection, checkpointing strategies, and the evolution of inference workloads. Get an insider’s perspective on emerging trends like GRPO, visual LLMs, and the future of AI model development.

Don’t miss this technical deep dive where we explore real-world solutions for building and deploying foundational AI models, featuring discussions on everything from low-level infrastructure optimization to high-level AI development strategies.

Learn more: http://go.aws/47yubYq
Amazon SageMaker HyperPod: http://go.aws/3JHkwVO
The Llama 3 Herd of Models paper: https://arxiv.org/abs/2407.21783

Chapters:

00:00:00 : Introduction and Guest Background
00:01:18 : Anton's Journey from the Caribbean to AI
00:05:52 : Mathematics in AI
00:07:20 : Large Model Training Challenges
00:09:54 : GPU Failures: The Llama Herd of Models
00:13:40 : Grey Failures
00:15:05 : Model Training Trends
00:17:40 : Managing Mixture of Experts Models
00:21:50 : Estimating How Many GPUs You Need
00:25:12 : Monitoring the Loss Function
00:27:08 : Crashing Training Runs
00:28:10 : The SageMaker HyperPod Story
00:32:15 : How We Automate Managing Grey Failures
00:37:28 : Which Metrics to Optimize For
00:40:23 : Checkpointing Strategies
00:44:48 : USE: Utilization, Saturation, Errors
00:50:11 : SageMaker HyperPod for Inference
00:54:58 : Resiliency in Training vs. Inference Workloads
00:56:44 : NVIDIA NeMo Ecosystem and Agents
00:59:49 : Future Trends in AI
01:03:17 : Closing Thoughts

Subscribe to AWS: https://go.aws/subscribe

Sign up for AWS: https://go.aws/signup
AWS free tier: https://go.aws/free
Explore more: https://go.aws/more
Contact AWS: https://go.aws/contact

Next steps:
Explore on AWS in Analyst Research: https://go.aws/reports
Discover, deploy, and manage software that runs on AWS: https://go.aws/marketplace
Join the AWS Partner Network: https://go.aws/partners
Learn more on how Amazon builds and operates software: https://go.aws/library

Do you have technical AWS questions?
Ask the community of experts on AWS re:Post: https://go.aws/3lPaoPb

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—use AWS to be more agile, lower costs, and innovate faster.

#LLMtraining #AWS #AmazonWebServices #CloudComputing #AWSForAI #artificialintelligence

Date: September 3, 2025