
[Random Samples] Hopscotch: Discovering and Skipping Redundancies in Language Models


Random Samples is a weekly seminar series that bridges the gap between cutting-edge AI research and real-world application. Designed for AI developers, data scientists, and researchers, each episode explores the latest advancements in AI and how they’re being used in production today.

This week’s topic:
Hopscotch: Discovering and Skipping Redundancies in Language Models

Abstract:
Join us for a presentation and discussion on Hopscotch, a method from Red Hat's AI Innovation team for understanding and reducing redundancy in language models. With Hopscotch, we can skip entire attention blocks within a model, improving inference speed and memory usage with minimal loss in quality. This coarse-grained method also offers insight into how often task-specific redundancies occur within language models, and just how inefficient large models may be today.
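To make the idea concrete ahead of the talk, here is a toy NumPy sketch of what "skipping an attention block" means in a residual network, along with one simple way to rank blocks by redundancy. This is an illustration under our own assumptions, not the Hopscotch paper's actual algorithm: the single-head attention, the residual-only layers, and the output-distance ranking heuristic are all simplifications introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(x, Wq, Wk, Wv):
    # Single-head self-attention (no masking) -- illustrative only.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def forward(x, layers, skip=frozenset()):
    # Each layer adds its attention output residually; a "skipped"
    # layer contributes nothing, so the block is simply bypassed.
    for i, (Wq, Wk, Wv) in enumerate(layers):
        if i not in skip:
            x = x + attention(x, Wq, Wk, Wv)
    return x

d, n_layers = 8, 4
layers = [tuple(0.1 * rng.standard_normal((d, d)) for _ in range(3))
          for _ in range(n_layers)]
x = rng.standard_normal((5, d))

full = forward(x, layers)
# Rank layers by how little the output moves when each one is skipped;
# the block whose removal changes the output least is the most redundant.
deltas = {i: np.linalg.norm(full - forward(x, layers, skip={i}))
          for i in range(n_layers)}
most_redundant = min(deltas, key=deltas.get)
```

Because each block is a residual add, skipping it costs zero extra work: the input flows through unchanged, which is where the inference-speed and memory savings come from.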

Speaker Bio:
Mustafa Eyceoz is a research scientist on the AI Innovation team and an author of the Hopscotch paper. Mustafa currently works on language model post-training and customization, having worked on training and distributed systems at Red Hat for the past five years. He completed his undergraduate and master's degrees at Columbia University, where he conducted NLP research in speech recognition, language model logic and reasoning, and text-to-SQL translation alongside IBM Research.

Subscribe to stay ahead of the curve with weekly deep dives into AI! New episodes drop every Friday.

Date: August 8, 2025