Welcome to Random Samples — a weekly AI seminar series that bridges the gap between cutting-edge research and real-world application. Designed for AI developers, data scientists, and researchers, each episode explores the latest advancements in AI and how they’re being used in production today.
This week’s topic: Synthetic Data Generation via SDG Hub
We will introduce SDG Hub, an open-source toolkit developed at Red Hat for customizing language models with synthetic data. We will begin by unpacking what synthetic data means in the context of LLMs and how it enables model customization. The session will then explore SDG Hub’s core components (prompts, blocks, and flows) and demonstrate how users can compose, extend, or modify pipelines to fit specific tasks. It will also cover strategies for choosing the right teacher model for a given use case (reasoning, translation, etc.) and walk through two real-world examples: building a document-grounded skill with a pre-built pipeline, and customizing a reasoning model by authoring new blocks, prompts, and flows and integrating a custom teacher model. The talk will conclude with a demo of the new SDG Hub GUI, showing how non-experts can visually construct and manage their own synthetic data pipelines.
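To make the blocks-and-flows idea concrete, here is a minimal sketch of the general pattern: blocks transform batches of samples, and a flow chains blocks into a pipeline. The class names and signatures below are illustrative assumptions, not SDG Hub’s actual API; see the linked repository for the real interfaces.

```python
# Illustrative sketch only: a minimal "blocks and flows" pipeline pattern.
# Block, Flow, and the block names below are hypothetical and do NOT
# reflect sdg_hub's real API.
from typing import Callable


class Block:
    """One pipeline step: applies a transform to every sample."""

    def __init__(self, name: str, fn: Callable[[dict], dict]):
        self.name = name
        self.fn = fn

    def generate(self, samples: list[dict]) -> list[dict]:
        return [self.fn(s) for s in samples]


class Flow:
    """A flow composes blocks and runs them in order."""

    def __init__(self, blocks: list[Block]):
        self.blocks = blocks

    def run(self, samples: list[dict]) -> list[dict]:
        for block in self.blocks:
            samples = block.generate(samples)
        return samples


# A prompt block might wrap each document in an instruction template;
# a later block would normally call a teacher model, stubbed out here.
prompt_block = Block("wrap_prompt", lambda s: {**s, "prompt": f"Summarize: {s['doc']}"})
teacher_block = Block("stub_teacher", lambda s: {**s, "response": s["prompt"].upper()})

flow = Flow([prompt_block, teacher_block])
out = flow.run([{"doc": "LLMs learn from data."}])
print(out[0]["response"])  # prints "SUMMARIZE: LLMS LEARN FROM DATA."
```

Swapping in a different teacher or adding a filtering block only requires editing the block list, which is the composability the session will demonstrate with SDG Hub’s real pipelines.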
Original InstructLab Paper link – https://arxiv.org/abs/2403.01081
Code – https://github.com/Red-Hat-AI-Innovation-Team/sdg_hub
Subscribe to stay ahead of the curve: new episodes of our AI deep dives drop every week.