
Sponsored by Buf Technologies
Streaming data might be more accessible than ever, but the gap between "hello world" tutorials and bulletproof production systems is filled with hard-learned lessons. Which processing engine won’t leave you stranded in two years? How do you evolve data formats without breaking everything downstream? What’s the real story behind choosing Iceberg, Hudi, or Delta Lake?
In this session, Bartosz Konieczny and Scott Haines share the battle-tested patterns that keep streaming systems running when it matters most. You’ll discover:
– Why Protobuf isn't just a serialization format but an insurance policy against breaking changes, and how Protovalidate catches bad data before it pollutes your entire downstream ecosystem
– Real-world data value generation patterns for Spark and Flink that actually work in production
– How broker-side validation for Kafka and zero-ETL Iceberg generation can eliminate entire classes of problems and save you time and money along the way
You’ll walk away with practical knowledge to tackle the challenges that separate streaming prototypes from mission-critical systems that run for years, not months.
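As a taste of the Protovalidate pattern from the first bullet, constraints can live directly in the schema so every producer and consumer shares one contract. This is only an illustrative sketch; the message and field names are hypothetical, not taken from the talk:

```protobuf
syntax = "proto3";

package events.v1;

import "buf/validate/validate.proto";

// Hypothetical event message: validation rules travel with the schema,
// so producers, brokers, and consumers can all enforce the same contract.
message OrderEvent {
  // Must be a well-formed UUID, rejected before it reaches downstream jobs.
  string order_id = 1 [(buf.validate.field).string.uuid = true];

  // Quantity must be strictly positive.
  int32 quantity = 2 [(buf.validate.field).int32.gt = 0];

  // Must be a well-formed email address, or the record fails validation.
  string customer_email = 3 [(buf.validate.field).string.email = true];
}
```

At runtime, a Protovalidate library call such as `protovalidate.validate(msg)` evaluates these rules on a deserialized message, which is the kind of check that broker-side validation can apply before a bad record fans out.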