If you’ve worked with Apache Kafka, you know the trade-offs: blazing-fast local disk writes but costly replication across zones; tiered storage for long retention but complexity in managing hot vs. cold data.

The Kafka community has been asking: can we make object storage the first-class durable store without losing Kafka’s semantics?

That’s where Diskless Topics (KIP-1150 & KIP-1163) by Aiven started the journey, rethinking Kafka’s durability around object storage as the source of truth.

Now comes Diskless 2.0, a unified, zero-copy approach that:
✅ Removes parallel “disk vs tiered” estates
✅ Simplifies the produce/consume path
✅ Enables zero-copy migration (flip a topic from classic to diskless without reprocessing)
✅ Cuts costs by eliminating cross-AZ replication and reducing broker disk reliance

This isn’t just an incremental patch. It’s a paradigm shift in how we think about Kafka in the cloud era, where durability and economics favor object storage, and Kafka brokers can finally be “lighter.”

Of course, challenges remain:
- Latency sensitivity (classic still wins for ultra-low-latency cases)
- Compaction and deletion semantics (work in progress)
- Operational readiness for large clusters

But that’s the beauty of open source: progress through community iteration. And Diskless 2.0 feels like a community win, not just a vendor push.

👉 Dive deeper:
Read the KIP-1150 proposal here: fandf.co/3VlQxW1
Explore Aiven’s take on Diskless 2.0: fandf.co/46pczw0

What do you think? Could Diskless 2.0 become the default way to run Kafka in the next 3–5 years, or will classic/tiered still dominate latency-critical workloads?

Kudos to Aiven for releasing Diskless 2.0 and sponsoring this post.

#Kafka #Sponsored #Aiven
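To make the zero-copy migration idea concrete, here is a rough sketch of what flipping an existing topic to diskless could look like operationally, using the standard `kafka-configs.sh` tool for a dynamic topic-config change. Note: the `diskless.enable` config key is an assumption based on the KIP-1150 proposal and may differ in the shipped implementation; the broker address and topic name are placeholders.

```shell
# Sketch only: assumes a broker at localhost:9092 and a topic named "orders".
# The config key "diskless.enable" is hypothetical (per the KIP-1150 draft);
# the point is that migration is a metadata flip, not a data rewrite.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name orders \
  --alter --add-config diskless.enable=true

# Verify the topic's effective configuration after the flip
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name orders --describe
```

Because no existing segments are reprocessed, the switch is cheap; producers and consumers keep the same protocol while new writes land in object storage.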
Perfectly explained the trade-offs in Apache Kafka, Brij kishore Pandey!
A massive win for the open-source community and the Kafka ecosystem.
This feels like the future of Kafka infrastructure. Object storage makes so much sense if we can nail the performance trade-offs.
It’s refreshing to see such forward-thinking beyond incremental patches.
The economics of object storage finally align with Kafka’s durability needs.
Fascinating, Brij kishore Pandey. Diskless 2.0 could really simplify Kafka deployments in the cloud.
Diskless 2.0 isn’t just an upgrade, it’s a new mindset for Kafka in the cloud.
Congratulations on the Diskless 2.0 launch, Brij! This innovative approach not only simplifies the complexities of Kafka but also pushes the boundaries of what we can achieve in the cloud era. Excited to see how this evolves and the positive impact it will have on the community.
Diskless 2.0 bridges economics, durability, and simplicity in a unique way.
Really sharp breakdown. We’re seeing the same pattern across AI infrastructure, where object storage is becoming the default backbone for scaling models efficiently.