How Kafka Consumer and Deserializers Work in Apache Kafka? Last Updated : 09 Sep, 2022

Kafka Consumers are used to read data from a topic, and remember that a topic is identified by its name. Consumers are smart enough to know which broker and which partitions to read from, and in case of broker failures they know how to recover. This is another good property of Apache Kafka.

Data is read by a consumer in order within each partition. Please refer to the below image. If we look at a consumer consuming from Topic-A/Partition-0, it will first read message 0, then 1, then 2, then 3, all the way up to message 11. If another consumer reads from two partitions, for example Partition-1 and Partition-2, it reads each of those partitions in order. It may read from both at the same time, but while the data within a partition is read in order, there is no way of saying which partition will be read from first or second. This is why there is no ordering guarantee across partitions in Apache Kafka.

Kafka consumers read messages from Kafka as bytes, so a Deserializer is needed for the consumer to know how to transform those bytes back into objects or data. Deserializers are applied to both the key and the value of the message. For example, if the key and the value are both binary fields, we can use a key Deserializer of type IntegerDeserializer to transform the key bytes back into the int 123, and a StringDeserializer to transform the value bytes back into the string "hello world". Please refer to the below image.
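The byte-to-object step above can be sketched without a running broker. This is a minimal illustration, not the Kafka client itself: Kafka's IntegerDeserializer reads 4 big-endian bytes into an int, and its StringDeserializer decodes UTF-8 bytes into a String, which we can mimic with plain Java:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DeserializeDemo {

    // Mirrors what Kafka's IntegerDeserializer does: 4 big-endian bytes -> int
    static int deserializeInt(byte[] data) {
        return ByteBuffer.wrap(data).getInt();
    }

    // Mirrors what Kafka's StringDeserializer does: UTF-8 bytes -> String
    static String deserializeString(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The raw bytes a consumer would receive for key 123 and value "hello world"
        byte[] keyBytes = ByteBuffer.allocate(4).putInt(123).array();
        byte[] valueBytes = "hello world".getBytes(StandardCharsets.UTF_8);

        System.out.println(deserializeInt(keyBytes));      // 123
        System.out.println(deserializeString(valueBytes)); // hello world
    }
}
```

In a real application you never call these conversions by hand; you configure the deserializer classes on the consumer and the client applies them to every record it fetches.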
As we can see, choosing the right Deserializer is very important: if you decode the bytes with the wrong one, you will not get the right data back in the end. Some common Deserializers are given below:

- String (including JSON, if your data is JSON text)
- Integer and Float, for numbers
- Avro and Protobuf, for more advanced kinds of data
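In the Java consumer, these choices are made through the `key.deserializer` and `value.deserializer` configuration keys. The sketch below builds such a configuration with plain `java.util.Properties`; the broker address and group id are hypothetical placeholders:

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    static Properties consumerProps() {
        Properties props = new Properties();
        // Hypothetical broker address and group id, for illustration only
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo-group");
        // The deserializers must match how the producer serialized the data:
        // here an int key and a String value, as in the 123 / "hello world" example
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.IntegerDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps();
        System.out.println(props.getProperty("key.deserializer"));
        System.out.println(props.getProperty("value.deserializer"));
    }
}
```

These `Properties` would then be passed to a `KafkaConsumer` constructor; picking a value deserializer that does not match the producer's serializer is a common source of runtime errors.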