
Understanding Kafka and Event-Driven Approaches

A detailed explanation of the picture follows.

The following text was provided by Google Gemini 2.0. The pictures were drawn by Gary Zeien. The output was reviewed.

Please provide an explanation of this picture that includes details on the key objects in the picture, the relationships, and a summary of the key points being covered in this picture.

This diagram illustrates the core concepts and architecture of Apache Kafka, a distributed streaming platform often used for building real-time data pipelines and streaming applications.

Key Objects:

  • Message/Record: The fundamental unit of data in Kafka. It consists of a key, a value, and a timestamp. It is immutable.
    • Key: An optional field used for partitioning messages within a topic. Messages with the same key are guaranteed to go to the same partition. If no key is provided, messages are distributed round-robin across partitions.
    • Value: The actual data being transmitted (e.g., a JSON object, a log entry).
    • Header (Metadata): Optional metadata associated with the message.
    • Timestamp: The time the message was created.
  • Event: Represents an occurrence or change in state that is captured as a message in Kafka.
  • Topic: A category or feed name to which messages are published. Think of it as a stream of related data. Topics are divided into partitions.
  • Partition: A subdivision of a topic. Each partition is an ordered, immutable sequence of messages. Partitions enable parallelism and scalability.
  • Kafka Broker: A server in the Kafka cluster that stores and manages partitions.
  • Kafka Cluster: A group of Kafka brokers working together.
  • Zookeeper: A centralized coordination service that stores the Kafka cluster's configuration and metadata and handles tasks such as broker membership and controller election. (Note that recent Kafka releases can run without ZooKeeper using the built-in KRaft consensus mode.)
  • Producer: An application that publishes messages to a topic.
  • Consumer: An application that reads messages from a topic.
  • Consumer Group: A group of consumers that work together to consume messages from a topic. Each partition is assigned to only one consumer within a group.
  • Kafka Connect: A framework for connecting Kafka to external systems (e.g., databases, cloud storage).
  • Log: Represents the persistent storage of messages within a partition. It's an ordered sequence of records stored as files on the filesystem of the brokers.
  • Leader Replica: The partition replica that handles all reads and writes for that partition.
  • Follower Replica: Replicas that maintain a copy of the leader's data. They provide fault tolerance; one is promoted to leader if the current leader fails.
  • Filesystem: The local storage on the brokers where the log segments (files containing messages) are stored.
  • Node/Machine: A physical or virtual machine hosting a Kafka broker.
  • Container/Pod (K8S): The environment in which a Kafka broker is running, often within a Kubernetes cluster.
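The record structure and key-based partitioning described above can be sketched as a toy model in Python. This is a simplified stand-in, not the real Kafka client: Kafka's default partitioner hashes keys with murmur2, while CRC32 is used here purely for illustration, and the `Record` and `ToyPartitioner` names are hypothetical.

```python
import time
import zlib
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Record:
    """A toy, immutable Kafka-style record: optional key, value, headers, timestamp."""
    key: Optional[str]
    value: str
    headers: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class ToyPartitioner:
    """Mimics the producer's partition choice: keyed records hash to a
    stable partition; unkeyed records are spread round-robin."""
    def __init__(self, num_partitions: int):
        self.num_partitions = num_partitions
        self._rr = 0  # round-robin cursor for unkeyed records

    def partition_for(self, record: Record) -> int:
        if record.key is not None:
            # Real Kafka uses murmur2 here; CRC32 is a stand-in.
            return zlib.crc32(record.key.encode()) % self.num_partitions
        p = self._rr % self.num_partitions
        self._rr += 1
        return p
```

In this sketch, every record keyed "order-42" lands on the same partition, which is exactly the property that preserves per-key ordering in real Kafka.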

Relationships and Flow:

  1. Producers publish messages to Topics: Producers send messages to specific topics.
  2. Topics are divided into Partitions: Each topic is divided into one or more partitions.
  3. Partitions are stored on Kafka Brokers: Each partition resides on one or more Kafka brokers.
  4. Messages within a Partition are ordered: Messages within a single partition are strictly ordered by their offset (position in the log).
  5. Consumers read messages from Topics: Consumers subscribe to topics and read messages from their assigned partitions.
  6. Consumer Groups enable parallel consumption: Consumers are organized into consumer groups to enable parallel consumption of messages from a topic. Each partition is consumed by only one consumer within a group.
  7. Replication provides fault tolerance: Partitions are replicated across multiple brokers to ensure data durability and high availability. If the leader replica fails, a follower replica takes over.
  8. Zookeeper manages the Kafka Cluster: Zookeeper manages the configuration, coordination, and metadata of the Kafka cluster, including broker discovery, leader election, and partition assignment.
  9. Kafka Connect integrates with external systems: Kafka Connect allows you to easily import data from external systems (like databases, message queues, and cloud storage) into Kafka and export data from Kafka to external systems.
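Steps 2 through 5 above — ordered, offset-addressed partitions with non-destructive reads — can be modelled with a minimal append-only log. This is a sketch of the concept, not Kafka's actual on-disk segment format or consumer API; the class names are hypothetical.

```python
class PartitionLog:
    """Toy model of a single partition: an append-only, ordered sequence
    of records addressed by offset. Reads remove nothing, so any number
    of consumers can replay the same data independently."""
    def __init__(self):
        self._records = []

    def append(self, record) -> int:
        """Append a record and return its offset (its position in the log)."""
        self._records.append(record)
        return len(self._records) - 1

    def read_from(self, offset: int, max_records: int = 100):
        """Return up to max_records starting at offset, leaving the log intact."""
        return self._records[offset : offset + max_records]

class ToyConsumer:
    """Each consumer tracks its own offset, so it can resume where it left off."""
    def __init__(self, log: PartitionLog):
        self.log = log
        self.committed_offset = 0

    def poll(self, max_records: int = 100):
        batch = self.log.read_from(self.committed_offset, max_records)
        self.committed_offset += len(batch)  # "commit" progress
        return batch
```

Two consumers polling the same log each see the full sequence, and a consumer that stops and restarts resumes from its committed offset — the same behaviour the real broker provides via stored consumer offsets.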

Key Points:

  • Ordering Guarantee (within a Partition): Kafka guarantees message ordering within a single partition. If you need strict ordering across all messages in a topic, you should use only one partition.
  • Partitioning for Parallelism: Partitions enable parallelism for both producers and consumers, allowing for high throughput.
  • Consumer Groups for Scalability: Consumer groups allow you to scale consumption by adding more consumers to the group.
  • Replication for Fault Tolerance: Replication ensures data durability and high availability.
  • Non-Destructive Reads: Consumers read messages without deleting them from the topic. This allows multiple consumers to process the same messages.
  • Offset Tracking: Consumers track their progress within each partition using offsets. This allows them to resume consumption from where they left off in case of failures.
  • Key-based Partitioning: Using message keys ensures that messages with the same key are always sent to the same partition, which is important for maintaining order for related messages.
  • Rebalance Protocol: If consumers join or leave a consumer group, Kafka's rebalance protocol reassigns partitions to the remaining consumers.
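The consumer-group and rebalance points can be illustrated with a small assignment function. This is a round-robin sketch only; real Kafka ships several assignor strategies (range, round-robin, cooperative-sticky), and the actual protocol is coordinated by a group coordinator broker.

```python
def assign_partitions(partitions: list, consumers: list) -> dict:
    """Round-robin assignment: each partition goes to exactly one consumer
    in the group, while a consumer may own several partitions."""
    members = sorted(consumers)            # deterministic member order
    assignment = {c: [] for c in members}
    for i, p in enumerate(sorted(partitions)):
        assignment[members[i % len(members)]].append(p)
    return assignment
```

A "rebalance" is then just re-running the assignment with the new membership: when a consumer leaves the group, its partitions are redistributed among the survivors, and when one joins, each survivor gives some partitions up.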

In summary, this diagram provides a good overview of Kafka's architecture and core concepts. It highlights the importance of topics, partitions, brokers, consumers, consumer groups, and Zookeeper in building scalable and fault-tolerant streaming applications.


Last update: January 15, 2025