The big data industry is growing daily and needs tools to process vast volumes of data. That’s why you need to know about Apache Kafka, a publish-subscribe messaging system you can use to build distributed applications. It is scalable and fault-tolerant, making it suitable for supporting real-time and high-volume data streams. Because of this, Apache Kafka is used by big internet companies like LinkedIn, Twitter, Airbnb, and many others.
Now you probably have questions. What value does Apache Kafka add? And how does it work? In this post, we will explore the use cases, terminology, architecture, and workings of Apache Kafka.
What is Apache Kafka?
Kafka is a distributed publish-subscribe messaging system. It maintains a robust queue that can absorb large amounts of message data. With Kafka, applications can write and read data on topics. A topic represents a category for labeling data, and an application can receive messages from one or more of these categories.
Kafka stores all received data on disk storage. Kafka then replicates the data within the Kafka cluster to prevent data loss.
Kafka can be speedy for several reasons. It assigns each message an offset as the message is received, and it does not keep track of which consumers subscribe to a topic or which consumer read a particular message; consumers interested in that information must track it themselves.
To read data, you simply provide an offset, and Kafka returns the messages to you in order, starting at that offset. Many more speed optimizations are available in Kafka, but we won’t cover them in this post.
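To make the offset mechanics concrete, here is a minimal sketch using the kafka-python client (the article does not prescribe a client library, so kafka-python, the broker address, the topic name, and the offset below are all illustrative assumptions):

```python
from kafka import KafkaConsumer, TopicPartition

# Connect without subscribing to a topic; we will assign a partition
# manually. "localhost:9092" is a placeholder broker address.
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

# Pin the consumer to partition 0 of a hypothetical "my-topic" topic
# and jump to offset 42.
partition = TopicPartition("my-topic", 0)
consumer.assign([partition])
consumer.seek(partition, 42)

# Kafka now returns messages in order, starting at offset 42.
for message in consumer:
    print(message.offset, message.value)
```

Note that the broker keeps no record of this consumer’s position: if the application wants to resume from offset 42 later, it has to remember that offset itself (or use Kafka’s consumer-group offset commits).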
All these optimizations allow Kafka to process large amounts of data. However, to better understand the capabilities of Apache Kafka, we first need to understand the terminology.
Apache Kafka Terminology
Let’s discuss some terms that are key when working with Kafka.
Topics
Kafka deals with streams of events. Of course, there can be many events at once, and we need a way to organize them. For Kafka, the basic unit of event organization is called a topic.
A topic in Kafka is a user-defined category or feed name to which data is published and stored. In other words, a topic is simply a log of events. For example, when tracking web activity, there might be a topic called “click” that receives and stores a “click” event every time a user clicks a specific button.
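As an illustrative sketch, publishing such a “click” event might look like this with the kafka-python client (the library, broker address, and event fields are assumptions, not something the article specifies):

```python
import json
from kafka import KafkaProducer

# Placeholder broker address; adjust for your cluster.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Append one "click" event to the "click" topic's log.
producer.send("click", {"user_id": "u123", "button": "signup"})
producer.flush()  # block until the message is actually sent
```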
Partition
Topics in Kafka are partitioned, meaning a topic is split into multiple log files that can live on separate Kafka brokers. This is essential for scalability because it allows client applications to publish to and subscribe from many brokers simultaneously, and it ensures high data availability because partitions are replicated across multiple brokers. So, for example, if one Kafka broker in your cluster goes down, Kafka can safely switch to the replica partitions on the other brokers.
Finally, we must discuss how events are placed and organized in partitions. Let’s go back to our website traffic use case to understand this. Say we split the “click” topic into three partitions.
Every time our web client publishes a “click” event to the topic, that event is appended to one of our three partitions. If the event carries a key, the key is used to determine the partition assignment. Events are appended to each partition sequentially, and each event’s position within its partition (0 for the first event, 1 for the second, etc.) is called its offset.
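Conceptually, the key is hashed to pick a partition (roughly hash(key) % number_of_partitions), so all events with the same key land in the same partition, in order. A hedged kafka-python sketch, with the topic, key, and fields invented for illustration:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder address
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# All events keyed by user "u123" are appended, in order, to the same
# partition of the "click" topic; their offsets increase sequentially.
for button in ["home", "pricing", "signup"]:
    producer.send("click", key="u123", value={"button": button})
producer.flush()
```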
Topic replication, leaders, and followers
In the previous section, we mentioned that partitions can live on separate Kafka brokers, which is a fundamental way Kafka protects against data loss. This is achieved by setting the topic replication factor, which determines the number of copies of the data kept across multiple brokers.
For example, a replication factor of three keeps three copies of each of the topic’s partitions, on different brokers.
To avoid the confusion that would otherwise arise when both the original data and copies of it live in the same cluster (e.g., how would a producer know which broker to publish data to for a particular partition?), Kafka follows a leader-follower system. One broker is designated the leader for a topic partition and the remaining brokers act as followers for that partition; only the leader handles client requests, while the followers replicate the leader’s data.
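The replication factor is set when a topic is created. Here is a minimal sketch with kafka-python’s admin client, assuming a cluster of at least three brokers reachable at a placeholder address:

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Placeholder bootstrap address for the cluster.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Three partitions, each replicated to three brokers: for every
# partition, one broker acts as leader and two act as followers.
topic = NewTopic(name="click", num_partitions=3, replication_factor=3)
admin.create_topics([topic])
```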
Message system
A messaging system transfers data between applications, letting each application focus on the data rather than on how the data is shared.
There are two types of messaging systems. The classic example is a point-to-point system, where producers push data into a queue and only one application can read a given message from the queue. Once the message is read, the messaging system removes it from the queue.
Apache Kafka, by contrast, relies on a publish-subscribe messaging system. Consumers subscribe to one or more topics and then receive only the messages that are relevant to their application.
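For example, a single consumer can subscribe to several topics at once and receive only the messages published to those topics. A sketch with kafka-python (again an assumption, as are the topic and group names):

```python
import json
from kafka import KafkaConsumer

# One consumer in the "analytics" group, subscribed to three topics.
consumer = KafkaConsumer(
    "click", "search", "page-view",
    bootstrap_servers="localhost:9092",  # placeholder address
    group_id="analytics",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    print(message.topic, message.value)
```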
Broker
As the name suggests, a broker acts as an intermediary, here between producers and consumers. A Kafka broker receives messages from producers, stores them on its disk, and serves them to consumers on request.
Apache Kafka Architecture
Now let’s look at the internal architecture of Apache Kafka.
The Kafka ecosystem consists of a group of producers and consumers. For our purposes, producers and consumers are external actors. The internal ecosystem includes Kafka’s Zookeeper service and the Kafka cluster. Zookeeper is a distributed configuration and synchronization service that manages and coordinates the Kafka brokers, and it notifies producers and consumers when a broker is added to the Kafka system or when one fails.
Brokers inside the Kafka cluster are responsible for load balancing; to accommodate a higher load, additional Kafka brokers are added to the cluster and coordinated through Zookeeper.
Apache Kafka Use Cases
Next, let’s look at some use cases for Apache Kafka.
Tracking Website Activity
The original use case for Kafka was to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means that website activity (page views, searches, or other actions users take) is published to central topics, with one topic per activity type. These feeds are available for subscription for a range of use cases, including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehouses for offline processing and reporting.
Activity tracking is often very high volume because many activity messages are generated for each user page view.
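A rough sketch of the topic-per-activity-type layout described above, again using kafka-python with invented topic and field names:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_activity(event):
    # Route each event to the central topic for its activity type,
    # e.g. "page-view", "search", or "click".
    producer.send(event["type"], event)

publish_activity({"type": "page-view", "user_id": "u123", "url": "/pricing"})
publish_activity({"type": "search", "user_id": "u123", "query": "kafka"})
producer.flush()
```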
Metrics
Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.
Log aggregation
Many people use Kafka as a replacement for log aggregation solutions. Log aggregation typically collects physical log files from servers and puts them in a central place (perhaps a file server or HDFS) for processing. Kafka abstracts away the details of files and provides a cleaner abstraction of log or event data as a stream of messages. This enables lower-latency processing and easier support for multiple data sources and distributed data consumption. Compared to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees thanks to replication, and much lower end-to-end latency.
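As a hedged sketch of this idea, a minimal log-shipping agent might read application log lines and publish each one as a message (the broker address, file path, and topic name are illustrative, and a real agent would tail the file continuously rather than reading it once):

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")  # placeholder

# Read an application log file and ship each line as one message on a
# central "app-logs" topic, instead of copying whole files around.
with open("/var/log/myapp/app.log", "rb") as log_file:
    for line in log_file:
        producer.send("app-logs", line.rstrip(b"\n"))
producer.flush()
```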
Stream processing
Many Kafka users process data in multi-stage pipelines, where raw input data from Kafka topics is consumed and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an “articles” topic. A further stage might normalize or deduplicate this content and publish the cleansed article content to a new topic, and a final stage might attempt to recommend this content to users. Such pipelines create graphs of real-time data flows between the individual topics. Starting with version 0.10.0.0, a lightweight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform this kind of data processing. Apart from Kafka Streams, alternative open-source stream processing tools include Apache Storm and Apache Samza.
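Kafka Streams itself is a Java library, so as a rough Python illustration of the same consume-transform-produce pattern, here is one pipeline stage sketched with kafka-python (the topic names, field names, and naive in-memory deduplication are all assumptions made for the example):

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "articles",
    bootstrap_servers="localhost:9092",  # placeholder address
    group_id="normalizer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

seen_titles = set()  # naive in-memory deduplication, for illustration only

# One stage of the pipeline: read raw articles, normalize and
# deduplicate them, and publish the cleansed result to a new topic.
for message in consumer:
    article = message.value
    title = article["title"].strip().lower()
    if title in seen_titles:
        continue
    seen_titles.add(title)
    producer.send("articles-clean", {"title": title, "body": article["body"]})
```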
Advantages of Apache Kafka
There are some advantages of using Kafka:
Reliability: Kafka distributes, replicates, and partitions data. Additionally, Kafka is fault tolerant.
Scalability: Kafka’s design allows you to process massive amounts of data. And it can scale without any downtime.
Durability: Received messages are persisted to disk as quickly as possible, so we can say that Kafka is durable.
Performance: Finally, Kafka maintains the same level of performance even under extreme data loads (many terabytes of message data). So you can see that Kafka can handle large volumes of data with zero downtime and no data loss.
Disadvantages of Apache Kafka
After discussing the pros, let’s look at the cons:
Limited flexibility: Kafka does not support rich queries. For example, it is impossible to filter messages by specific attributes on the broker side; features like this are the responsibility of the consumer application that reads the messages (a sketch of such consumer-side filtering appears after this list). With Kafka, you can only retrieve messages starting from a specific offset, and the messages will be ordered as Kafka received them from the producer.
Not designed to keep historical data: Kafka is great for streaming data, but it is not designed to retain data long term. Additionally, data is duplicated by replication, so storage can quickly become expensive for large volumes of data. You should treat Kafka as a transient store where messages are consumed as soon as possible.
No support for wildcard topics: Last on the list of disadvantages is that it is impossible to consume from multiple topics with a single wildcard subscription. For example, if you want to consume both log-2019-01 and log-2019-02, you cannot use a topic selection wildcard like log-2019-*.
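Here is the consumer-side filtering sketch promised above: since Kafka itself cannot filter by message content, the consumer application discards what it does not need (the library, topic, and field names are illustrative assumptions):

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "click",
    bootstrap_servers="localhost:9092",  # placeholder address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# The broker returns every message in order; filtering by attribute
# happens entirely in the client application.
for message in consumer:
    if message.value.get("button") == "signup":
        print(message.value)
```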
The drawbacks above are deliberate design limitations intended to improve Kafka’s performance. For use cases that require more flexibility, they may make Kafka a poor fit.
Conclusion
Apache Kafka is an excellent tool for handling ordered streams of messages. It can handle vast amounts of message data by scaling the number of available brokers, while Zookeeper ensures that everything stays coordinated and in sync. So, if your application processes large amounts of message data, you should consider Apache Kafka for your technology stack.