Ez Kafka setup on Confluent

Ritesh Shergill
5 min read · Aug 3, 2022

So you want to migrate your streaming services to Kafka. You have been meaning to adopt Kafka for a long time.

But you don’t know Kafka yet. You want to learn it, code against it, and apply it. That would take a week or two. Who has that kind of time ⏰⏰⏰???

You want Kafka and you want it fast. What can you do?

Let me introduce you to the new kid on the block: Confluent.

Confluent is a cloud-native, complete, and fully managed service that goes above & beyond Kafka so your best people can focus on delivering value to your business.

It provides hosting on the popular cloud providers: AWS, GCP, and Azure.

Creating a cluster on Confluent is as easy as point-and-click with a wizard. This is what creating a cluster looks like (I am using the free tier, btw):

Here I select the cloud provider as well as the region. For production, availability should be multi-zone, but for my purposes, I am using a single zone.
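If you prefer the command line, the Confluent CLI can create a cluster too. I am writing these flags from memory and they may differ by CLI version, so treat this as a sketch and check confluent kafka cluster create --help first:

confluent kafka cluster create my-first-cluster --cloud aws --region us-east-1 --availability single-zone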

In the next step, Confluent will ask for your payment details (card details), but you can skip this step as it's optional; you can always add your payment details later in the Billing section.

Next step is to create a cluster. I enter my cluster name here.

You will see the following details for the plan you have chosen:

👉Configuration and cost

👉Usage Limits

👉Uptime SLA

🚀Click on Launch Cluster to boot up your shiny new cluster.

Once the cluster is ready, you get this screen:

I want to set up a client, so I will choose Node.js for now. Why?

Because Node.js is ez and I like ez things.

Selecting Node.js gives me the following screen:

The instructions here are fairly straightforward and self-explanatory. I will clone the examples project and show you how easy it is to set up a client.
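If you want to grab the examples project directly, it is the public confluentinc/examples repo on GitHub. Something along these lines should get you the same structure:

git clone https://github.com/confluentinc/examples.git
cd examples/clients/cloud/nodejs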

Before you proceed, you must note down your API key and secret, as the secret is only displayed once (they become the sasl.username and sasl.password in the .env file later). This is imperative!

Once you clone the project, follow the instructions to get the following structure.

Cloned Client project

You need to be in the folder examples/clients/cloud/nodejs.

Now, there are some prerequisites to running the client.

The first is that the Node.js version needs to be at least 8.

Also, you need OpenSSL installed.

This is already mentioned in the client setup screen from Confluent.
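A quick way to verify both prerequisites on your machine is with two plain shell commands (nothing Confluent-specific here):

node --version     # should print v8.x or higher
openssl version    # should print an OpenSSL version string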

Installing the prerequisites is a bit of a headache, so the path of least resistance is to just use Docker.

So, as you can see, I have created a Dockerfile and a .env file, as follows.

The Dockerfile

FROM ubuntu:18.04
RUN apt-get update -qqy \
    && apt-get install -y --no-install-recommends \
    build-essential \
    node-gyp \
    nodejs-dev \
    libssl1.0-dev \
    liblz4-dev \
    libpthread-stubs0-dev \
    libsasl2-dev \
    libsasl2-modules \
    make \
    python \
    nodejs npm ca-certificates \
    && rm -rf /var/cache/apt/* /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY .env *.js *.json *.md /usr/src/app/
RUN npm install -d
ENV LD_LIBRARY_PATH=/usr/src/app/node_modules/node-rdkafka/build/deps
CMD [ "node", "producer.js", "-f", ".env", "-t", "test1" ]

The .env file

# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers=<SPECIFIED-BY-CONFLUENT>
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=<PROVIDED-BY-CONFLUENT>
sasl.password=<PROVIDED-BY-CONFLUENT>
session.timeout.ms=45000
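For reference, a filled-in .env looks roughly like this. The host and credentials below are made-up placeholders in the shape Confluent Cloud uses; the bootstrap server comes from your cluster's settings page, and the username and password are your API key and secret:

# Example with made-up values; use the ones from your own cluster
bootstrap.servers=pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=MYAPIKEY123
sasl.password=mYaPiSeCrEt456
session.timeout.ms=45000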

Essentially, I am going to copy all the files in the directory examples/clients/cloud/nodejs over to the Docker image, including the .env file.

Then I will use the docker run command to execute the producer, which produces messages to a topic.

So now I run the commands:

docker build -t confluent-kafka .
docker run confluent-kafka

This gives the following output:

Created topic test1
Producing record alice {"count":0}
Producing record alice {"count":1}
Producing record alice {"count":2}
Producing record alice {"count":3}
Producing record alice {"count":4}
Producing record alice {"count":5}
Producing record alice {"count":6}
Producing record alice {"count":7}
Producing record alice {"count":8}
Producing record alice {"count":9}
Successfully produced record to topic "test1" partition 0 {"count":0}
Successfully produced record to topic "test1" partition 0 {"count":1}
Successfully produced record to topic "test1" partition 0 {"count":2}
Successfully produced record to topic "test1" partition 0 {"count":3}
Successfully produced record to topic "test1" partition 0 {"count":4}
Successfully produced record to topic "test1" partition 0 {"count":5}
Successfully produced record to topic "test1" partition 0 {"count":6}
Successfully produced record to topic "test1" partition 0 {"count":7}
Successfully produced record to topic "test1" partition 0 {"count":8}
Successfully produced record to topic "test1" partition 0 {"count":9}
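To read those messages back, you can override the image's CMD at run time. Assuming the cloned project's consumer.js takes the same -f and -t flags as producer.js (it is copied into the image by the COPY line above), something like this should work:

docker run confluent-kafka node consumer.js -f .env -t test1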

The producer.js file simply produces records with the key alice and an increasing count as the value.
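If you are curious what that looks like in code, here is a minimal sketch of such a producer using node-rdkafka (the library the Dockerfile's LD_LIBRARY_PATH points at). This is my condensed approximation, not the exact producer.js from the repo:

const Kafka = require('node-rdkafka');

// Connection settings mirror the .env file
const producer = new Kafka.Producer({
  'bootstrap.servers': '<SPECIFIED-BY-CONFLUENT>',
  'security.protocol': 'SASL_SSL',
  'sasl.mechanisms': 'PLAIN',
  'sasl.username': '<PROVIDED-BY-CONFLUENT>',
  'sasl.password': '<PROVIDED-BY-CONFLUENT>',
  'dr_cb': true // ask librdkafka for delivery reports
});

producer.connect();

producer.on('ready', () => {
  for (let i = 0; i < 10; i++) {
    // topic, partition (-1 lets the partitioner decide), value, key
    producer.produce('test1', -1, Buffer.from(JSON.stringify({ count: i })), 'alice');
  }
});

// Fires once per message because 'dr_cb' is enabled above
producer.on('delivery-report', (err, report) => {
  if (err) console.error(err);
  else console.log(`Successfully produced record to topic "${report.topic}" partition ${report.partition}`);
});

// librdkafka needs regular polling to surface delivery reports
producer.setPollInterval(100);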

The Confluent dashboard captures the producer's activity and shows it like this:

When you click on the client, you can see stats on messages published, topic names, and so on.

And that's it for a quick start with Confluent.

I found the setup pretty straightforward (it took just about an hour), and it was fairly easy to have a performant Kafka cluster running within such a short time span.

So if you are breaking your head every day trying to maintain your own Kafka clusters, try Confluent. It will give you some well-deserved peace of mind!

