Hello everyone, this is Aykut Bulgu. I work at Red Hat as a services content architect on the Red Hat training team. "As Gregor Samsa awoke one morning from uneasy dreams, he found himself transformed in his bed into a gigantic insect." But today I'm not going to talk about Gregor's transformation story; I'm going to tell you about a kind of metamorphosis story of my own, with the Kafka technology.

As a developer before Red Hat, and in my first two years at Red Hat as a middleware consultant, I had traditional Kafka experience: I either used Kafka as a developer or had to build fully functioning clusters for customers, and there were times I had to support customers through urgent issues. Through all of that experience, I came to understand that a CLI is a vital tool, not just for ordinary configuration work, but especially for the times when everything is on fire.

After those experiences with Kafka, and as an OpenShift consultant and a Kubernetes lover, my interest in Strimzi grew, and customers' interest was growing too. While learning Strimzi, I had the chance to give some community talks, including one for the Istanbul Java User Group. Everything went well, especially the demo, but as a traditional Kafka user and developer, I realized that editing a YAML file for a specific custom resource and running oc apply on it didn't feel like Kafka; it felt like something different. So from a pure user experience point of view, things seemed to have changed with Strimzi.

I had the same feeling when customers asked for a CLI for Strimzi, or asked whether they could use the traditional Kafka tools with it. The answers were no and not exactly. Most of those customers were in the proof-of-concept phase with these technologies: they had already adopted OpenShift but had only recently started their DevOps transformation journey. They had started building the DevOps bridge to reach the cloud-native coast, but they were not yet ready for all the new tools. They needed something to carry them to the other side first, a ferry, maybe, before the bridge was built.

The idea of a CLI for Strimzi came to me while I was giving a talk on Tekton, a Kubernetes-native CI/CD platform that is managed by an operator and its custom resources, and which has a nice CLI: you can do almost everything in Tekton either through custom resources or through its CLI, which creates those custom resources for you. So, almost two years ago, I decided to develop a CLI for Strimzi as a side project. I started Strimzi Kafka CLI as an open source project on GitHub, with the intention of making it easy to use for system admins and developers who are used to the traditional Kafka commands. By keeping the command structure broadly similar to the traditional tools, I created a CLI that is also designed to manage Strimzi objects and configuration.

So let me show you a small demo of Strimzi Kafka CLI. First, we install Strimzi Kafka CLI with sudo pip; it is Python-based, so we install it with pip, or you can use brew. We create a project namespace on OpenShift, because we will be creating our operator and cluster in it. The kfk operator command installs the Strimzi operator in the relevant namespace. When the cluster operator is ready, we create a cluster with the kfk clusters command, which first shows us the custom resource it is about to apply; a rough sketch of these steps follows.
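For readers following along, these first steps look roughly like the commands below. This is a minimal sketch based on the Strimzi Kafka CLI project's documented usage; the project and cluster names (my-project, my-cluster) are placeholders I've assumed, and exact flags may differ between versions of the tool.

```shell
# Install Strimzi Kafka CLI (a Python package; Homebrew is an alternative)
sudo pip install strimzi-kafka-cli

# Create a project (namespace) on OpenShift for the operator and the cluster
oc new-project my-project

# Install the Strimzi cluster operator into that namespace
kfk operator --install -n my-project

# Create a Kafka cluster; the CLI first shows the generated Kafka
# custom resource and applies it once you approve
kfk clusters --create --cluster my-cluster -n my-project
```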
When we approve it, it creates the ZooKeeper and Kafka broker nodes, as you can see on the left. Next, let's create a topic named my-topic with twelve partitions and a replication factor of three. Now we can try it out with a console producer and a console consumer: we produce some text messages and consume them. And voilà, here are the messages; the topic and console commands are sketched below for reference. For more information, you can scan the QR code if you are interested. Thank you for watching my session, see you later.
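For reference, the topic and console steps from the demo look roughly like this. Again a hedged sketch with the same assumed placeholder names; each kfk subcommand mirrors its traditional counterpart (kafka-topics.sh, kafka-console-producer.sh, kafka-console-consumer.sh) while generating and applying the corresponding Strimzi custom resource, such as a KafkaTopic, behind the scenes.

```shell
# Create a topic with 12 partitions and a replication factor of 3
# (the traditional-tool equivalent would be kafka-topics.sh --create)
kfk topics --create --topic my-topic --partitions 12 --replication-factor 3 -c my-cluster -n my-project

# Produce some text messages interactively
kfk console-producer --topic my-topic -c my-cluster -n my-project

# In another terminal, consume those messages
kfk console-consumer --topic my-topic -c my-cluster -n my-project
```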