Hi y'all, Kenley here from Red Hat. This is Kafka broker authorization in Red Hat AMQ Streams, aka Strimzi Kafka, using Red Hat SSO, aka Keycloak, and OpenLDAP. So let's get right to it. In front of you is an architecture diagram describing the layout of deployments across three namespaces within an OpenShift cluster. The first namespace, kafka, is for deploying your Kafka cluster and a sample Kafka client test application. Another namespace, openldap, is for deploying your OpenLDAP server. And finally, we have a keycloak namespace, which is where we deploy our Keycloak SSO server. So what is the relationship between these deployments? In short, we want to be able to lock down our Kafka cluster to valid users and groups coming over from an AD/LDAP system. We also want to be able to apply granular rules and policies to those users and groups. For this setup, we accomplish that through our deployments: our LDAP system, our Kafka cluster, and our SSO provider. To streamline this demonstration for you, our audience, we've automated the deployment of these artifacts using Ansible. For you, it'll just be a matter of fulfilling the necessary requirements before you run an Ansible script to commence installation. And of course, there's plenty of documentation walking through this process, as well as the demonstration I'll be showing you momentarily. So given the explanation behind the deployment layout and architecture, what exactly is the purpose of this demonstration? Let me give it to you in the form of a problem statement: as a Kafka administrator, I want to manage user and group permissions to publish and consume messages from specific topics by adding users to an AD/LDAP group. So let's take a look at the starting point for a specific user. We'll call this user Pepe.
So Pepe is a member of the following groups, which are tied to resources and policies with specific permissions. Now, for the sake of simplicity, the group-to-permission naming scheme has been purposely made self-explanatory. We see that Pepe can read and write topics that start with the letter X, but can only write topics that start with the letter A. That's great and all, but Pepe's kind of greedy and he wants it all. He can write to topics that start with the letter A, but he wants to be able to read from them as well. Luckily, we have a group, resources, a policy, and permissions supporting that need. All we really need to do is add Pepe to the topic A read group. So how do we do that using LDAP? Great question; let us proceed to demonstrate. All right, let's get started with this demo. The first thing I'm going to do is log into my OpenShift cluster from the terminal, so let's grab that login command and paste it here. All right, great. Now `oc get nodes` should show my nodes and verify that I'm logged in to my cluster. The next thing we'll want to do is head over to this repository, AMQ Streams Broker Authorization Sample. You'll find the repository location in the documentation that comes with this video. Let's copy the repository URL, clone the repository, and verify that we have it. Now let's go into the repository. Taking a look at its contents, since we're going to install this on an OpenShift cluster, let's head over to the OpenShift folder; in this case, it's the OCP folder. Within that folder are a few scripts.
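The login and clone steps above can be sketched as a terminal session like the following. The token, API server URL, and repository URL are placeholders, not real values; copy the actual login command from the OpenShift web console and the repository location from the accompanying documentation.

```shell
# Log in to the OpenShift cluster; token and API URL are placeholders
# taken from the web console's "Copy login command" dialog.
oc login --token=<your-token> --server=https://api.<cluster-domain>:6443

# Verify connectivity by listing the cluster nodes.
oc get nodes

# Clone the sample repository (URL is illustrative; use the one
# from the documentation that comes with this video).
git clone https://github.com/<org>/amq-streams-broker-authorization-sample.git
cd amq-streams-broker-authorization-sample
ls ocp/
```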
One is for the installation, and the other is to clean the installation up; if you decide you don't want the installation anymore, you can run that cleanup script. In this case, we're going to run the install script. What this does is run an Ansible playbook that installs the entire infrastructure of deployments: OpenLDAP, as well as AMQ Streams and Red Hat SSO. This will take a while, so we'll speed through it and resume when it finishes. And so, through the magic of video editing, our Ansible automation script has finished running, and we now have an infrastructure that includes an OpenLDAP server populated with an organizational unit of users and groups. We also have a Red Hat SSO Keycloak server, for which we've created a realm and synchronized our LDAP users and groups to. We've also created policies, resources, and permissions to govern access to those resources: in this case, Kafka topics and the operations you can perform on them, such as read and write. And finally, we've got our Red Hat AMQ Streams cluster, which is our Kafka cluster, and that cluster has been configured to use Keycloak for authorization. So let's jump on over to our Keycloak administration console and examine what's been synchronized from my LDAP server. We'll go to the keycloak project, then to the routes, and from the routes we'll click on the link to our Keycloak administration dashboard, then on the administration console. Okay. Here, within the kafka-authz realm, when we click on Groups, we see the groups imported over from OpenLDAP.
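As a rough sketch, running the installer and checking the results might look like this. The script name and namespace names follow the architecture described earlier, but treat them as assumptions to verify against the repository contents:

```shell
# From the repository's OCP folder, run the install script,
# which drives the Ansible playbook that deploys everything.
cd ocp
./install.sh

# After the playbook finishes, each namespace should have running pods.
oc get pods -n openldap
oc get pods -n keycloak
oc get pods -n kafka
```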
Then, clicking on Users and on View all users, we also see all the users synchronized from OpenLDAP. Looking a little further down this list, we see that Pepe is there. So let's click on Pepe and see what he's all about. If we click Groups, we see that Pepe is a member of the following groups: in this case, topic X write, topic A write, and topic X read. Just like in the presentation, Pepe belongs to these groups. But what's missing is that Pepe is not a member of the topic A read group. So let's go ahead and fix that. To begin, we'll need two terminals to work from. First, we'll need an OpenShift terminal, which will be used for working with the OpenShift cluster at the command line. Second, we'll need a Kafka client terminal, which will be used for working within the Kafka client application we deployed earlier to connect to Kafka. For the OpenShift terminal, we'll use this top terminal up here; let's clear the screen to make it a little cleaner. For our Kafka client terminal, we'll simply open a shell into our Kafka client application, as follows. First, we'll get the Keycloak route; this should print the Keycloak route, which we'll want to keep a note of. Next, let's switch over to our Kafka project and take a look at the pods. Okay, great. We've got this pod called kafka-client-shell, and we'll open a terminal into it by executing the following. Great. We now have a shell into our Kafka client application and can start working with the Kafka cluster.
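The two-terminal setup can be sketched as follows. The route lookup, project name, and pod name are assumptions based on what appears on screen; adjust them to your deployment:

```shell
# OpenShift terminal: capture the Keycloak route host and note it down.
oc get route -n keycloak -o jsonpath='{.items[0].spec.host}'

# Switch to the kafka project and find the client pod.
oc project kafka
oc get pods

# Kafka client terminal: open a shell inside the client pod
# (the pod name carries a generated suffix; copy it from `oc get pods`).
oc exec -it kafka-client-shell-<pod-suffix> -- /bin/bash
```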
And so, in order to access our Kafka cluster as a consumer or producer, we need to create a token from our Red Hat SSO server. Let's go ahead and do that now. These commands here set us up with our TLS environment, so I'm going to put those in. Then, to make things a little easier as we move forward, we'll reuse the Keycloak route we noted earlier: let's paste the output of the Keycloak route we queried earlier into an environment variable, and then use that to construct our token endpoint. And finally, let's generate our token; our password is just "pass". Okay, our token has been generated. Before producing messages to our Kafka topic, we need to create a producer/consumer configuration in the form of a properties file. This configuration has all the necessary authorization-related settings, along with the token we created for Pepe. We can then produce messages on the A topic as an authorized user. If the Kafka topic doesn't exist, it will be created automatically as it is written to. So let's generate that properties file, and then start producing some messages on the A topic. I think that should be enough messages. And there you have it: Pepe is able to produce messages on the topic because he's authorized to do so. The next step is to see whether Pepe can read from the topic he published to. At this point, Pepe does not have permission to read from that topic; if he tries, he'll be met with disappointment. Let's entertain the idea and see what happens if we try to read from the topic we just published to.
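Putting the token, properties file, and producer steps together, a sketch of the client-side session might look like this. The route host, realm name, client ID, truststore path and password, bootstrap address, and topic name are all assumptions for illustration; the OAUTHBEARER settings follow the Strimzi OAuth client conventions:

```shell
# Keycloak route host noted earlier (placeholder).
export KEYCLOAK_ROUTE=<keycloak-route-host>
export TOKEN_ENDPOINT="https://$KEYCLOAK_ROUTE/auth/realms/kafka-authz/protocol/openid-connect/token"

# Request an access token for pepe (password grant; password is "pass").
TOKEN=$(curl -sk "$TOKEN_ENDPOINT" \
  -d grant_type=password -d client_id=kafka-cli \
  -d username=pepe -d password=pass | jq -r '.access_token')

# Producer/consumer properties: TLS to the broker, plus SASL/OAUTHBEARER
# carrying the token we just minted.
cat > /tmp/client.properties <<EOF
security.protocol=SASL_SSL
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.password=<truststore-password>
ssl.truststore.type=PKCS12
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token="$TOKEN";
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
EOF

# Produce a few messages; the topic is auto-created on first write.
bin/kafka-console-producer.sh \
  --bootstrap-server <cluster-name>-kafka-bootstrap:9093 \
  --topic a-topic --producer.config /tmp/client.properties
```

Reading back uses the matching consumer command, e.g. `bin/kafka-console-consumer.sh --bootstrap-server ... --topic a-topic --from-beginning --consumer.config /tmp/client.properties`, which at this point fails for Pepe with an authorization error.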
And so, as you can see, Pepe is not authorized to read from the topic. The error you see means that Pepe isn't a member of the LDAP group topic A read. If we add Pepe to that group in LDAP and re-synchronize the user and group assignments in Red Hat SSO, Pepe will be able to consume messages from the Kafka topic. Let's go ahead and add Pepe to the topic A read group. From our OpenShift terminal, we'll enter the following to add Pepe to the group. And with that, Pepe is now a member of the LDAP group topic A read. There's one more step we need to do in Keycloak, and that is to synchronize the user and group assignments into Keycloak: under User Federation and LDAP, we'll synchronize users and groups. It looks like we'll need to sign in one more time. Let's go under User Federation, click on LDAP, scroll all the way down, and click on Synchronize all users. Great. To validate even further, we should also see that Pepe is now a member of the topic A read group. So let's go back into Users, View all users, click on Pepe, and click on Groups. And there you have it: Pepe is part of the topic A read group. Now, for the finale, let's see if Pepe can read those messages from the topic he couldn't read from earlier. And there you have it: Pepe is now authorized to read from Kafka topics that start with the letter A. So in this presentation, we demonstrated the deployment and integration of OpenLDAP, Red Hat SSO, and Red Hat AMQ Streams on Red Hat OpenShift Container Platform, leveraging Ansible for automation. We synchronized LDAP users and groups to Red Hat SSO (Keycloak) and then created resources, roles, policies, and rules around those resources. We were able to manage Kafka ACLs from Red Hat SSO.
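The LDAP change can be sketched with `ldapmodify`, piping an LDIF through `oc exec` into the OpenLDAP pod. The DNs, admin credentials, and the group membership attribute (`memberUid` vs. `member` depends on whether the group is a posixGroup or groupOfNames) are assumptions here; adjust them to your directory layout:

```shell
# Add pepe to the topic-a-read group inside the OpenLDAP pod.
oc exec -i -n openldap deployment/openldap -- ldapmodify -x \
  -H ldap://localhost:389 \
  -D "cn=admin,dc=example,dc=com" -w <admin-password> <<'EOF'
dn: cn=topic-a-read,ou=groups,dc=example,dc=com
changetype: modify
add: memberUid
memberUid: pepe
EOF
```

After this, the Synchronize all users step under User Federation in Keycloak pulls the new membership across.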
Finally, we demonstrated TLS OAuth 2.0 authentication and authorization with Red Hat AMQ Streams Kafka via Kafka broker authorization using Red Hat SSO (Keycloak). With this, we were able to illustrate producer and consumer operations with authorization on Kafka topics for LDAP users via Red Hat SSO Keycloak roles and policies. Thank you for watching, and feel free to leave comments and questions below.