Hello, today we're going to talk about simplified Kafka connection management for Node.js. I'm Michael Dawson, an active member of the Node.js community and the Node.js lead for Red Hat and IBM. So, the first question is: what do I need to connect to Kafka? I need one or more URLs that point me to the Kafka cluster. I need information that tells me what kind of connection I'm making, whether I'm using SSL, and the different options for how I'm going to connect. And then I'm going to need some sort of user ID and some sort of user secret. Of course, I need to handle that user ID and secret so they aren't disclosed to people I don't want to have them, because only the right people should be able to connect to my Kafka cluster. What does that actually look like? Here's an example of the information you can use to connect to one of Red Hat's Kafka instances. In this case, it's telling me that I'm using the SASL mechanism of type plain; that tells me how I'm connecting. The client ID and client secret are the credentials that let me connect. And the bootstrap servers are the URLs that let me reach the cluster itself. Now, for Node.js there are two leading Kafka clients, and unlike in some other languages, each package has its own API and its own way of connecting. So here I have two examples of the code to connect, one with KafkaJS and one with node-rdkafka. In this case I've used the same environment variables, but the way you pass them to each client ends up being different. That means if I wanted to switch from one client to the other, it's a bit more work than it should be. And if I'm worrying about how I get the credentials into the environment, that can cause some extra complications.
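To make that concrete, here is a sketch of how the same connection details end up in two different shapes. The variable names and values are illustrative, not mandated by either library; in real code they would come from the environment or from secrets.

```javascript
// Illustrative connection details; in a real app these would come from
// process.env or a secret store, never hard-coded.
const env = {
  KAFKA_BOOTSTRAP_SERVERS: 'my-cluster:9092',
  KAFKA_CLIENT_ID: 'my-client-id',
  KAFKA_CLIENT_SECRET: 'my-secret'
};

// KafkaJS expects a nested object: new Kafka(kafkaJSConfig)
const kafkaJSConfig = {
  brokers: env.KAFKA_BOOTSTRAP_SERVERS.split(','),
  ssl: true,
  sasl: {
    mechanism: 'plain',
    username: env.KAFKA_CLIENT_ID,
    password: env.KAFKA_CLIENT_SECRET
  }
};

// node-rdkafka expects flat, dotted librdkafka-style keys:
// new Kafka.Producer(rdKafkaConfig)
const rdKafkaConfig = {
  'metadata.broker.list': env.KAFKA_BOOTSTRAP_SERVERS,
  'security.protocol': 'sasl_ssl',
  'sasl.mechanisms': 'PLAIN',
  'sasl.username': env.KAFKA_CLIENT_ID,
  'sasl.password': env.KAFKA_CLIENT_SECRET
};

console.log(kafkaJSConfig, rdKafkaConfig);
```

Same data, two incompatible shapes, which is exactly the switching cost described above.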
In terms of choosing between the clients, if you want some recommendations or suggestions, you can take a look at the Node.js Reference Architecture. There's some discussion there of the clients and why you might want to use one or the other. Now, in the example code I showed, we were configuring through the environment by setting environment variables. That's not necessarily the most secure approach, because if I go into an OpenShift console, for example, I can fairly easily inspect the environment variables, assuming I have access to that environment. And often more people than you would think have access to a particular environment and can dump the environment, or you may be generating core dumps to investigate a problem and want to share those. So generally it's best practice to avoid environment variables if you can. There are things like the dotenv npm package that help a little, by injecting values into the environment of the running Node.js process rather than the overall environment, but really we'd like to avoid environment variables altogether if we can. The good thing is that for Kubernetes, people have already thought about this, not just for Kafka but for all sorts of cases where you'd like to map this information into containers running in a Kubernetes environment without just injecting it as environment variables. What's been standardized through the Kubernetes Service Binding specification is to map the information in as a set of directories and files that you can read once they're mounted into a container. They're mapped under a service binding root, and a single environment variable, SERVICE_BINDING_ROOT, is added to your environment; that's how you find the top of the tree.
Then you can go in there and get the information, which, as you saw before, includes things like the bootstrap servers, the user, the password, the provider, the SASL mechanism, and so on. Now, that's good, but you'd still have to write code that reads those files, pulls the information together, and figures out the service bindings. Why would you want to do that for every application you deploy? The answer is you don't. The good news is we've put together a module called kube-service-bindings, and it does a number of things beyond reading those files. It will detect whether service bindings have been mapped into the environment, based on the SERVICE_BINDING_ROOT environment variable. It'll then read that data in. But, as I showed you before, because there are different clients, you're going to want that data in a different format depending on the client you're using. So it's also aware of the different clients: it knows the KafkaJS client, it knows the node-rdkafka client, and it knows how to convert the data read from those service binding files into the format specific to the library you want to use it with. That makes it a lot easier to use service bindings. In fact, you end up with code that looks something like this: you require the module and call getBinding. Of course, error handling always tends to be most of the work, but in this case I've omitted it. Here I'm asking for the service bindings of type KAFKA — a type, because we think this will be useful beyond Kafka — and I'm telling it my client is KafkaJS. It gives me back an object that I can use directly with the KafkaJS APIs to create the instances and pass the credentials.
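The real usage is just a couple of lines; the call shape below follows the kube-service-bindings README, so treat the exact API as something to verify against the module's docs. To show what the conversion step amounts to, here is a stdlib-only sketch of turning raw binding keys into a KafkaJS-shaped config (key names assumed, as in the mock layout above):

```javascript
// Real usage (hedged, per the kube-service-bindings README):
//
//   const serviceBindings = require('kube-service-bindings');
//   const kafkaConnectionBindings =
//       serviceBindings.getBinding('KAFKA', 'kafkajs');
//   const kafka = new Kafka(kafkaConnectionBindings);
//
// Sketch of the conversion the module performs for KafkaJS:
function toKafkaJsConfig(raw) {
  return {
    brokers: raw.bootstrapServers.split(','),
    ssl: true,
    sasl: {
      mechanism: raw.saslMechanism.toLowerCase(),
      username: raw.user,
      password: raw.password
    }
  };
}

const config = toKafkaJsConfig({
  bootstrapServers: 'my-cluster:9092',
  saslMechanism: 'PLAIN',
  user: 'my-client-id',
  password: 'my-secret'
});
console.log(config.sasl.mechanism); // 'plain'
```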
Similarly, you could write exactly the same code with the node-rdkafka client: just tell it you're using node-rdkafka and it will hand you back an object with everything in the right format for that client. So let's see this in action. I've set things up in advance in an OpenShift cluster; there's a sandbox where you can get an OpenShift cluster to use without having to set everything up yourself. There's also a Kafka service you can get and connect into your applications, and I've mapped a Kafka instance into my namespace there, which you can see as the nodejs-kafka instance. I also pre-deployed an application which uses code similar to what I showed you before — I'll give you links to that example later. If we look at the logs, we'll see that currently it's throwing errors. That's because the code says: if there are service bindings, use them; otherwise, use a default Kafka instance installed locally. If that local instance is there, it works; otherwise it can't connect and we get these error messages. What I really wanted to show you is that the cool thing about service bindings is not only the standard way of doing it, but that there's infrastructure in Kubernetes environments that means you don't have to manage all of that yourself. You don't have to set those environment variables. I've mapped that Kafka instance into my environment, and that can be done by applying some YAML; you could have, say, the administrators do that for you. Then all I need to do is drag a link in the topology view, which creates a binding connector, and that connects my application to the Kafka instance.
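The demo app's behavior can be sketched as a simple fallback, with the logic assumed from the description above and the broker names purely illustrative: use the mapped-in binding when SERVICE_BINDING_ROOT is present, otherwise fall back to a local default instance.

```javascript
// Assumed fallback logic for the demo application (names illustrative).
function getKafkaConfig() {
  if (process.env.SERVICE_BINDING_ROOT) {
    // In the real app this would be (hedged, per the module's README):
    //   require('kube-service-bindings').getBinding('KAFKA', 'kafkajs')
    // Placeholder result for this sketch:
    return { brokers: ['bound-cluster:9092'] };
  }
  // Default local instance; the connection errors in the demo logs appear
  // when neither a binding nor a local broker is available.
  return { brokers: ['localhost:9092'] };
}

console.log(getKafkaConfig());
```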
And having done that, the service binding machinery installed in the environment will automatically map those credentials in. The code, because it's using kube-service-bindings, will automatically recognize that they're there, pick them up, and convert them into the format needed for the particular client. Now, if I look at the logs for that application, I can see that it's connected to the backend Kafka instance and is happily publishing away to the topic that was created. What I think is really cool about this is that, as a developer, I can write my code to connect to Kafka without really needing to know how the credentials will be given to me, or the format they need to be in for my particular client. I can check that code into a GitHub repo because there's nothing sensitive in it. And then on the deployment side, I can simply drag a line across in the topology to get that connection, and if I take a look at the service binding, I can see that my producer backend application has been connected to my nodejs-kafka instance. So it's really simplified what I have to do to get a Node.js application connected to a backend Kafka cluster. I don't want to take too much of your time today, so I'm going to leave it at that, but I'll leave you with some useful links if you want to dive deeper. We have the reactive example I was using, where you can see a producer and a consumer connecting to Kafka. There are links to where you can try Kafka and get a Kafka instance you can use. And there's the Node.js Reference Architecture, which I mentioned, if you want to read about some of the different modules and approaches our teams have had success with.
There's a great blog post where you can find a little more detail on connecting applications with service bindings, and it will show you doing those kinds of connections. Of course, you can also do them by applying YAML from the command line rather than through the UI, and it shows you how to do some of that as well, which is pretty cool. There's the link to the kube-service-bindings module itself, which you might want to take a look at. And finally, if you just want to know what we're up to at Red Hat in terms of Node.js and all the different guides we're putting together, you can go to the Node.js topics on developers.redhat.com. So thank you very much for your time, and we hope to see you in another installment of one of our videos.