Hi, welcome to Kubernetes by Keytar. I'm Jan Kleiner. I'm a developer advocate at Red Hat, and I focus on OpenShift, which is a distribution of Kubernetes. In this talk, we're going to cover four sections. We'll start by talking about what Kubernetes is in the first place. Then we'll look at the Kubernetes API and resource types. Third, we'll have a demo where I use my keytar and the Web MIDI API to interact with the Kubernetes API to deploy and manage some resources on a Kubernetes cluster, and then finally I'll share some resources where you can learn more. So I'll get out of the way now, and we'll go on with the presentation. So what is Kubernetes? Kubernetes is an open-source platform for managing containerized workloads at scale. As a container orchestration system, it can help you automate application deployment, scaling, and management. In other words, you can cluster together groups of machines or hosts running containers, and Kubernetes will help you easily and efficiently manage those clusters. A Kubernetes cluster consists of different Kubernetes objects, things like pods, services, and deployments, which we'll learn about later. These Kubernetes objects are persistent entities that represent the state of your cluster, and you can manage them with the Kubernetes API. Here's a list of some of the most common objects that Kubernetes implements: pods, deployments, namespaces, services, and so on. You work with each of these using the Kubernetes API. But you don't have to use the raw API directly. You can also use command line tools like kubectl, however you like to pronounce it, which wrap the Kubernetes API. Kubernetes distributions like OpenShift also have a web interface that you can use to simplify the management and deployment of applications on your cluster. So when you create objects in Kubernetes, that works as a record of intent. You're telling the Kubernetes cluster what your desired state should be.
For example, I could say that I want to deploy a pod running a certain image. Kubernetes will continually work to bring the actual state in line with the desired state that we've asked for. The Kubernetes API is a RESTful API. As I mentioned, you can interact with a cluster directly using the API or using command line tools like kubectl. There are also REST clients for many languages that you can use, and our demo is going to use a Node.js Kubernetes REST client. In Kubernetes, API object primitives include the fields that you see here. Kind tells you what kind of resource this is; that's the object type: pod or deployment, for example. Then there's an API version; metadata, which can include fields like the name of the object; a spec, which is where you give your desired state; and a status, which is what the Kubernetes cluster reports back to you, letting you know how close you've gotten to reaching that desired state in your spec. Many objects will have more fields than this, but in almost every case, an API object will at least have these five fields. Let's talk now about some of these basic resource types. We'll start with a pod. A pod is a group of one or more co-located containers. In many cases, a pod is just running one container, but it is possible for a pod to have more than one container running, for example, in the case of a sidecar. Pods are also the minimum unit of scale in Kubernetes. You can scale up and down the number of pods that you have running. Here's an example of a pod spec on the right. This example is using YAML, but it's interesting to note that you can also use JSON, and you can use the two interchangeably when you're interacting with Kubernetes. So in this example, you can see that the kind is Pod, and our API version is v1. In our metadata, we have some information, including the name of this pod, which will be hello-k8s. And then, importantly, notice that we have some labels.
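For readers following along without the slides, the pod spec being described might look roughly like this in YAML; the container image and port here are illustrative placeholders rather than the exact demo values:

```yaml
# Sketch of the pod spec walked through in the talk.
# The image and containerPort are placeholders, not the demo's exact values.
kind: Pod
apiVersion: v1
metadata:
  name: hello-k8s
  labels:
    run: hello-k8s          # the label the demo filters on
spec:
  containers:
  - name: hello-k8s
    image: example/hello-k8s:latest   # placeholder image
    ports:
    - containerPort: 8080             # placeholder port
```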
We have a label, which is just a key-value pair: a key of run and a value of hello-k8s. Our demo is always going to display resources that include this particular label. And then we have our spec, which is a container spec in this case. It specifies the name for a container, what image we want to use, and then some ports that the image uses when it's running. So let's take a look at a demo now. Before we jump into the demo, let me explain a little bit about how this demo is going to work. I have my keytar here. It's connected to my laptop with this USB MIDI cable. And so when I play notes on the keytar, it's sending those MIDI messages to my computer, and therefore to my browser, which is listening for certain notes to be played. Depending on what note is played, it triggers a request to the Kubernetes API to do something on my cluster. We should probably take a minute to talk about MIDI and the Web MIDI API. MIDI stands for Musical Instrument Digital Interface, and it's a technical standard that's been around since the 1980s. It's designed for communication between digital musical instruments like my keytar, audio devices, computers, and so on. Communication in MIDI happens through MIDI messages. In our demo app, we're listening for a particular type of message called a channel voice message, which consists of three numeric values. The first is the type of event. We're listening for the note-on event, which is what happens when I press a key on the keytar. The note-on event has a status value of 144. The second piece of information is the note number, which can range from 0 to 127; to give you a frame of reference, middle C is 60 on that scale. The third value is velocity, which corresponds to how hard I'm pressing the key on the keytar. Very, very soft notes are toward the zero end of the range, and extremely hard, extremely loud notes are higher up in the range.
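The three-value channel voice message described above can be decoded with a small helper. The function below is a sketch (the names are mine, not from the demo app); the browser wiring is shown in comments, since the Web MIDI API itself is only available in the browser:

```javascript
// Decode a MIDI channel voice message: [status, note, velocity].
// Status 144 (0x90) is "note on" for channel 1; note and velocity are 0-127.
function parseMidiMessage(data) {
  const [status, note, velocity] = data;
  return {
    noteOn: status === 144 && velocity > 0, // velocity 0 is treated as note off
    note,       // 0-127; middle C is 60
    velocity,   // 0-127; how hard the key was pressed
  };
}

// In the browser, you would wire this up roughly like so:
// navigator.requestMIDIAccess().then((midi) => {
//   for (const input of midi.inputs.values()) {
//     input.onmidimessage = (e) => console.log(parseMidiMessage(e.data));
//   }
// });

// Example: middle C pressed fairly hard
console.log(parseMidiMessage([144, 60, 100]));
```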
So every time I press a key on the keytar, it's sending a channel voice message that includes all of these pieces of information. And to put it all together, the Web MIDI API allows us to interact with those MIDI messages in our browser. You can see a code snippet here showing how you would check if Web MIDI is supported in your browser. This is important because the Web MIDI API is currently only supported in Chrome and Opera. Then, if you do have access, you can request that access and listen for MIDI success and MIDI failure messages. The web app that we're using for our demo is using the Kubernetes API, the Web MIDI API, and, for the front-end visualizations, React and SVG. So in this first case, I'm gonna play a certain series of notes which will then trigger a pod to be created. It's a little overkill, but it's really fun. So let's try it out. Okay, so over here in VS Code, you can see I've got a pod spec. In this case, it happens to be in JSON; as I mentioned before, JSON and YAML can be used interchangeably. I believe it's exactly the same pod spec as we showed in the presentation, just in a different format. So we're gonna play a certain riff on the keytar to deploy this pod on our cluster. Let's take a look. All right, all right. So with that riff, we have launched a pod on our cluster. You can see that as the pod came up, it turned from yellow to green. What was happening there is that yellow was the pending state, while the container was being brought up. And then once it was ready, we turned it green so that you can visualize that the pod is ready and running. Okay, so we have our pod up and running. That's great, but that's not very exciting. Next, we're gonna take a look at services, which give us a way to more easily interact with that pod from within the cluster. All right, let's move on to services. So a service, you can think of it like a load balancer. It acts as a single endpoint for a collection of replicated pods.
In our previous example, we just had a single pod. Imagine what would happen if we tried to access that pod but it had crashed, and we brought up a new one which had a different IP address. That would be a lot to manage. Or imagine if we had multiple pods running and we wanted to be able to balance load across them. That's where a service can be helpful. Here's an example of what the spec for a service looks like. Again, we have our kind, Service, and our metadata. We have a name for our service just like we did for the pod. We're also including the same label, run: hello-k8s. And here the spec looks a little bit different. In our spec we have the ports that we're using for this service, and then we have a selector. The selector is very important. The selector is a label that the service is going to look for in the pods that are running, and that's how we associate certain pods with our service. Our service will act on any pods that have this run: hello-k8s label. Let's take a look at a demo where we add a service to our cluster. So, looking at services now, we are back in VS Code. We have our service spec here, again just like we saw in the presentation, but in JSON instead. All right, so now we're going to deploy that service on our cluster by playing part of "Take On Me". And there we go, our service is now created. You can see this line connecting the pod and the service, and that's to represent the connection between the two, which is established by that selector, the label run: hello-k8s. If we had more than one pod running with that same label, you would see that the service could direct traffic to either of those pods, and you could have more than two. So the service is effectively going to act as a load balancer. If I were to get the IP address of that service from within the cluster, I could contact that service and it would assign my request to any one of the pods that are associated with the service.
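In YAML form, a service matching the description above could look something like this; the port numbers are placeholders, but the selector is the key part, since it ties the service to any pod carrying the run: hello-k8s label:

```yaml
# Sketch of the service spec described in the talk (ports are placeholders).
kind: Service
apiVersion: v1
metadata:
  name: hello-k8s
  labels:
    run: hello-k8s
spec:
  ports:
  - port: 8080            # placeholder port exposed by the service
    targetPort: 8080      # placeholder port on the pod
  selector:
    run: hello-k8s        # routes traffic to any pod with this label
```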
In this case, there's just one pod, so there's only one place for it to go, but if we did have multiple pods, it would handle that routing of requests for us. Now, typically within a Kubernetes cluster, you're not just manually spinning up pods. Part of the reason for that is that if this pod were to crash, that's it. It wouldn't come back. We would have nothing to access with our service. One of the benefits of Kubernetes is its self-healing capabilities, and to take advantage of those, you need some of these other object types, like deployments, which we'll look at next. So now you've seen what we can do with a pod and a service, but it's not really standard practice to just deploy pods on their own. One of the wonderful things about Kubernetes is that it has these auto-recovery, auto-healing capabilities, but you don't get that when you deploy a pod on its own. You start getting some of those benefits when you use things like a deployment. A deployment helps you specify container runtime requirements in terms of pods. So, for example, with the deployment we have here, we're specifying a number of replicas. We say we want one replica; that means one instance of this container running in a pod. We're giving it some labels, again run: hello-k8s, just like we did with the pod and the service. And then within this deployment, we have a template, which we didn't see before. This template includes some metadata, like our labels, and also has a container spec embedded within it. So in this case, with the deployment, we're gonna run one replica of our container running the image that we've specified here. Let's take a look at how that works. But first, before we move on, let's clean up after ourselves by getting rid of this pod. We're gonna keep the service there, but we'll get rid of the pod. Notice that nothing happened right away. When we mark a pod for deletion, it takes a little while before those resources are cleaned up.
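As an aside, the deployment walked through a moment ago, with one replica and an embedded pod template, might look roughly like this in YAML (the image and port are placeholders):

```yaml
# Sketch of the deployment spec described above (image/port are placeholders).
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-k8s
  labels:
    run: hello-k8s
spec:
  replicas: 1                  # one instance of the container in a pod
  selector:
    matchLabels:
      run: hello-k8s
  template:                    # pod template: metadata plus a container spec
    metadata:
      labels:
        run: hello-k8s
    spec:
      containers:
      - name: hello-k8s
        image: example/hello-k8s:latest   # placeholder image
        ports:
        - containerPort: 8080             # placeholder port
```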
So you'll see the pod hang around for a little bit before it disappears. All right, let's move on to deployments. So here is our deployment. Like we saw in the slides, we are asking for one replica, and here's the container that we want to run. So let's give it a shot. Okay, so to get our deployment up and running on the cluster, we're going to play part of "The Final Countdown". This is honestly the hardest part of the demo, playing these riffs correctly, so bear with me. So when our deployment was created, you can see it's visualized here, and that pod was also created with the deployment. You can see that the pod's name looks a lot different now; it's much longer, because that pod name was auto-generated by the deployment. And you'll notice that it automatically got that line connecting it to the service. The reason this works is because we had all the proper labels and selectors set up in our deployment and service. Now, this isn't very exciting, right? Here's our deployment with just a single pod and our service. What would this look like if, instead of having one replica, we wanted to have three? Let's try that now. And this time, instead of doing that with the keytar, let me show you what it looks like if we use the kubectl command line tool, just to give you another view into how people can interact with Kubernetes clusters. Okay, so we're gonna go side by side with the demo app on one side and our terminal on the other. And what we're going to do is use the kubectl command line tool to scale up our deployment from one replica to three replicas. And the way we would do that is this: kubectl scale, then the name of the deployment we want to scale, so deployment/hello-k8s, and then --replicas=3. So what that's going to do is say, hey, scale this deployment from one replica, or from whatever number of replicas it currently has, up to three. Three is our new desired state.
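Spelled out, the command being described (assuming the deployment is named hello-k8s, as in the demo) is:

```
# Scale the deployment to three replicas; three becomes the new desired state
kubectl scale deployment/hello-k8s --replicas=3

# Watch the new pods come up (filtered by the demo's label)
kubectl get pods -l run=hello-k8s --watch
```

These require a running cluster and a configured kubeconfig, so treat them as a sketch of the demo rather than something to copy verbatim.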
So as we do this, what you should see is that almost immediately two new pods are created. They're in that yellow pending state until each container is up and running. But now we have three pods running in our deployment. That deployment is managing these three pods, and our service is also associated with all three of those pods. So if I were to make a request to this service within our cluster, it would get routed to one of those pods, but it doesn't matter to me as a user which one it is. It also doesn't matter if one of these were to crash, because one, the service can route me to another pod, and two, Kubernetes would work to bring up another pod so there would always be three running. So for fun, let's take a look at what that self-healing, auto-recovery process looks like. You heard before how we played the intro to Beethoven's Fifth to kill the pod; let's do the same thing now. And remember, it's going to mark that pod for deletion. So while the pod will hang around for a little bit, you'll see that Kubernetes will immediately take action to bring up another one to replace it, even before the old one is all the way gone. And there you go. You can see this new one being created. And then once the pod that we deleted is completely terminated and cleaned up, you'll see that it disappears and we're back to our desired state of three pods, just like we asked for. Now, all of the Kubernetes API commands that I issued using the keytar and the API, we could have done via kubectl as well. Or, in the case of our OpenShift cluster, we could use the web console to view and interact with these resources. I'll give you a quick look here. This is the topology view within the developer perspective of the web console. You can see our deployment here. If I click into this, you can see we've got three pods running. I can also show the pod count, if that's helpful, right there. You can see our three pods that are running.
If I wanted to look at more details, for example on this deployment itself, I can always go in and look at the YAML for it here. But a lot of the time I like to work in the topology view, where I can get to most of the things that I need with just a couple of clicks, including pod logs, if you wanted to see what was running there. All right, so that concludes our demo, and we'll pop back over into the presentation now. So we've only looked at a few examples of Kubernetes objects: pods, services, deployments, and so on. There's so much more that you can learn, but hopefully this gives you a good starting point and helps you visualize how these objects work together on a cluster. We've also talked a little bit about the Kubernetes API, and I've shown you how you can interact with the API to take actions on a cluster, or how you can use the kubectl command line tool to interact as well. At this point, if you'd like to learn more, we have a lot of resources available. I've listed a few Kubernetes and OpenShift resources here. kubernetes.io is the official Kubernetes website; it has tons of great documentation, examples, and more. kubernetesbyexample.com is a great resource with some practical examples of how you can use and interact with different object types. learn.openshift.com is a self-paced, interactive learning platform where you can learn about Kubernetes and OpenShift in a hands-on way. You get quick access to an OpenShift Kubernetes cluster where you can experiment and learn. The final two links are to some more resources on developers.redhat.com. There's a page for Kubernetes resources as well as one for OpenShift, and there's a lot of great material in there to help you learn more. Next we have the GitHub repos. The first one here is the repo for the Kubernetes by Keytar demo app, if you wanna take a look at that. Below that we have the GoDaddy Kubernetes client. This is the Node.js Kubernetes client that I used in my demo.
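As a rough illustration of what the demo app does under the hood, here is a minimal sketch of creating a pod with the GoDaddy kubernetes-client library. The exact constructor options vary by library version, and this assumes the client can load credentials from your kubeconfig automatically; it also needs a running cluster, so it's shown only as a sketch, not the demo's actual code:

```javascript
// Hypothetical sketch using godaddy/kubernetes-client (not the demo's code).
const { Client } = require('kubernetes-client');

async function deployPod(podManifest) {
  // Version string selects the API spec the fluent client is built from
  const client = new Client({ version: '1.13' });
  // POST the manifest to /api/v1/namespaces/default/pods
  const res = await client.api.v1.namespaces('default').pods.post({
    body: podManifest,
  });
  return res.body;
}
```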
There is also an OpenShift REST client if you wanna interact with some of those OpenShift-specific resources that are available on top of Kubernetes, so, for example, things like routes and projects. Finally, if you'd like to get in touch with me, I'd love to hear from you. You can find links to my Twitter and GitHub accounts here. Thank you for watching. I hope you learned something about Kubernetes and the Web MIDI API and had a little bit of fun. Thanks so much for your time.