Cool. Hello and welcome. This is Hands-on Intro to Kubernetes and OpenShift for JS Developers at Node+JS Interactive. Thank you all for joining us this morning. I'm doing an overview for the cameras here because it looks like they're recording, so hopefully we're all in the right room and space. You can find these slides at this URL, bit.ly/k8s-interact. If you open up that link, you should see the same thing I have on my screen. And good morning, everyone. I'm Ryan J, Ryan Jarvinen. You can find me as ryanj most places online. I'll put this URL in our spreadsheet as well. Here is the spreadsheet we're going to want to take a look at for signing in. Okay, I put another copy of the bit.ly URL in the slides. But if you are planning on following along, get your laptop out and join us at this bit.ly address. My slides are at the other bit.ly address, k8s-interact. And this is where we'll be picking usernames. I have claimed user1 as my user ID. You are all welcome to grab your own user ID out of this list. You can mark your name if you like so folks don't claim your particular user ID. But for the rest of the workshop, as far as the computers are concerned, you will be known as "user" and then some number, right? So copy in that value wherever you see that username. And I think it says this in the spreadsheet, but when you're prompted to log in to OpenShift, the password that you're going to use is "openshift". Yeah, step, line three here: the password is "openshift". Cool, so everyone ready? I will jump ahead. So, Ryan and Jan Kleinert here. We are both developer advocates on Red Hat's OpenShift team. OpenShift is Red Hat's distribution of Kubernetes. You have probably heard of Red Hat before, whether from Red Hat Linux or many of the other Linux distros. We maintain CentOS Linux, Fedora Linux; CoreOS Linux is one of them. All of these distributions are attempts to help you all be productive with open source and particularly with Linux, right? And when we give you a distro, we don't just give you the Linux kernel and say, good luck, you're on your own. We give you a lot more support than that to help ensure your productivity: things like security, user access controls, a way to source packages from the community, a way to do system updates. And we try to do all of that in our Kubernetes distribution as well. That's what OpenShift is. So the first hour we're going to show you is basically all intro to Kubernetes, so you understand what's happening in the larger community. And then the second hour we'll flip into a little bit of what OpenShift adds to the experience, to really help you get some traction and find some productivity, hopefully. So first I have a quick survey for the folks in the room here to get a little bit more information about who you are and your background. How many folks here have experience using containers? Docker or some other, and that's cool. Almost everybody here looks like they're using containers or have used containers. How many folks here have experience using Kubernetes? Looks like half or more, abouts. Cool. That's encouraging. I've noticed a lot more hands going up at JavaScript events than I've seen in years past. So that's really cool. How many of those folks, and I'm kind of expecting decreasing numbers of hands with each of these, how many consider yourselves to be basically proficient with either the oc or kubectl command line tools? Anyone care to raise a hand on that? Cool. Some brave folks.
It looks like four or five folks. And how many folks feel like they can name five Kubernetes resource types or primitives? I'm not going to call you on it, but yeah, two or three people. Okay, not a whole lot of folks, but a couple folks feel like they can name a couple of these things. That's really cool. All right, so out of you folks that are remaining, how many feel like you can confidently say you have a plan for iterative web development that involves Kubernetes? Hey, awesome. Cool. Nice. Good to see. All right. Well, I would be very curious to chat with you afterwards to see what's working well for you and what's not. Usually what I hear from folks is that what they really need in their local development is to be able to make a small change, reload their browser, and see that change instantly. And using Docker or using Kubernetes, they don't always have a clear path for achieving that kind of real-time development speed on a container-based platform. So hopefully we have time to show you a little bit of that as well at the end. So first off, we've got an introduction; we're already going through that part here. Environment setup, then like I said, Kubernetes basics, and then we'll flip to hands-on with OpenShift. So let's go for it. I'm going to be floating around the room while Ryan does this first part, so as we start doing the interactive stuff, if you have any questions, definitely. Yeah, we have a small enough room here. Feel free to raise a hand at any point if you need clarification on anything that hasn't been said, or if you get stuck on any piece. But I am going to try to keep the pace moving along during this first hour, because Kubernetes is a really deep concept to try to absorb. It's a lot of info, and I'm going to have to work expeditiously to get us through the first hour on time. So at a high level, if you're not already familiar, it seems like half of you folks knew this already, but for the other half: Kubernetes is designed to be an ops tool. You folks, being JavaScript developers, are primarily going to recognize it as a collection of APIs for managing container-based workloads. It was patterned after best practices that Google had developed internally for wrangling all of their services. Everything at Google has been containerized for quite a while, over 10 years: Google Search, Gmail, apps, everything you touch in your web browser. And just based on the number of browser tabs I have open right now, I'm probably touching 20 different containers run by Google, just as me, a single user, right? So, Kubernetes. If you click on this link here, there's a link to what Kubernetes is and what Kubernetes is not. This is really clearly called out in the upstream Kubernetes documentation. They're trying to narrow the scope of Kubernetes so that it doesn't grow and expand into some huge, overblown project that is beyond its intended scope and is unwieldy. Some folks migrated over from the OpenStack community, where they faced certain organizational challenges, and some of the Kubernetes organization is an attempt to overcome the difficulties that past open efforts have had. So this focus and scope is really intentional.
But Kubernetes is designed not to be an all-inclusive platform as a service like you may have seen from Heroku, where you can give it a repo address, tell it what language you're working in, and get a hostname as the response, right? This is really lower-level APIs and tools. There's a dashboard available, but generally it's a platform for running containers more than it is a development platform. On the other hand, OpenShift, which is a CNCF-certified distribution of Kubernetes, does try to include platform-as-a-service-style workflows, multi-tenant security, a container registry, metrics, logs, and other things you'd need to have if you were going to run Kubernetes on a bare metal environment. It's going to give you everything you'd need to run the whole cluster on your own hardware without having to sign up for additional cloud services, if you're in the case where you can't. For more information specifically on Kubernetes, definitely check out the upstream documentation. It's available on GitHub and there's a really nice docs site. OpenShift has its own upstream source publicly available and some pretty decent documentation as well. So feel free to take a look at those later. For today's workshop, all you need is a laptop with a browser and you should be ready to go. Hopefully you've already picked your username in the sheet, so remember that for use in this link right here. If you haven't already clicked through to the workshop, go ahead and open that up in a second browser tab; I'm going to put mine side by side. Okay, so I chose user1 and a password of "openshift". Log in and you should see something like this: it'll be spawning up a user environment for you. This is a kind of homegrown project that some team members were working on that boots up a shell with some usage info in here. Okay, so that gives me a couple terminals. I'm going to switch the order of these. How are folks doing? How many folks have a browser and a user account? That's everyone on this side. How many folks are stuck? Anyone need help? No? All right, cool. So everyone sees generally what I'm showing here. You've got two terminals at your disposal. You can also choose to use one of your own terminals from your laptop, but you would need a couple of command line tools, oc and kubectl. You can get those later and try to repeat all of these slides' examples using Minikube. Or we also have a downloadable OpenShift called CodeReady Containers. Either of those can give you an environment where you can run all of this later from your own laptop. So let's get started. I'm going to paste in a couple variables to initialize my shell, and just to make sure that everyone here is familiar with how to copy and paste with this virtual terminal, let's see if we all know how to use our keyboards. I'll do a quick check. Since I'm on Linux, I'm going to use Ctrl+C to copy over here. And then, anyone know how to paste into a terminal on Linux? Ctrl+Shift+V. See if it works on your system. Or if Cmd+V works for you, great. But if not, Ctrl+Shift+V hopefully will get you to paste into the terminal. I'm going to paste these shell variables into both of these terminals so I have everything ready. And then I'm going to verify that I should already be logged in in one of these terminals, since we logged in via the web prompt. I'm going to run oc whoami in order to verify my user ID. I could bump this font a little bit for you. And don't run this next one, but there's an example.
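A sketch of what that login example looks like; the API URL here is a placeholder for whatever address your workshop cluster advertises:

    # log in against the cluster's API endpoint (URL is illustrative)
    oc login https://api.cluster.example.com:6443 --username=user1 --password=openshift

    # confirm which user the CLI now thinks you are
    oc whoami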
If you were not logged in for some reason, you can run oc login in order to generate some login credentials. This is a pretty basic feature, and it's something that's not included in Kubernetes by default. Generally, your administrator will have a kubeconfig file that holds what are essentially the root credentials. Hopefully they don't give out that file; hopefully they lock the system down. But worst case, they're giving out admin credentials to everyone in the cluster. OpenShift includes a nice login command that will help you initialize your access to the cluster with an appropriate level of resource controls and permissions. So hopefully you see the right username echoed back here. Let's also run kubectl version and test that we have a connection to the cluster. So here I'm seeing two responses from the command: one is the client version and one is the server version. This kind of dual response is something we're going to see pretty commonly from Kubernetes. So keep an eye out for sending a request and getting two responses back as your answer; we'll see that kind of pattern coming up soon. Okay, if you have all of those responses... any errors, anyone need help? Jan's got you covered on the late arrivals. Hopefully you can catch up. No problem. Good luck catching up. Cool. I'll stall a little bit and give you all some background while we have one extra person logging in. So, one thing that you'll not be touching today: there is a database within every Kubernetes cluster called etcd. It was developed at CoreOS and has been donated to the CNCF, the Cloud Native Computing Foundation. It's a distributed key-value store with automatic leader election. If you'd like to see what that looks like, I've got a small demo here. Say this green node is currently the leader in the cluster; I can hit restart on that particular instance. Let's see, or I could stop it, do something like that. I got rate limited. Node 2 has already started. Anyway, maybe the demo's still rate limited. Here, it looks like node 2 is down. The cluster elected a new leader and is now doing replication across, and is able to keep a consistent data store across these five nodes. So this type of high availability for all the statefulness of the whole platform is stored within this etcd database. If you want to know a lot more about etcd, take a look at these links here. But that's kind of sitting behind the scenes. In front of etcd, we have the Kubernetes API, which is going to check all of our access control and make sure that only the correct people have write access into that data store. If we allowed anyone to read from etcd or anyone to write to it, then anyone could modify the state of our cluster; they'd essentially have root access to our cluster if they had access to that data store. So the Kubernetes API is going to be an enforcement layer that protects that etcd database. Every time we have an interaction with the Kubernetes API, I want you to keep an eye out for these five attributes: kind, apiVersion, metadata, spec, and status. They're going to be available on almost every piece of data that we fetch from the API. The two I want to emphasize most critically are spec and status. I think if you don't remember anything else, remember that Kubernetes provides an API that's asynchronous, and the two attributes you're going to be focused most closely on are going to be setting the spec and then reading from the status.
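As a concrete sketch of that spec-versus-status split, once a resource exists you can pull just those two fields out with jsonpath; the deployment name here assumes the hello-k8s example used later in the workshop:

    # what you asked for: the desired replica count in the spec
    kubectl get deployment hello-k8s -o jsonpath='{.spec.replicas}'

    # what you actually have: the count the cluster has achieved so far
    kubectl get deployment hello-k8s -o jsonpath='{.status.replicas}'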
And so, when I said Kubernetes always gives you two responses: it'll tell you, well, here's what you asked for. You told me you wanted five containers, you said five containers in your spec, but currently I'm out of memory and I was only able to spin up two containers. It'll give you the honest answer: I wasn't able to do it, we only got partway there. It'll give you a realistic answer about the state of the platform, both in terms of what you requested and the actual state. That's going to be the spec and status fields. For a full reference, check out this big link at the bottom to the Kubernetes 1.17 APIs. For today, we're going to focus more tightly on these five basic API resources. So the first one that we're going to look into is called a node. Everyone here at the Node+JS Interactive event knows exactly what I mean when I'm talking about nodes, right? This is one of the difficulties I find with talking to folks, especially JavaScript folks, about Kubernetes: there's a lot of terminology overlap. And this is a prime example right here. In Kubernetes terminology, a node is a host machine, physical or virtual, where your containerized processes are run. So just keep in mind, when you're talking to Kubernetes folks, they may be talking about nodes in a slightly different way. Node activity is managed via one or more master instances. And I'm going to try running this command right here and see what I get. Let's all try this out and see. Oh, forbidden! That's exactly what we should see. I'm going to run an oc login really quickly, log in as an administrator here, and run the same command. And now I can see the list of nodes. It looks like for this particular cluster, we've got 19 nodes. So since I'm logged in as an administrator, I can run the query, list nodes on the API, using this command line tool: kubectl get nodes. And apparently average users do not have access to retrieve that data from the API. So hopefully you've learned there's a data store, and not everyone gets access to it. And kubectl get nodes is a way to list resources by type. Let's see. Oh, so here are my observations. Does basically everyone agree with this list of observations from this initial section? I know we've only run one command. But any questions about this first part? No? Perfect. That is what I hoped. All right. So your JS runs on nodes. Kubernetes is going to actively manage processes; we'll see that in the next section. And we're trying to run on a large, cluster-scale system where, if individual nodes fail or individual processes fail across it, we always have sufficient capacity to route around these problems and keep a highly available solution exposed to our users. So, next section: pods. Here is a quote from one of my team members, Steve Pousty. He used to say, pods scale together and they fail together. This is one thing I like thinking through in my mind when I'm trying to architect my solutions in Kubernetes. I like to think of Kubernetes in a way as a modeling language for my solutions. And one of the most fundamental units, other than the node, which I just gave you a brief look at, is the pod. A pod is the first resource we're really going to look deeply into. A pod is a group, in Kubernetes terms, of one or more co-located containers.
The folks at Google, when they were scheduling containers across their cluster, often found that sometimes they would need to schedule not just one container; they'd need a sidecar of some sort attached to a container. And if the sidecar ever failed, they'd want to make sure to reboot both processes as a group. So this is multi-process, but all co-located. One example I try to get folks to volunteer: well, hey, where would you want to have two things run together? And I usually try to trick someone into offering WordPress as an example of, here's where you have a front end and a database. WordPress has PHP and it has MySQL, and you want to run them together, right? That's actually not a good example for tying two containers together in a pod. And the reason why is just this quote right here: pods will scale together and they'll fail together. So if I wanted to scale up my front end, my PHP instances, I don't want to add a database with every web instance that I add, right? I want to be able to scale those two tiers independently. And since they need to be scaled independently, they cannot be grouped together in a pod. So for our purposes, we're basically going to have one container per pod. You can almost think of a pod as a container, but I just need to point out that you can have multiple processes, and the way to do that is multiple containers per pod. Cool? So let's try to run a basic query. This one, I swear you will be able to execute successfully. Unfortunately, it'll return an empty result, because you have not provisioned any pods yet. So let's take a look at what a basic pod spec would look like. I have up here on the screen, hopefully you can see, the result of this curl statement. And inside I have the five attributes that I told you would be there. There's a kind. All this data is internally typed and versioned; there's an API version. There's a metadata section. There, particularly, we can see it hasn't been created yet, so the creation timestamp is null. It has an ID, or a name, that will need to be unique within this namespace. And then there are some labels; we'll learn more about labels in the next section. And then, like I said, there's a spec. And currently we don't have a status. That's because we haven't created this yet, and Kubernetes will start filling in the status as it makes progress towards achieving the spec that we've requested. Does that make sense? So I'm going to run a command to provision this container of Jan's; thank you, Jan. We're going to provision the Node.js workshop image from Docker Hub. So feel free to follow along and copy and paste this kubectl, or kube cuddle, depending on how you like to pronounce it: kubectl create -f, and paste that file in. That should essentially tell the API that you want to load that JSON and you would like to provision a new pod. Any questions about that piece? Kubernetes is an API; you can manipulate these API endpoints to do work on the cluster. So congratulations: if you hadn't before, you have now provisioned your first pod. If you wanted to access the API using curl, not super advisable, but here's an example; feel free to copy and paste if you're interested, to show how you would do that same listing of data by type just using a raw request. And if you look at the path here, you can see api/v1. v1 was in our spec as the API version, and so that's also encoded here in the API path.
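For reference, a minimal sketch of that create step as one paste-able command; the image path is illustrative rather than the exact workshop image:

    # create a pod from an inline spec (image name is a placeholder)
    cat <<'EOF' | kubectl create -f -
    {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "hello-k8s",
        "labels": { "run": "hello-k8s" }
      },
      "spec": {
        "containers": [{
          "name": "hello-k8s",
          "image": "docker.io/example/nodejs-workshop:latest",
          "ports": [{ "containerPort": 8080 }]
        }]
      }
    }
    EOF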
And this API path is actually going to be almost identical to the storage path we'd see if we were able to access etcd; the etcd storage path looks almost identical to this. The Kubernetes API is really just doing enforcement and access control on top of the etcd API. So let's go a little bit deeper. Instead of fetching all pods, all resources by type, let's try to fetch an individual resource by type and ID. You can do either type space ID or type slash ID, either format works fine, and we can output the result as JSON. Here's how I could do that with curl, and the same thing with the command line. So I did get pod to fetch the resource of type pod with the name hello-k8s. We can all do this at the same time, and we can all have the projects named the same thing, because we're all in different namespaces. So this hopefully is working for everyone. Any questions about this section? Makes sense so far? One thing I would like to point out is the difference from our initial spec. Let's see, we initially had this curl statement, and if I count the number of lines in there, we initially had 25 lines. And if I do get pod hello-k8s -o json and count the number of lines after it hits the API: wow, Kubernetes filled in 130-some-odd lines of additional information. So we can take a look at what changed. Well, now we have a status field. This didn't exist before, and Kubernetes has started filling in all of this information about, you know, is the container up and running? How's it doing? It's making a lot of reports into that status field. It still has a spec field as well, and the spec field has also grown quite a bit. We've added in some default resource limits here. There's now a creation timestamp that's been populated. Quite a bit more data in there. So Kubernetes will do a lot of work for you automatically, but it's also really nice to have a clear starting point that you can hand off to other users on your team. As the folks attending this session, I would expect you will need to do a lot of work to serve up these JSON or YAML files to your team members, so they don't need to learn what is a pod, what is a deployment. You almost want to hide this as much as possible. OpenShift gives you some nice ways of providing real smooth paths, a platform-as-a-service, Heroku-style experience on top. We'll see that coming up soon. Let's take a look at the exact same data, but instead of -o json, I'm going to add -o yaml. And if you had a team that was really keen on using YAML instead of JSON for whatever reason, maybe you like comments or you dislike curly braces, either one works. Oh, lost my signal. Did I lean into the presenter too hard? All right, cool, we're back. One other thing you can try is the kubectl describe command. This is meant to be a more human-readable output, assuming humans like tab-separated responses. But yeah, kubectl describe is another verb you can use in addition to get: getting by type, getting by type and ID. You can also describe instead of get, in order to get a slightly differently formatted output that's probably a little more human-readable. Observations from this section: API resources provide a declarative specification and asynchronous fulfillment. We learned about spec and status. And since there's only one process per container, it's very easy for Kubernetes to judge whether that single process has failed or not.
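To recap the fetch variants from this section as commands, using the pod name from the session:

    # list all resources of a type, then fetch one by type and ID
    kubectl get pods
    kubectl get pod hello-k8s -o json    # full JSON, including the server-filled status
    kubectl get pod hello-k8s -o yaml    # the same data rendered as YAML

    # a more human-readable summary of the same resource
    kubectl describe pod hello-k8s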
And then restart the container as a result. Back to the observations: pods are scheduled to be run on nodes. We can actually see that if we look in the JSON. I think there is... where does it get set? There's a field in here, nodeName, right here. We can see the node that it got scheduled onto in our spec. And the API supports both JSON and YAML equally. Any questions from this section? Welcome to Podtown. You now know what pods are. All right: services. Services, abbreviated svc, give you a single endpoint for a collection of replicated pods. I think this is a confusing term coming from the web world. I think of a service as a web service; that's my Apache server or something, usually. But this is more a service from a network endpoint perspective. It's a single identifier for a group of web services. And we can generate one using the kubectl expose command. So I'm going to run that real quickly and then take a look at the result. So I've generated a new... we can see there's an API version, a kind, a metadata field. We have a spec and a status: all the things that I said we would find. The spec's selector field happens to have something that says run: hello-k8s. This will come in a little bit later, but this is actually going to be run as a query selector against the API, searching for this label: a key of run and a value of hello-k8s. And this load balancer will forward traffic to anything that matches that query, any pods that match that query. We'll see a little bit more about that in a second. But first I want to show you another nice feature of these services. Anytime you create a service in Kubernetes, kube-dns will automatically start providing name resolution for this value. So we can now curl hello-k8s within our individual namespace, and hopefully you'll see a response from the container that you provisioned. Everyone able to see that? Raise your hand if you don't. We caught him. Everyone saw it, hopefully. All right. Cool. Congratulations, hopefully that worked for you. Another nice tip: if you want to slice specific values out of the JSON response, you can use this get with a resource type and ID, and then instead of -o json, use -o jsonpath to select out a particular field. This particular field is the nodePort value. If I wanted to try to access this container from outside the cluster, I could try hitting an address like this. Unfortunately, this is still an internal IP for Amazon. But if I had an external IP, I ought to be able to curl this high-numbered port on any node in the system, and it'll get forwarded to the right service internally. It still doesn't give you full domain name service; you'd still probably need load balancers in front of that. But that's your shortest route to getting traffic into a cluster from the outside: this NodePort service gives you an easy way to access these services on a high-numbered port from outside the cluster. Communication inside the cluster is super easy, as we have just proven. We can see I currently have one pod running, serving those requests. And you can see in this command I'm running get pods -l. This is a new type of query. We're querying for resources by type, doing get pods, but we don't want all pods; we want only the pods that match this particular label selector. That's what -l is: label selector. So we want to find all resources by type, assuming they match this key and value in their labels section.
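Roughly, the service steps from this section as commands; the port matches the 8080 the app listens on, and --type=NodePort is what surfaces the high-numbered port mentioned here:

    # put a service (load balancer) in front of the pod, reusing its labels as the selector
    kubectl expose pod hello-k8s --port=8080 --type=NodePort

    # kube-dns resolves the service name inside your namespace
    curl http://hello-k8s:8080

    # slice one field out of the JSON with jsonpath: the assigned high-numbered port
    kubectl get svc hello-k8s -o jsonpath='{.spec.ports[0].nodePort}'

    # query pods by label selector rather than listing everything
    kubectl get pods -l run=hello-k8s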
Our service and our pods happen to have that match, and that's how it does the mapping from the service to those pods. So if we delete all of the pods that the service is routing traffic to, that should cause this curl to fail even though the service still exists; the service is no longer able to pass the traffic on to any pod. The only thing I'm trying to prove here is that the services and the pods can exist independently. You can have a service that doesn't have any pods associated with it at all. You can also have, there's a type of service called a headless service. I don't know if I agree with the name, but: headless service. You can have something that is a service, that shows up within the cluster with local DNS, kube-dns resolution, but the service is actually pointing back outside the cluster to a legacy data store, right? A big Oracle database or something. So your microservices within the cluster still have discoverability, as long as you're creating this service abstraction for them to have something to resolve against. The service that you create can point to pods matching a label selector, or it can point back outside the cluster to something else. So anyway, a service: that's kind of a load balancer, or a network endpoint, for a collection of processes. So with this, hopefully we have deleted our pods and deleted our services and gotten back to a clean state. Any questions from this section? No? Nothing? Man, you're a quiet group. I should have brought coffee for you all. All right. Service basically means load balancer; hopefully that's clear. Label selectors can be used to organize workloads. Once we have a pod provisioned, we can relabel it, change the labels, in order to remove it from behind a load balancer or to surface it behind a load balancer. The service uses label selectors. That's it, yeah. And they can be deleted and created independently; there's no lifecycle linkage between the two. Deployments. Any questions before we move on to deployments? Jan, do you want to attempt jumping into this section? All right, all right. Okay, well, I'm going to try to power through and we'll swap at the OpenShift part then. Okay, so we still have a lot to cover. No one's asking any questions yet, so I'm going to try to pick up the speed; we'll see if I lose any of you in this next section. All right, a deployment. Now that you have all created pods: never, ever do that again. This is like math class where it's like, oh, now I've introduced algebra two and now you don't have to do long division; you know, here's a calculator. Deployments just generally solve a lot of the stuff that we just did with pods. Pods were an earlier abstraction, and good to learn because they're your fundamental unit of scale, but deployments are how you scale up a collection of pods. So this is a much more useful abstraction. Let's dig into deployments and learn how to really get work done. This is going to help you specify container runtime requirements in terms of pods, now that we know what a pod is. So we could have a shorter command here; we could just run the top half of this in order to deploy Jan's image that we previously deployed in that pod specification. But I'm going to add an extra line, these extra flags: --dry-run and -o json. What do those two flags allow me to do? --dry-run says: instead of immediately provisioning this deployment, marshal up all the JSON or the YAML and throw it all to standard out.
Does that make sense? The reason why I like showing this extra step is that it gives you a clear way of generating your own deployment spec. Then you can hand that off to other developers, or put it in a Helm chart, or you have a way of reproducing this and modifying it: changing the labels, changing the resource allocation. You have a starting point that you can continue iterating on, and something you can give to junior developers, where you don't have to really explain what a deployment is or how to use kubectl in an advanced way. Hopefully they can kubectl create and then get back to developing. So that's why I have the second half here. So let's all create a deployment.json file, so we have something we can share with other users. This is actually showing me some deprecation warnings. Good to know that there are changes upcoming in the API that I might want to know about. I tried using this generator, run-pod/v1. And if we wanted to make that pod.json that we had earlier, adding in that generator flag here essentially gives us exactly the pod.json that we started with. So in case you wanted to generate that: it looks like the deprecation warning we were just shown is actually giving us advice about this newly added feature, and now we have a clean way of generating pod specs as well. Last time I did this workshop, that was not available. So, new stuff coming down the pipe as we work. Let's also take a look at our deployment.json. Does it have the five attributes that I mentioned? It has a kind, a version. And I may need to update this if it's being deprecated soon. It has some label selectors. Remember how we did that selector that was set to run equals hello-k8s? This label selector is going to say our deployment should match that label. Anytime someone does a label-based query with resource type deployment, this is going to match on those key-value labels. There's a spec that says what our current replication level is. And there's a template, which is basically an embedded pod spec. You can see inside this template there's a second spec; this is the pod spec dumped within the deployment spec. And then here's the status, for how far along we've made it in this particular deployment's progress. So let's launch that deployment. Now that you have that deployment file, you should be able to store it on GitHub or hand it off to anyone, and anyone should be able to kubectl create it into their own cluster to deploy that particular container. We're going to run this kubectl expose command in order to make a service, just like we did before, except this time we're exposing a deployment instead of a pod. And we're adding on the dry run flag in order to create a service.json. This is just to show you that Kubernetes is like a modeling language, and you use these JSON files to model the topology of your microservices solution. So if you have lots of microservices, you're probably going to have one service file per microservice and one deployment per replicated web tier, essentially. You may end up having collections of these in a repo, in which case you could do something like kubectl create -f and then a directory path: everything from staging/*, you know, all the YAML files in that folder, let's launch them all. You can give it a path as well as a file name. Any questions about that piece? No.
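Collected as commands, that generation flow looks roughly like this. Note this matches the kubectl of the era named above (the 1.17 timeframe), where run still generated deployments; newer kubectl versions moved this job to kubectl create deployment. The image name is again a placeholder:

    # marshal a deployment spec to stdout instead of creating it, and save it to share
    kubectl run hello-k8s --image=docker.io/example/nodejs-workshop --dry-run -o json > deployment.json

    # the generator flag from the deprecation notice: emit a plain pod spec instead
    kubectl run hello-k8s --generator=run-pod/v1 --image=docker.io/example/nodejs-workshop --dry-run -o json > pod.json

    # anyone can then provision from the shared file, or from a whole directory of them
    kubectl create -f deployment.json
    kubectl create -f staging/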
I'm going to create the service, and we're going to see what we get as a result of this query. I'm running kubectl get po,svc,deploy: po is short for pod, svc short for service, deploy short for deployment. This is listing multiple resource types at once from the command line; nice to know that you can easily do that as well. And now that we have a pod, a deployment, and a service, I should be able to run curl and verify that we have access to the container. Cool. Next step: let's scale up that container and see if we can demo some of the high availability features of this cluster. I can use the kubectl scale command on the resource of type deploy whose name is hello-k8s, and I want to update the spec with a new replicas value and set the replication to three. Let's list all resources by type where type equals pod, and it looks like I now have three containers up and running. Hopefully you have three up and running; if not, it may still be working towards that goal. And if it's not there, hopefully it'll give you the truth about how much progress has been made. Here's another nice trick that will use whatever your default editor is. So we saw kubectl get: getting resources by type, getting by type and ID. What about kubectl edit? What do you imagine that does? It looks like I've opened this resource in an editor. I'm going to find the replicas line and edit it to five. Feel free to follow along if you dare. It looks like vi is the default editor. So I went over to this line and hit s for substitute, then five, and now I'm going to hit Escape, colon, w, q, for folks that aren't used to vi. And what do you imagine this will do? Am I going to write this file locally? Where's this going to go? This actually will send the file back across the network and save it back to our etcd database via the Kubernetes API. And now, if I get pods, I have five pods up and running. Not that I would recommend live-editing these resources in the API, but if you're learning, this may be a great way to tweak certain values. If you're just scaling up a web service, you probably want to use the scale command instead of a kubectl edit. But this command is medium smart: if I open this up and write it out without any changes, it'll notice I'm not actually shipping any changes to the API, and it'll give me feedback to that effect. So it does a decent job of allowing you to quickly edit things while staying out of your way. And if you customize the EDITOR variable, you can use something other than vi as your default. So cool, we have all scaled up. I'm going to run this get pods --watch in this lower shell down here, and I am not going to background it; I had this ampersand on the end, but I'm just going to leave it running in the foreground. That's going to continually keep an eye on my number of pods and leave the connection open. This is like a streaming connection. JavaScript folks: as a JavaScript person, I am always really excited when I see fully asynchronous APIs. But then, in addition to that, streaming APIs, where I can continually get a streaming response as the updates come in. So it's huge that the API supports this type of watch functionality, in my opinion, and that there's a nice way of accessing it from the command line as well.
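Gathered up, the scale, edit, and watch steps, plus a hedged stand-in for the pod-deleting query that comes next (the slides embed a jsonpath query; the -o name form below gets the same effect):

    # set the desired replica count in the deployment's spec
    kubectl scale deploy hello-k8s --replicas=3

    # open the live resource in your editor; saving posts it back through the API to etcd
    # (set EDITOR or KUBE_EDITOR to use something other than vi)
    kubectl edit deploy hello-k8s

    # hold a streaming connection open and print pod changes as they happen
    kubectl get pods --watch

    # the upcoming "shotgun blast": delete three randomly-chosen pods by name
    kubectl delete $(kubectl get pods -l run=hello-k8s -o name | head -n 3)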
So this embedded query here is going to do a fetch from the API to get a series of random pod names, space-separated, so that I can run kubectl delete pod and delete three resources by ID. That's basically what this complicated command is doing. So feel free to copy and paste, and what this ought to do is act like a shotgun blast of damage across your cluster and take out three random containers out of your group of five. So let's see what happens when we sustain some damage. It looks like, right away in our watch down below, the Kubernetes API has recognized that these containers went missing. It was our fault, of course, here, but this could just as easily have been one node of our cluster suddenly going offline, with all of its processes suddenly unaccounted for. The API is going to recognize that a node has gone offline, flag those containers as being down, and provision new containers on other available nodes in order to get us back up to our expected allocation, which was five running pods. So hopefully you're back up to full health at this point. Another thing you can do is get deploy (oops, typed it a little bit wrong), get deploy, and that ought to show you how many are ready, up to date, available, all those nice details. So hopefully you all have five healthy pods. Feel free to rerun that shotgun blast of damage as many times as you like, and hopefully it'll keep regenerating. That's what deployments do: deployments allow you to have a replication spec, and then, whatever happens throughout the life of the cluster, it's going to continue working to achieve your spec, even if you artificially knock it out of alignment. It's like your thermostat: you set it to 72 degrees, and even if you leave the refrigerator door open and the window open, it's going to keep trying to heat the house or cool it, depending on what the temperature is outside. So yeah, it'll keep working to achieve your goal and give you an honest answer in that status field. Observations from this section: the dry run flag will help you generate a new resource specification. A deployment spec contains a pod spec in its template field. And the API provides edit and watch operations in addition to get and list. Any other questions? Yeah? So in this first one, if I want to create a YAML file, then I include dry run in order to create a file. If I don't care about creating a file at all, we can just do this command; this will create the deployment immediately. It will generate the spec and then post it to the API. It'll do two steps in one, right? Create it, post it to the API, and not even write it to disk. If you want an intermediary step where you create it, write it to disk, and don't touch the API, then you have a file that you can share; that's why I added the extra flags. But you can skip a step just with run. And then run and create: this is also just one step, but based on the file you have. So it gives you an opportunity to edit the file, change the labels, change the default replication, so that when they do the create, the default is five replicas, right? So I like having that extra step; when I share things, I have something more customized that I share. So it depends on what you need. What I do on the command line is this, but if I want to share it, dry run. Good question, thank you. So, last section, and then I will hand it over to Jan for the OpenShift pieces: replica sets.
A replica set provides replication and lifecycle management for a specific image release. Does anyone remember what my title was on that last section? Deployments help you... it's almost the exact same thing: replication and lifecycle management for a specific image release. Let's see how it's different from deployments, because this sounds very similar on the surface. So let's take a look at the current state of our deployment. Hopefully, you are all able to fetch data. You were able to fetch it when you had a single replica, and you scaled up to five. We had some damage, but we recovered and it still looks healthy. I was watching the pods in this lower terminal; I'm going to do a Ctrl+C and break out of that (or foreground the job first, if you backgrounded it). And I'm going to run get deploy with -w, or --watch, either one, and we'll watch the deployment. We'll make a deployment dashboard in this lower terminal. And then, in the upper terminal, I'm going to run kubectl set image. This command is basically going to edit the... it's going to pull down the deployment resource from the API, find the pod spec within the deployment spec, and then look at the identifier of the image within the container, within the pod, within the deployment; it's all nested in that JSON. And it's going to update the image value and set a new tag. Now we're adding :v1 on our container. So this is going to roll us forward to a new deployment. Let's assume our developers have already cut a release; something's already made it past QA. This is more a release-management move than a developer move, but let's go ahead and roll forward. And we should see in our dashboard below that some activity is happening. We can get rs to look at the replica sets, and it looks like there is currently some action going on. I'm going to run this... we can already see the new value. And if I get replica sets, it looks like I'm fully rolled forward from this one to this one, whatever that means. Let's see if we can find out some more info. I get pods, and if we look at the names of the pods, you can see hello-k8s; this is named after the name of the deployment. And then there's this middle identifier; this is an identifier for the replica set. And then this is a random ID for the individual pod. So all of these from the old replica set are terminating (oops, scrolled up too far), and the new replica set is running, and we have our new response: "Good morning" for the classroom. Was everyone able to roll forward there? Yeah, no problems anywhere? Perfect, excellent. Now let's take a look at our rollout history. And it looks like we have two revisions currently that Kubernetes has tracked relative to this deployment we've just created. I can do a rollout undo in order to roll back. And let's run some curl requests, and we can watch as this changes. That should be 8080. There we go. Yeah, hopefully that was a typo on my part. But as long as these applications are stateless web apps, and you're storing your session information in a distributed cache like Memcached or Redis, then you ought to be able to do zero-downtime rolling deployments if you are reasonably stateless in your architecture. So Kubernetes is great for high availability of your web resources, zero-downtime rollouts and rollbacks, which is sometimes a whole lot of stuff that developers aren't always concerned with. So I'm going to do a cleanup.
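Before the cleanup, here's the roll-forward, rollback, and teardown sequence from this stretch collected as commands; the container name is assumed to match the deployment name, which is what the run generator produces, and the image is the same placeholder as before:

    # roll forward: rewrite the image tag nested in the deployment's pod template
    kubectl set image deploy/hello-k8s hello-k8s=docker.io/example/nodejs-workshop:v1

    # watch the old and new replica sets trade places
    kubectl get rs

    # inspect tracked revisions, then roll back to the previous one
    kubectl rollout history deploy/hello-k8s
    kubectl rollout undo deploy/hello-k8s

    # the cleanup: delete both resource types sharing this name, then verify
    kubectl delete service,deployment hello-k8s
    kubectl get all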
Let's do kubectl delete service,deployment: we're going to delete two resource types in one shot, as long as they have the same ID. Does that make sense? Whoops, just went back to terminal three; got picture-in-picture enabled somehow. Figured it out. Cool. All right, sorry for the technical difficulties. Let's see what happens. And then I'll do get all, and... looks like there are a couple pods being cleaned up, but otherwise we have hopefully cleaned up after ourselves. Any questions on that section? No? Do you understand the difference between replica sets and deployments? Basically, with replica sets: if I have that initial image that's :latest and I have five replicas, and I want to roll forward to a :v1 tag, the deployment will create a new replica set, and it will start scaling up the pods on this new replica set with the :v1 image. And since we requested a spec of five, the deployment is going to try to keep us at five even while it's doing this rolling deployment from the first replica set to the second. So as it scales the new replica set up, it'll scale the old one down, trying to roll us across while keeping us at an even five containers as it does the rolling deployment. So the deployment resource, under the hood, is actually using a replica set resource to manage the pods. The Kubernetes API has higher-order resources that leverage lower-level resources in order to do automation, and a deployment is a higher-order resource that takes advantage of replica sets, primarily, which in turn take advantage of pods. It's all stacked like a Russian doll, and the best thing I can recommend is: use deployments when possible, because they already take advantage of all the lower-level pieces, and that'll keep things nice and simple for you. But hopefully you understand that this is like a modeling language with building blocks, and the more you learn about it, the more you learn how to architect your solutions. And then you have a giant pile of YAML and/or JSON that hopefully you can share with junior developers, to give them a clear starting point and make things easier for them. So I'm going to do a check-in on folks now that we're through the first half. How many folks have experience using containers? I already asked this one and it was a hundred percent, right? How many folks can say they have experience using Kubernetes? A hundred percent? How many feel like you're maybe basically proficient with kubectl? I think, hopefully, you've done enough command line interactions: you can list resources by type, grab them by ID, edit them if you need to. How many people feel like they can name five basic Kubernetes primitives? Anyone feel like they can't? I won't single you out, I swear; I'm not going to make threats at folks. All right, so hopefully you are all ready to see what OpenShift adds on top of Kubernetes. We saw a lot of low-level, ops-focused use cases, which are great to know if you're trying to replicate something as production-quality as possible; nailing these JSON templates allows you to reproduce things really easily. But allowing for that real-time, iterative web development is super important, as is being able to not overwhelm junior developers with terminology, especially when you're trying to tell them that a node is something different, or a service is something they're not used to. This is where OpenShift comes in. Jan, you ready to take it away? Cool. Yeah, you bet. And do you prefer your own laptop, or is this fine?
So for the rest, I'm going to press the picture-in-picture button up... okay, I'm a leaner too, so we'll have to see how this goes. So for the rest of the workshop... oh man, this is not my trackpad. Ryan, I just want to make it take up the whole screen. Cmd+F, on the Windows keyboard. Okay, okay. So for the rest of the workshop, we're going to be focusing in this one window. You have this panel here on the left-hand side where it says Workshop Overview; that's going to be your instructions from now on. Go ahead and click that blue continue button and I'll explain what we're doing here. So we're still going to be working in this web terminal. Let me Ctrl+C down here. But this has some click-and-run steps. So this oc help, this is just to make sure that these commands will actually run for you. We've been using kubectl, or kube cuddle, whatever you prefer to call it; we're going to be using oc from here on out. oc does everything kubectl can do, but additionally has some of the features of OpenShift. So it's OpenShift's command line tool that does everything kubectl can do, plus some other things that we'll see in a moment. And you can get the help for that command right there. So hopefully that worked for you. We're going to be using a project. You've already been working in a project, this user1 project that you're in. Projects are somewhat analogous to namespaces in Kubernetes, but a project is an OpenShift construct that also ties role-based access control to your namespace. So you, as user-whatever-number, have access to the user-whatever project, but you don't have access to my project. You don't have access to all the projects like the admin of the cluster does. So you're working in a single project right now. If you run this oc project command, and you can type these or just click the button to execute them, it should tell you what project you're using. Oh, I've done it. Here we go. I have to do this at arm's length; I do this on treadmills all the time, where you lean forward and then it turns off. So if you click that, you should see your own project come back there. So, we haven't looked at the OpenShift web console yet; we're going to do that now. There's a link here you can click, or you can simply click on the word Console up here, and that's going to drop you in. You might have to log in; you're probably already logged in. So what this is, and we've got tiny resolution here... there we go, we'll pull it over so you can see the menu. This is the OpenShift web console. If you don't want to use the command line, ever or sometimes, you can use the web console to get a lot of the same things done. By default, it's going to drop you into this Administrator view, and you can see that in this toggle up here. This is kind of the default view if you need to do more ops-related things in the cluster. There is also a Developer view. And I didn't click my project first; select your project. I'll show you what I'm going to do. Click into your project here to make sure you've actually got a project selected. Left click; again, not my trackpad. And then go in here, and now you're in this Developer view. There is nothing to look at right now, because we haven't got anything deployed, because we cleaned up after ourselves after that last section.
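Backing up, the command-line bits from this part were just these two; oc mirrors kubectl's verbs, so everything from the first hour works here too:

    # oc wraps everything kubectl does, plus the OpenShift-specific features
    oc help

    # show which project (namespace plus role-based access control) you're working in
    oc project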
But now we're going to look at using the web console and a project called Source-to-Image to go from source code to a container running on the platform, without having to build the image manually, without having to create a Dockerfile, without having to already have an image available. We're going to use some of the features of the platform to do that for us. So, Source-to-Image, as I mentioned: this is an open source project. It's included with OpenShift, but you can also use it outside of OpenShift; it's available for use. And what it does, essentially, is you give it source code, like a GitHub URL, and you can either tell it what kind of code it is, you could say this is Node.js, or it can usually infer that based on information in the repository. So if there's a package.json file, it's going to say, okay, I'm going to use a Node.js builder image to create this application image. That's what we're going to do now. So if you're willing to, and if you have a GitHub account, the best way to do this would be to fork this repository of Ryan's, because that way you can set up... what's our timing, do we even have time to do the webhook? My phone's over there. All right, you can try it. So if you want to, go ahead and fork this. I'm not going to use yours; I'm signed into mine, so I'll do my own. But if you fork Ryan's repo here, then you'll be able to do this with your own copy of the code. I'll give you a minute to do that, but I'll go ahead and start walking you through what we're going to do next, and I'll pause to let everybody catch up. So over here, this topology view is going to look cool in a minute, once we have something deployed. In the meantime, when there are no workloads running, it gives you, screen resolution is really odd, there we go, that's a little better, some options for different ways that you can deploy things. We're going to use From Git, but just to walk you through what else there is: you can deploy an image, which is what we were doing on the command line before. You can deploy from a catalog; this is going to give you a catalog of things on the cluster that are available to build off of. I'll show you really quickly. So in the developer catalog, you'll see things like languages and runtimes, so if you've got something that's PHP or whatever. There are also databases you can deploy, CI/CD solutions, Jenkins, whatever you want; you can deploy all of those from that catalog. Let's go back over here, though. You can deploy from a Dockerfile, if you've just got a Dockerfile up there somewhere. You can drop in YAML or JSON; so like the deployment.json file that Ryan was creating when you did the dry run before, you could just drop it in, paste it, you're done. Or databases, which again maps back to the databases we were looking at in the catalog before, but it's just an easier view into that. So, all that to say: click From Git. You're going to put your fork here. I think I still have mine; let's find out. I'll use my own. Yeah, you're like, I have all the power. Ooh. All right. Let's do it. You use yours, I'll use Ryan's. So hopefully we'll get to that in the next step; I think we'll have time. So drop your Git repo URL there, scroll down, click Node.js. When we're doing it in the web interface here, it allows you to explicitly select which builder image you're using.
You can do this from the command line too, with the new-app command. And in that case, you can just give it that Git URL, and you don't even need to tell it it's Node.js; it'll figure it out. You can select what version of Node you want to use; I'm just going to leave it at 10 by default. And then it's giving you these options here to create an application name. This is really just creating some labels on your deployment, and you'll see what that means in a minute, but it's allowing you to have an application grouping: a logical grouping of components in an application, to make it easy to see and manage. And it's using standard Kubernetes naming labels to do that. And then there's the name for your deployment, which we'll just call http-base; that's fine. By default, and hopefully you can see this in the advanced options, when you create something this way in the web console, it's going to create a route for you. We didn't really get into routes before. Routes are an OpenShift construct; they're an additional benefit and feature that OpenShift adds. We talked before about how the services that you create were accessible with kube-dns inside the cluster, but, unless you use the crazy NodePort thing, not accessible from outside the cluster. I'm not saying it was crazy, but it's not how you normally would do it. Yeah. Right. So Kubernetes has Ingress; here we're using routes. I love it, because all you do is check a box and it creates a URL for you. So go ahead and click Create. By the way, I can keep an eye on the time. All right. So what you see now, this is that topology view we talked about before. This light gray circle around it, that's our application grouping. That application name was http-base-app. If we had more than one app or component in this application grouping, they'd all show up in this little bubble here. We only have one, because this is pretty simple. You've got these decorators here; this one looks like a little circle thing. That's the status of your build. Your build is running right now. We can click on that in this window. Yeah, I'm going to just... all right, there we go. So our build is running right now. That's running that Source-to-Image process that we talked about. I'll come back over here so we can see it as it completes; it looked like it was almost done. As that completes, you will see this turn to a green check mark. And then, once the build completes, the deployment will start. Let me go back and check and make sure all is well. It's running npm install; it wasn't actually quite finished, it was doing the first step. Now it's on the second step here. So you can actually see everything that's going on. If something were to go wrong, you've got the logs here for that build, and you can see what happened. Sometimes, if there's a dependency issue and the npm install process or something fails, you can go here and see: oh, okay, I need to go fix something in my code, and then come back. Okay, so now we see "push successful" here. Come back over to the topology view: that's a green check. Now, soon, we'll start to see this ring around it change as the deployment starts rolling out. There we go. So I clicked on that center circle there to get this little panel to show up. This is information about our deployment. You can see the pod here; the container is creating right now. You can view the logs for your pod from here as well, by clicking into that.
It's still coming up, so there are no logs just yet. Here we go. Okay, so now it's listening; that tells us the app is up and running. If you click this, it's going to give you that route that was created, that URL. And there is our very fancy web application. Hopefully you all got to the same point if you were following along. So now it has deployed, we can get to it from this URL, and it is running. Again, you could have done all of that on the command line with oc new-app as well; you would then have had the extra step of exposing the service to create a route. And we talk, ooh, scrolling the wrong way, we talk a little bit here about how you'd do that from the command line instead. Any questions about that? No? Okay, we'll move on.

So, what Ryan was talking about before: if we want to set up webhooks so that any time we make a change to the code and push it out, it'll do a new build and deployment, that's what we can set up now. If you created a fork, go ahead and go back to your terminal. How many folks are currently doing some type of git-push-to-deploy? Anyone using that currently? Not too many. Okay. That was revolutionary, like, five years ago, but I'm curious how many people are actually using it to kick off deployments today. I think a lot of folks have decoupled how that works. But it is definitely nice to know that you can wire up automation, whether it's deploying to your QA stage or earlier stages, or kicking off various types of automation based on changes in a repo. So if there aren't a huge number of people super keen on this, we could move forward to the rsync part. You have a question about it?

For CodeReady Containers, since the webhook is going to try to call back into the cluster, the cluster would need to be addressable somehow. There's some type of relay service you can run, and I forget the name of it. UltraHook? UltraHook, that sounds right, that very well could be the answer. So now with OpenShift 4 we have CodeReady Containers; when we had Minishift for OpenShift 3, I'm pretty sure UltraHook is what we had in the slides back then. I haven't actually done it with CodeReady Containers, but that should be the process. Usually with CodeReady Containers on my local workstation, rather than relying on a webhook, I can just click the build button in the dashboard to trigger a new build whenever I need to. Or, instead of doing builds based on whatever commits come in, which can be useful, what I like doing is testing my code before I make the commit, and we can do that using some rsync features we have queued up next. That one's even more useful for local development purposes.

Is anyone super excited to see the GitHub webhooks, and is it going to be sad if we skip it? Okay. You can also do it yourself, because this cluster will be up for probably at least the rest of the day. Feel free to try the webhooks on your own, or stop by the Red Hat booth afterwards and we can give you a demo of the git-push-to-deploy automation from GitHub back into the cluster. It's nice, but if you're not currently using it, just make a note that it exists.
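If you do want the command line version of all of that, including the webhook details, here's a rough sketch. It assumes the http-base naming from this walkthrough, and the repo URL is a placeholder for your own fork.

```bash
# A rough CLI version of what we just clicked through (assumes the http-base
# naming from this walkthrough; the repo URL is a placeholder for your fork).

# Build and deploy straight from source; new-app detects Node.js on its own,
# or you can pin the builder with the imagestream:tag~source syntax:
oc new-app nodejs:10~https://github.com/youruser/http-base --name=http-base

# Watch the S2I build (npm install, etc.) as it runs:
oc logs -f bc/http-base

# The extra step the console did for us: expose the service as a route.
oc expose service/http-base
oc get route http-base

# For the webhook: the GitHub trigger URL lives on the build config, so this
# prints it if you want to paste it into your fork's GitHub settings:
oc describe bc/http-base

# Or skip webhooks entirely and kick off builds by hand while iterating:
oc start-build http-base --follow
```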
And this, hopefully, live development, is really where I see a huge opportunity for front-end developers to get some traction with Kubernetes, because for me this is where you see a huge amount of value. If you're doing lots of microservices, you might have twenty different containers deployed in a Kubernetes environment. You can do live development against one of those containers, then run a functional test or integration test and get feedback from that testing before you make your commits on that container. Having that kind of integration or functional feedback as part of your local development loop is really powerful if you're doing a lot of microservice development, especially when you start adding things like caching layers between data tiers; you can replicate all of that very easily with OpenShift. And even in cases where you cannot replicate 100% of the production workloads, you can use a service that points to something outside the cluster and fake it out for development. So hopefully this is a big takeaway: a way to enable your junior developers with a containerized workflow and more visibility than they've had in the past for these more complicated problems, without putting barriers in their workflow where they have to run a build as a prerequisite to getting feedback, right? We want to give you feedback during your real-time dev loop, and that's what this is all about.

I feel like I have to say this: don't use oc rsync in production. It's pushing files into your running container, so this is definitely for local development, that inner-loop stuff. It's also only pushing to one pod, so if you've got five pods, again, only use this for local development.

Yes? You can use Jenkins with OpenShift, for sure. We do have Jenkins in the service catalog. If you look in the catalog, and I can point you to the Jenkins deployment, there's a collection of JSON or YAML files, deployments and services and other low-level things, that all together give you a Jenkins environment. So you can package up that full Jenkins pipeline as part of your dev stage. When junior developers check out a development stage, they have their own Jenkins and their own CI tests as part of their own decentralized dev stage that they can run independently, right? And then you can have another Jenkins in the staging area that does a second round of checks, but they can hopefully get as much feedback as they need as part of their local dev loop. If that's the way you're doing CI, then yeah. But you can also get a lot of other testing and feedback from other Node.js-based build processes as well.

All right, so I'm going to go ahead and edit this index.html file from our repo. I'm just going to change the h1 tag to say "Hello, OpenShift". So I've changed that file. Then we run this export here; what it does is get us the name of our pod, the one with the label app=http-base. And then it runs oc rsync. We get an error here, because we're trying to upload, I think, too many things; there's some permission issue. However, it did actually work: if we go back to where the app is running and refresh, it says "Hello, OpenShift". So it did send that file up there.
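Spelled out, that sync loop is roughly the following sketch; the label and the container path are assumptions based on this http-base example, and the --no-perms flag is one way to sidestep the permission error we ran into.

```bash
# Roughly the sync loop we just ran (a sketch: the label and the container
# path are assumptions based on this http-base example; your builder image
# may use a different working directory).

# Grab the name of our one running pod by its app label:
POD=$(oc get pods -l app=http-base -o jsonpath='{.items[0].metadata.name}')

# Push the current directory into the running container. --watch keeps
# re-syncing as files change; --no-perms skips setting file permissions,
# which is one way around the error we hit.
oc rsync . "$POD":/opt/app-root/src --watch --no-perms

# This targets a single pod only -- strictly a local-development trick.
```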
Right, and this is what I was telling you about: we get this failed-to-set-permissions thing, and it kills the watch. That's a bug; we'll figure that out. So this is what you should be seeing here, except for that error killing the watch.

Now, we don't have time to get into this in this particular workshop, but I want to at least introduce you to another command line tool you can use with OpenShift, called odo, for "OpenShift Do". It's also intended to help with this inner loop, this iterative development. It's not just for Node; it supports Java, PHP, Python, whatever you're using. And it's meant, first of all, to abstract away some of the Kubernetes terminology, so you're using more of a git-style command syntax. If that's interesting to you, you can check it out here. To create something in your development loop with odo, you'd do something like odo create nodejs, and you'd be deploying from your local directory. Here, you saw we were actually deploying from a public git remote URL; odo lets you do that development from your actual laptop, and that difference can be really convenient. It can also run a watching loop, so as you make changes, it syncs them up for you. It's not as instantaneous as oc rsync, but it's, what's the right word, more robust, slightly more sophisticated.

What I like about odo is that instead of relying on a webhook workflow that somehow calls back into my local system, and coupling every build to a commit, I can decouple those two things and do an odo push. That pushes whatever the current contents of my repo are, committed or not, into a build, runs the build, and streams the build output back to my console. So it gives me a quick read on whether the code will pass a build, it actually triggers a build in my local cluster or whatever cluster I'm pointed at, and it decouples the process so I can test the code before I make my commit. If it looks good, great: then I make my commit and my git push, and maybe that triggers a build in some other pipeline for my CI team. But while I'm iterating, I can just do odo push, or, yeah, odo watch, that's right, odo watch for real-time development. Then, after I'm happy that everything works and I'm confident the integration tests pass, I make my commits with confidence, without worrying that I'm going to muck up the pipeline further downstream for folks.
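To make that flow concrete, here's a rough odo sketch, using the commands as they existed in the odo 1.x era of this workshop; the component name and port are arbitrary assumptions, and it's run from your local source tree with a project already selected.

```bash
# A rough odo inner-loop flow (odo 1.x era commands; the component name and
# port are assumptions). Run from your local source tree, logged in to a
# cluster with a project selected.

odo create nodejs http-base   # declare a Node.js component from this directory
odo url create --port 8080    # expose it with a URL (port is an assumption)
odo push                      # build and deploy whatever is here, committed or not
odo watch                     # keep syncing changes as you edit, for live feedback
```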
Yes? So there's also something called Nodeshift, which, if you went to Luke Holmquist's lab yesterday, I think he may have talked about there. You can run it with npx nodeshift, and it's just another way of helping you deploy Node.js applications on OpenShift easily. Again, we don't have time to get into it here.

CodeReady Containers: it sounds like at least one of you is using that already. If you want a minimal OpenShift cluster running locally on your laptop for local development, that's what CodeReady Containers can do for you. It takes a fair amount of memory, so you need to have that available on your laptop, but it's pretty easy to get set up and is nice for doing local work with OpenShift. If you stick with JavaScript workloads and don't try to run everything at once, you can do a fair amount of stuff; it just requires a bit of memory. So that's CodeReady Containers, if you want to check it out.

If you haven't seen learn.openshift.com, I'm going to open this up really quickly just to give you a quick view. There are a bunch of tutorials here, but there's also, if you just want to kick the tires or have access to a cluster for a little while to try something out, these OpenShift playgrounds. There's one for OpenShift 4.2, which is the version we were just using: you get a 4.2 cluster for 60 minutes, do whatever, and then it goes away. So if you just need to try something out and you don't want to install CodeReady Containers, you don't need to. It's a similar environment to what we were using in the workshop, but you can log in as admin and have full admin access to this time-limited cluster. So that's another option if you want to try things out on your own.

Yeah, so we do have some time for questions, and I also just want to mention, as Ryan said, we'll be at the OpenShift booth the rest of the day, out there in the sponsor showcase area, so we'd be happy to talk if you have any questions we can't answer right now. I'm ready.

Okay, I've got two last things to shout out for you. Jan just mentioned learn.openshift.com; we have some of these cards if anyone wants a reminder about it, for one-hour sessions without any sign-up or other expectations. The other thing I wanted to point out: we have a link in the slides to this O'Reilly book. If you're interested in a free O'Reilly book on OpenShift, click on that link and you'll get a PDF download. I will say the content of that book covers OpenShift 3, so the web interface will be different from what you have, but most of the command line material is all the same; it's still Kubernetes under the hood, trying to achieve a PaaS-style solution on top. So we covered some of this Kubernetes terminology; I didn't dig too deeply into all of it, but we learned about routes, and ask me about other details if you like.

The last thing I wanted to give you a link to is try.openshift.com. This is a good way to get started with new clusters if you're interested in trying OpenShift on any cloud you like. You need a Red Hat developer account to set it up: on your laptop, on Amazon, on Google, on any cloud you like, even on bare metal. And once you have it up and running, we try to bring in not just the support and expertise of the folks at Red Hat, but help from the whole rest of the industry as well. So even if you're running on bare metal, we want you to have really solid access to products like Redis, backed by the actual maintainers at Redis Labs. And if you're using MongoDB, we've got actual Mongo provided by MongoDB, Inc. We try to work with all the maintainers in the industry; we have a shared support model with all these folks.
So if you're an enterprise company interested in picking up support and making sure your support dollars are directed toward qualified folks who are involved in maintaining the code upstream, this is a great way to ensure you can reproduce any of these data stores on any cloud, even bare metal, and support the right experts in the community, right? If you want to give this OperatorHub a try, we have it embedded in the dashboard. We're logged in as a standard user here, so you don't have access; it's an admin-only feature, but you can try it out on your own with OpenShift 4. So administrators can go in, let's say Couchbase, if they wanted to install it. You can run this kubectl create on any cluster, even a GKE cluster or an Amazon Kubernetes, anybody's Kubernetes; you ought to be able to use kubectl and standard tools. When you run this command, it installs a new custom resource on the API, and then you'll be able to kubectl get couchbases, or fetch some new resource type that Kubernetes didn't previously know about. All of these operators provide ways of extending the Kubernetes API to add new resource types while still maintaining the same pattern that we've all learned, hopefully, through this workshop: the API is asynchronous, and every resource has a spec and a status, right? If you don't remember anything else: it's asynchronous, it's JSON, you can do YAML too, but spec and status, and you set the spec and you read from the status. That's how all of these data stores work on Kubernetes. They create a new resource type, you set the spec, you say what you need, and Kubernetes goes to work fulfilling your dependencies. So we're encouraging all the major data service and soft-infrastructure providers to jump in and develop their own extensions for Kubernetes.

So hopefully you folks find a lot of success with the information we've put out here. Definitely give us feedback if you have any thoughts on any of this, and find us in the booth. Oh, these are my old slides; I tried to add Jan, there should be a picture of Jan on there as well. But yeah, thank you very much. I'm Ryan J, and we've got Jan Kleiner. Thanks again. We'll be in the booth. Thank you.
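For reference, here's what that spec-and-status pattern looks like in practice. The resource type and fields below are hypothetical stand-ins for illustration, not the real Couchbase operator's schema.

```bash
# A small sketch of the spec-and-status pattern (the resource type and fields
# here are hypothetical stand-ins, not the real Couchbase operator's schema).

# You declare what you want in .spec...
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: CouchbaseCluster
metadata:
  name: demo
spec:
  size: 3            # "I need a three-node cluster"
  version: "6.0.0"
EOF

# ...and the operator works on it asynchronously, reporting progress back in
# .status, which you read rather than write:
kubectl get couchbaseclusters demo -o jsonpath='{.status}'
```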