I think we have something perfect, that's good oversight, so I'm going to the other one. Do you know how that works? They should finish the workshop five minutes before the last one. Yeah, but that's the workshop; there is another session, so we'll finish five minutes before the last one. They can take a break in the middle if they want. There were nine total, right? Yeah, two of them are probably messed up. We have more USB sticks if anyone needs them. Who just came in? Who opened their laptop and hasn't heard that you need the client tools? The black ones are the dead ones, right? Look at that gentleman there, the volunteer. I think the black ones are the ones that are completely corrupt. There were black ones, right? Or was there a different color? There were black ones and blue ones. This microphone is working because... you need to recompile a new kernel, you need to load a special network driver, and for Windows you actually have to contact Russia to get the hack codes. All right, let's get started. We are one minute over. Calm down, people are going to come; we have three hours. We are one minute over, so it's time to get started. So, welcome everybody. My name is Grant Shipley and this is Steven Pousty, otherwise known as Stevo. I hope you were expecting a hands-on lab slash workshop today, because that is what we are going to be doing. Everyone should have their own machine, or look off of your neighbor's machine. Or you could perhaps do all of this on your phone if you wanted to; just don't use the command line tools, use the web interface instead. But if you do it that way, you are on your own. But we can walk you through it. So, Steve is going to run through the slides at the beginning to introduce you to what we are going to be doing today. Windows is fine. Look at this. Look at this, kids: I am using Windows. And I am using it too, or at least on the laptop I brought.
Do you want to know why Grant and I use Windows? Even though I prefer Fedora, it's because... you know there is OpenShift Online? If you look at the percentage of Windows users on OpenShift Online, which is what we mostly advocate, it is about 85% Windows users. And, surprise, surprise, most of the engineers on OpenShift do not use Windows. So to really get the testing done and try patterns out, we use Windows as well. Yeah, I have actually started to like Windows a little bit. I have been using it for the last year or something like that, but it is mainly just to make sure OpenShift works well on Windows. It is my daily operating system now. Even though I had been using Linux for the last 14 years or so, I have switched to Windows. And how do you play Battlefield on Linux? We don't. That is the other reason why Bucky and I use Windows. We weren't going to say anything about that. I will put my Division handle in here and I will also put my Overwatch handle in there as well. Alright, so I am going to run through the labs later; I am going to be kind of the lead for that portion. They should work. I was up till one o'clock last night running through them again just to make sure there wouldn't be any problems. And Steve is going to run through the slides at the beginning. So with that, I will shut up. Oh, one last thing: if we talk too fast or not loud enough, just let us know and we will slow down. Sometimes we just start talking really fast and we think everyone can understand us, but sometimes they actually can't. So if we start talking really fast, just let us know and we will slow down. Alright, the other thing is, Grant and I both like questions. Actually, for that man interrupting Grant while he was speaking, which is rare for most audiences outside of the US, he gets a free coffee card. Please pass that down to him. So we like that; there are five more coffee cards.
I saw everybody looking longingly at the coffee card. For good questions, not questions like, can I have a coffee card? We will give out some more coffee cards as well. So this is the deck right here. This Bitly link at the bottom is actually the deck. I recommend you actually open it, because there are links to some of the other material in there: links to the tutorial doc, links to other things. And you can follow along on that as you want as well. It's bitly-dev-conf-w, where the w is for workshop, not Windows. It does stay active even on Windows. It's a Google slide deck, and it's also under a Creative Commons license. So if you love this whole workshop, which of course you will, and you want to bring it home and show it to other people, or just take some of the slides out, go for it. It's attribution-only, okay? Although I don't know what we're going to do with it; we'll probably repoint it. Can you make it into a PDF or host it somewhere permanently? So, goals. There's really only one goal here: it's an introduction to OpenShift and how to use it to build and deploy your applications. There are not many other goals than that, and I'm assuming that aligns with what everybody here wants; it's pretty much what people are expecting. Have any of you started using OpenShift v3 yet? Okay, so you're probably going to be bored. Have you actually done development with it? Okay, no? Yes? That was something? You've done something with it? That's really good. I don't learn about security. What? We do. Fine. In general, you might be bored, but for everybody else, it should be good. Here are the notes for the workshop. Don't click and read those yet; I see everybody getting ready. When we get to the workshop, you can click on those and start doing them, but that's what they'll be later when we do it. Ground rules: ask questions and interrupt us. The other thing is, workshops to me are not about sitting and doing exactly what we said to do.
That's because you've already seen it. I'd like you to actually play around. Once you've done the basic thing, we'll give you enough time to go, oh, I wonder if I can point that at my own git repo; oh, I wonder what happens if I try this, and try things in the class. There are one, two, three, four, five, six, seven OpenShift engineers or advocates in the room, so we're here to help you figure stuff out. It's not just to sit and listen to us talk about slides or to read a PDF or a document, because you can do that at home. If you don't have the clients, there's the link to the client tools. Everybody should have them, because we handed them out on the USB sticks. Today's stack. How many here have used Docker? I'm assuming everybody, yeah? Tried it at least. Is there anybody who hasn't? Let me publicly shame you. Okay, great. I'm just kidding, I'm not going to shame you. We're not going to talk too much about Docker today. We can talk some; I'm going to talk a little bit in the beginning, but we're not going to go into much detail about Docker. And it turns out that with OpenShift, as an application developer, you don't even really have to know Docker that much. It's not low-level Docker commands. How many of you are app devs or develop applications? The rest of you: sysadmins, QE, sysadmins. Like a quarter of the hands didn't go up. What do you guys do? No wonder you don't have a laptop. Okay. What are some other things? That's it? You were just shy and didn't want to raise your hand. Oh, you're at Red Hat. You want to say what you do anyway? I'm a customer now. You are a customer now? You left Red Hat? Man, that's right, I saw the email, I just forgot. Thanks for reminding me. Yeah? Yeah, just about Docker. Well, being an app developer, that's exactly the shit I don't want to deal with. Exactly. That's why you're here. That's why I don't know what a Docker is. I'll explain it, don't worry.
But we're not going to go in depth on Docker. All right. The next thing is Kubernetes. How many of you have heard of Kubernetes? Everybody? I'll go over it just a little bit. The idea is that OpenShift is actually a Kubernetes distribution. I know our marketing people don't like us saying that, but I'll explain why in a little bit, at least from my perspective. And then most of what we're going to be doing is OpenShift. So, containers. In this case we're going to be working with Docker containers. Containers actually slice... well, everybody else, you can correct me when I get everything wrong. How's that? All right, just fact-check me while I'm explaining to the three people. So basically, containers are not VMs. With a VM, you have to boot the entire operating system, right? With a container, what you're doing is taking the operating system and taking slices out of it. They all share the same kernel. So there's one kernel, and then you're using a whole bunch of Linux black magic to basically make slices out of it. Yes? How about virtual machines sharing the network? Go ahead. Yeah, I usually have the problem of sharing the network, so then only one web server can be on, like, port 80. No, that is not a problem here. Unless you were... we'll get to it later. It's not necessarily a problem. Usually what you do is each container starts up on its own port and then you redirect from 80 to whichever one you want to redirect to. But they all share only one external port 80, right? Because there's only one physical interface. Yeah, but it's not like one virtual machine would occupy it and the others wouldn't be able to. Can anybody fact-check me on that one? Yes. That is not true with Docker, right? Yes. Okay, that's right: each container gets its own IP address. But let's keep going; it's not important to what we're doing today.
It would be a great Google search for you to follow up and read more material on later. But hang on: as part of OpenShift, we actually include an overlay network, so you don't have to worry about that. That's coming later. That's exactly why we do it, so you don't have to worry about it. Yeah, that's why I'm sitting here. Okay. The other thing is, in general, a container image uses what's called a union file system. Who knows where union file systems came from? This is showing your age. Anybody under the age of probably 25 does not know what a union file system is. If you are older, you have definitely used one. Yeah, go. Did you ever say where it's from? Writing on a CD drive, back in the year. Exactly, it was originally designed as a CD drive format. CD-RW: does everybody remember you could burn CDs? And you could burn multiple times to the same CD, right? So that's what a layer is; it's a layered file system. So you'll notice what happens in terms of a container: the kernel is shared. When you start to write an image, though, you're writing everything in layers. So in this one, all they did was put a BusyBox image in, and everything else runs in its regular space. In this one, they put Debian in, then they added Emacs, and they added Apache, and then the layer on top of that is writable. And this is important, especially compared to VMs, because you can actually have images inherit from each other. So you can have a base image, and then you can say, based on that base image, I want these following things. And then if I need to patch stuff, I can patch the base image and rebuild the others, and they all pick up all the patches. No. So it's just like a CD-R. Once you wrote to that... did you ever use a CD-R? You're in the age group. Well, yes, but you could actually overwrite files. That was a CD-RW, which acted more like a hard drive, right?
But with CD-Rs, you would burn, and then you would burn again. With the RWs, you would burn, you would burn, you would burn, and then you'd be like, garbage, right? Because you can't put anything more on it. So it's like that. It's not an RW where you could clear the space and write over it again. Okay? The nice part about it... so, how many of you have seen Dan Walsh here? Or talked to Dan Walsh? I saw one of Dan's talks. So at the very first DockerCon, I helped drive the keynote demo for the VP who was talking, and Dan was there as well because he was working on Docker at that point. And Dan said, I don't care about all this other stuff; the real reason I'm excited about containers is because we can finally get rid of RPMs. That was his big motivation at the time. He was super excited about it. Don't tell him I said that, though. But the reason why is: with RPMs, what do we put on the machine when we install an RPM? We put the bits on. With a Docker container, you're putting on the bits and how they're supposed to be configured as well, so you can just start it. If you install Apache with an RPM, you can then run Apache, but you can't ship any real configuration with it, like here's my index.html, it's in this directory, all that other stuff. With a Docker container, you can actually do that. A user can make their own RPM, but as a software distributor, I can't ship it preconfigured in the same way that you can with an image. A Docker image puts down configuration too; it makes a runnable image, rather than just putting the bits down on your machine with configuration files. Do you see the difference? No? Okay. We're not getting videotaped, are we? No, good. So the reason is, I was going to be the ugly American abroad, and I was like, I'm going to start making country jokes. And then I almost got in trouble at LinuxConf Australia.
I know I'm not in Australia and I know this crowd is different, but I'm not going to go there. So that was the whole... yeah. Sorry? Yeah, it's a parameter. Okay, you, not for you. And then, just for terminology: a container is a running instance of an image. Right? The big fact, though, and I think this is the important part for developers, especially ones coming off of VMs, is that they're compiled and immutable. So, how many of you have used AWS with EC2 images? Right? A Docker container is like an EC2 image. I think a lot of us with VMs are used to, oh, I'll go in, I'll make a change, and when I shut it down, that change is still there, or I put my content in and it's still there. That is not the case with a running Docker image. You go in, you update an HTML page, the image restarts or you shut it down and bring it back, and it's all gone. Right? So you have to operate in a different manner with Docker images than you would with VMs. And I think that's one of the things that's kind of hard to grasp, especially if you haven't used Amazon. So that's all I think I'm going to talk about with Docker. Yeah. So, how to build an image. This is the other thing: do you guys know that containers have been around forever? Really? Nobody's moving their heads. Let me see, who are the most forward people? The Spaniards will never talk; I've spoken in Spain and they never talk. Germans will talk. I don't know if these things are inherited from Germans, so I'm looking to the Germans. Can you...? I'm German. Yeah, I knew you were, shocker, shocker. So, what's an early container, before Docker? chroot. Yeah, that's probably one of the oldest ones. OpenVZ. I don't know OpenVZ. It is one. It is? All right, I don't know OpenVZ, that's fine. But chroot is probably the earliest thing that was a container.
And if you come to the keynote tomorrow, we'll be talking about the history of containers as well; Dan Walsh will be there and so will Mike McGrath. I think one of the things that made Docker so successful, and there are a couple of different things, is that they built this nice build system: you can write a plain text file, a Dockerfile, and use that to specify how to compile your image. Right. And one of the things that's been good about that for Red Hat customers is: where do most people find their Docker images? Docker Hub. How much do you trust the images on Docker Hub? Not at all. You said yes? I don't want to be anywhere near your machines. Or actually, I do, if I want to go spin up a spam bot network. So basically, you have no idea what's actually in them. They'll usually have the file, right? They'll say this is how I compiled it, but you don't actually know if that's what they did, unless they have an automated build and all that stuff. So what happens with sysadmins, especially for Red Hat customers, is the developer brings the Dockerfile to them and says, this is what I want. And they say, okay, great. And then they'll use a RHEL-certified image as the base layer and build all the other stuff on top. The developer doesn't care, because they got what they wanted, but the sysadmin is happy because they know all the layers that went in and they control them. So if you want, you can go look at what a Dockerfile is. Images are immutable, and there are some links to some best practices. Any other questions about Docker? Because now I am really done with Docker. Okay. Is that enough for you? Yes. Okay. Because we were going to hold up the entire class just for you until you understood it. So, containers are not enough, though. Most of you have played with containers, right? And so you go, and this was my experience: okay, oh, that's cool, look, I spun up Apache. Now what? Right?
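The plain text build file mentioned here is a Dockerfile. As a rough sketch of the layering idea from the slides (the base image, package names, and file paths below are hypothetical illustrations, not taken from the workshop materials):

```dockerfile
# Hypothetical example -- each instruction adds a read-only layer on top of the last.
# A sysadmin might swap the base for a RHEL-certified image instead.
FROM debian:stable

# Layer: install the web server.
RUN apt-get update && apt-get install -y apache2

# Layer: add your content -- the configuration ships with the bits.
COPY index.html /var/www/html/index.html

EXPOSE 80

# The image is runnable as-is: start Apache in the foreground.
CMD ["apachectl", "-D", "FOREGROUND"]
```

This is the "bits plus configuration" point from the RPM comparison: anyone who builds this image gets a container that starts serving your index.html immediately, and patching the base layer just means rebuilding on a newer `FROM` image.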
Like, I built this nice little image. Now what? It felt to me, as a developer, very much like I had just moved to lightweight VMs. It was still all the same problems: how do I get this deployed? How do I decide which machine it should go on, and stuff like that? So that's where you need orchestration, scheduling, and isolation. And this is where Kubernetes comes in. Kubernetes is an open source project out of Google. Everybody heard of Google? Its purpose is to orchestrate, schedule, and do all the stuff that you actually want to do with containers beyond just spinning one up on your desktop. Okay. So, some history about it. It comes out of Borg and Omega. Borg and Omega were how Google orchestrated their containers inside Google itself. Everything inside Google is basically containerized. How many of you have Gmail accounts? Okay, and the rest of you are lying. And that means each of you probably has like five Gmail accounts. So if you think there are 6 billion people in the world, there are probably about 30 billion Gmail accounts. When they launch Gmail, they're launching a container. For each account? For each time you open the web interface, that's a new container being launched. So, and this figure is probably two years old, Google launches about 7,000 containers a second. Which is probably more than you'll ever do in a year unless you're at a big company. Yeah? Are we talking about Docker containers? No, their own containers, right? Remember we said there was chroot, and who brought kernel namespaces in? Wasn't it Google? cgroups. cgroups, sorry. I knew it was one of those fancy Linux things that I like having there but know nothing about. So they were the ones who brought in cgroups, right? Because of all the container work that they did. So they launch about 7,000 a second, and right now it's being orchestrated with Borg and Omega.
And for the next version they did of this, they said, hey, we really want to open source this, for two reasons. One, we know it's easier to evolve open source technology. Two, if everybody's using our open source technology, it's easier for us to hire engineers. Right? The typical open source story. So they came to us; Red Hat was one of the early collaborators on it. We were doing something similar at the time. If any of you were following that space early on, it was called GearD, because our original containers... OpenShift used to run containers called gears, and this was GearD. And we looked at them and they looked at us, and there were a bunch of engineers working basically on Kubernetes. And so for us... Kubernetes, and I'm saying this not literally, okay, but you can think of Kubernetes as our new cloud kernel. All right, and what I mean by that is: does Red Hat own the Linux kernel? No card for you either. We do not own the Linux kernel. Right? We're probably the second largest contributor; I don't remember if we're first, second, or third, but we don't own it, right? And we push things back upstream into it. Do we just give you the Linux kernel, or do we build a whole distribution around it? Don't answer, because I don't want to embarrass you again. Do we just give you the kernel, or do we give you a whole distribution? Thank you, because the kernel by itself is not really useful. So for us, the thought is that Kubernetes is our Linux kernel, except in the cloud space. We don't own it. We contribute all that we do with Kubernetes. Harrison is actually an engineer and so is Mikhail. Mikhail, have you done any Kubernetes? Pat, have you done any? Yeah. So basically most of the work is pushed upstream. All our work is eventually pushed upstream. Some things we try to push upstream and Google says no, we're going to do that ourselves. Right?
So we don't control everything. But we also build a whole distribution around it to make it useful for you. We find they say that quite often. Yeah. We won't get into that discussion here. But that's how we act. And so, do we ship a modified kernel or do we ship a normal Linux kernel? Is it like a proprietary kernel that's different? No. Sorry? Yes, but it's not like... what company do you work for? It's not like HP-UX versus AIX, right? They all had Unix kernels, but they weren't compatible with each other. There are minor differences between us and the Ubuntu kernel, but usually the incompatibilities are higher up, not at the kernel level. Right, Micha? Thank you. Gosh. Czech. Okay, so now I know which category to start putting the Czech people in, because Marc does the exact same thing to me. So our idea is... yeah, not at all, Marc, I don't even want to get you started. My point with that, though, is that OpenShift is a straight Kubernetes cluster. Okay? So there's a command line tool for Kubernetes called kubectl, and you can actually use that against an installed OpenShift cluster. Right? We're building stuff on top of it, not taking their stuff and making it different. I just want to make that point clear, and then we're done with Kubernetes. I'm going to skip the political part of it. Now I'm going to show you what a cluster looks like. I'm going to go through this quickly. Why? Because did you come here to hear me talk about slides? Yes? Really? You can come by the booth later and I'll talk to you for hours. I'm assuming that most of the other people came here because they actually wanted to do stuff with their hands and play with it. And plus, everybody has only so much information they can fit in their head; I don't want to fill you up now with architecture. Okay?
But I'm going to cover the big points here. One of the big things is that Kubernetes puts its containers into pods. Right? With Docker, one of the restrictions is that two containers cannot see the exact same file system. They can't look at the same files as each other. They can mount disks, but not from one container into another container. Some things actually need that. The other thing is they can't talk over localhost, and some things expect to be able to talk over localhost to another thing. So you can put two containers in the same pod. And the pod is the atomic unit in Kubernetes or OpenShift. Does that make sense? Right, it could be one to N containers. What? Okay, so what country are you from? Poland. That ruins my whole theory. Say you're from Germany or Austria. Yes, you can put one or more in the same pod. It's true. And one of the things that happens with web developers is, because the containers are on localhost and seeing the same file system and are really tightly coupled, they say, I'm going to put my web server and my database server in the same pod. And that is a big no-no. You never want to do that. That's an anti-pattern. Because suppose you're sitting there and you're getting a lot of traffic, and your database is humming along just fine but your web server is running out of connections. If you need to handle more connections and you spin up another pod, what have you also spun up besides another web server? Another database, right? So basically, what you want to do is: a pod should contain only the things that can live and die together. Say you had something that was monitoring the logs of a database, so it needed to see the file system. Great, put that in the same pod, because there's no reason for the monitor to live without the database itself.
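The database-plus-log-monitor pod described above can be sketched in Kubernetes YAML roughly like this (all names, images, and paths here are hypothetical illustrations):

```yaml
# Hypothetical pod: a database plus a log-monitoring sidecar.
# The two containers share the pod's network (localhost) and a common volume,
# and they live and die together. The web tier would go in a separate pod
# so it can scale independently.
apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-monitor
spec:
  containers:
  - name: postgres
    image: postgres
    volumeMounts:
    - name: logs
      mountPath: /var/log/postgresql    # database writes its logs here
  - name: log-monitor
    image: busybox
    command: ["sh", "-c", "tail -F /logs/postgresql.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs                  # same volume, so the monitor can read the logs
  volumes:
  - name: logs
    emptyDir: {}                        # shared scratch space that dies with the pod
```

Scaling this pod would replicate both containers together, which is exactly why the web server does not belong in it.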
But if you need to scale things independently, or they don't have to be together, put them in different pods. Sound good? Okay. So that's that; we'll get back to pods in a bit. There are one to N masters and one to N nodes in the cluster. The master and the node can be on the same machine if you want; that's what we do with our all-in-one. But in general, for a real cluster, each master will be in its own VM, and each node will either be in its own VM or on bare metal. This runs on VMs or on bare metal, wherever you want to run it; it just needs a machine. Does that make sense? The nodes are where the work happens. The master is the part that takes care of what happens on the nodes, who goes where on the nodes, all that kind of stuff. Does that make sense so far? Okay. To talk to the master, as either a sysadmin or a developer, you're talking through a REST API. So when you're using our web interface today or our command line interface today, they're both talking to the same REST API. Does that make sense? The great part is you can also write Python, you can write any code that can talk to a REST API, and automate those tasks against the same master. And with our command line tool, you can actually ask it to be really verbose, and then you can see the exact REST calls, and you can just copy those into your own code. Okay. There's a bunch of other stuff happening here, like auth, but the one I really want to talk about is the data store. So in OpenShift, somebody said you've played with OpenShift, and no Red Hat engineers involved: what is the data store in OpenShift? Does anybody know? Have any of you heard... Marek, stop talking on Slack, because it's showing up on Grant's screen. Yeah, let me fix that. Oh, sorry. It's etcd. Have you guys heard of etcd? It's a key-value store built to be distributed, right? And it was built by CoreOS. And I can keep talking without the slides. Oh, you fixed it? Okay.
So this is actually the data store, and it's on all the masters. How many of you used OpenShift v2? A couple of you? Some of you? All the Red Hat people, keep your hands down for these questions; they do not apply to you. What happened with v2 is, when you made a request and said, oh, I want to spin up Tomcat, let's just say, what it would do is go to the master, and then the master would try to find a node and say to that node, hey, spin up a Tomcat cartridge for me, or a Tomcat container. Then it would wait and watch, and when the node came back and said, yeah, I spun it up, it would come back to the master and the master would say, hey, I spun up your Tomcat container and here's all the information about it. That is not what happens here. The way that Kubernetes works is what we call declarative syntax; it's a declarative cluster. Right? What you're basically doing is setting up facts in the data store, and you can actually do this by submitting a little JSON or YAML file. So you could say, I need two Postgres servers, I need five of these: five web servers, an AMQ server. These are the facts; this is what I want the world to look like. Does that make sense? You feed it into the master, and then what happens is all the nodes are constantly checking in with the master, saying, hey, does the world match the truth? If the world does not match the truth, we're going to make it so. The master never says to everybody else, hey, I've got some new information, go do this for me now. It's the cluster that comes in and says, I need to make sure that the world always matches the truth. Which makes it possible for us to basically state a truth file and then have the cluster go do it. And this allows things like: if this whole node goes down... we've got this yellow Postgres, and there are supposed to be two yellow Postgreses. Does everybody see that?
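A sketch of what such a "truth" file might look like; this uses a ReplicationController, which was the Kubernetes object for "keep N replicas running" in the OpenShift v3 era (a Deployment plays that role in later versions). The names and image are made up for illustration:

```yaml
# Hypothetical truth file: declare that two Postgres pods must always exist.
# You feed this to the master; the cluster then works to make reality match it.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 2            # the fact: "I need two Postgres servers"
  selector:
    app: postgres        # which pods count toward those two
  template:              # what to create when reality falls short
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres
```

Nothing in this file says *how* or *where* to start the pods; it only states the desired world, and the cluster reconciles toward it, which is what makes the self-healing described below possible.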
The truth of the world is that there are supposed to be two yellow Postgreses. If this node drops out of existence, the cluster is going to keep asking: how many Postgres instances am I supposed to have? Two. How many are in existence? One. Okay, I need to reschedule and make another Postgres instance somewhere else, on another node. Does that make sense? Which is awesome. So basically, we get self-healing out of the box for free. That doesn't mean that every node needs to talk with every other node. Yes. And is that handled by the kubelet? Do they all talk to each other, Micha? No, they don't all talk to each other. Micha is smarter than me on this one. Okay, you've made up for yourself from before. And it's the kubelet that handles all that, right? The kubelet handles all that. The kubelet that lives here handles all the communication. So the kubelet is kind of the bridge between the two, between the master and actually getting stuff done. Okay. Does everybody get that idea? Okay. The other thing that this handles is scheduling. The idea with the scheduler is, when you say, oh, I want to scale up from two to three of my web servers, the master is the one who's going to decide which node we're going to try to put that on. That's scheduling. And what's nice about Kubernetes is it has rules for scheduling. You can specify as many rules as you want, and specify the order in which you want them to apply. So for example, one thing that most people put in their rules, and I think this is the default rule, is anti-affinity. So right now we have these two yellow Postgreses. Let's say I scale up another Postgres. What this anti-affinity means is: do not put it on this node and do not put it on this one, because they already have a Postgres on them. Because if you're trying to build redundancy, you don't want the same thing to end up on the same node all the time.
So then you can have another rule. We now have one, two, three, four nodes that we still have to choose between. So you can have another rule which says least CPU. And it's going to go through those four and say, okay, this one is using the least CPU, so I'm going to schedule on that one. But you can specify least CPU, least memory, all sorts of other things. You can also set anti-affinity between data centers. What happens is, and we'll see this later, everything in Kubernetes happens with labels and selectors. I'm not going to talk about them too much just yet. But you can label each of these nodes as being in one data center, and then you can have a whole other set of nodes that are in data center B, and you can say, I want anti-affinity at the data center level. So I'll spin up one here, then I'll spin one up in the other data center. Does that make sense? And there's nothing the sysadmin has to do with that other than set up the rule. Once that rule is set up, the cluster takes care of the rest of it. Yeah? Maybe when you're finished, afterwards, maybe you want to talk a little bit about the truth and the reality. Because back in the day, I was starting my database, waiting for it, and then starting my web server, so the web server knows, oh, my database is up and operational. How is that happening? Is there the truth, and then the real truth, and the reality? Oh, so if I understand your question correctly: you want to be able to say that the truth is the database has to come up first, and then you start up the web server. Is that what you're asking? Come up first and be operational, ready to handle the web server, until that happens. So, are there... I don't know, can we handle that yet? No. It's being resilient to the database not being there, and then when it is there... so, are you going to answer?
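As a hedged sketch of the anti-affinity idea: the syntax below is the `podAntiAffinity` form from later Kubernetes releases, not necessarily what the v3-era scheduler configuration looked like, but it expresses the same rule using labels, selectors, and a topology key (the label values are hypothetical):

```yaml
# Fragment of a pod template's spec. Rule: do not schedule this pod onto a
# node that already runs a pod labeled app=postgres.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: postgres                       # labels and selectors do the matching
      topologyKey: kubernetes.io/hostname     # spread at the node level
```

Swapping `topologyKey` for a zone or data-center label on the nodes gives the "one replica per data center" behavior described above; the sysadmin only states the rule, and the scheduler enforces it from then on.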
Yeah. The cluster doesn't handle that. I know the answer, but — wait, so the other new rule I need to add about Germans is: they will ask you questions just to make sure you educate people on the right thing. Hey, you should talk about it. Yeah. So there's one thing that you can't do, and the answer is pretty important: you need to change your application for this new world. We haven't even gotten there — there's a whole bunch of implications of this design in general. Does everybody know the difference between microservices and monoliths? Right — with a monolith you have one big bundled application. In this scenario the whole architecture is different: these pods are ephemeral. It's not like the days when most of us grew up building apps, when we had a whole server and we built our apps assuming — even though we knew it was false — that, oh, I've got a box with redundant power supplies and RAID arrays, that database will always be there, and I can build my app assuming that database is always there. Even then that was rarely true, but in this situation it's even less true that the database will always be there. Building on this kind of architecture means you cannot guarantee an IP is around — except for one thing and one thing only, which is a service, and we'll get to that in a bit. But you can't guarantee that thing is actually even up when you expect it to be up, so you have to build for that. It's about horizontal scaling; the topic to read about is what's called horizontal scaling. You have to build your app for it. Now, to get to what you wanted to talk about — can I just say my one thing, or do you want to go? What are you going to talk about, action hooks? No — I was just thinking about this, and we obviously don't know everything about OpenShift; no person on the
planet could, because it's a big system — but I was just thinking: in that scenario, why couldn't you, on your web front end, set a readiness check that points to the database? Then your apps wouldn't actually — does everyone know what a readiness check is? So there's the ability — it's coming in the lab, it's coming in the lab, and we'll talk about it — but there is the ability in Kubernetes, and you guys will do this later today, to say: do not route traffic to this thing until this check passes. Does that make sense? That comes right in the platform. And what Grant was saying is that you could actually write your web application code to query against the Kubernetes API — which you can do, with a service account — to say: do not start trying to connect to the database, do not come up, until that indicates it's ready. Is that what you were saying? Yeah — with one caveat, because I want to get through the slides. He said "change your application code," but it's not your actual code; it's your configuration inside Kubernetes for the application. It has nothing to do with your actual source. How can you be wearing all those jackets? Aren't you sweating? No, I'm cool. Oh my gosh, he's killing me in here. Okay, so the main things I wanted you to get from the architecture: we know what a pod is now, right? Please say yes so we can keep going. Okay, thank you. And we know that there's a master with nodes; we know that there's the truth, and the cluster trying to make the world match it. The other thing you could have fixed when you turned off Slack was your screen — is it plugged in? Maybe it's not plugged in. Yeah, exactly. We've learned about scheduling, we've learned about the REST API. The last thing I want to talk about in the architecture — and we'll come back to this in a second — is: remember we said that you can't attach two containers to the same file space? But we also don't necessarily want to even attach to
the file space on the machine itself, right? Because you might want to share it between different nodes. Okay, so what that means is: as a system administrator, you can take a Ceph cluster and attach it to the entire cluster, and then each node can make claims into that storage and get space. It could be iSCSI, it could be NFS, it could be Gluster, it could be Amazon EBS, it could be Google's cloud storage. What's nice about that is the people writing the application code don't have to know what this is — they can just write their application and make a claim against the space, and it just works for them. Does that make sense? Yes — thank you, thank you. Some of the other things that come from OpenShift, not from Kubernetes, are logging, metrics, and an integrated Docker registry. Kubernetes does not have an integrated Docker registry. Because we include a Docker registry inside our cluster, we can do things that you'll be seeing later — it gives us a lot more power. The other thing that we do that doesn't come with Kubernetes — well, it's starting to come now — is routing: when a request is made to your app, actually how to route to the appropriate pod. They had services — ways to route traffic between pods — and I'll get to that in a second; I'll show you. Okay, the next one I'm going to do even faster, because these concepts are so specific to actually playing with the system that I want to come back to them while we're doing it — so don't ask too many questions if you want to get your coffee card. This is some of the stuff: in the early versions of Kubernetes you did not have users and groups — once you got into a Kubernetes cluster you could see every namespace inside it. We originally put in the users and groups, and some of that work has been pushed upstream into Kubernetes, but it's still not at the same level as the users and groups in OpenShift. So there's a bunch of things which we've pushed
upstream that aren't fully pushed upstream yet, because of negotiating with the community, so we're not adopting them yet. Right — so one example is Ingress, which you'll see in a second; it's our routing layer. We're not using Ingress yet because it's not fully pushed up. The big thing I want you to get from this slide is users and groups and authentication — that's pluggable. Kubernetes has a namespace; that's basically your sandbox. We do a materialized view of a namespace, based on the users and groups, and that is your project. So a project — which you're going to start right away — is where you're going to put all your stuff. In general it maps to an application, but not always. We're going to go through each of these pieces, so I'm going to run through them quickly now: pods; services, which handle communication between sets of pods — your web server and database are going to talk through services; and routes, which are how you manage stuff coming in from the outside world directly to your pod. Make sense? Okay. We provide a software-defined network out of the box. Kubernetes requires a software-defined network, and this is one of the things people hate setting up with Kubernetes — we give you one straight out of the box, based on Open vSwitch. It's probably not what you're going to use; you're probably going to use one of the pluggable providers, like F5 — who are the other ones we support? Cisco, there's Nuage — this is all pluggable; almost everything we do is pluggable in here. Okay, then there's the data stuff I was talking about before: persistent volumes, where a claim actually says give me that space, I need this much space. Okay, we'll cover this later. Replication controllers say how many pods should be up and running in the world. We think that people, when they write applications, want more than that. Kubernetes now has a deployment, which is based off our deployment configs, but it doesn't have all the same features that we have yet, so we haven't
switched fully to deployments. Right — so we have a deployment config, and Kubernetes has a deployment. This handles how many pods to set up, what Docker registry to look at — where am I getting the Docker image from — and, when I update, do I do a rolling deployment or an up-and-down deployment? It handles everything about deploying your Docker containers. And we also handle builds. We'll go over all this stuff later, and we'll probably come back to these pictures as we do it. Sound good? Okay, that's it for slides. You see how my team works here? I got harassed by — who, Mikhail? I got harassed by Jorge; Graham was going to stand up and interrupt me. I have a crew of international interrupters and sass-givers. Any questions before we get on to the workshop? Is it hot for everybody else in here, or is it just me? USB stick — does everybody have the oc client tools? Who has a laptop and wants to do the lab? You want to do the lab? You have it here? "Can we also do it with the web console?" Yes. You know the funny part? If you ask my team, one of the favorite things coming out of my mouth is "I hate —" and then Grant gets anxious because I say that word. So that's it. For those of you who want to watch, Grant will be doing all the lab exercises as well. Well, you're not going to do them as we go along? Yeah, I do — I like to do them after everybody's had time, so that we can talk through them. Sounds good, but you'll probably be bored for large chunks of time while we let people do what they need to do. Okay. So let's go back to one of his previous slides, right here — I'm going to leave this up for a minute. This is the lab manual we're going to be using today, which we created for you guys, so I'm just going to stand here for like 30 seconds while everyone gets this pulled up in their browser. Okay. Now, the environment we're going to be using is
running on Google Compute Engine — is that what they're calling it now? Google GCP, Google Cloud Platform? It's running on Google's cloud. It's not on AWS, it's not on Azure, but this would work the same on either of those platforms. We have spun up a machine for us to use, and everyone is going to be using the same cluster. We created user accounts for everyone. My ask — what I'm asking you to do — is to just use your own user account. You will know the username and password for everyone else's account, but please don't mess with other people's stuff. Just be a good citizen and don't hack into other people's stuff, okay? It's not fair to them; they're trying to work. So what we're going to do is: I am just going to point at you and tell you your username and your password. I have a big brain; I know what they all are. All right, you are going to be user one, and your password is user one — see how this works? Okay, user zero one. Okay, you're going to be two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen, sixteen, seventeen, eighteen, nineteen — in the red — twenty, twenty-one, twenty-two, twenty-three, twenty-four, twenty-five, twenty-six, twenty-seven, twenty-eight, twenty-nine, thirty, thirty-one — in the gray shirt — thirty-two. Okay. So this is the lab manual; let me make this bigger. Everybody works at different speeds, but I will be showing these labs for those who don't have a laptop and still want to see them. Please work at your own pace — don't try to keep up with me, and I'm not going to try to keep up with you. The technologies we're going to be using today are Docker images that we built, and we're going to be creating new Docker images using the Python language. I used to use Java, but we switched to Python, mainly because we have Graham on the team now — Graham wrote the mod_wsgi Python server — so he's forcing us to use Python a lot more. Okay, so let's go ahead. I'm going to show you how to do the first lab, which is just to install the oc client tool,
and Steve gave you these on the USB drive, but if you didn't get that, I also provide links in here with instructions. You basically want to unzip the file and then add it to your path. At the end, you should be able to type `oc version`. I am running an older version of the client tools — version 1.3 — and you're going to be running the latest, 1.4 I think. Once you log in, you'll see the Kubernetes and OpenShift versions. So that's lab one. Okay, I'm going to give everybody five minutes just to get this running on their local machine, and again, if you work faster, just keep plowing through the material. At any point, no matter where you are in the workshop, if you're stuck or have a question, just raise your hand and one of us will come and help you solve your problem, get you back up and running, or answer your question. But as a group, as a class, I am going to be setting times on things and showing them as we go along. Okay, so let's go ahead and get started on the labs — lab one. It is okay to use an older version; I wouldn't use a version older than 1.3, or 3.3. It'll probably still work even if you're using something like 1.2, but yeah, it should be fine. Okay. On what? Were you running a 64-bit version or 32-bit? Yeah, you're out of luck — are you using the web interface? Yeah, use the web interface. Well yeah, I have the smaller versions — do you have the 970 or the 980? Yeah. Hey everybody, I know I'm interrupting one more time.
Tomorrow, at the last talk of the day — which I know you're all going to be excited for — Graham is going to go into this. We're going to cover readiness checks and liveness checks very shallowly here, and Graham is going to actually address the question about databases and readiness checks in his talk tomorrow, in the last session. Okay, so look for Graham Dumpleton's OpenShift talk if you really want to dive in on readiness checks. And I always make this comment, and I'll make it again on this first lab, when you add something to your path, because I always think it's funny: on Linux and OS X you just export your new path. I provide instructions for Windows, because Microsoft makes you change your path by going into System Settings and then Advanced Settings — so changing your path is considered advanced. The Windows build is 64-bit, so it doesn't work on 32-bit; it's not limited to Windows, it's just the version of your operating system. Yeah, I mean, we could build 32-bit clients. Yeah — if your host OS is 32-bit, your VMs are going to be 32-bit. Okay, I'm going to go ahead and show the next lab. In lab two we're going to be exploring the command line and the web console — if you don't have the command line tools, you're just going to do the web console. If you have problems logging in, this is the most critical thing: you want to do `oc login https://console.devconf.pixi.io:8443`. If this is the first time, it's going to ask you to accept the security certificate — just say yes; it's a self-signed certificate, we just didn't purchase one. And then you're going to log in with your user — user zero one, or user whatever; I'm just going to use 50 — and then your password is the same. Okay. Oh, good — okay, so they found a mistake in the documentation, so let's fix that real quick. It's in the first lab — oh, PIXI, it's a Y. So let me change that real quick.
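As a preview of the readiness checks mentioned above — which this workshop only covers shallowly — here is a hedged sketch of what one looks like in a pod definition. The path, port, and image name are hypothetical, not from the lab:

```yaml
# Kubernetes will not route service traffic to this pod until the
# readiness probe succeeds, e.g. until the app can reach its database.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example/web:latest    # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz           # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5     # wait a bit after the container starts
      periodSeconds: 10          # then re-check every 10 seconds
```

This is the configuration-level answer to the "database must be up first" question: no application source change, just a probe in the pod spec.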
So someone found an error in the documentation — it was the PIXI spelling — so I'm just going to kick off a new build to fix that, and it'll be fixed in just a second; just refresh your page. It's fixed. Yeah, I fixed the documentation, and the way I fixed it, just while you guys were working, is that I actually built a new Docker image on the fly and deployed it out. That's how quickly I was able to do something like that with OpenShift — that's why I love using it. And I did a rolling deployment too, so you shouldn't have experienced any downtime while I was fixing the documentation. "Would you like to scale up the pods so that everybody gets their own web server?" Fine, we'll add five documentation servers, so it's all load balanced. There, it's coming up — now it's load balanced. All right, so back to the lab: the most critical part of that next lab is to make sure you can log in on the command line as well as on the web console. So I need to log in to the web console. To do that, I'm going to go to https://console.devconf.pixi.io:8443, and I am user 50, so I'm going to log in as — no, I'm already logged in; let me log out and log back in. User 50, and my password is user 50. Does anyone need help logging in? Is everyone able to log in? Who is not able to log in? Yeah, that's a better way of asking them. If you have a laptop and you're trying to log in, Graham and Steve will help you.
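What Grant just did — a rolling deployment of a new image, scaled out to five pods — corresponds roughly to a deployment config like the one below. This is an illustrative sketch only; the name and image are assumptions, not the actual documentation app's configuration:

```yaml
# OpenShift deployment config (the OpenShift-specific precursor to
# Kubernetes Deployments mentioned earlier in the slides).
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: docs
spec:
  replicas: 5                # five load-balanced documentation pods
  strategy:
    type: Rolling            # replace pods gradually: no downtime
  selector:
    app: docs
  template:
    metadata:
      labels:
        app: docs
    spec:
      containers:
      - name: docs
        image: example/docs:latest   # placeholder image
```

The Rolling strategy is what made the fix invisible to the audience: old pods keep serving until their replacements pass their checks.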
While they're helping you log in, I need everyone to take a break for a second, because I need to explain the application that we're going to be building today. Okay, we're going to be building a microservices-based geospatial application — a map application that's going to indicate a couple of things on the map. We're going to start with a microservices backend that will display all the national parks in the world. The front end of that is written in JavaScript using Leaflet JS and just plain old HTML, and we're going to pull that down as a Docker image that we built for this workshop, just to show how you can deploy something from Docker Hub. Once we get that deployed, we're going to add a backend system to it — a microservices approach — with an API that's going to provide the national parks, and we're going to build that Docker image from source code. And at the very end, we're going to add another microservices backend that will also map Major League Baseball stadiums — and if you don't like baseball, Graham has written one for Australian dunnies that you can use. We have a backend for flight information, and I believe we also have a backend for weather data — is that right? Airports, something like that. And there's self-discovery using Kubernetes: we're going to learn how to apply labels to the microservices backends that our front end will watch for, so it automatically displays them on the map. So that's really what we're building today — we're using geospatial 2D box map queries to display things on the map. Quick question: who does not have a GitHub account? Okay. One of the things we're going to look at today is how to change the application code, and to do that, in the lab I ask you to fork one of the projects into your own GitHub account. When you get to that point, if you don't know how to do that, just raise your hand and we'll come and show you how to fork a project in GitHub, because we want you to have this
code in your own account, so you can make changes, deploy them, and see some of the automated deployment flows we have. Okay, I realize some people may not have a GitHub account, or may never have heard of GitHub, so when you get to the deploying-Python lab, just raise your hand and we'll come help you get a GitHub account and show you how to do this. Okay. All right, so that's the app. I'm going to go ahead and show you the next lab — sorry, lab 4 — but it is important to read these labs. We have two hours, so if you really want to learn, actually take the time to read them, and I'll shut up in just a second so I'm not interrupting your reading, after I deploy the first Docker image. I'm just going to show you how this works, and then you're going to actually read the lab. So, in OpenShift we support the ability to create Docker images on the fly, or to use one that's in the hub. Now, this is also important: as you're going through the lab manual, you'll see userXXX in all of the URLs and in all of the examples. Replace XXX with your user number in every instance. So, if I make this bigger: in the labs it says to create a new project called userXXX-parks. Because I am user 50, my project is going to be called user50-parks. Okay, and I'm just going to click on Create. Now I have a project, and I'm going to click on Deploy Image — this is where you can deploy an existing Docker image from a registry. So I'm just going to click on Image Name here, paste this in — it's in the lab — and hit Enter, and it's going to pull down metadata for that Docker image from Docker Hub, and I'm going to click on Create. Okay, so that's the next lab. I'm going to give everybody probably 20 minutes to get to this point, but if you get there sooner, just keep going — work at your own pace. It'll be 20 minutes before I show the next thing on the screen, so you can actually concentrate and read. Yes — so I was asked the question
if I could explain, when I deployed an image, what that actually means. If you're not familiar with Docker, the way Docker works is that you have a Dockerfile, and a Dockerfile is almost like a descriptive language that defines what your image should look like. If I wanted to install Tomcat, for example, an example of a Dockerfile would be: FROM centos:7 — that would be my base image — then I would have yum install tomcat, and then I would say cp my WAR file into tomcat/standalone/deployments, for example. That's defining what my image should look like. Then I run what's called a docker build, I pass in my Dockerfile, and Docker creates an image based off the commands I have in my Dockerfile — and I'm giving you a very simple example that probably wouldn't work; they can get pretty complicated. When you run a docker build, it creates an image, so you have an image of what a running container should look like. You then take that image and you push it — you add it — to a Docker registry; you upload this big image to a registry. When you do an image pull, like we did with Deploy Image, you pass it the image name that you want to deploy, it downloads that image, and then it runs it — it actually puts it into an executable state — and that's where you have a Docker container. So an image is like a snapshot of what your container should look like when it's running, and a container is a running instantiation of the image. Does that make sense? You have one image, but you could have 12, or 1,200, containers running from that image. What we're doing in this lab is taking that image from Docker Hub — someone wrote a Dockerfile for it, ran a docker build, and uploaded the image — and we're running it, executing it as a container inside a Kubernetes pod. Okay, two questions. Yeah, a Docker image can be anywhere from very small to very large, depending on what your Dockerfile does. I
believe Graham's Python image that we're using is 17 megs — is that right? Something like that? Maybe larger? No idea — it's still a few hundred megs. When I created a WildFly Docker image the other day with a full JDK 1.8 stack, it was about 300 megs. "Okay, so it holds the files that you want to install?" Yes. "And it's like a prescription?" Yes, okay. Yeah — and there are two ways to run images on OpenShift. One is just to run a Docker image that you've already copied all of your files into, but once you do that, you can't change them, right? Later in the lab we're going to look at a project that's specific to OpenShift, which we call source-to-image, that actually builds a Docker image on the fly from your source code — we'll talk about it when we get to that lab. That's how I prefer to do things, with source-to-image, but a lot of times you just want to run an application that's already preconfigured — WordPress, or Subsonic if you want to stream music, or Joomla, or whatever the case may be — and if you can find a Docker image for it, you can just run that image. "Okay, so if it's a couple of hundred megs, do I understand that Docker Hub is actually hosted somewhere, by someone who has to provide all that storage?" Yes, the public Docker Hub is hosted by Docker Inc., the company behind Docker, and yes, they probably do have massive amounts of storage. Well — not really, actually, because it's a layered thing, right? If my image is 600 megs because I did FROM centos in my simple example, CentOS is stored as a layer, and everything I do on top of that is just an additive layer, so my actual image is just the difference. Yeah, it's all additive. "And when we downloaded the image to this instance of the virtual machine, does it mean that the Google computer somewhere downloaded it?" Yep. So when I deployed an image, what actually happened is that my OpenShift server, which is running in Google Compute Engine, connected to Docker Hub,
downloaded that image, and stored it in the OpenShift registry. So now it's local to me, and I can use it as much as I want without it having to re-download. Yes, it was quick because I did it last night, so it was already cached. Yes, of course — yeah, we love GitLab. In fact, our standard workshop uses GitLab, and I had to rewrite it last night to use GitHub. So yes, it works with GitLab just fine — and Bitbucket, or whatever. What Graham was saying about GitLab is that one of our other advocates works very closely with the GitLab people to make proper Docker containers, so that we can spin up GitLab in OpenShift. So in the normal workshop everybody has their own GitLab instance, and they're working against that Git repo. Is that true, or no? Am I telling another lie? One lie? One instance? Oh — there's only one instance that we usually use, and everybody shares it? Okay, so I lied. Sorry. Maybe the real rule is just: anybody that's not me likes to pick on me. So, right, we spin up one for the whole workshop and then everybody uses that. So wait — can I say one more thing, especially for the people who haven't used Docker before? Can you go to the web console? So, we're using — is this Apache? What did you put your source code in? What's the web server? It's another Python one, just to serve the content. Okay, so can you click up — I already showed this. You already showed this? So everybody knows that this is scaling, but what I want people to visualize while this is happening is: if you were using VMs — if you said, oh my gosh, I've got so much web traffic, I need to bring up another server — how long would it have taken to go to two VMs? About a second? Five seconds?
At least, because you have to boot the whole OS — you've got to wait for it to do its whole boot-up, then it's got to boot the web server, and then you've got to plug it in. So this is back to why containers are different. What you're actually doing here is not booting a whole VM; you're spinning up a binary that's already on the machine. We're not spinning up all the pieces of the operating system again — we're just spinning up the stuff we need to make another one of these instances. And that's one of the things that's awesome about containers: everything needed to run that image is already in that image. If we were doing this with RPMs, we'd have to run some other script to move the content in there and do all that other stuff. Yeah. And then the other thing is, Red Hat has a registry as well — access.redhat.com, or I think it's registry.access.redhat.com. For those of you who are using the Red Hat images, if you're a customer you'll be pulling from that registry instead. We have stickers if anybody wants them. Yes sir? Steve, are students really the thickest point of questions? Oh yeah, but there are so many questions, Steve. Here's Apache 17. Here's what? I'm sorry, I don't know — the Chroma keyboard. Okay, so here, I'll give you a coffee card for your laptop. Thank you. There's one for everybody. How many people do we have? What do you mean? You're the thickest point of questions. I know — we're here so everybody can get a coffee. Everybody gets a coffee. You get a coffee card, and you get a coffee card. Who wants a coffee? Who doesn't want a coffee card? That's easy. I have hot chocolate. You can use it for hot chocolate. Okay, now you want it?
No, it's late. We're all working our way down. I think I have enough for everybody — we'll find out; otherwise there are more people than cards, obviously. No one's getting more than one, because we're about to run out. I shouldn't have promised one to everybody. Mark, do you have any more? I guess I'm going to be buying some and expensing them. Who didn't get one yet? That one, and one. Everybody? Oh yeah, you got it. Okay — Mark has no more, though. You did not see what just happened. Mark, can you hand them out, just to make sure the people who want them get them before we run out? Thank you. Do you have one already? You got one already? Mark. See — even if you don't learn anything, you got a free coffee out of sitting here for three hours and listening to us. Oh, that's coffee — Mark thought that was a good symbol for coffee. Yes. Yes, we do. Yeah, so — you need to do the — did you do the permissions lab? Yeah, there's a permissions lab that you have to do. No, it actually works great. Okay, so now I can say one more thing, because I like to make people appreciate what they actually just did. Suppose you started with a fresh machine. How long would it have taken you to actually get mod_wsgi installed, hook it up appropriately so that it's serving web content, and make all the security patches? How long would that have taken you to do?
Days. Days. For me as a developer, I know I would have done it wrong — I could do the systems administration, but I'm always afraid I'll be the next "7 million passwords exposed on the internet." So the nice part about this is that someone who actually knows how to set this stuff up built the image, and all I had to do was run one command, and I've got a web server serving content. If you want to use Java instead of Python — oh my god, we have labs for that. We have labs for that. It's this exact same process; it's just a little bit slower for the initial build. It's only about a minute different. Yeah, but the scaling is just as fast. Okay, so I'm going to show the next thing, and I realize most of you are ahead of me, but that's fine. Once I have this Docker image deployed in Kubernetes, I have a couple of objects. Right now I have a pod and I also have a service — those are the two that are important to me right now. The pod is the running container of the Docker image, and a service, if you remember what Steve was talking about, is a mechanism internal to Kubernetes for knowing how to route traffic within the cluster. It's not a service in the sense of a web service — you can think of a Kubernetes service more like a load balancer that routes traffic internally to the cluster. But what's missing is this: we have a running pod and a running container, so we have the service right here, but it's not accessible yet from the outside internet. You haven't been able to look at your application in a web browser, for example, and that's because we need to expose that internal Kubernetes load balancer — that internal service — to the external world. That's what we call a route. This is similar to the Ingress concept in Kubernetes that you may or may not have heard about, but in OpenShift we have a specific implementation called a route. And so what I need to do
is expose this service as a route. You can do that one of two ways. In the web console, up in the top right, you can say Create Route, and give your route a name. If you wanted it on a specific hostname — say your company was called ABC — you could say: I want this at www.abc.com, as long as you own that hostname and set up the DNS records and all that. If you wanted it at www.abc.com/myapp, you would put the path as myapp, and it would route at that context. Does that make sense? And then, if you want a secure route — an SSL cert — you can set that up and upload your cert. If I click Create, it's just going to create a default route. That's how you do it in the web console. On the command line — let me make sure I'm in the right project, so I'm going to switch to user50-parks, even though I'm probably already in it — I'm going to look at the Kubernetes services. So I do `oc get services`, and I have a service called parksmap-py, because this is a Python image and I just named it with -py. If I do `oc get routes`, I can see what routes I have — I don't have any right now. So I could either click that Create button to create a route, or I could do it on the command line with `oc expose service parksmap-py` — I want to expose a service publicly, to the public internet. If I wanted to specify a hostname, I could add some flags, but I'm not going to; I'll just let it default, and that's going to create a route for me. So now if I do `oc get routes`, I have a route, and it used the default configuration for my OpenShift cluster — which I set up when I installed it — to specify where my apps should live. Then, the way this actually works: if I go back into the web console and go back to my app, you can see — let me refresh — here is my route. I clicked refresh and it was doing an Ajax call, apparently, to load it in. But — what was I going to say? Click on that link.
Yeah, oh yeah, okay. So the way it builds this URL by default is that it takes my service name, parksmap-py, then it appends my namespace, user50-parks, which is my project, and then it adds apps.devconf.pixi.io, just because that's the way I configured the server. Does that make sense? Okay. And the way this actually works is that it's based on the HTTP headers to determine which internal service to route to; that's probably getting too much into the networking side of things, but that's how it actually works. So if I click this, and if you click yours, you should see, as soon as it loads here (I'm on the conference wifi just like you, so I feel your pain), that it loads the map application in an HTML/JavaScript interface. "Does it also mean that you've got a DNS server in there?" So the way it works, if we want to start talking about DNS, is that you set up a wildcard DNS. My DNS would point apps.devconf.pixi.io to the public IP address of the Google Compute Engine server that I spun up, and I would do a wildcard DNS, so anything that goes to xxx.xxx.xxx.xxx.
whatever .apps.devconf.pixi.io goes to the same server, and then, based on the HTTP header information that I send along with the traffic, it knows where to route it appropriately. You can have everything talking on port 80. "And you did it prior to this workshop, like yesterday?" Yeah, we said Gandi is the DNS provider, and he did it on my personal server. This is my personal server that I run at my house: I have console.techdope.io, and I just set up a wildcard in my DNS wherever I purchased my hostname. Yes? "What about non-HTTP traffic?" So the question is, what about non-HTTP traffic. I don't know the full answer, but you can set up what's called a host port, is that right? So, going back in this architecture, back up to the whole-cluster view: by default we only do 80 and 443 for the routing layer right now. There's been stuff in the works to handle any kind of routing, but right now a host port says: when something comes in, you take a port on every single node, you say this port belongs to that host port, and then you can route directly into that host port. "Don't you also have to use TLS termination or something like that if you want it to be secure?" It's complicated. "And you're saying this one node will have all these ports open to the internet?" Yes, it's a super hacky solution, but it's a workaround to get it working. What that means is, even if that pod is not on this node, that port is still open on this node. Does that make sense? It's not going to look and check. So, can we go back to this? I just want to make one other point about services, because we're now covering services. The service object, which I now want to talk about in a little more depth, is nothing but a proxy and a load
balancer, okay? It's actually a firewall rule that's on every single node. Pod IP addresses are not guaranteed to stay the same: if your pod goes down and comes back up, it can have a completely different IP. The IP of a service, though, is guaranteed for the life of that service; once that IP is assigned, as long as the service is there, it will retain that IP address. Why is this important? One, for routes: we want to make sure that no matter how many pods there are, we talk to the service, not directly to a pod. But the other reason this becomes important, and we'll see this later, is that these pods could be database pods, right? They could go down and come up, they could get new IP addresses, they could be moved to other nodes, you may scale up, and you don't want to have to keep telling the web tier about the new addresses; I don't even know how you would do that for a lot of them. So instead, when you're writing your app code, between the different layers you talk to the service; you don't talk directly to the pods. If pods are part of a cluster, they can talk directly over their IPs; when you're spinning up some clustered server, they can use their IPs, because they need to talk directly to each other. But between layers in your architecture, you always go through a service. Does that make sense? So that's what a service is all about. "Sorry, say that again: can I actually assign multiple IPs in there?" So, I have no idea how to do that; this is why I like being a developer advocate, other people do the networking. So Jorge, pay attention: the question was, suppose I have multiple public IPs, how do I set that up for DNS?
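As a point of reference, a minimal Kubernetes Service definition looks roughly like this (the name and ports here are illustrative, not taken from the lab); the selector is what decides which pods sit behind the service's stable IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: parksmap-py
spec:
  selector:
    app: parksmap-py     # pods carrying this label receive the traffic
  ports:
    - port: 8080         # stable port on the service's virtual IP
      targetPort: 8080   # port the container actually listens on
```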
So what's going on in this view is more like what you're doing in the code; I'll just follow what you're doing there. "Yeah, this is just my question: the pod is specified to expose a port as 80, and when it goes the other way, is it specified to be exposed as well?" I'm sure it will be. It should probably be on the graph, though I don't know if the graph shows which ports are exposed. Now, the service part I can totally figure out. Let's say your pod exposes ten ports, and you only want a service for one of them; the others are just for internal communication. Then you can expose just that one through the service, and not the opposite way around, if that makes sense. Okay, so the next lab that I'm going to walk through is role-based access. This lab is important because of the way that this application we architected works. Yes sir? Okay, yeah, I will leave the labs up at the same URL; the environment will not be up, but you can spin up your own environment with oc cluster up or something. Oh yeah, are you going to be here later today? Stop by, we'll have some O'Reilly books later. But it's also available for free. "Is it an O'Reilly book?" Yes, it's one of those. I've got it in front, so maybe. "Wait, how long ago?" This year. "Was it blue?" Yeah. Okay, then that's the book.
So that book will teach you how to spin up an all-in-one Vagrant machine on your box, and you can do all these same things, except, are we doing a GitHub hook? Yes. So you can't do the GitHub hook, since GitHub can't call back into your Vagrant box, but for all the rest of the stuff, you can use this lab. Yeah, so later today we will have this O'Reilly book that Grant and I wrote, which we can give you if you want, or you can download the ebook at this URL. Okay, so the way we architected this application is that we have a front end that we deployed as a Docker image. That front end is going to self-discover back-end microservices that we add. The way it does self-discovery is by looking at labels in Kubernetes. So we're going to attach a label to our back ends just to say "this is a back end". And in order for our front-end application to actually read Kubernetes labels, if this makes sense, and it will when you do the lab, we need to grant it permission to do that. That's what the roles and permissions lab is: we're just giving view access into our project so the front end can look at the labels. You could also use this if you had another developer on your team and you wanted them to have access to your project to help you work on it; you could grant permissions to different users that way. So that's what we're doing in this lab, and I'll just run through it real quick. I have the commands here; I'm not going to read them, I'm just going to show you how to do it. We basically want to add a policy, with the -z flag, that grants view access. So I'm just going to paste that in, and now my front-end application has access to read the Kubernetes labels on things deployed in my project. And then I'm going to move on and do the next one real quick, which is deploying Python code; I'm going to take probably five minutes to talk through this. Just one question: can I do it somehow in the web console? Yes, you can.
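The paste-in command being described is, roughly, the following (a sketch assuming the pod runs as the project's default service account and the project is named user50-parks; the -z flag targets a service account rather than a regular user):

```shell
# Let the default service account (used by the front-end pod)
# view objects, and therefore labels, in the project
oc policy add-role-to-user view -z default -n user50-parks
```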
On the web console, Grant can come and help you with this, but you would click on the membership management button here next to your project, and you can add permissions on that screen as well. Thank you. Okay. So now we have a front end deployed that you should have working in a web browser. Now we need to add a back end. The first back end we're going to add, and we're going to add two today, is a Python microservice that uses MongoDB. We're going to build it from source code using the Source-to-Image project. So, halfway through this lab I provide a Git repository. Click on that link, and it opens up in GitHub. I am logged in; you will need to log into your GitHub account or create one. You will then click this Fork button over here on the right. When you get to this point, if you don't know how, I'll come and help you. Click on Fork; it's going to ask you which account you want to fork it into. I have a lot of organizations I belong to; just select yours, you'll probably have just one. Click that button, and I already have it forked. Maybe I don't. And this creates a copy; oh, I copied it into a different organization, that's okay. This creates a copy of that project, which we call a fork in the open-source world, in your account on GitHub. So then you can make changes to the source code. Once you make changes to the source code, you can submit them; you've probably heard of a pull request, and that's how you submit one. You would fork the repo (I almost said clone), make your code changes, and then, once you have something that's beneficial to the upstream project that you forked, you can create a pull request. But we're not going to create a pull request today; we're just going to modify the code and redeploy it. So now I have a copy of this project in my GitHub account, and I'm going to clone it by copying the URL that's displayed here. Go back into my OpenShift console.
And I'm going to click on Add to Project. We want to add a Python application to our project, so I'm going to select Python. On the next page, we've changed the UI to give you more flexibility in choosing which version of the runtime you want to use; you can list all the versions that are currently supported. Let's go ahead and use the latest version and click Select. Give it a name; in the labs it says to name it nationalparks, so that's what I'm going to do. Then you paste in your Git repository URL. Make sure this is the correct Git repository URL for the repo that you forked, okay? So it should be github.com/your-username/nationalparks.py, and make sure it has the .git on the end. This is one of the biggest gotchas for most people starting out with GitHub. If I go back to my GitHub page, you would think this is the URL for my GitHub repository. It is not; it's the URL for the web interface to my project. You get the actual Git repository URL by clicking Clone or download, and it is this URL. It's basically the same, just with .git on the end. Does that make sense? Okay. Once you have that, just click Create. What OpenShift is actually doing under the covers, look at the logs, is this: I selected a base image of Python 3.5 and provided a Git repository. You can think of it as writing a Dockerfile on the fly, using FROM python:3.5. It spins up a separate Docker container and runs my code through a build process; in Python, that's a build to resolve dependencies, while in Java it would run a Maven or Gradle build. It takes the artifact of my source-code build from that build container, matches it up with my base image, and creates a new Docker image on the fly for me, based on my source code. Then it pushes that Docker image to my OpenShift Docker registry.
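On the command line, the same Source-to-Image build can be started with oc new-app's builder-image~source-repo syntax (a sketch; the repository URL below is a placeholder for your own fork):

```shell
# Combine the Python 3.5 builder image with the forked source repo;
# S2I builds a new application image on the fly
oc new-app python:3.5~https://github.com/<your-username>/nationalparks.py.git \
  --name=nationalparks

# Follow the build as it resolves dependencies and pushes the image
oc logs -f bc/nationalparks
```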
So I have that image in my registry, and then we're going to pull the image down and deploy it as a running container. All of that happens automatically. You basically just provide source code, and like 20 seconds later you have a running Docker container in a pod. Okay, so now it's pushing the layers. As Steve was saying about the union filesystem, each layer is an additive change, so the first time you deploy a Source-to-Image project it has to push all of the layers; it does take 30 seconds or so. The next time I do this, it will only push the changed layers, so it'll be very quick, okay? And it's almost done. You will see these "changing permissions" errors; that's expected, just ignore them. If you go back to your overview page, you will have a National Parks back-end service and a front-end service. After you do that, we are going to add a MongoDB database and link them together, and we'll circle back at that point. I'm going to give everybody 20 more minutes to do this, okay? To get to this point. If you have any problems at all, just raise your hand and I'll come and walk you through it. Yes, there's a little Source-to-Image trick; I think that's better, but you know, there's no good documentation on how to do it. Okay, cool. So, we can do it; you can do it. I don't know how to do it, I just thought it could be a good idea. So I think Grant showed it before; are you going to show it? No, I don't think you have to do it there either. Okay, so we're going to slow down on the resources. Did you get it? Yeah. But this one is already admin, so then you select the role, and then here, yes. So yes, I'm selecting the admin, Edit Membership. Yeah, so just select Add Role. Which one do we need to select? So here, Add Another Role, and select which role you want to add.
So in this case it should be edit, but you don't have to add it to the user; you have to add it to the service account there. Yeah, for the person. So you need to go to the service account. Okay, so, service account? Service account. Which service account is it? It's whatever you're using to look at the service. I think it's the default. The default? Yeah. I don't have the default one. It should be here. You have to make one, because he doesn't see it. No, no, the default one should be there. I don't have the default one. You don't have any? I do have a builder. I do have a deployer. This one should be created automatically; it should appear automatically. It's not there. No, no, no. That was the issue we had last night, and I think we added the permissions, yeah. He needs to do it from the console. You've got to add the service account. This is not there by default. I haven't done anything. I know, I'm explaining, don't worry. He cannot run any command-line tools, so the command where we actually created that service account has not been executed yet. He needs to first create a service account, and then he needs to give it the permissions. No. Dude, both of them do not have the service account. You want to argue with reality? That's probably a problem on the laptop or on the platform; those three service accounts come by default whenever you create a project. The fourth service account should always be in the project. Okay, then you need to file a PR to teach him how to create a service account. Okay, I can show you how to create a service account, but that's a problem in the labs; the service account should be there. Then the lab deleted it? I don't know, we need to find out what happened. Are you talking about the REST one? About what? Are you talking about the one used to get access to the REST API?
The way that Merrick set up this environment, they weren't created by default; that's why one of the modules... Okay, so just create a new service account here, using default as the name, and then you need to add the role, I think it's view or edit. Oh, it's up there in the middle. So, Jorge, I have a project where I didn't do anything, and default is not there. It should be, so... Okay, so it's a problem with OpenShift. No, it's just how Merrick set up the environment. No, it's not. Oh, maybe it is. Oh, so it's Merrick's script? You see how our team meetings go? Do you want to comment on that? This is a typical team meeting. So then Merrick's deployer doesn't create that account. We'll come and discuss this later. I don't create it. When he does the roadshow, it is not scripted. Do you create it? We'll discuss this later. There will be words. These words. Because this should help you. Okay, so it should be per project, and the projects people created themselves manually. So, did you not get to what you needed to do? You still haven't created a service account. There are so many things... He still didn't help you? I'll come and help you. Who got you the coffee tickets? And who is going to help you? I don't know how to do it, so we'll do this together, ready? So let's figure out how to make a new one. Can we not do that from the web interface? Ah! So this is what he's doing: he's making a new service account here. Do you see this? Yeah. So you need to do user-whatever, dash parks, slash... I'm going to make a new one, and I'm going to try it to see if it works for me too: user50-parks. And I'm going to give it a... So that's asking: what namespace do we want to create this user in? Slash. Slash? And I'm going to say Steve. Yeah. But you're doing default. The role, it looks like the role is view. Right where you... this is where you select the project. Oh, okay.
So you choose user50-parks. Actually, wait, take it back. Everybody else can ignore this if you already did it; just keep going. So what you should probably do here is Steve. Yeah, and spell my name correctly, because that is an insult. Then pick user50-parks. Yes. And then you pick View, at the bottom, and then Add. And that should look like the one I have up on the screen. Yes, it does. Okay, now the next step is... oh, I think that's it; I think we're done. I don't think we need more than that, just the view. Just the view, because there are no other users. Thank you. You've now created a service account that has view, and only view, privileges. Okay. So does everybody feel like they understand service accounts, just to make sure? No. Okay, well, not at a deep level; do you see why I tell Grant? Does everybody understand them at a very basic level? Because I get a lot of questions about this from developers: how do I automate things happening inside of OpenShift without a human doing it? Like, I want something to happen automatically, and I don't want to have to log in. You create a service account and then use that service account to do stuff. You have a problem? Okay. This is a bad idea for an intro lab. Yeah, yeah. You can have more than a quick question. You know what, let's just... You already made the new thing, though? It failed before? Go to the deployment. This one? Yeah. Now they changed the... and then say Deploy; let's just force it. It may have failed so many times that it says "I'm giving up". So you're saying: I changed stuff, go to Deploy. So now we're about to do the overview. Yeah, or you can just use this word. And it should be forcing the new deployment. Okay, that message won't go all the way down to you. Just a few seconds. Yeah, so let's click on it then. Did it work for you now? Yeah.
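For anyone who hit the same missing-service-account problem, the command-line equivalent of what was just clicked through is approximately this (assuming the account is called default and the project is user50-parks):

```shell
# Recreate the service account that the environment was missing
oc create serviceaccount default -n user50-parks

# Grant it view-only access to the project
oc policy add-role-to-user view -z default -n user50-parks
```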
No, I'm talking... yeah, I'm still rebuilding the application. So go to the running one. Is there a way? It's a system one. No, it's not a system one. So let's go to the defaults. It's missing the output. You're just going to do this one over there. Can I ask one more question and interrupt everybody? Does everybody understand the concept of labels and selectors now? Never mind, they ignored me. What else did I... Why do you do that? You changed the labels? Yeah, I do. Gosh, this is such a bad way of doing it. No, not that one either. That one. I'll tell them. He's right here. Not only that, but then at the end you can do the whole thing, I know. So, just one other thing that I need to say: service accounts are actually a really advanced use case. I've done a ton of development on OpenShift, and I've never touched a service account before, okay? I don't want you to think that you normally have to go make a service account and do all this stuff. The only reason we have one is because Jorge likes to show really exciting things. So this app, in the end, is very exciting to look at, but it doesn't show you a typical development pattern. In general, you are never going to touch a service account, okay? Unless you're doing something very advanced. Does that make sense? Because I think people are like, what? This OpenShift development is so hard. This is not normally what you would do. Normally, you would say: here's my source code, or here's my Docker image, now go run it. And that would work; you don't have to create any special accounts or anything like that, okay? I don't want you going home and thinking, this is the worst.
Why do I have service accounts and permissions, and how do I know which one to use, and do I have to do that in my app? Please don't think that, and I'll have words with Jorge later. I had words with you when you were making the workshop, and I told you this was too complicated. The thing is, he's Spanish and I'm a North American Jew, so the words will be loud, with a lot of discussion and hand-waving, and then we'll hug at the end and kiss each other. Love, this time. "Hello, my name is Inigo Montoya. You killed my father, prepare to die." "Hello, my name is Jorge Morales. You dissed my app, prepare to die." Actually, there's someone right in front. One more one? You want a question? You want to do it? There's someone else who wanted the water. Do you want a coffee or anything? Way back. What do you want, a latte? Is it a tea ticket? No, maybe it's a latte. What am I going to get for myself? I need some white chocolate. I had the dark chocolate, and actually the hot chocolate itself was delicious, but I'm going to bring two, because I'm going to get a decaf mocha, I mean a decaf cappuccino, and then I'll add hot chocolate. So you just want a latte? Mid-size. Okay: "It created a route, and I didn't name the route. There was also a user, but I don't mind if there's a number. Is that right?" So he asked a question that probably other people are wondering about as well. If you haven't got to this point yet: you will notice that when you run an S2I build with the Python code, it automatically creates a route, whereas before we had to create one. Why is that? When you deploy an image from Docker Hub, we don't add a route by default; we don't necessarily trust that the image you're pulling down from Docker Hub is legitimate, so we want to give you time to look at it and then manually expose it when you're ready.
When you did the Source-to-Image build, it did create a default route, based on the configuration of the environment, which uses your user and your project name. If you want to change that, you would just delete or edit that route by clicking Applications, then Routes, in the web UI, and set the hostname that you wanted. "With S2I, it is always created?" Yes. And if you want to change it: when you start your Source-to-Image build from Add to Project, you can click here and even deselect creating the route, and you can set the target port, you can change things. Okay, it's been 20 minutes, and I actually timed it this time. So now what I'm going to do is add a database to our application: the MongoDB database. I'm going to go to my web console, go into my project, and I have two things in my project now: the front end, which we deployed from a Docker image, and this National Parks back end, which we deployed from source code. So I want to add a MongoDB database. I'm going to click Add to Project, and you can do this one of two ways: you can come down here and click on Data Stores to list all of the data stores, or you can just filter on "mongodb". We're going to use the MongoDB ephemeral data store for this, so I'll select that. You can set a memory limit, and you can give it some names for the connection: we're just going to use mongodb for the username, mongodb for the password, and mongodb for the database, just to keep things easy. Okay. You can select a different version if you want, but just use 3.2, and I'm going to click Create. So what's happening is that it's pulling a MongoDB Docker image, and it's going to run it as a container inside a Kubernetes pod. At the end of this we'll have three things: a MongoDB database, which is done, up and running; the back-end service; and the front-end service.
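The same ephemeral MongoDB can be instantiated from the command line; this is a sketch using the standard mongodb-ephemeral template's parameter names, which you can double-check with oc process --parameters:

```shell
# Deploy an ephemeral (non-persistent) MongoDB with easy credentials
oc new-app mongodb-ephemeral \
  -p MONGODB_USER=mongodb \
  -p MONGODB_PASSWORD=mongodb \
  -p MONGODB_DATABASE=mongodb
```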
So now we need to link this MongoDB database to our back-end service. You can do that one of two ways: you can click on Group Service, or you can click on this little chain icon to link things together. I'm just going to click that; it's the same UI. You can then choose a service you want to link to this; I'm going to select MongoDB and click OK. So now these two things are grouped in the same project. Okay. The next thing we want to do is add the authentication environment variables to our back-end service. You can do that on the command line, which is what I provide in the labs, or via the web UI, and just to mix it up I'm going to show how to do it in the web UI. So I'm going to click on Applications, Deployments, and I'm going to modify the national parks deployment. If you remember back when Steve was talking, you may have heard something about the "truth" of what the deployment should look like, and how the cluster fixes itself if it gets out of whack. What we're going to do is modify that deployment config, the configuration that defines the truth of my application deployment. Okay. So I'm going to click on Environment, and I'm going to add MONGODB_USER with the value mongodb, MONGODB_PASSWORD with the value mongodb, and MONGODB_DATABASE with the value mongodb. I'm going to do this really quickly: I'll click Save and then quickly go to the overview, and you'll see that, because I changed the truth, the cluster is going to correct itself. Okay, so I'll do this pretty quickly: click Save, go to Overview, and you can see that it's doing a rolling update of everything that's no longer valid. In this case it's just redeploying that national parks application, setting the environment variables in the container while it's starting up. Okay. So now, if I look inside of this container, I'm just going to show you something that's not in the lab.
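The CLI version of those web-console environment edits is a single command against the deployment config; because it changes the "truth", it triggers the same rolling redeploy (the deployment name nationalparks is assumed from this walkthrough):

```shell
# Add the database credentials to the back-end deployment config;
# OpenShift rolls out a new deployment automatically
oc set env dc/nationalparks \
  MONGODB_USER=mongodb \
  MONGODB_PASSWORD=mongodb \
  MONGODB_DATABASE=mongodb
```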
I'm going to click on the actual pod. One of the things you can do is open up a terminal inside the web UI, so I'm going to click on Terminal, and now I am connected to that back-end container via a terminal session. If I do env and grep for MONGODB, we can see that when I linked these two things together, when I grouped them, it added the port, the service host, and the IP address, but it didn't add the credentials; that's why I set MONGODB_USER and the rest as environment variables, and now you can see they're set inside the container after the redeploy. Now, after we do that, if we go back to our mapping application and hit refresh, nothing happens. Why? We still have a few other steps to do, and this is in the labs. So right now we just have a database running; the next thing we have to do is actually load the data, and that URL is here. This is a REST endpoint that will display all of the data in the database. Oops, let me copy that again, and I'll just walk you through it here. This /ws/data/all endpoint will list all the information in the database, and there's nothing in there right now. So we need to load the geospatial data into the database, and to do that there's another endpoint, /ws/data/load. I'm just going to go over here and call that load endpoint, and it said it inserted 2,740 records into the database. So now there are 2,740 national parks in the database. And if we refresh this, this is all the data that's now loaded in the database. "So the web application actually had it built in somewhere, and it can load its schema by itself?" Yes, it loads the data into the database and then uses it from the database. Okay. Now there's one last thing to do, because we have data in the database, the back end deployed, and the front end deployed; let's refresh our map, and it still doesn't work. Why is that? And this comes back to what Steve was saying.
Generally this would just work, but we've added a service account in this lab because, by default, the app doesn't have permission to see the labels in Kubernetes. That's why we added the service account. So now I'm going to label my back end as being a back end, okay? And then the front end will self-discover it. Is that way too complicated? Maybe if I do it, it'll make sense. Again, this is not how you would normally build your app. Yeah, you generally know all the back ends that you want to use, and you just connect them and put them together. So this is the command I'm running, and you can do this in the web UI as well: I'm going to run the oc label command on the national parks route, and I'm going to add a label named type with the value parksmap-backend. Okay? That's all I'm doing; hit Enter there. Basically, go to the route, it'll say Labels, and you just put that label in there. So I'm going to Applications, Routes in the web UI, and click on national parks. And is it Annotations? No, you have to go up to Actions. Edit? Edit, yeah; you're going to have to actually edit the YAML. But that's fine. You could also have done it when you... We haven't shown the YAML yet, so go back and show it just for a second. This is important, even for those of you not using the web UI, so pay attention. Here's the label I added, so it is displayed in the web UI. And then if I edit the YAML inside... remember we said "the truth"? That's what the truth looks like for the route. So when we set up the route, there's the host; that's the URL that we're going to go to. And what is it pointing to? All the rest of this stuff is metadata. It points to the service, national parks. We actually have the ability to split the route between multiple services, but in this case we're saying all of it goes there, on this port. And then it's defining the truth. So you could actually...
If you wanted to, rather than doing that whole web interface step or running oc expose, you could save this file on your machine and then resubmit it on a totally different cluster, or to the same cluster, and as long as that service was there, it would just work. Does that make sense? So you could actually spit out the configuration, there's a command line tool to do this, you can spit out this truth about your entire project, move to a different machine, and as long as that machine can see the same Docker images or the same git repos, you could just suck the truth back into that cluster, into the new project, and it'll basically spin it all up. Okay. So here are the labels you'll need to add if you're doing it via the web UI: just add type and parksmap-backend. All right. So now, let's... Oh, I didn't even have to refresh, it just self-discovered. And now here are all the parks. We're actually using a technique called cluster grouping. So if I zoom out here and come to, like, where I live in the eastern United States, there are 456 national parks. If we weren't using cluster grouping, it would get out of control pretty quickly, with blue points all over the map, because there are a lot of national parks. So let's go back over here to Europe. Does anybody do mapping stuff like this? If anybody's thinking about doing mapping stuff like this, remember how many points did we put into the database? 2,740, right? Putting 2,740 dots on the map basically makes the map unusable. And if you go much above that, you're also chewing up a lot of memory in the browser, and the browser starts to actually become unusable. So the reason we did this is advanced visualization techniques. So there are no public national parks in, like, the Republic here? A couple of them? That's just the data we loaded. Yeah, again, blame Jorge for saying that was the official national parks list.
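Exporting the "truth" and replaying it elsewhere can be sketched like this; the resource name matches this lab, but the target project is hypothetical:

```shell
# Capture the route's definition (its "truth") as YAML
oc get route nationalparks -o yaml > route.yaml

# ...later, against another project or cluster where the
# backing service also exists, replay it:
oc apply -f route.yaml
```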
Yeah, it's just the data we loaded. Like, we did this ourselves. And if you click on one, you can see the name. So the way this actually works, if you want to dig into it, it's very simple source code. You can think of the browser viewport as a box, right? And the corners of the box have coordinates, longitude and latitude. Every time I zoom in or zoom out or drag the map, the coordinates change. So the backend does a query, and we pass in those coordinates. It does a 2D spatial query in MongoDB based on those four coordinates and says, give me everything within these bounds. And it gives us the longitude and latitude points back, along with the name, via JSON. And then we just pin it on the map. I know it sounds complicated, but it is actually extremely simple to do something like that. Just look at the source code. I recently wrote this app and I know nothing about spatial stuff, and even I can do it. It's like 20 lines of code. Yeah, the JavaScript is like 5 lines of code. It's really easy to do this type of stuff. So how are people's heads feeling? Are you over full? Or can you take in more? I'd like to talk a little bit more about labels, because it's really important to understand labels in Kubernetes. Can you talk a little bit about it also? Okay, can I take the machine for a second? So this is why labels are important, and it's going to reflect back on that whole truth thing and tie a bunch of stuff together. Remember when we talked about services, and that a service fronts a certain set of pods? How does it know which pods to actually route to? The way that actually happens is through labels and selectors. So let's go to Services. Right, this was our original parksmap-py, right? That was our frontend. So that is the selector for that service.
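The bounding-box query described above might look roughly like this in the mongo shell. The database, collection, and field names here are assumptions for illustration, not the app's actual schema:

```shell
# Hypothetical sketch of the 2D spatial query the backend runs,
# using the four corner coordinates of the visible map.
mongo parksdb --eval '
  db.parks.find({
    pos: { $geoWithin: { $box: [ [ -90.0, 30.0 ],     // south-west corner
                                 [ -70.0, 45.0 ] ] } } // north-east corner
  }).forEach(printjson)
'
```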
So that service says: any time there is a pod that comes up with this label, I am going to route traffic to it. It doesn't know anything about what's in it, it's just saying, any time there is something with this label, I am going to route to it, and I'm going to route to port 8080. Whatever comes into the service on port 8080, I'm going to send to port 8080 on the pod, no matter what it is. Right? The same thing also... we didn't talk about deployments, I talked a little bit about deployment configs. Did you go over deployment configs while I was getting the coffee? I talked about it. If we go back to the deployment config for that parksmap-py, you'll see it also has selectors. Right? And it's saying, look, I have to have one replica up, which means I have to be able to find at least one pod that has these labels on it. Does that make sense? So Mikhail just came back. Mikhail, I have a question for you. I know, but we don't talk about replication controllers here, because we hide them in OpenShift. Right? Now you just made my life harder. Do you remember back in my diagram where I said replication controllers, and above them there was the deployment config? Replication controllers, all they did was say how many are supposed to be up. We think most people need more than that, so we wrap the replication controller with a deployment config. Mikhail, being the engineer, had to go into the low-level implementation of what that wrapping means, which is: when I say this goes to three, it then goes to the replication controller that it wraps and changes the three down there. But I was trying not to put so much information in your head, because there's only so much information people can hold in one session. Whether you have three hours or multiple days, people's heads only have so much space.
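A service's selector looks roughly like this in YAML. The names and labels here are illustrative, not the lab's exact manifest:

```yaml
# Illustrative Service manifest -- names and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: parksmap
spec:
  selector:
    app: parksmap        # route traffic to any pod carrying this label
  ports:
  - port: 8080           # port on the service
    targetPort: 8080     # port on the matching pods
```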
So does everybody get, though, that this is basically saying: it's looking for pods that have these labels? Now I did have a question for Mikhail. If you put two labels here, does that mean both labels have to be on the pod? Okay, so this is an AND condition, not an OR, which I never knew; I was always confused about that, but now I know. So the pod actually has to have both of those labels, otherwise it doesn't count. Now, this is where it gets... does everybody get what I've said so far? Are we good? Okay, so everybody got what I've done so far. Thank you for putting up with my shushing, just because I don't want... yeah. Okay, so now I'm going to go back here and I'm going to spin up multiple replicas. Did I click it? MSI makes very bad trackpads. So where's parks... okay. So now I've got three replicas up, right? Three pods running. So let's say we're starting to see error messages from one of these pods. Things have gone terribly wrong, something's wrong in one of the pods. If you were running in a normal scenario, or you were running Docker containers, you'd actually have to take the Docker container down, and you would lose everything that was happening in it. We don't have to do that here. So we'll go to the pods and we'll say this one, because it has too many letter T's in it, is the one that's causing problems. And we're going to edit the YAML. So remember, there are our labels for those pods. Right? These are the two that the deployment config made, and this is the unique one. So I'm going to go change the parksmap label here... can I use capital letters? Okay. So we're saying that one's bad, I want to investigate it. But I don't want to keep putting traffic to it, because it's causing problems. So watch what happens here if I go quickly to the overview... and I can scroll, I have to tap. I should just hook up a mouse.
Okay? So if I go down here... now it's too late, it was too fast. There are still three running. But if I go down here, there's a pod running all by itself, not connected to a deployment config. Because it no longer has the... oh, actually, we changed the wrong thing. It's still going to get routed to by the service. Why? You remember what the service selector was? It was based off the deployment label, and I didn't change that one, did I? No. So that was the wrong way to do it. But let's pretend I had done it the right way. If I had done it the right way and actually changed the deployment label, all the pods would still be running, a new one would have been spun up, they'd all be doing what they need to do, and this one would still be up and running, no problem, the one that was having all the errors. But no traffic is being routed to it anymore, and it doesn't count as a replica. So this was part of that automatic self-healing, right? As soon as I changed that label, the truth of the world was that there were no longer three pods running with the label that it needed to see. So the deployment config said, the world doesn't match the truth, I'm going to fix that. And in terms of the routing, the service would have said: this pod no longer matches the truth, I don't route traffic to it anymore. And how fast was that? All you had to do to make that change was change a label. So one of the really powerful things about Kubernetes is selectors and labels, because they let you reconfigure your entire app on the fly, and non-permanently too. Because all I have to do now is edit the YAML again... turns out it was a false alarm, it actually was good... I save, go to the overview, tap, and you can see it's scaling down now. So it put that one back into the deployment config again, and it's getting rid of the most recent one: I've got too many, that doesn't match the world, I need to scale it back down.
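The quarantine trick above can be sketched from the command line too. The pod name and label values below are hypothetical:

```shell
# Overwriting the label the selectors match on pulls the pod out of
# the service AND out of the replica count, so a fresh replacement is
# spun up while the suspect pod stays alive for debugging.
oc label pod parksmap-1-abcde app=parksmap-debug --overwrite

# False alarm? Put it back, and the extra replica gets scaled away.
oc label pod parksmap-1-abcde app=parksmap --overwrite
```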
So the real point of me showing you this is how labels and selectors work, and why they're really powerful in Kubernetes. Did everybody get that? That was the take-home from it. It's not read-only: the image itself is immutable, the container is not. Remember in that diagram... have you used AWS? Have you used EC2 on AWS? Okay, never mind. When you use an EC2 image, you can make changes and you can put files in there, but when you shut it down, those changes aren't persisted. So what's happening, if I go back to this one where we look at Docker again... can you see that? The top layer is writable. So when you spin up the container, that's actually a writable layer. You can write and do all sorts of stuff inside of the container, as long as you don't exceed the file space or whatever. But as soon as you shut that container down, or if you spin up a new one, those changes will not be in the new one, or when it comes back up. No. So remember when I was talking about persistent storage? So it depends... can you go to the deployment config for that? For Mongo? Here's the mount. So this one, Jorge, is using an emptyDir, right? One of the mount types for Kubernetes is what's called an emptyDir, which means this data directory is going to be empty when I start, and I'm going to just grab it off the host volume, some disk off the node. If you put data in an emptyDir, that data persists as long as you don't delete the pod. So the pod can go down and come back up, and the data will still be there. I've done that on my own: I went into Docker and I killed the actual pod, and when it came back up, it still saw its own data. So this is basically saying: we're going to mount a host volume called mongodb-data, and on the container it's going to go to /var/lib/mongodb/data, and it's an emptyDir.
When we added MongoDB, we could have done it differently, meaning we have multiple versions of our database. We have mongodb-ephemeral, which uses the emptyDir concept, so the storage goes away when you destroy the pod. We could have used mongodb-persistent, and that would have allowed me to delete the pod but keep my data around. Because remember persistent volumes and persistent volume claims? This would make a persistent volume claim to something that will stay around. Make a persistent volume claim... that would be a different thing than just mapping to... It still does the same thing under the hood. So if we go back to the deployment config again, you'll see down here under volumes it'll say, I've mounted this PVC at this place, and then when you go back up to the specific mount, /var/lib/mongodb/data will actually be pointing to wherever that PVC was mounted. Does that make sense? So it just depends on what the cluster makes available to you. And when you start doing a lot of stuff with databases and you care about them staying up, you get into stateful sets, which I'm not going to talk about today. So it depends... if you don't mount a directory, if you don't make storage for it, it's going to write into that writable layer in the pod, and when the pod goes away, it goes away. Does that make sense? You should really be writing all your logs to standard out. So wait, I was going to answer that question: writing to standard out is the standard for Docker containers, but I'll tell you why. Because what you want is a central log drain. We're only running three or four containers here. Suppose I have 50 Mongo containers and 100 Apache containers and 100 Python containers: it's impossible for you to actually go and read the logs in every single container.
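The difference between the two variants shows up in the volume section of the deployment config. This snippet is illustrative; the names are assumptions, not the lab's exact manifest:

```yaml
# Illustrative DeploymentConfig fragment -- names are assumptions.
spec:
  template:
    spec:
      containers:
      - name: mongodb
        volumeMounts:
        - name: mongodb-data
          mountPath: /var/lib/mongodb/data
      volumes:
      - name: mongodb-data
        persistentVolumeClaim:
          claimName: mongodb   # survives pod deletion;
                               # the ephemeral variant has `emptyDir: {}` here instead
```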
So if you put stuff to standard out, what happens is the framework grabs what's coming from standard out, and you have built-in logging, though it could be something else that you plug in. It takes those and puts them into... in our case it's an ELK stack? EFK. Oh, EFK, we're using Fluentd rather than Logstash? Okay, it's EFK, I don't know how to pronounce that as opposed to ELK. That's what shows up in these logs as well. Yeah, I was unable to have it on the web page, so that was... So what happens is this gets sent to the logging framework? Yeah, exactly. This would not make sense if there was no other place to put it, except for small applications, because you want to be able to search it and save it and do all that other stuff with it. I mean, there's also nothing to stop you from doing a PVC, mounting it, and then with Java applications you could say, I'm doing Log4j, and define a logging output to that persistent volume claim as well. Okay, there's one other thing I want to show you. We have 30 minutes left, and then I'll just stop talking, and you can leave as you finish, or you can just have quiet time to work, and we will help you finish the labs. And this is the point where you will all hate us both, because we basically do everything we've done today in one command, and that's called application templates. So we support the concept of a template: once you get things running the way you want them to run, you can create a snapshot, a template, of that, and then developers can just deploy the template and it brings everything up the way it's supposed to be. That's useful if you have a development stack and someone new on your team: instead of them spending time getting everything set up, their servers configured, and their source code repositories compiling, they can just do it with one command.
And so that's what we're going to do: we're going to add another backend from a template. Templates are written in JSON or YAML, and you can do this on the web or on the command line. On the command line you would basically say oc create -f and a URL; I'm just passing in a JSON template that defines the application. So now that I've created this template, it's available for me to use. If I go back to the web UI and click Add to Project and search, or if I go to Uncategorized, it should show up here... maybe this is where it's at... mlbparks. So it's added to the web console as well. As an admin you could add this template, and then a developer could just come in, click Select, and it would deploy. So I'll just do that from here, and I can give it labels, but look at this, it already has... these are the images it uses... let's go back up... it already has the database name, all that stuff. It already goes in and sets the environment variables for everything, and it lets you change them if you want to. Yep, and it'll have the labels. So if we wanted to do that on the command line, I'd just do oc new-app... let me clear this and paste it... oc new-app with mlbparks-py, that's the name of the template that's available in the system now. I'm just specifying the application name, the git source code, and the reference. Hit enter, and it goes off and creates everything, including the Mongo user account, links it up, and deploys it. So if I go back to my web console and go to the overview page, we'll see it spinning up down at the bottom... here's mlbparks. It's waiting for the database; once the database comes up, it will deploy the app and marry them together, and then we'll reload the web UI in just a second, once it finishes deploying. Yeah, so the build's been running for 32 seconds, so let's look at the log so we can see when it's done here, and
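The template workflow above can be sketched in two commands. The URL, template name, and parameter names below are hypothetical stand-ins:

```shell
# 1. Register the template in the current project.
oc create -f https://example.com/templates/mlbparks-template.json

# 2. Instantiate it; the parameter name here is illustrative.
oc new-app mlbparks-py \
  --name=mlbparks \
  -p SOURCE_REPOSITORY_URL=https://github.com/example/mlbparks.git
```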
it's pushed 5 of 10 layers. So this is the first time we've deployed this backend, so it has to push every single layer, each layer is new. So that's done, we go back to the overview page... did you talk about these before? So it's spinning this up and it's waiting for it to become ready. We have a readiness probe, and if you're interested in readiness probes and liveness checks, Graham is giving a talk tomorrow on that, which you can check out. So we're just waiting for that to say it's ready. This is creating the database schema and everything for major league parks. Now it's ready, and it has that label, so if we go back to our map visualizer, and this polls every 15 seconds or something like that, it'll show up here in a second. So now we have two things we can display: national parks or major league baseball stadiums. Let's say I don't want to look at national parks, I'll just deselect that, and here are all the major league baseball stadiums in the United States. And if we wanted, we could switch the map to grayscale, and maybe we don't want the stadiums, we want national parks; you can switch between them like that, with multiple microservice backends. It would actually have been possible to do all of this in one file, the entire application. And it's not just good for new devs, it's also good for moving from development to QA to prod. Do you understand why that would be? The developer can build it like this, then hand over the template: hey QE, I'm done, here's the template. QE can take it, run it, maybe change the resource amounts they give to things, or change the URLs, suck all that in, and it's running exactly as it was running for the dev, which is the key part. No more of: oh well, I set up WildFly on my desktop like this, and I used my database like this, and I did this, and I used these passwords. It's the exact same environment, so there's not as much of that "I chucked it over the wall and you guys have to figure out how to actually get it all wired together," which is
usually what happens: like, oh, you didn't set up your Oracle, you're not connecting to Oracle the same way we're connecting to Oracle, so we don't know how to connect, and you have to go and fix it, a little back and forth. It's all parameterized in there, and all they have to do is change the username and password, because they're talking to the QE database rather than the dev database. Okay, so we'll let everyone finish the labs over the next 24 minutes, and we'll just answer questions. Are there any other questions, though? Because you can do the labs if you want, but this is the only time you're going to have the engineers and the advocates all in the room together, if you have any other questions about this. So in this case, the app doesn't know that it's running on OpenShift: my actual Python code is not talking to OpenShift at all. There is no need, it's just labels. So even your Python code doesn't need to know about that, which is why I'm sad that we did all this other work. But, in general, that is why we have builders. Did we talk about the S2I builders? In general, what most people do... for those of you who are using Docker now, how do you actually build your Docker images with source code in them? You do a docker build, right? Yep. Or you're not using Docker. But that's a completely different build process than you were probably using before, right? The point of S2I is that you just feed it source code, the way you're usually used to feeding in source code, and you get a Docker image that then gets deployed and just runs. So you could basically take most repos, if they're not talking to a database, right?
because then you have to put in environment variables and stuff, but if you're just running normal code, you just feed that code in and it just runs and goes. There's nothing special; that's why I say you don't actually even have to know Docker to do this. Yeah, but I would probably have to have the source code somewhere? No. Let's just find one, I'll show you. So if I go to my GitHub and find a repo here... simple PHP... this is just PHP code, right? And it's the same for Java or Node.js: there's nothing OpenShift in this, it's just PHP code. I know, I know. So one way is to upload everything to somewhere, and then OpenShift will pull it on its own and build it and everything, the source code. Or the other way would be... how about if I want to deploy a binary build that I've already got? Go. So when people talk about binary build, I assume you're talking about Java, with an EAR file or a JAR file. Yeah, so you would just deploy the binary; there's a special build type. You would put it in a deployment directory and then just specify the command to deploy a binary, and it uploads the binary. Yeah, you can say, I want this WildFly, this Tomcat, and it's up and running. Is it oc build or oc new-app or oc deploy... it's one of those commands, and then you say --binary, and you don't have to have it in a GitHub repo. What will happen is that start-build will package up the local directory and stream it to the builder, so you don't even have to call out there; you can just come with a WAR file and just... Well, I'm talking about the situation where I don't even have Maven, I don't have anything; I would use the simplest thing. And you don't even have to have Java source code: when you stream it to the builder, it will build. So we're talking about binary deploy, like you just deploy the WAR file. Yeah, I just want to... You put it in a directory called deployable, and then you just say oc... I don't know what the command is though, is it start-build? You just define a
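The plain S2I flow described above, feeding a repo of ordinary source code to a builder image, might look like this. The repo URL and app name are illustrative:

```shell
# Feed plain PHP source to the "php" S2I builder image;
# no Dockerfile and nothing OpenShift-specific in the repo.
oc new-app php~https://github.com/example/simple-php-app.git --name=simplephp

# Expose it so there's a URL to hit
oc expose service simplephp
```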
build that's a binary type, and it will upload it from your machine. It won't do the whole Maven thing on it; just put it into the deployable directory and away you go. The only caveat on binary builds is that, because you're pushing it in, you can't have that automatic redeploy when you make a change, as you would if it were on GitHub. I don't care about that. It means that you would set up your Jenkins environment so that, when it gets to the point of creating it, it would issue the command to initiate the start-build. The actual command is oc start-build --from-dir= and then the directory where your WAR file is, and send it. Is there no way of having it in your web console? Well, I don't know, the binary build is not in the web console. Sorry, the web console doesn't have your local directory to push across. Wait, you can still do stuff from your local directory... sorry. Maybe we'll have a look at the build pipeline; I think you can realize all that stuff with the build pipeline. All right, so here's the other thing that you can do, that a lot of our customers actually do, because they don't want developers pushing anything to OpenShift. What they want is developers putting code into Jenkins; Jenkins does a whole bunch of checks and all that stuff, then it makes a Docker image, and then it deploys to OpenShift. So we can actually support that entire build pipeline in OpenShift, or you can put your Jenkins outside, and it will watch Jenkins and allow it to deploy into the integrated registry, and then the deployment cycle takes over and just deploys the whole thing. So that's another way you can get around ever touching Open... developers don't even know that OpenShift is involved in the process. They're just putting stuff into Jenkins, it builds the Docker image and deploys. So: I've got an IDE, I've built a web application, I can deploy locally from my IDE, and I can deploy to OpenShift the same way. So which IDE do you use?
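The binary-build flow being discussed can be sketched roughly like this; the build name, builder image, and directory are assumptions:

```shell
# Define a binary-type build against a builder image (e.g. WildFly).
oc new-build --binary --name=myapp wildfly

# ./deployments contains the prebuilt .war; stream it to the builder.
oc start-build myapp --from-dir=./deployments --follow
```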
Like Eclipse? I know, everybody does. I do too. So we had integration for v2 with IntelliJ; we have to keep bugging them to do the integration for v3. In Eclipse, if you want to play with that, there's actually a plugin. What you can do in Eclipse is just pick a deployment target and say, here's my code, deploy it to OpenShift, like make a project; it's all happening in your IDE, if you want to do that. Does that make sense? There's a bunch of other stuff we can show you. Do you want to just see other stuff now, or do you actually want to finish the exercise? I'll show you something really cool if you're doing an interpreted language, so not Java, but something like PHP or Node.js or some other interpreted language. So I just deployed a PHP app; let's wait for it to come up. Here's a quiz: why is that URL there, without there being a pod there? Did you notice how he has a URL? Just two seconds ago this pod wasn't here. Why? How could that be? No... Graham, how does Kubernetes work? You're going down the wrong route, no pun intended. Stop there. Why would there... what's happening when he does all those commands? What's he updating?
He's changing the truth. So the truth is: there's a route named this. That route gets created instantly; it doesn't require a pod. It actually probably requires a service, so first it waited until the service was created, then it created the route, and then it doesn't care. That's why, when he does the command line... in old v2 we would have been sitting there waiting for everything to spin up. All it's doing when it runs that command is updating the truth, and then waiting for the cluster to take over. My PHP app is deployed, and I actually want to use OpenShift for real development, and I don't want to git push, because that's not my workflow. Developers don't make a change and then push it; they work on a feature over a day and then push it to the Git repository. So how do you use this for daily development as an interpreted-language developer? So here's the app, and it's awesome, and I want to start changing it. The first thing I want to do is clone it to my local machine, so I'm just going to make a directory called grant, cd into that directory, and then clone this repo. Now I have the repo on my local machine, and I'm going to open it up in my code editor, and here is my code. Oh, that's awesome. Yeah, it's great code. So if I do oc get pods, we'll see a lot of pods, because I'm using the same project here. Then I can oc rsync my current directory, because I'm in my project directory, with my grantphp pod, this one, because it's running, and I want to send that to the /opt/app-root/src directory; I just happen to know that's the home directory where all the source code lives. And then I add -w to watch it. So this is going to start a... oh wait, what did I type wrong... grantphp-1... too many dashes. Oh yeah, okay. rsync doesn't necessarily have to be in the pod for this, either: if rsync isn't available in the pod, it'll just use tar to stream it. It's not a script. Okay, it's not a script, thanks Graham. Okay, so now if I update my code in my IDE, and maybe
we'll just remove a few lines and echo "this lab is over," and I save that, just hit ctrl-s or file, save. If I go back to my web app here and refresh... assuming I didn't do anything wrong... no. Go to your console to make sure that it rsynced first. Oh yeah, it errored out, because... this is why, at the very beginning, remember I said I'm using version 1.3 of the client and we're using 1.4? With the correct version it would have just synced it over. Do you remember what Ben set the sync interval to? It's like 2 seconds, 3 seconds. Every 3 seconds it basically does an rsync; it just sits there doing it. And you could do the same thing with your WAR file, right? You just rsync where your deployables are to the deployables directory on the server; every time you compile a new WAR file, it syncs it up there and it redeploys, and you can just keep going. Though I wouldn't use that for WAR files; this method is really suitable for scripted languages, where you can put the source code up and it's just interpreted live, rather than having to recompile it. So the other option you could have done is, in your command line, go oc start-build grantphp --from-dir=. and that would have actually uploaded the source code and rebuilt the image from your local directory and redeployed the image. Yeah, but you're not going to do that... well, that way means that if you've got things like Node with extra modules that need to be installed, or Python with extra modules, where you've changed the requirements, then it will build them all in. So this is a great example of how every developer does it differently, and we can handle most use cases. This is the way I do it, not the way he does it, not the way you do it. So, only two minutes: what Graham said is actually very true, but if you're building a WAR file, the dependencies are already in it, whereas for a PHP or Python developer, you actually have to do a build if you change
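The live-sync loop described above comes down to one command. The pod name below is hypothetical; grab the real one from `oc get pods`:

```shell
# Continuously mirror the local source tree into the running pod.
# /opt/app-root/src is where the stock S2I images keep application source;
# if rsync isn't installed in the pod, the client falls back to tar.
oc rsync . grantphp-1-8mpod:/opt/app-root/src --watch
```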
your requirements.txt, because you need to pull down new dependencies. So the idea for the workflow with this is: you would do this, do this, do this, do this... oh, I'm ready to commit this to GitHub, and that actually triggers a full build, because the builds take a little while. That's not how you normally develop, so this is for the interim stuff, when you don't need a full build. Yeah, and actually, Tomcat and WildFly: you can set them to auto-deploy, so if you drop the WAR file in, it says, oh, I will redeploy that for you. Exactly. Graham's up. So, there are USB sticks if you guys need them. What we did is load it and then add it to the console, and that's what we're going to do. Yeah, that's what we're going to do.