Hi. This isn't my only talk: there's another one tomorrow that I'll be leading, so please join us tomorrow at half past ten in training room 2, where we're talking about secrets and Kubernetes operators as well.

For this workshop you don't have to type the commands in; you can just click on them. So even if you've only got an Android tablet, or an iOS tablet, you can still do this. Sorry, it's not working... there we go. I can just click on a command and it will run, so it's very simple to go through this. Just click, and you don't have to worry about trying to copy and paste anything or enter it by hand.

This is running on OpenShift. We've also embedded access to a web console in there, so that you can play around and look at what's happening as well. The web console is not used in the workshop itself; the whole workshop is done using the command line only. But you can dig through it if you want. And just a little trick: whoops; you can drag this divider left and right.
I've marked up my screen so you can see it better. Have a go at it for a little while, and in a minute I'll start going through it myself, once I sort out my laptop and get this other headset off. If you came in late and you want to do this yourself, that's the URL you can go to. What I'm going to do is give a short slide deck, and then I'll step through it myself.

So, Kubernetes. What we're demonstrating today is some fundamentals of using Kubernetes, from the command line, to deploy an app. You're going to deploy a front-end web application, which is a blog site using Django; you're going to have a back-end database; you're going to link those up together, make it publicly visible so that people can use it, and initialize the database. That's what the exercise is about.

I presume you're here because you've heard of Kubernetes. Now, when you get into Kubernetes, your head can very quickly explode. It's a very complicated system when you look at the whole ecosystem, and there's a big, steep learning curve. One important thing to say about that: don't expect to understand all of it. It's like a Linux operating system; you don't go to Linux expecting to learn all of it, and Kubernetes is the same. Take the attitude that you're going to learn a piece at a time as you go along, because you'll never learn everything about it. But what is Kubernetes, what's it all about, and what is it trying to address?
So obviously containers have been the big buzzword of the last few years, which Docker made very popular, because they made the idea of packaging applications into these things called containers, building them into an image, and running them very, very popular. When you look at what that technology is about, we've come from a history of virtualization, which is the idea that you can run multiple instances of machines within one physical machine, each in their own little compartment. Containers are essentially the same basic idea, but intended to have a lot less overhead: with virtualization, each of our little sandboxes is running a full operating system, whereas with containers we are only running the processes of the particular application you want to run. So you don't have all that overhead of an operating system in each of your little isolated containers; you are making use of the fact that underneath your container runtime there is just one operating system, shared across all of your containers. Containers are just a fancy way of putting a fence around the processes of your one application, and that makes them a lot more lightweight than traditional virtualization.

So the general flow of making applications run in containers is that you're going to take a set of instructions describing what steps need to be run to take the bits and pieces you need for your application and build them into a package, what we call an image. It's like a glorified tarball. So you might say: I need to have these particular operating system packages available, I need this particular language runtime, and then you're going to need all the particular packages for that language runtime. In our case we're using a Python application in the front end.
So I have to install all the Django bits for that as well, because I'm using the Django web framework. Using that set of instructions you're going to build up your magic tarball, your image, and we're going to put that up into an image registry: a place, essentially, where we can store it and then get access to it to deploy it somewhere. So the next step is, having built our image, we're going to pull that image down from the registry to a container runtime environment, and we're going to run it, and that will run up your processes inside that container. Essentially that's what it's all about: a very lightweight way of doing containerization compared to virtualization, where you're just running up your own application processes.

Now, running a single container is quite easy. Docker made that very, very easy: on your laptop you can just go docker build, give it the set of instructions in the Dockerfile, and it produces your image; you then go docker run; great, the app's running. The problem with that is when you want to start scaling out your application, so you have multiple instances of it, even on your own single machine or across machines, so that you can distribute many instances across a big cluster of machines. That's when things get hard, and problems start to arise if you try to do homegrown solutions yourself with Docker alone. You're going to have lots and lots of problems to solve along the way, and it's going to be a lot of work.

And this is where Kubernetes comes in. Kubernetes is what's called a container-as-a-service platform. Essentially, it provides you the smarts to be able to manage a set of machines, each of which has a container runtime on it where you can run up applications, and Kubernetes is going to manage all of those different machines for you. So when I want to deploy an app now, I can tell Kubernetes: here is the image for my application, which I have packaged up.
I need this many instances of it running; you tell Kubernetes, and it will worry about where to run it, on which nodes in our cluster. It'll get it up and running for you, and it will manage them and look after them: if instances of my application die, it will replace them; it will migrate instances between nodes if it needs to, because a node has gone down, or too much of the memory resources on it are being used and it needs to rebalance. It manages all those things for you. Whereas before, doing it with straight Docker would be a quite complicated process, because you'd have to write a lot of stuff to manage it, Kubernetes is going to manage it all for you. So that's what Kubernetes is about, and how it fits into that story of containers.

That's all I'm going to do with the slides; that just sets a little bit of context for those who are going to follow along rather than actually doing the exercises. Just very quickly, for those who may have come in late: if you really want to do this hands-on workshop yourself, you can go to that address, you'll get a blue button come up saying Start Workshop, just click on it and you'll get this particular workshop environment, which I've got showing here, and you'll be able to go through and do this yourself. Everyone got that who wants to do it? As I said, we're going to leave this up for a day before the cluster automatically gets destroyed, so you can come back and do it later if need be. How's that showing, is that not too small?

Okay, I already said a bit about Kubernetes, so I'm going to skip some of these initial slides. The Kubernetes cluster we're using today is actually OpenShift.
So OpenShift is a distribution of Kubernetes which provides that container-as-a-service functionality that Kubernetes provides, but it builds a whole lot of stuff on top as well. Kubernetes is viewed as being more of an operations platform; OpenShift adds extra functionality on top of that. Part of that is what people would call platform-as-a-service, which is an easier way of deploying apps. It has support in there for running CI/CD pipelines using Jenkins, and it has support for taking your source code and building it into an image for you, so you don't have to worry about actually generating all the instructions; that's a thing called source-to-image. So it has all these extra features, but it is still a Kubernetes distribution. We're using it today, and in the exercises all you're going to do is use the kubectl command line; you're not even going to use the OpenShift-specific command line. So all the things you do here today, because OpenShift is just a Kubernetes distribution, you'll be able to apply somewhere else. And when you go into this environment, you've automatically got access; it's all been set up for you, so you don't have to worry about that.

What I'm going to do first is deploy my whole application, my front-end application and my database, and we'll see what it looks like. Then we're going to destroy the front end, and I'm going to step through deploying that front end again, step by step, and explain the concepts of the different components I'm going to deploy which make up my whole application, so that you can see how all those things fit together. So I have two directories here, one of which contains my database configuration. When we talk about deploying apps to Kubernetes, we're going to provide it with a whole bunch of configuration for different things, which can be defined using YAML or JSON.
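As a small illustration of that point, a resource definition is just structured data; the same header fields appear whether you write it as YAML or the equivalent JSON. This fragment is a generic sketch, not one of the workshop's actual files:

```yaml
# Every resource definition starts with the same header fields,
# whether written as YAML (as here) or the equivalent JSON.
apiVersion: apps/v1    # API group and version of the resource type
kind: Deployment       # what kind of resource this is
metadata:
  name: blog           # the name used with kubectl get/describe/delete
  labels:
    app: blog          # labels are free-form key/value pairs
```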
I'm using YAML today. In this directory I have four different config files for my database back end, which are the bits and pieces of the different components to deploy. I've already pre-created them, so if you do have them already existing, it can actually be very simple to get an app deploying in Kubernetes; it's creating them which is the hard bit in the first place.

Now, I want to know what's going to happen when I deploy this. There's a command in kubectl, kubectl apply: essentially you can give it that configuration, and it will take it and actually deploy my app based on what that configuration is. I want to see what that's going to do first, so I can supply this dry-run option, which says: pretend you're going to do this, and tell me what you would do. So all that directory had was these four files, each of them describing a different resource that's required: a deployment, a persistent volume claim, a secret, and a service. We'll get into what's inside some of these. I did a dry run; this time I'll do it for real, and that has gone and deployed my application for the database. If I flick over to the web console we can see that it's currently being deployed; that'll just take a moment. Back over here on the command line we can also monitor the status of the rollout: we can use kubectl rollout status, and it will monitor that deployment as it happens. Once it's finished setting up, we're done and we know we can move on. So deploying apps when you've already got the config is really simple; it's the creating of the config that's the hard part.
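The sequence just described might look like this on the command line. The directory name `database/` and deployment name `blog-database` are assumptions based on the talk; also note that newer kubectl versions spell the option `--dry-run=client` rather than a bare `--dry-run`:

```shell
# Preview what the manifests would create, without touching the cluster
kubectl apply -f database/ --dry-run=client

# Apply for real, then wait for the rollout to complete
kubectl apply -f database/
kubectl rollout status deployment/blog-database
```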
Now we've got our front-end app as well; same thing. I have a directory which has some different resource files in it, and I can just deploy that one as well, and monitor the rollout of that. While that's happening, let's dig around in this console a bit more. You can see I've now got two deployments in my project, or namespace. When you work in Kubernetes, it's not just one big sandpit; you can compartmentalize applications under what are called namespaces. That way, if you've got a billing system you might deploy that in one namespace, and if you've got a web forum, in another namespace, and you can work with them independently. In this particular environment, what actually happened when you clicked Start Workshop is that it created each of you your own little namespace to work in, and created a little account which you can then use to access it. So you're all working in the one cluster; you don't each have your own cluster. You're all working in the one, and you can't see anyone else's applications that they're deploying, because that namespace, or project, provides you a bit of isolation from everyone else.

So this one should be up: the database is up and running now, and the web front end as well. In this particular case, because the resource definitions had everything we needed, it has deployed the database, deployed the front end, and the front end and database have already been linked together; the front end has the credential information for the database.
It knows the name of it; it knows how to contact it. We've given the front-end web application a public URL, so we can just click on that, and we have our front end working. There's nothing currently showing there in the way of blog posts, because we haven't loaded the database with anything.

Now, these resource files, if I go back over here: these files went and created the different resources you saw there. And this is where part of the learning curve starts with Kubernetes, and things can get very complicated and messy, but everything in Kubernetes is driven by these resource definitions. Essentially they say what my application needs to look like, and Kubernetes essentially makes the deployment agree with what your resources say, all the time. So there are some key commands in kubectl, kubectl get and kubectl describe, which you'll be using a lot when working with Kubernetes to find out the state of your deployments and everything else. We've done two deployments here: a front end and a back end.
So if we say kubectl get deployments, it can show us our two deployments. We can also zoom in further and look at a particular one by name; here I'm saying I want to look at the deployment for just the blog. If you want more information, that's when we can start to use kubectl describe, and it gives you a lot more information about the state of your deployment, just for that one resource. If you want to go even further: we started off with a YAML file for our definition, and you can actually get back out the YAML which was loaded in. There's actually a lot more in there than what we started with, and that's because when you create the deployment, you create a minimal deployment definition, and Kubernetes will start filling it out with other information: defaults for that particular resource type, memory and CPU based on quotas or what are called limit ranges that might be applied to your project. So it looks complicated, but what we actually used to create this in the first place is a lot simpler than that. And as well as the YAML, you can get JSON out. I personally hate YAML, I always prefer using JSON, but everyone seems to like using YAML for blog posts and documentation, so I use YAML for this.

So we have a project here which has multiple applications in it. One of the big problems with kubectl is: how do you know what belongs to what? These resources that we're creating do have names associated with them; the blog front end was called blog, the database was called blog-database. But it's complicated and messy to deal with multiple of these raw resources at the same time if you have to use the names all the time.
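The inspection commands being described here, roughly, using the resource names from the talk:

```shell
kubectl get deployments              # list all deployments in the namespace
kubectl get deployment/blog          # zoom in on one, by name
kubectl describe deployment/blog     # much more detail on its current state
kubectl get deployment/blog -o yaml  # the full definition, defaults filled in
kubectl get deployment/blog -o json  # the same thing as JSON
```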
So a very key concept that exists in Kubernetes is the ability to label things. This is not just for you to be able to make queries against your resources to get information out; those labels are also used by Kubernetes internally to draw relationships between different things. The deployment just says: this is the image I need to deploy and get running in a container. But I need to access that, so we also need what's called a service, which we'll get to in a moment, and you've got a relationship there between them, and you're going to rely on these labels for it. The labels are something that you need to put in yourself when you create those resource definitions, so you need to make sure you set them up in the right way, so you can do these sorts of queries to make things easy. So in this case I can make a query using kubectl get to get the deployment for my front-end application by using a label.

One very, very useful application of labels is deleting things. I want to actually delete my front end now, so we can go back and do this again. My front end wasn't just that deployment object; it was also a service object, an ingress object, and a persistent volume claim, and I need to delete those all at once. I could delete them by name, but it's much, much easier to use these labels to delete them, so I can just run kubectl delete deployment, service, ingress, secret, pvc, give it a label, and I've gone and deleted that whole application. It's all gone. So we can now start over and look at these things piece by piece. Have I lost everyone already? How many are actually doing this in the workshop? Having no problems?

So what we're going to do now is deploy this front end in pieces. We've left the database alone; we're going to leave that one there and just redeploy the front end. What we've got, starting out, is this deployment resource.
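A sketch of the label-based query and the bulk delete just described; the label key/value `app=blog` is an assumption, the real workshop files may use a different label:

```shell
# Select resources by label instead of by name
kubectl get deployments -l app=blog

# Delete every kind of resource making up the front end in one command
kubectl delete deployment,service,ingress,secret,pvc -l app=blog
```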
That deployment resource is a big hunk of YAML. It has a name, blog; it has these labels, blog again; and inside of it, it has what's called a template, which essentially defines what that deployment is to look like. So we have the name of an image in there which we're going to use; in this case it's openshiftkatacoda/blog-django-py, which is the image I would have built previously from my Dockerfile. It has all the bits of my application, my runtime and everything in there, and it's sitting up on a registry, in this case Docker Hub, so that image is on the internet and I want to deploy it. So my deployment says: that's the image I want to deploy. I'm saying that it uses port 8080 for being able to access my web traffic; I need to set some environment variables for the database's credentials and other bits and pieces; and I have a volume definition, because I need persistent storage. So it has all those bits and pieces in there.

Now in this case, I created that; I knew how to create it. If you're coming along and have never used Kubernetes before, what most people do is go and find some article or documentation and go: I'll just cut and paste this, change the image name, use it, and hope it works. Honestly, that is how most people would be doing this. Kubernetes does provide you a little bit of assistance, but not much, in creating these deployment objects. There are two commands for this. The first one is kubectl create deployment: essentially you give it the name of the image you want to run, and it gives you a starting point. Unfortunately, that's not really complete. There's no port in there; how do I add that?
There are no environment variables either. There is another command which can get you slightly further, which is kubectl run blog, and if I run that one you can see I get a bit more: I can define a port, I can set up an environment variable, and I can say how many instances of my application I need, the number of replicas. So you can use those to create a skeleton to start out with. To be honest, though, you can't avoid, in Kubernetes, having to drop down and start playing with YAML or JSON files to deploy your applications; that's again where the learning curve gets very steep very quickly. But we'll start out with this second command as a starting point and try to add bits and pieces to it. So I've already taken the output from that kubectl run, dumped it into a file in my directory here, and I'm again going to use kubectl apply.

You can just use kubectl apply and give it a single file, and it's actually possible to have one file which has a whole list of resources in it, so you can have everything in one file. I've used a different strategy here, which is to have a directory where each of my resources is a separate file. There are different ways you can approach this; there's a link in the notes off to a discussion in the Kubernetes documentation about the different ways you can do things.

Now, with that kubectl run command I used a dry-run option, just to see what it would create. If I'd left the dry-run option off, it would actually have gone and created that resource in Kubernetes and got my image deployed. That's what's called an imperative command: you're performing an action, essentially telling it, I want to do this, and it goes and does it. The problem with that is that when you do it, the only record of your configuration exists in Kubernetes, so if I want to make changes, I have to go and edit it in Kubernetes. Now, you can do that with the kubectl edit command, which allows you to edit the resource in place. But how then do I reproduce that on another cluster? You've sort of got to extract it and move it out, and that can be a bit of a problem, moving configuration from one cluster to another. So it's much better to always capture your configuration in files in the file system and get it under version control, so that you can track it.

That's what I've done here: I've got a directory with my files in it, one file per resource instead of one file with lots in it. So I can just go kubectl apply, give it the directory, and it will go and deploy that. So I'll have that deployment popping up over here again, but it's just a deployment; I'm missing my other bits, so let's move on to those. That's running, so I've got my application deployed, running in a container; it's just not terribly useful yet because I can't access it.

But let's have a look at what's happened here first. Let's use our kubectl get command and list what's there. Now, I created a deployment resource, but you'll see that a lot of other things have actually been created as well: there's also what's called a replica set, and some pods. What that deployment object actually does is act as a template: when I create the deployment, it acts as a template for the creation of the replica set, and the replica set is in turn acting as a template for the pods. Remember I created that deployment initially saying I wanted two replicas. So of the two things you have here, the pods represent the instances of your application, so I have my two instances there; and the replica set is like a little bookkeeping management thing, in some respects. It's there to drive some of the mechanisms inside Kubernetes which manage pods. Kubernetes will monitor the replica sets, and from them it can see how you actually want the application deployed.
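Pulling together the pieces described so far, a minimal deployment of this shape might look like the following sketch. The image name is the one mentioned in the talk; the labels, port, and overall structure are illustrative assumptions, and the real workshop file has more in it (environment variables for the database credentials, a volume for persistent storage):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 2                 # the number changed later when scaling
  selector:
    matchLabels:
      app: blog               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: blog
        image: openshiftkatacoda/blog-django-py   # image on Docker Hub
        ports:
        - containerPort: 8080 # the port the web app listens on
```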
It will then manage the number of pods to match: it'll ensure you've got two replicas running. There are some magic little things going on internally there; I won't go into it too much, it's just to give you the idea that these things are happening. So yes, if you did look at the replica set, you'd see there's a lot of overlap with what was in the deployment, and as I said, that's because the deployment is acting as a template for the replica set. Similarly, I can look at each pod, and same thing: you'll see a lot of overlap, because the information has flowed through. So a pod represents an instance of your application, and the replica set is essentially a synthesized configuration which Kubernetes uses to know it has to monitor your application and keep that many instances running at all times.

Now, in that deployment we set the number of replicas, and that's why we ended up with two. One of the good things about Kubernetes is that scaling apps is really, really simple. It is purely that one number which controls the number of instances I've got; I don't have to go and create new resource definitions for each instance. Because of that flow from deployment to replica set to pods, all I need to do to scale up an application is change that number of replicas. So we can do that: I can run this kubectl scale deployment command with a replicas option, and it's going to start up a new instance in a new container. So if I keep running this command... I'm now up and running with three.

Now, I haven't had to make any decisions there about where that instance runs. We're running a cluster here with 20 nodes, because frankly we had no idea how many people were going to turn up. I asked the organizers: how big is the room you've given us?
200, 300 people? We had no idea how big the conference was going to be, so we made the cluster much bigger than we needed. But I didn't have to make a decision about where that instance ran. Kubernetes knows I want three replicas, so it will look at what requirements my application may have, if I've defined them in terms of how much memory and CPU it needs, look at what resources are available in the cluster, and go: okay, I've got a node over here in my cluster which has lots of resources available, I'll go and start it over there. You don't have to worry about it.

Now, when I ran this kubectl scale command, I was modifying the deployment resource definition inside Kubernetes. Remember what I mentioned before about the difference between making edits in Kubernetes and keeping them in files: my local configuration is now out of sync with what is in the cluster. And this is where keeping your files is good, because if someone makes a change like that and you need to bring it back to what it was before, you can go back and run the exact same command you ran to deploy it in the first place: kubectl apply again, here's my directory of files. Kubernetes will see that you actually want to go back to that, and it will bring the state of my application in the cluster back to agree with what my configuration says. So it's already scaled back down to two: my resource file only had two in it, I scaled up to three manually, I reapplied the config, and it's put it back to two. So this is why it's important.
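That sequence, as commands (the directory name `frontend/` is an assumption for this sketch):

```shell
# Imperative: bump the deployment to three replicas
kubectl scale deployment/blog --replicas=3
kubectl get pods                 # a third pod appears

# Declarative: re-apply the checked-in config, which still says replicas: 2
kubectl apply -f frontend/
kubectl get deployment/blog      # back down to two replicas
```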
It gives you the ability to reproduce your deployments, so always use the config files in your version control rather than trying to make manual changes. Manual changes in Kubernetes are fine in development, but in production, for the ability to redeploy things, always use the files.

Okay. Now, I mentioned a bit about Kubernetes making decisions about where to run things. One of the things I mentioned earlier was that if an instance of my application dies, Kubernetes knows how many I want, so it will take care of ensuring I always have that number. So I'm watching here the pods I have running for my instances. If I actually go and delete one, you'll see that my pod is terminating, and Kubernetes realizes: oh, one of the instances of your application has died, you're down to one now, so it will go and create a new one for me. So it always ensures that it has those two running, even if your application starts crashing. And again, it will make a decision about where to put the replacement, and may actually migrate it to a different node if the resource balance has changed.

A word now on pods versus containers. Essentially, the pod is the instance; a pod is actually a wrapper around a container. So we started off with containers, and a pod is essentially an abstraction inside Kubernetes to wrap one or more containers. When we were doing that scaling of the number of replicas, the scaling unit is the pod. So I could have a grouping of multiple containers, but when I scale, they're all going to get scaled together. Now, I had a database and a front end, and I have them as separate deployments.
I had a database and a front-end I have them as separate deployments I'm not running them as different containers of in the one pod or group because I need to be able to scale them independently Usually you're going to end up with one container per pod There are use cases for having more than one container grouped in a pod together And one example of that is things called sidecar containers if you needed to add in a special process for handling instrumentation for collection of metrics and so on you might use that but usually it's one container pot So the relationship now is container. It's wrapped by a pod Pod is managed by the replica set the definition of replica set was controlled by So that's the relationship So the node when I talk about in the cluster it Can be a physical machine or it can be a sensation instance of an operating system Right, so you can be deployed to physical hardware or could be deployed in a virtual machine in a virtualized environment You can do it So I have all these pods now you need to know what these pods are doing so it is possible to Get the logs out so you just need to know the name of the pod And I've just done a funny horrible script here to get the name of the pod out So I've got a name of one pod and I can look at the logs for that and we see that it's a Django application using modWiskey Everyone thinks modWiskey is great if you're a Python developer. I Wrote that one pity So you can get logs out By default it's like Docker. You have to go to every pod to get the separate Logs out but it is possible to deploy to a Kubernetes cluster. What's called aggregated logging? Which can bring the logs for all these different pods together into a common application and One solution for that often is I'm going to get this wrong elastic search fluid D. 
Cabana Combination of those tools, but there are other options as well And usually that or sometimes that can be integrated into the the web console And if we've open-shift that is the case so you can very easily get to that all aggregated logging mechanism Now we get to logging now each of these pods is actually it's like a little mini host And it is actually possible to get into them if you need to interact with the processors in that app So I can run a kube-cuddle exec command and run command in there to show my environment That one obviously exits straight away But I could actually also get in there and run an instance of bash and Get in there and what run whatever commands I want You know look at the files. I can interact with file system. I can look at the processors. I Could be nasty and kill processors But obviously if you if you kill that special one at the top number one your container will exit But then Kubernetes will restart it for you. It's not too bad Now so we have an instance of application. We can get into them. We can see logging, but we still can't access them Now each one of those pods I said it acts like a little host now each pod does get its own IP address That IP address is only accessible inside of the Kubernetes cluster Not outside But dealing with IP addresses when communicating between components of an application is a pain in the neck You don't know what these IP addresses are in advance and if a pod gets killed and replaced It gets another different IP address things will change all the time So Kubernetes has this idea of a service resource which you can create now If I run this command Am I still inside? 
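The log and exec commands used a moment ago might be sketched like this; the jsonpath lookup stands in for the "funny horrible script", and the `app=blog` label is an assumption:

```shell
# Pod names are generated, so look one up by label first
POD=$(kubectl get pods -l app=blog -o jsonpath='{.items[0].metadata.name}')

kubectl logs $POD                # that pod's stdout/stderr
kubectl exec $POD -- env         # run one command inside it, then exit
kubectl exec -it $POD -- bash    # interactive shell inside the pod
```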
Okay, I thought I pressed exit. You'll see each has a different IP address. Now, when you have multiple instances, I want to be able to talk to these via one IP address rather than multiple, so we have this idea of a service abstraction. You can see it up there: essentially I'm creating a service called blog. It has a label again, and then it has this definition here. With that, I'm going to create a service; the service gets its own IP address and has port 8080, and what's going to happen is that it's going to take any connections to port 8080 on its IP address and route them automatically to one of the instances of my pod for me. Which pods are used is determined by this selector, and that's where, as I mentioned before, labels are very important. So I can apply this and it creates me a service. I have a service and the service has its own IP address. I mentioned the label: that selector in there, app: blog, is essentially the same as looking up the pods by that label selector, and those are the things that will get mapped behind that service. I can actually look specifically at which ones are mapped by looking at what's called the endpoints, and it will show me the two IP addresses for the pods that are now mapped behind that IP address for the service. Again, it's still an IP address, and they're a pain to deal with. So the name of that service is added into an internal DNS server inside of the Kubernetes cluster, which means I can access it using a host name of blog; I don't need to use the IP address. So if I need to access it internally, like if I'm in the same namespace, I can just use the blog host name, or if I need to access it from another namespace, I have that ability too: I can use the blog name qualified with the namespace and the service domain, plus the port. So I can hit that and access it. But I'm only able to access this because my terminal is running in the cluster; I'm not outside yet. So this is still not public, and we can now expose that.
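For anyone following along, a minimal sketch of a service definition like the one being described might look as follows; the exact metadata and the label key `app` are assumptions based on the talk, but the selector and port mapping are the mechanism being demonstrated:

```yaml
# Sketch of the "blog" service described in the talk (names assumed).
apiVersion: v1
kind: Service
metadata:
  name: blog
  labels:
    app: blog
spec:
  selector:
    app: blog        # pods carrying this label become the endpoints
  ports:
  - port: 8080       # port on the service's own cluster IP
    targetPort: 8080 # port the pod's containers listen on
```

After `kubectl apply -f service.yaml`, `kubectl get endpoints blog` shows the pod IPs mapped behind the service, and other pods can reach it as `blog` from the same namespace or `blog.<namespace>.svc` from elsewhere.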
To do that, we use what's called an ingress. Essentially it's saying: here's the host name at which I want to make this thing available outside of my cluster, and this is the name of the service which I want that request to be mapped into; the service in turn has the mapping to the pods. So what will happen now is that a request will come in from outside, hit the router, which is configured with this ingress, and the router will then distribute that traffic to one of the instances of the pods. So I can apply that, and I now have an ingress created. We can see that host name was created. Now, internally to the cluster, my service and pods were listening on port 8080. Outside of the cluster, using non-standard ports is not real good, but the router is smart enough, when the ingress is set up, that externally people will be using port 80, the standard port, and the router maps that through automatically to port 8080 internally. So it does all that for me. One very important thing I don't show here: remember how we deleted our instances of our pod earlier and it created a new one with a different IP address? Kubernetes will worry all about that. When I delete a pod and it comes up with a new IP address, the old pod is automatically removed from the list of endpoints for the service, and from the pods that the router is sending traffic to, and the new IP address is automatically added back into the service, and again for the ingress. So it worries about reconfiguring the internal network, the service and the router, for you, and you don't have to worry about it. So I've got that up and running now and I can go visit it again. This time I happen to have some data, and that's only because I'm not using my real database yet; it uses an internal test database when I deploy it without a real database. So, to link the database, we need to set some environment variables. Those were our resources for our database; so, you know, hopefully by now we have a deployment and services.
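As a hedged sketch, an ingress along the lines described might look like this; the host name is a placeholder, and note the workshop may well have used the older `extensions/v1beta1` schema rather than the `networking.k8s.io/v1` one shown here:

```yaml
# Sketch of an ingress routing external traffic to the "blog" service.
# The host name is a placeholder; service name and port follow the talk.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
spec:
  rules:
  - host: blog.example.com        # placeholder external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog            # the service created earlier
            port:
              number: 8080        # internal port; clients use 80 externally
```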
We have a couple of others as well: a persistent volume and a secret. And I can see that on my deployment for my blog I currently have one environment variable. Now I want to set some others in there, starting with the name of the host of the database. We know from before that a service name gives us the internal host name, so the host name for my database is going to be blog-db. I also need to supply a user, a password and a database name, so I need to set those. Kubernetes has a command, kubectl set env, which does allow me to set environment variables, and that's okay for setting an individual environment variable like the host, and I can do that. Essentially, if you look at the definition of what it's doing, it's just filling in some extra fields in my YAML. But the credentials for the database I actually have stored in what's called a secret. This is a way of putting information in, and storing configuration, as part of Kubernetes. Now, the database is already using those, and I want to reuse them in my front-end; I don't want to have to enter them in separately, and that way I can keep them in one spot. So the kubectl set env command has a way of essentially setting up the definition in my YAML to say: I need to set these environment variables, but take their values from this secret.
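The commands being described would be something like `kubectl set env deployment/blog DATABASE_HOST=blog-db` for the literal value and `kubectl set env deployment/blog --from=secret/blog-credentials` to pull from the secret; the secret name and variable names here are placeholders, not the workshop's actual values. The resulting fields filled into the deployment's container spec look roughly like:

```yaml
# Fragment of the deployment's container spec after "kubectl set env";
# the secret name and variable names are assumptions.
env:
- name: DATABASE_HOST
  value: blog-db                  # DNS name of the database service
- name: DATABASE_USER
  valueFrom:
    secretKeyRef:
      name: blog-credentials      # placeholder secret name
      key: database-user
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: blog-credentials
      key: database-password
```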
So I'm not creating them separately. Secrets are a way of storing configuration. There's another thing in Kubernetes called a config map, which I won't go into; it is very similar. A secret is essentially the same as a config map, but it just provides some extra guarantees about how the information is saved in the cluster: on a node, at least, the information in a secret is never saved to disk, for example. So from those I can update my YAML definition, adding my extra host and also taking the other credentials from the secret, and again I can just apply that. I've now linked my front-end to my database. Now, when I'm making changes like this to the environment variables, Kubernetes is smart enough to know that's a configuration change, and it is redeploying my pods for me: it's deleting the old ones and bringing up new instances, and it's doing that all for me. So that's linked together. Now, if you're familiar with Django, you'll know that, great, it can auto-configure the database for me, but I need to set a credential, and that's where I can use the fact that I can exec into a pod to run a command. I've got a little magic shell script here which is just going to do some things for me: it's going to check that my migrations have happened, and then I can go and add my credentials in. So I've set that up, and that's actually also pre-loaded some content into my real database now as well. So I have everything going. Now, one final thing, and we've got five minutes left.
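For completeness, a secret like the one holding the database credentials could be sketched like this; the name, keys and values are all placeholders, but `type: Opaque` and `stringData` are the standard fields:

```yaml
# Sketch of a secret for the database credentials (all values placeholders).
apiVersion: v1
kind: Secret
metadata:
  name: blog-credentials
type: Opaque
stringData:                    # stringData accepts plain text; Kubernetes
  database-user: blogger       # stores it base64-encoded under "data"
  database-password: change-me
  database-name: blog
```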
Excellent. This particular web application is storing blog post information in the database, and that's giving persistence for that. But I also need to store the images uploaded with blog posts, and for that I need persistent storage. So what we can do here: Kubernetes has a concept of persistent volumes. You can have a whole bunch of storage available, and you can make what's called a persistent volume claim, which essentially says: I need storage of a particular size and a particular type, and I need it for my application. It supports a number of different types of storage, or access modes, depending on whether you need storage which can be mounted on multiple nodes in a cluster at the same time, which is for things like shared file system storage, or whether, as with a database for example, we only have one instance, so we know we can get away with a type of storage that only needs to be, or only can be, mounted on one node at a time, say Elastic Block Store in Amazon. So you can describe the characteristics of the storage you need in your YAML definition, and we can create that. Kubernetes will then essentially say: okay, I've allocated some storage. But I still need to actually mount that in my application, so I need to go back to my original deployment now and add a couple of extra bits of information. The first one is that this deployment is going to need that persistent volume claim that was just created, that persistent storage; and then I need to go into each of the definitions for the containers in that pod, in the deployment, and say I want to mount that persistent volume at this directory inside of that container. So, again updating my original YAML file on disk, I've added those there at the bottom. I can apply those, and I now have the deployment, ingress, persistent volume and service for blog, and we're back now to what we had when I deployed it all in one go. So if I were to go back to my blog now: I've set my password, I have my access to the database.
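The claim, plus the two deployment additions just described (a volume referencing the claim, and a mount inside the container), can be sketched like this; the names, size and mount path are assumptions:

```yaml
# Sketch of a persistent volume claim for the uploaded images.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-media
spec:
  accessModes:
  - ReadWriteOnce        # mountable on one node at a time (e.g. AWS EBS)
  resources:
    requests:
      storage: 1Gi
---
# Fragment of the deployment's pod template showing the two additions.
spec:
  volumes:
  - name: media
    persistentVolumeClaim:
      claimName: blog-media          # the claim created above
  containers:
  - name: blog
    volumeMounts:
    - name: media
      mountPath: /opt/app-root/media # assumed upload directory
```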
I can log into it, I can create posts, I can upload images. The images are stored on disk, the posts in the database. If any of my application instances die, they'll get restarted elsewhere, and they'll already be connected to the database. When your pod gets moved from one node to another, Kubernetes will worry about moving the storage, the persistent volume mount, with it. So it will handle that for you, and you don't have to worry about it. So that was it. If you're working through that, there are some links at the end for finding information about OpenShift, which is the cluster we were using here, running Kubernetes. For OpenShift there is OKD, which is the open source upstream project that goes into OpenShift: OpenShift obviously is a product from Red Hat, and OKD is the open source upstream project that feeds into that. If you want to run OpenShift on your own laptop, there is a project called Minishift which you can use, or you can actually deploy a full cluster on your own hardware or on Amazon. We have try.openshift.com at the moment, which is where you can go and try the very latest version of OpenShift, which we're going to be releasing down the track soon, which is OpenShift 4. And there are some links there to Kubernetes, and a final one at the bottom, which is a good site, kubernetesbyexample.com. You can go there and it again goes through all the different resource types, explains a bit about them, with different examples of how to use them, and so on. And that's it; that's the end of the workshop. So if you have any... if you are still awake (I took a nap while he was talking), questions? Question from the audience: "One container in one pod, that's what is preferred. So if you want to put multiple containers in a pod, then should I have a single deployment YAML with the different configuration? How does it work?"
So you can put multiple containers in a pod, but think about this as an example. Okay, we had a front-end and a database. If we ran the front-end in one container and the database in another container of the same pod, I can't scale that up, because you can't just take a database like Postgres and say, I want ten instances now; it's not that simple. So most of the time you prefer having one container per pod, because then you can scale up the number of replicas separately. Okay, but as I said, there are some use cases, like sidecar containers, and this workshop environment is an example where I'm actually using multiple containers: the terminal and the content we're running in come from an application running in one container, and that embedded console in there was running in a separate container of the same pod. But that's purely because it was convenient in the way that I'm running up stuff for this environment. Okay, so does that answer the question? Yeah, sure, thanks. One more question. I thought you were throwing the real microphone for a moment. Hey, does the shaking when you throw it turn it on? No. "So, my question is, Anthony, with regards to the workshop: is the code for the workshop available, or will it be gone after the day, as you mentioned?" Okay, so this particular cluster we're doing the workshop in, we'll leave running until tomorrow; it's scheduled to be deleted about 10 or 11 o'clock in the morning. So we'll leave it running so you can get access to it, and if you want to try again, you can do it now. Are you asking about the ability to do it again, or the actual workshop content itself? The workshop content itself: yes, it is up on a GitHub repo.
Okay. But right now, because we've been developing this workshop environment over the last few months to get it all nice and polished, the documentation isn't really there for you to go and deploy it yourself. And it does require you to have OpenShift to deploy it into. Technically I could probably get it going in plain Kubernetes, but I haven't done that yet, so that's just a note from me. Yeah, we'll probably talk after, but we will probably try to post the instructions; if you follow him on Twitter, we will try to post them, with his Twitter handle, so anyone can follow it. Okay, I think we are running out of time. Okay, so the next talk is going to be taking place here; I think Michael is setting up things. So, thank you very much Graham and Joel; please put your hands together for them.