Maybe there are none. I'm trying to get to the bottom of it. What's your topic? Sorry? Your topic? Yeah, A/B deploys, that kind of thing? Yeah, I'll write that. Thank you. Thanks for being a good support for the chat. English or Czech? Both? Czech? Does anybody have an adapter for VGA? Just for this one session? I may have one, hold on... I got one out of the booth. Sorry? The Mini DisplayPort. Mini DisplayPort to VGA. Because the HDMI doesn't work for some reason. Hold on. Yeah. But it doesn't matter, the Mac... Do you have one? Yeah. What's inside here? Well, you can try this one. Do you have the same thing I got? Try that one. You have one. Oh, you have originals. It's not original. Slightly original. The original won't work with Linux. We have the official Apple one. Really? Yeah, this one does look good. You can see my screen, right? Good. So we were supposed to start like 5 minutes ago, right? 3:40 was the beginning of the session, so we are already 5 minutes late, which I'm sorry for. I was actually thinking of using the restrooms before starting. But then you have time for that. So, for people coming... looks like nobody's outside. You can take a quick break if you need to. Yeah, go use the restrooms. Are you okay with that? Sure. Okay, I'm going to lock my screen first. Here we'll do trivia. Yeah, we'll do some trivia. Okay, do trivia. I'll be right back. And the prize for trivia — they're going to hand out all of your stars — is a t-shirt, like this one. First question: what is the national animal of Romania? What? No. Dragon? No. Lynx. I got a t-shirt. No, I'm not. We could be in the Netherlands. How did JavaScript get its name — why did they name it JavaScript? To make it similar to Java. That's exactly correct: Java had a lot of hype, and they wanted to ride the hype train. What size do you wear? L. I've never been to Pilates. Here. No, I like to keep my arm. Oh, sorry. What was MongoDB's company name before they changed it to MongoDB? That's a good one. Yeah. That's correct. You don't delegate your t-shirt to someone else. I'll take an L. Another one coming up. When Docker was created as a company, what was their original product? The company that became Docker — what were they formerly, before Docker? Not the name, but what was the space they were in? What were they trying to create? I don't know if they were using Gaby or... No. I'm going to give a hint: what is the space that OpenShift is in? Yes — who said that? What size do you want? So, Docker had a PaaS that competed with OpenShift, but they created their own container system; their platform basically sucked, but they had this nice container format, so they decided to focus on that. Back to MongoDB: what was their original product as a company? Hint: it's the same answer as before. Raise your hand. The same guy. No. So MongoDB started out as a platform-as-a-service company as well, and their problem was scaling databases for the cloud, so they just wrote their own. Let's see. Red Hat — primarily around OpenShift — just announced a partnership with whom, to provide OpenShift on their cloud services? Google, that's correct. Size? Two hands, though. What programming language was OpenShift 2 written in? What programming language is OpenShift 3 written in? No. That was a bad question. Thanks — Ruby. No. Alright, done with trivia. Thanks for doing the trivia for me. So my name is Marek. I'm actually local, so if you are not comfortable asking a question in English, ask in Czech; I will translate, and we can handle it somehow.
I work in the OpenShift team, on the evangelist team, so I'm the talking head. I just travel and explain things to people. I'm not selling — I can say it sucks, etc., etc. — so I can be honest about things. If you ask me, I will answer honestly, not only the marketing. So my vision for today was to go into the more advanced topics. Have you been to the 8am workshop in the morning, the getting started with OpenShift one? Who was there? Hands up. Who was at the 9am getting-started-with-OpenShift workshop today? Three. Okay, so my idea was to follow up from that workshop with some more advanced topics. So, it's your choice. We can either go through the content that was there, which was basic usage of OpenShift — deploying some application, scaling, and exposing routes — which is very basic; we can do that. Or we can go to the more advanced topics. My idea was to do something like A/B deployment, blue-green, some admin commands that you can run to manage your nodes, manage your pods... what else do I have there? Pre-deploy hooks — something to run when you are deploying applications — et cetera, et cetera. So, what do you want to see? Can I make a suggestion? Yes. How many people are actually planning on following along and doing all the exercises? Good question. Okay, only three. We have a problem with Wi-Fi — that's why you're asking, right? How many people want to see the advanced scenario and how many people want to see the basic scenario? That's what I'm asking: what would people like to see, what are your expectations? Advanced. Who wants to see the basics? Who wants to see basics? How many people want to see the advanced? The other half. So, let's start by deploying the basics. Actually, I have to do the very basics anyway, because for doing A/B we need to deploy something. So, we will not go through the materials that we have in the tutorial; we will go straight to the deployment. I will speak about different things along the way, and I will explain the basic stuff as we go through the advanced topics. Makes sense? Agreed? Okay, cool. Well done. I have some — you know, it should not be done, but we have bribes here. So, if you answer questions, if you do an activity in the session, if you ask questions — that's the most awesome thing; if you ask questions, I love it. So, I would love you to ask questions during the session. Don't wait till the end; just raise your hand or shout at me. Yes? So, does it mean that OpenShift could be deployed on Debian? I'm quite sure it can be run on Debian; there should be no technical problem. We use Docker, which is Debian friendly. We use Kubernetes, which is Debian friendly. We use SELinux — SELinux is Debian friendly, but Debian is not SELinux friendly, so there's a small problem there. But compared to v2 — in v2 we used SELinux to build the containers themselves, so you really needed complex policies to secure the applications, and that was very complicated to do on Debian, because you would need to rewrite everything for that system. With v3, because we use Docker, the only thing you need to do is secure Docker, so it's simpler to write the policies. So, it should be simple to move to Debian, but I am not aware of anyone who has actually gone that way. It's a matter of resources. On Ubuntu you can actually run it — I think, right now, if you take OpenShift and you try to run it on Ubuntu or Debian, it will fail on some parts and different stuff; if you make the symlinks and make it look more CentOS-y, it will work.
But you will not get the security aspects that we have in OpenShift — the SELinux policies. If somebody breaks out of the container, he will be able to do anything on the system, right? So, it's a matter of what you are actually expecting. And Debian is there — if you want to download a Vagrant box, you can download it for different systems, and Debian is there. It's simple. Okay. So, nobody should probably follow along, because we have a problem with the Wi-Fi, and the whole session before we were fixing Wi-Fi problems. So, I will just do it here, you can watch, and I will talk through things as I go. So, I will just move it here. So, this is OpenShift. This is the login. You see the list of the projects. Just to make it simple, I will create a new project. I will start with the A/B demo — that's the basic one, the simplest one — and I will create a new project for that. What did I do? Okay, over here. So, this is my empty project. So, who knows what A/B deployment is? Okay. So, you have two different versions of your application, and you put some people on the new one while some people are still seeing the old one, and you slowly move people from the old one to the new one, so you can test if everything's okay, if there's a problem, or something like that. Right? So, that's A/B deployment. Okay. So, this is the new version of the application. Okay, A/B deployment. Do you want a scarf? I already have one. You already have one? I already have one. We have... we want a sticker, and we have something. Okay, never mind. Do we have somebody at the back that doesn't have a scarf? Okay, sorry. We have everything — we have been too active in the other sessions. I was doing my best. So, to get started, I'm going to deploy an application. The application is very complex — extremely complex; there is no way you could implement this application yourself, I put so much effort into it. There is index.php, which contains the whole application, and it prints an H1 with "A". It's the A application. So, whenever I hit the application, there will be a big A on the screen, nothing else. So, let's deploy it. I will take the source code of the application from GitHub. I will go to my OpenShift project, I will add to project, I will choose a PHP application, I will name it somehow — so, appa — and I will click create. Go back over here. So, you see that I have a new application. It has a route, so it's accessible from outside once deployed. There is a service that manages that application. There is a build running, and I can see the logs for that build. So, you see that I'm doing something with the containers; I have been using some artifacts over here. You can also read it — yeah, I can read it. Okay. Now? Okay, cool. So, we built... we used the source code, we built an image using PHP 5.5 and RHEL 7 — so we are using Red Hat Enterprise Linux 7 and PHP 5.5 to run a PHP application — and we created a Docker container. Everything is based on Docker and Kubernetes. Who knows Docker? Raise your hand. Okay. Who knows Kubernetes? Good. So, can you describe to me what Docker is about? Containers. Containerization. So, what's the difference between containers and virtual machines? One... Yes. Scarf, t-shirt. Can you come and take it? Okay. Okay, thank you. Yes, that's exactly the right answer. Because if you have a VM, you are virtualizing the hardware and you are running an operating system on top of it; the kernel is communicating with the virtual hardware. What is happening with containers?
You are virtualizing the kernel calls; you are virtualizing the kernel for every application. So, there's only one kernel, and then there's just some userland running in the container — not the kernel itself. So, the benefit is... lower overhead. And the con is... the problem with that is... isolation. Isolation is one of those problems, but there are technologies like SELinux that can help you with that. But there's a bigger one: you have the same kernel and you have the same operating system, so you cannot run two different operating systems on the same node. You cannot run Linux and Windows. You can take an Ubuntu container and run it on RHEL 7 — that works, the userland works with the kernel — and usually it's not a problem; it's not that recommended, but you can do it. But you cannot take a Windows container and put it on Linux, and you cannot take a Linux container to Windows. So, it's interesting that there will be Docker APIs in Windows Server 2016. Yeah, I've seen it before. Sorry? I've seen it before. Yeah. So, it's going to be there, so it should work. Yes. I think Docker for Windows will bring in a Linux kernel using a VM in order to allow you to have that Linux container. It would be nice to run it on RHEL 7. Yeah, and it's just a trickle-down effect to run on RHEL 7, so it's running on it. Yeah, but it still brings in a VM so you can have a Linux kernel and Linux guests. I'm not sure whether they would be pulling in a Linux kernel, but they're definitely using a VM, because they have their own virtualization solution. So, in Windows 10 Enterprise Edition you can run every application in a single, light VM to isolate them from each other and still see it as a normal window, and I think they're using the same technology to actually run the Docker containers. But I'm not sure if it's with the Linux kernel. But that's it. So, let's stop there. So, that's why we are using Docker here — we are using Docker for making the containers, for packaging the applications in the platform. So, whenever you try to deploy an application, we take the source code into a Docker container, and then we take the Docker container and run it somewhere in the cluster. That's the basic workflow. We have a tool called source-to-image — I think I have it somewhere here as a repo. You can check it out: openshift/source-to-image. This is a tool that we designed; it's open source, and you can use it without OpenShift if you want. It is a tool that takes a Docker image, called a builder image, and it takes the source code, and it produces a new image that is runnable in the end. So, it converts the source code into a runnable Docker image. What's the benefit? You don't have to write Dockerfiles. You have the builder image, built once, and then you just point it at some source code and you get a runnable Docker image at the end. You don't need to always run git pull, run compile something, run do something. So, that's a nice piece that we use. As well, you can use it without OpenShift — that's mentioned somewhere here — and it's extremely simple. Can you read it? No. Let's go bigger. So, you create a Docker container that has two mandatory and two optional scripts. They can be any runnable script — bash, Ruby, whatever. assemble is used for building the container — this is run in the build process — and then, as the entry point of the new container, it will use the run script; that's how you spin up the application. In assemble, you have the build process; in run, you have the startup process. Then there are the two optional ones, save-artifacts and usage.
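For reference, a rough sketch of the builder-image layout he is describing; the paths, repository URL, and image names here are illustrative, not taken from the session:

    # scripts inside a builder image (a common default location; it can be overridden by an image label)
    /usr/libexec/s2i/
        assemble        # the build step: take the injected source, install dependencies, compile
        run             # the entry point of the resulting image: start the application
        save-artifacts  # optional: stream cacheable build artifacts (e.g. a local Maven repo) to stdout
        usage           # optional: print how to use the image (environment variables, etc.)

    # using source-to-image standalone, outside OpenShift
    s2i build https://github.com/example/my-app.git my-builder-image my-app-image
    docker run -p 8080:8080 my-app-image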
save-artifacts is useful for Maven. You know Maven? You know what happens when you use Maven and try to build something? You wait — you download the whole internet. If you have to do it every time, it's painful. save-artifacts allows you to cache those artifacts and reuse them, instead of downloading them every time you build the image. usage is how to use the image — because you can push some environment variables into it, or some command-line options, which change the behavior of the image — so this is like the help explanation. So, it's an extremely simple tool but extremely powerful, because it allows you to do reproducible builds of your code and you don't need to write Dockerfiles anymore. Which is nice. Okay. So, we built a Docker image for our application, and at the end we should see that it was pushed into a Docker registry — that means it succeeded. And if I go over here, I have one pod running. Most of you said you know Kubernetes — so what the hell is that? You said that you know Kubernetes: what is a pod? A collection of containers. A collection of containers — something else? Is there something specific about those containers? It's a hack on top of Docker? Could be. So, a pod is a set of containers that always run on the same node; they share the same IP address and they have completely the same life cycle. If you stop the pod, you stop all those containers. If you start the pod, you start all those containers. If you say, I want to redeploy this pod on a different server, you take all the containers in the pod and move them all onto the different server. It's a logical grouping of containers with the same life cycle. So, we have a pod with our application, and when I open it on the web page I get A. Yes? Is the pod just a Go binary inside another container? No, the pod is purely logical, virtual. It doesn't... When I run OpenShift and I start to deploy an application, I can see two local images: one is named like the application and the other one is named like "pod", and it's like another container with something in it. So... when you run an application, if you do docker ps or docker images... oh, okay, you can see those two images if you run OpenShift. So, the other one would be the setup container — it's a manager for other containers. When you run the builds, for example, there is one container that's privileged; it pulls down the builder container and then pushes the built container back to the registry — it's like a manager for the other containers you need to manage. But during the run I can see two containers for one pod. Always? There is one, just to start the containers. It depends on what containers you put into the pod — there could be one container or more containers. Kubernetes will always start a pause container in a pod, and it's used to... well, you can override it and have a container which starts first in the pod and prepares it, but this is usually not used. So, by default it's a simple container that just sits there and does nothing. Really? Whatever. So, there are two containers in the pod, as you said. I'm okay with that, if you say so — I really didn't know that Kubernetes would show it twice. But I have my system running with the cluster and there are already containers running; it would be more difficult to do a docker ps or something like that from the node. But I will investigate it, because it's pretty interesting. Though it's not that important for our use case, right? Or is it?
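For reference, a minimal sketch of what a pod definition looks like; the names and image references are made up. Both containers share the pod's IP address and life cycle:

    apiVersion: v1
    kind: Pod
    metadata:
      name: appa-example                         # hypothetical name
      labels:
        app: appa
    spec:
      containers:
      - name: web
        image: 172.30.0.1:5000/ab/appa:latest    # hypothetical image reference
        ports:
        - containerPort: 8080
      - name: sidecar                            # a second container in the same pod
        image: docker.io/example/helper:latest   # shares the IP address and life cycle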
So, we have the application running. I have application A, and to reach this application I need to somehow expose it to the outside world — that's what I have the route for. But to make it... so, can I draw on the screen? We don't need this, we don't need this. So, what happens when a request comes from the outside world: we hit a router that translates it to the pod's address; the application is running there, it produces a response, and sends it back to the client. What we need to do to have A/B is to create some other route that will hit this deployment, deployment A, but also a new deployment, which will be B. So, when I get a request, it goes either here or there, and then I get some response and it goes back to the client. Each of these will also be accessible on its own route, so I can expose only A, only B, or A/B together. Does it make sense? So, to do that we need to run a single command that's a bit longer: cd... oc project ab... You might want to bump up the font. Sorry? Ah, again? Okay. And I will run a command like this. So, what I'm going to do is take this service and generate a new one from it — I will essentially copy this one into that one, nothing difficult. Now, when I switch back, you see I have an ab service and I have the original service as well. Right? So, the next thing that we need to speak about is labels. Is there anyone who would like to tell me something about labels in Kubernetes and OpenShift? Now, services — we didn't speak about services yet, right? Or did we? No. So, services... we spoke about pods; services are load balancers. If I have more pods, more containers, running behind the service and I hit the... I hate this terminology, because we use the same word for two different things... you have the application running in the pod, and the service is the load balancer: if you have multiple pods, you will hit one of them. It's a simple TCP-based balancer. And how it works is, it uses labels. A label is a very simple thing in Kubernetes — a name and a value. You tag a resource, and then you can select resources based on those tags. Does it make sense? Just say no if it doesn't make sense; that's completely okay, because I was expecting more people coming from the morning session, so we are jumping a bit too quickly into more advanced topics without having the basics. So, we are doing the basics in a very compressed version. So, labels are tags that I can put on different pods, on different services, or on different replication controllers — which is the last resource that we need to speak about. So, what I can do: browse pods, open a pod. This one is the build that produced the new runnable image for us, and I also have the pod that runs the application itself. This is the PHP application; I can check the logs, and you will see that there was a request to my application and there was a request for the favicon as well — that's because the browser always asks for the favicon along with the application itself. So, this is my pod, and you see the blue things over here — is it readable for you? Yes. Those are labels. These things are the labels. So, my pod is labeled: this is a name, this is a value; this is a name, this is a value; this is a name, this is a value. So, you see that there are three different labels. If I go to a service — my appa service — you will see that there is a selector, and the selector says that deploymentconfig has to equal appa. So, this service will be load balancing over all the pods — any pod in the cluster, in the project, that has the label deploymentconfig equals appa. You can also put multiple labels in the selector; in that case you are narrowing the selection to more specific pods. So, this service goes only to deploymentconfig appa.
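A sketch of where this part of the demo ends up, assuming the service and label names used in the session (ports are illustrative): the appa service selects by the deploymentconfig label, the ab service selects any pod labeled ab="true", and that extra label is later added to the pod template of the deployment configs:

    apiVersion: v1
    kind: Service
    metadata:
      name: appa
    spec:
      selector:
        deploymentconfig: appa      # load balances over every pod carrying this label
      ports:
      - port: 8080
        targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ab
    spec:
      selector:
        ab: "true"                  # any pod labeled ab="true", whether it comes from appa or appb
      ports:
      - port: 8080
        targetPort: 8080

    # and in each deployment config, the pod template gets the extra label
    # (quoted, because label values have to be strings, not booleans)
    spec:
      template:
        metadata:
          labels:
            app: appa
            deploymentconfig: appa
            ab: "true"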
When I go to my ab service, this one is using ab equals true. So, is there some pod that has ab equals true? No, not at this time. So, what do I have to do? Nice. So, I will go to my deployment config — this is the configuration of the deployment of the application — and I will change it. So, this is painful. We are working on a nicer, form-based way of editing these files... Steve, do you remember when Jacob was committing the form editor for deployment configs and those configs? I think he said it was going into one of the upcoming releases, so it will probably be in the next Enterprise release. So, in the next version we will have a nice clicky way of editing these files, but right now we have to go down to the raw Kubernetes level and edit the YAML or JSON file — in our case it is YAML. So, my deployment config says that there is a template for the containers that will be deployed through this deployment, using this configuration: the containers are going to have — whatever it is over here — the labels app equals appa and deploymentconfig equals appa. So, what I am going to do is add a new label, ab true — the one that the ab service is looking for on any pod. I am going to save it, and you will see that there is a new deployment running, because somewhere here I have triggers, and when I change the config, my application is redeployed. So, it is happening, something is going on: I deployed the application again, and when you check the overview, now I see one pod in the ab service, because the pod that was redeployed — which is this one — also has ab true. So, I checked... Upon redeployment, the previous deployment is not supposed to be here? Yes — we deployed a new container, we re-labeled it, and then we tore down the previous one. Did it happen automatically? Yes, because I have the trigger there that says: when the config changes, redeploy all the containers that inherit from this configuration. So, it happened automatically. Right now, I have one service, ab, that goes to this pod, and at the same time I have the service appa that goes to the same pod. There is only one pod, one container, at this time. So... when you scale the pod? Do you scale the pod? So, I scale the pod... the labels are there, so the service ab will consume both of those pods. Why is it halfway? Because it's still waiting for the pod to start. So, it knows that the container was... Kubernetes told Docker, hey, start me the container — that's why it shows two, and I'm waiting for that little container to actually start. It's kind of slow. It's kind of slow. Docker was faster. We have two problems here. The cluster sometimes fails on us, because we have been using it the whole day, for different bunches of people, for different demos, so sometimes we have a glitch there. The other one is that the Wi-Fi internet sucks, and this is WebSocket driven; when the WebSocket connection drops, it won't get an update here. It looks like even with a reload... Browse...
So, let's check what's happening. So, the pod is pending; we are waiting for the pod to start. Nothing's happening there. No. Interesting. Steve, this is pretty much the same problem you had before. My pod is dead, you see? So, now we scale to three. If I scale down, one of those pods is going to be killed — let's hope it's the one that's problematic. I do have that backup server, but I know you've already created this stuff. Yeah. Is that the same setup? Yeah, that's the only difference. But let's check... oc get pods — we can list all the pods; there's one that's being terminated. oc delete... let's see if this does something. It was deleted. It's the same problem; we will have to wait for the timeout. For some reason I think there is a problematic node. That happened as well when we were doing the previous session: there was a user — when Ryan was provisioning the nodes, he provisioned sample users, and user 61 had a problem during deployment, and he was redeploying and redeploying the sample application that was deployed there. So, since two days ago, we did 729 deployments, and we probably generated 729 different Docker images that are lying somewhere in the cluster; and when I was trying to clean it up, I cleaned all the deployments, but I couldn't clean the images, because I didn't have the access rights for that. Yes? Is there some way to clean up the Docker images? Because I'm running it... and all the rest of the containers... because you always create a new one but don't clean up the old one. So, there is a prune command for the administrators. Right now there is no scheduling or anything like that; you have to run it manually. There is some automatic pruning — if you reach some limit, it can be triggered — but it's not automatically run every hour or something. You can trigger it manually: oadm prune images, it's a command like that. You can put it into cron or something and run it every hour. It's in oadm — this is only for administrators; the users cannot prune. From a security point of view it could be problematic when users start pruning, and also we try to keep as many artifacts as possible for rollbacks, so if there is some problematic deployment you can roll your application back. But I think that... every deployment is an image, not a container — every deployment is a new image? Yes... well, it doesn't have to be. You can deploy containers without the build process; in that case it will be the same image. When you are deploying, you can say: deploy me a new container from Docker Hub — in that case we don't build a new image, we just deploy it. If you use source-to-image, even there it is not one image per deployment, it is one per build. Okay. But there were as many builds as deployments for that user — there was some problem with fetching the source code, I don't know, but there was some problem. The process is: you trigger the build; when the build finishes, it can trigger a deployment; and the deployment then deploys the containers in the cluster. There is a one-to-one relationship between deployments and replication controllers, from a Kubernetes perspective? Yeah... one-to-one... no, one-to-N: you have one deployment config, and it can have multiple replication controllers. I mean, each time we do a deployment, an RC is created? Yes — the deployment is pretty much a replication controller. So, hopefully this will somehow finish our scaling process... but you have seen that we can actually scale if we hit the correct node. Try refreshing. And see, I just printed the images. Okay. Yeah, there is a warning, so it timed out. So... there we go. What was I doing?
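Going back to the pruning he mentioned, a rough sketch of the admin-side commands; the flags are indicative, check oadm prune --help on your version:

    # run as a cluster administrator, e.g. from cron
    oadm prune builds      --keep-complete=5 --keep-failed=1 --confirm
    oadm prune deployments --keep-complete=5 --keep-failed=1 --confirm
    oadm prune images      --keep-tag-revisions=3 --keep-younger-than=60m --confirm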
Okay, so we have the application. By default, anything that you create is not accessible from the outside world. If I click over here, there is no route; it just shows me the service itself. I can create a route — a route is an entry point from the outside world. So I create a new route... and I don't like it here, let's do it on the command line; if you don't mind, I prefer the command line. Is it okay for you? It's fine. oc get services, so I see what services I have here. There is the ab service, so I can do oc expose service ab, and this creates a new route for me. So right now I already have the URL generated and linked to the service, and when I open this I should be able to hit A... and yeah, A as well, because it points to all the A containers that I have already deployed. So the next thing is, I need to deploy application B, the new one. That will be the new deployment, and I have my application A still running, and whenever I hit it, some of the people will hit A and some will hit the new B. Okay. My application B is as complex as application A. Right, so there you can see the source code for the application. I will take the source URL, I go back here, I will add to project, and I will choose — this time I can choose PHP 5.6, it doesn't matter; it was 5.5 before. I could now use Python 3, for example; there is no limit on the technology that you want to use. You can have application A be PHP and application B be Java, and you can mix them just fine — the system doesn't care what's inside that container. So that's nice. So it's appb, this is my source code, let's create... back to overview, and we have a build running. I can again go to the log, so something's happening: container finished, I am removing, I am cleaning up, I am pushing to the repo, pushing to the repo. Once that happens, I will pull it to one of the nodes and spin it up as a new application. Come on, be nice. Okay, successfully pushed. So I can go to the overview and I can see that my appb has been deployed, one pod is there, and my ab service doesn't see the application. Why? Because I forgot the label. So what I wanted to do — actually, I was thinking about something else — is do it over here: you can specify the label during the deployment. I did not do it, which means I need to go back to the deployments, appb, and edit it myself: ab equals true. You need to put the quotes there, because Go, or the YAML parser, actually interprets true as the boolean value, and these labels expect string to string, so the boolean is not compatible with the string that's expected. So you need to put the quotes over there. I save it, the deployment is triggered, and I'm just waiting for the application to deploy. And here we go. So our ab service now points to all those in appa and to all those in appb, and I also have these two application-specific access points.
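A rough sketch of checking the combined route from the command line; the hostname is made up — use whatever oc expose generated for you:

    oc get routes
    # curl avoids the browser's session-affinity cookie, so you see the round robin;
    # with two A pods and one B pod running you would see something like A, A, B, ...
    for i in 1 2 3 4 5 6; do curl -s http://ab-abdemo.apps.example.com/; echo; done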
When I refresh my browser on the ab route, I always hit A. Why? Session affinity. Yes, by default we have session affinity. What I can do: take this URL and hit it from the command line — ab, ab... I have two As and one B as I go down the list. Now I can go back to my browser, scale A down to zero, scale up my B, whatever, and now I will be hitting only the Bs. Right, so that's the A/B deployment: I had application A, then I had A/B, and I could scale down slowly and scale up slowly, and in the end I would only have the Bs. Can you change the ratio of which application you are hitting — if you want, for example, 10% of requests to land on B and 90% to land on A? It's a good question. Yes, and... yes, we can. It's not that easy. The thing is, by default what we are using is HAProxy, and it's a round-robin load balancer by default — that's the configuration. So by default you just go 1, 1, 1, 1, 1. What you can do is change the configuration of HAProxy — you can put in your own configuration; in that case you can change the behavior of the load balancer from round robin to something else. You can change any configuration that is connected with the load balancer, and in that case you can change the ratios, maybe based on some labels, maybe based on something else. But you can do it. It's just that this use case uses low-level pieces to do a high-level deployment, and what you are asking for is an even higher-level deployment, so you need to change something else in the configuration. Canary deployments — I always forget the name — canary deployments, when you are deploying a new version of the application. So you can have — let me go back — when I go to my appa, for example, maximum unavailable and maximum surge: these configure how many will be taken down at a time and how fast they will be redeployed. Whenever I push a new version of the source code, trigger me a build, and then, when you are deploying the newly built image, do it in steps of 10% over a period of, I don't know, 5 days, something like that. But these ratios refer to the number of pods, the number of containers — the number of pods that are connected to the deployment config. So this can be done for the same application — what you are asking for, just as simple as this. If you want to do PHP versus Java, or two different applications, you would need to go the more complex way, as I did; in that case you would need to change the HAProxy router. So, the HAProxy router is a container by itself that's deployed in OpenShift as a pod; it's managed by Kubernetes. In our case it's running on a special infra node. So our nodes have different labels: there is a master node, there are infra (infrastructure) nodes, and there are demo nodes — that's where I'm deploying my pods — and different containers can have different policies about which nodes they should choose when they are being deployed. So on my infra node I have the container with HAProxy, and I have the registry there as well — that's also a container running on OpenShift. As an administrator I could just go and delete that pod, and it would be redeployed like any other pod.
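For reference, the knobs he is pointing at live in the deployment config's strategy; a sketch with illustrative values:

    spec:
      strategy:
        type: Rolling
        rollingParams:
          maxUnavailable: 25%     # how many pods may be taken down at once
          maxSurge: 25%           # how many extra pods may be started above the desired count
          updatePeriodSeconds: 1
          intervalSeconds: 1
          timeoutSeconds: 600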
If I want to start my own OpenShift on Amazon, is there some kind of... is it something like running a CloudFormation template and just having everything? I am not sure if we already have a CloudFormation or Heat template, but what we have, and what's quite nice, is the openshift-ansible project. So, do you know Ansible? Everybody knows Ansible? Who doesn't know Ansible? Okay. Do you know Puppet? Do you know Chef? So, to manage servers at scale you don't want to SSH into every single one and change it by hand; you want a tool that will push the configuration, that will manage the configuration of the servers automatically. There are different tools that can do that, and Ansible is one of them. Red Hat actually acquired Ansible half a year ago, so it's now a Red Hat company, and we are using Ansible for management — I think the official installation method is the Ansible script. There is a wrapper that looks like a command, but it runs Ansible underneath. And there is configuration to use it with AWS and with local VMs, so you can just configure your credentials, say how many VMs should be provisioned for deployment, for infrastructure, for etcd, etc.; it will create the VMs on one of those, and then it will configure the VMs as well. So that's probably the simplest way. And you choose whether you want to use Enterprise or Origin. Origin is the open source project — that's upstream of Enterprise — and Enterprise is the product. So if you are the brave guy who likes to play with all the latest cool stuff, Origin is for you. If you want the super stable version that is used by enterprises, then you buy Enterprise, and one of the nice benefits is that when it doesn't work, you can go to Red Hat and yell at us that it doesn't work. So there is Enterprise and Origin, and these scripts are open source; if you want to deploy OpenShift, this is probably the simplest path. If you want to do it by hand, it's not that difficult either. If you tried OpenShift v2, the previous version, it was written in Ruby and it had so many dependencies all over the place that it wasn't easy to deploy. With v3, it's a Go binary that pretty much contains everything; the whole OpenShift can be one binary. You run it — if you run it as root it configures the system as well — so that's one deployment that's possible. And if you want to do it by hand, you can download the binary for your systems: you spin up one as a master, you spin up the other ones as nodes, and you connect them together. But then you also need to configure the networking, you need to configure storage, etc., and these more complex deployment pieces can be handled by the Ansible scripts — like, deploying the OpenShift part, that's easy; deploying the underlying technologies, like Open vSwitch or some other software-defined networking for virtual isolation of the networks, or some distributed storage like Gluster, Ceph, or NFS for the persistent volumes, so you can write on one node and it's accessible on the other nodes... Actually, if you want data that's persisted between restarts of the containers, you need persistent volumes for that, so you need some technology underneath that gives you the capability to do persistent storage. That's the most challenging part of deploying OpenShift — the underlying technologies. And then it's not just the binary; I think the biggest part is etcd... yeah, those certificates. Yes, that's fine — again, if you use Ansible it generates all the certificates, distributes them to all the machines, and configures everything so it works. That's right. So the Ansible scripts are the best. Ryan, I think you have been using this for provisioning these VMs, right, the ones we are using for the workshop? This is the best for the upstream Origin code. There's also another repo — if you want to deploy using Enterprise, OpenShift Enterprise, talk to me and I'll give you a URL to a different repo that I use for Enterprise.
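A rough sketch of what an openshift-ansible inventory looked like around that time; the hostnames and labels here are made up:

    [OSEv3:children]
    masters
    nodes

    [OSEv3:vars]
    ansible_ssh_user=root
    deployment_type=origin        # or openshift-enterprise

    [masters]
    master.example.com

    [nodes]
    master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
    node1.example.com  openshift_node_labels="{'region': 'demo',  'zone': 'default'}"
    node2.example.com  openshift_node_labels="{'region': 'demo',  'zone': 'default'}"

    # then, roughly: ansible-playbook -i <this file> playbooks/byo/config.yml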
This one is the upstream for you. This is the upstream, yeah, and mine uses this anyway — it's a simplification for a specific deployment. Will it be possible in the future to install it and run it with firewalld? Because right now the installer shuts down firewalld and starts iptables, and firewalld is kind of the default on RHEL 7, so it doesn't make sense. I think you can run firewalld. The problem is not firewalld itself; the problem is that firewalld sometimes does things that we don't want, because we need to create the virtual network, we need to create the routing between different pods, the VLANs, etc., which firewalld sometimes interferes with. So firewalld is a bit problematic in our case: firewalld is a nice, simple way to manage a firewall, and we need a complex configuration that changes a lot, very often. If you run firewalld and you make sure it doesn't interfere with us, it's okay, we can run with it. I saw that it works with firewalld... it's working in 1.9 of Docker? Yeah. The other thing is that Docker is probably using libnetwork, or whatever they call it, which Kubernetes is not using. So the networking stack that Docker is using is not compatible with the Kubernetes one, and you would need support in Kubernetes for that to actually make it work, not just on the Docker level. There is a blog post, if you want to Google it, on why not — there are some technical and political reasons. The end of the blog post is that the new Docker will have DNS built in... no one knows when, it should be 2.0. So aren't they going to switch from Docker to rkt (Rocket) or something? Are you asking me? My personal opinion — I would love to. I don't like Docker... not Docker the technology, but Docker's politics and behavior, I don't like it. Speaking for myself, personally. From the Red Hat point of view, I don't know, and I don't want to guess, because I always say something in these presentations that is sometimes confidential and that I should not say. It might be there, it might not. No idea. What's the time? So we are at about half of the workshop, maybe a bit more. What else would you like to see? You have seen the A/B deployment, and we have spoken about a lot of stuff, low level and high level. Any other questions you have that I can answer, or at least try to answer? Could you do a real quick walkthrough of how you create a set of services that talk to each other — like a little Node.js app, or just something that's kind of... I think we can do that. If you don't mind, rather than the workshop materials I have a prepared basic walkthrough using OpenShift. It's a Java application that's being deployed to EAP, and it's using MongoDB on the back end to store the data, and then it shows a map of all the MLB parks in the United States. And I would love to ask Grant over there if he can explain what MLB parks are, because we in Europe don't play baseball that much. I've got an alternate version in Node.js with... I don't like Node.js. Can you forgive me for that? No. What's MLB?
Just explain MLB. So, to take it back a little bit: it's a spatial application using OpenStreetMap, showing all the baseball stadiums, based on the longitude and latitude coordinates of the corners of your web browser. Each time you change the map — you drag it around — it does a REST API call with the longitude and latitude that's displayed in your browser, and that does a REST query against MongoDB. And I know that sounds complicated, so what it is — it's just a map of all the... Okay, good. So we're using the source code, because we don't have a pre-built image, and there is source code somewhere... let me check... over here. And the username can be changed to somebody else's username on GitHub; I'm using gshipley, which is the guy who just explained to you what MLB parks are about, and who created the application. So I will use his source code, and when I go over here, in this project... where is my JBoss... JBoss EAP... I will call it mlbparks... the source code... and create. So what's going to happen is it will start pulling the source code and building the application. If I go to the logs I can follow along, and you see that we are downloading all of the internet through Maven — again, we need to download everything. Once this happens, we will have the image and we will deploy it; the image can consume some environment variables that authenticate against the MongoDB service, which we will also deploy, and then we will deploy the application and connect it to MongoDB. So we need to wait for this to happen. Is there... okay... see, mlbparks... and this is the result. So this is the map, and the points are the parks where the US guys play baseball. And if you click one, you get some basic information — 73 million dollars is the team payroll for the Rockies in Denver — some basic information. So that's the result. If I zoom in — each time the map changes, like I was telling you, it gets the coordinates of the four corners and makes a REST call for the data within that location. And some... you can see it at the back... it's doing a $within query with some position: give me everything in this rectangle — a geo query on MongoDB. How is our build going?
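The query he is describing is a MongoDB geo query; a sketch in the mongo shell, with the collection name, field name, and coordinates all made up:

    db.teams.find({
      coordinates: {
        // lower-left and upper-right corners of the visible map rectangle
        $geoWithin: { $box: [ [ -110.0, 30.0 ], [ -90.0, 45.0 ] ] }
      }
    })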
Go back to ABs... still going... oh, pushing the image. I can... but I don't really need to pull Mongo, I will just go here and look for Mongo. Because I don't have my persistent storage configured at all in my cluster, I will not use a persistent MongoDB, just the container; whenever the container restarts it loses all the data, but that's okay for our simple example. There is a MongoDB, and we can fill in the information — the environment variables that will be pushed into the container when it starts. It configures the user: this username, this password; it creates this database and an admin user with this password as well. I'm going to check in the tutorial, because I don't remember what the application expects me to use... and choose some specific values, which is mlbparks for all of those: mlbparks, mlbparks, mlbparks... some labels, no need for that... and create. So, there is my mlbparks application and there is my MongoDB one. So I have built my source code, I have deployed the application — it's probably still starting — and I had the MongoDB image pre-pulled on the node; I do that just to make it simple, so it doesn't take so long every time. And I spin it up; it created the users, and the admin user, with the specific information that we put into the form. What about a MongoDB cluster? You spin up the MongoDB and it just forms the cluster, and you have two or more nodes? Yeah... I think in Origin there is a MongoDB cluster — there is some work on it, with the sharding; they're working on it and the engineers curse all the time, so it's work in progress. We also have a MySQL cluster in the Origin repo, master-slave; we have examples, something like this... And how do you handle storage with MongoDB? I don't really know how they handle the storage — I'm asking because of the previous talk, because we are trying to solve this... and I will answer this: there is no... this is not the way to do it. Like, we want to have MongoDB on OpenShift, run in a clean way. We do have — if you do add to project, there are two types of Mongo listed: there is an ephemeral and a persistent template; the persistent one for Mongo uses a persistent volume claim. For performance reasons I need to run this on... you can use a node selector to ensure that that pod always lands on your big hardware with the faster disk. But based on the documentation, hostPath is only for testing... So I don't know. For MongoDB I'll check with our engineers; MongoDB is quite challenging, I know that. We had the Winter of Code that was running during the day, and guys from Spain, from a company called Produban, created a Cassandra cluster. So when you spin it up, you deploy a master — it should probably work with just everything on the same level, but they have a master and slave nodes — and as you scale the slaves, it redistributes the data. So there is no need for persistent storage there, because when you create a new one, the ring is changed and the data moves around the cluster by default. In that case it works quite nicely. The other problem is that we hide everything behind the load-balancing services; in Cassandra you usually want to connect to the closest node, and it then gossips you to the correct place. So from a performance point of view you have the load balancer in front of Cassandra — you solve one problem and you create a new one. So all these persistent stores, the databases, are not yet that well suited for horizontal scalability; you still need to take care of this one nice shiny machine that is running your storage.
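Roughly the command-line equivalent of the form he filled in; the values are examples, and "mongodb" has to resolve to whatever MongoDB image or image stream your cluster provides:

    oc new-app mongodb \
      -e MONGODB_USER=mlbparks \
      -e MONGODB_PASSWORD=mlbparks \
      -e MONGODB_DATABASE=mlbparks \
      -e MONGODB_ADMIN_PASSWORD=mlbparks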
There have been people who have been running HDFS, and the thing on top of it, HBase, on OpenShift — using OpenShift to scale it. HBase — what is that? It comes directly from the Hadoop project. I have already seen some of that on Origin, not in Enterprise. If you click the scale button on this MongoDB, will it break the application? I don't know what is going to happen. Do you want to try it? As long as the storage is there, you'd get two databases. Yeah, with a load balancer in front. No, we don't do that. I wouldn't do that. Ah, okay. So our application over here should be running; we have a map without any data, and we have the database that we are running over here. And if you click on the pod you should be able to see... it is not here... all the environment variables, they should be somewhere, let me check... ah, never mind. When you click somewhere in the user interface you will get a list of all the environment variables that are pushed inside that container. On this one? Is it here? No... good, good: environment variables. That is what we configured using the form. I clicked on the MongoDB, so MongoDB was configured with these. And I can click over here, and I have no environment variables — so not yet; my application doesn't know how to connect to the MongoDB yet. Is it possible to provide some environment... You can use secrets, and then the secrets will be mounted inside the container — not through environment variables, but as a file on the file system, and you read it from there. Yeah, but is it possible to bind secrets to a specific user? Because otherwise I need to create a secret for every project again, just for a specific user, for example. I think so, because when you want to use a private repo, for example on GitHub, you configure the secret, you configure the SSH key for a specific user, and everywhere that user is used, it uses this particular key. So it should be possible to do it per user. The next step is to link it all together. So what I need to do — I will run the command on the command line — is change my deployment config, change these environment variables. When I change the deployment config, it will trigger the deployment of the container... I know, I have a wrong name there... it will trigger a redeployment of the container so that the container can consume the new environment variables, because you cannot push environment variables into existing processes; they have to be inherited from the parent. So... I named it wrong, so I need to change it to my deployment config name, mlbparks. The deployment config was changed and my application is being redeployed, and the new container is going to have these environment variables pushed in. So once my application starts, it will be able to connect to the MongoDB. So that's the basic workflow that you do.
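A sketch of that linking step on the command line; the variable names are what the application appears to expect, which is an assumption:

    # push the connection variables into the deployment config; this triggers a
    # redeployment so the new pods start with them
    oc env dc/mlbparks \
      MONGODB_USER=mlbparks \
      MONGODB_PASSWORD=mlbparks \
      MONGODB_DATABASE=mlbparks
    oc env dc/mlbparks --list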
The environment variables — can they reference other pods? Yeah... if you want, I will show you something. If I do oc new-app... sorry, oc new-app kubernetes/guestbook, I deploy a new application from a Docker image. Now... really? Okay... you know, I keep getting interrupted and I have typos everywhere. So I am deploying an image from Docker Hub; it will be pulled, it will be started, and I will have a new application available. This guestbook is already running, so I create a new route for it... I don't like this... oc... guestbook... oc expose... guestbook... no, service guestbook. So now guestbook has a URL; there is the application. You go to the environment variables, and you see that for every service we have created in the project you get a port, you get a host — these different naming conventions. So for every service that is inside this project, all the connection information — port and host — is pushed into all the containers in the same project, and I can discover it just by using the name. As well, if I want to connect back to OpenShift, I have the information for how to connect there, and I have a secret saved in every container, so I can authenticate as the user who deployed the application and see what's happening in the cluster — so you can connect back out to the cluster. So this is used for discovery. The other possibility is... give me the pod... this pod... terminal. Again... Now I am connected to a shell inside the container, so I am working internally in the container, and if I do cat /etc/hosts, there should be... there is my hostname. And as well, cat /etc/resolv.conf — there are different name servers, and these name servers are internal to OpenShift; you can do different queries against those DNS servers to discover different things. So you can definitely discover, by specific names, the different services that are in the same project, and I think there are already SRV entries as well, so you could also discover not only the IP address but also the ports. By default, though, using DNS you can only discover the IP address; in the environment variables you have both.
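What that discovery looks like from inside a container; the addresses and port numbers below are made up:

    # environment variables injected for every service in the project
    MONGODB_SERVICE_HOST=172.30.21.45
    MONGODB_SERVICE_PORT=27017
    GUESTBOOK_SERVICE_HOST=172.30.18.12
    GUESTBOOK_SERVICE_PORT=3000

    # or resolve the service by name through the internal DNS
    oc rsh <pod-name>              # open a shell inside a pod
    cat /etc/resolv.conf           # points at the OpenShift name servers
    getent hosts mongodb           # resolves the service's cluster IP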
Is it shared? No... it's shared; it connects out to... it's part of the platform itself. I am quite sure this can be made configurable, so if you want to use a different name server than the one we are using internally, you can replace it. If you want to use something other than HAProxy for the load balancing, you can just replace the container with some other container; it just has to be able to do all the things you expect it to do. Because you could look up something you shouldn't know about? Yes — the information about other pods? It should not... I am not sure how this is handled; I haven't had time to play with it. This is quite a new feature, so I just know it's there; it was part of the latest version, I think. You can check what the source address of a particular query is, and based on that you can know who is asking for what... I am not sure; there is probably something like that. So, that's it for discovery — you have two different options, environment variables and DNS. The Vagrant images use xip.io to handle all of that. Yes. Pretty much all of the different components that we are using can be replaced by something else. If you don't want to use HAProxy, you can use the big F5s or some other hardware load balancers; if you don't want to use Open vSwitch for the software-defined networking, you can switch it for something else. We provide a default configuration, but if you already have some existing deployments in your company and you want to hook into your existing infrastructure, it will help you with that. Were you saying you didn't want certain services to be seen? You could put them in a separate project; that might work — you only see things within your project scope. And there is also support for isolation per project — if you want, you can do a dedicated VXLAN per project. Where is my application... never mind... ABs... ABs... mlbparks. And I have the points there, so I connected back to the MongoDB and I was able to read the data. Now the question is: where did I get the data?
It just appeared from somewhere. By building the application? There was some initial data? Not really. When I am building the application itself... I now have — I didn't have before, but now I have — access to the services in the same project. But I cannot be sure that the MongoDB is already running when I am building the application. So if I were to populate some script or some data... you should never try to populate the data in the build phase, because you have no idea... It's not done in the build phase, it's done in the running phase. So there is some... okay, I think it's over here. So there is no connection to OpenShift in that; it's as simple as: when I connect, if there are no documents in the collection with the parks, I download this JSON file, parse it, and upload it to the database. So when the application connects to MongoDB, it checks whether the data is there; if it's not, it populates the database. This works great with the ephemeral storage, because whenever I connect, the data ends up there — it always seems to be there somehow. So it's a trick to make it work, but this is not good practice at all, doing it like this. In OpenShift, for example, when you have Rails applications, or Django, we have pre-deploy hooks, which you configure on the deployment. So whenever the deployment is about to start, you can trigger: use this image and run this command in it — and that could be a Rails migrate or a Django migrate. So before the deployment starts, you run the migration, and if it completes correctly, with a zero exit code, then the deployment starts; if it fails, the deployment doesn't start at all. Then you can have a post-deployment hook: when all the containers have been redeployed, you can trigger this command in the container; if it doesn't fail, it's okay, we keep it; if it fails, we roll all the containers back to the previous version. So you have two different ways to do it, but you should not do it in the build phase, because in the build phase you have no idea what's inside the cluster. In the build phase, what you do is generate things — again, I'm a Rails guy, so, for example, you generate the assets, you compress them into a gzip or something; you do all those things that generate something inside the application — but you don't touch the other things in the cluster. Not yet; that's part of the deployment phase. The same goes for the deployment trigger: you don't have to trigger the deployment automatically when you finish the build. You can have the new image built, and when you want, you can manually trigger the deployment. If you did the migration as part of the build phase, you would already have migrated your database, but the code itself could be deployed months later. So you want to do the migration as part of the deployment phase, when you are moving to the new deployment. So, did that answer your question about the linking and deploying stuff? Yeah, sure... I think I have to look back... do you remember the thing with links? Is that done through the environment, or is it something I have to do to make it pull those in? You configure them on the deployment config in OpenShift, and then we make sure to push them into all the containers, so when my app is running... Okay. So how are we on time? Eight minutes left? Some other questions? Cool.
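A sketch of the pre- and post-deployment hooks he describes, configured on the deployment config's rolling strategy; the container name and commands are hypothetical:

    spec:
      strategy:
        type: Rolling
        rollingParams:
          pre:
            failurePolicy: Abort                    # hook fails -> the deployment never starts
            execNewPod:
              containerName: mlbparks               # run in a new pod based on this container's image
              command: [ "php", "migrate.php" ]     # e.g. a database migration
          post:
            failurePolicy: Abort                    # hook fails -> the deployment is considered failed
            execNewPod:
              containerName: mlbparks
              command: [ "php", "smoke-test.php" ]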
Can I see the resources that OpenShift itself is using on my own cluster? Yeah, OpenShift itself... I am not sure if this is part of the metrics. We have a metrics project that sends back all the metrics information — memory, CPU, etc. — to Cassandra, and then you have a UI to read it from there. Whether etcd and the kubelet and these things are included in there... might be, I am not sure. I can't see the usage in top or something like that, but can I see it for the application? Yeah, I think it should be possible to send it to the metrics, but I am not sure. The same goes for logs: if you want the logs from the containers, the system logs from the nodes, etc., there is the integration with Fluentd that sends all the information to Elasticsearch, and then you can use something — by default we use Kibana — to go through the logs, so you can analyze them and see what is happening, all in one place. One thing that is really missing: you can run oc logs and point it at a pod, but I need to follow three pods at the same time and check what is happening. You can do that with the Elastic part, with the logs, because what you see here is the basic part of OpenShift; if you put Fluentd there and you use Elasticsearch, you will have a real-time stream of the logs and you will be able to look things up in Elasticsearch. Question: on the main page with the quotas and limits, I don't see how much I could use and how much the system uses. That's a good question... I have no idea; as I said, I am not a sales guy. Leaving the price aside — how could I see how much memory I have and what the limit is in the system? In the system... so again, you can use the metrics to stream it into some user interface. The other thing is — because we had the problem with the cluster before, I was trying to see what was happening with the applications in our cluster... Are you asking from the user point of view? So as the user, if you go to the settings... settings of what? Ah, it just goes to my web console... the OpenShift master... if you hold down Shift you can open it in a new tab... and then you can see the percentage of what you are able to use. Go to the project, and settings... yeah, you have quotas here. We don't have them enabled for our cluster, but if your admin has them enabled, it will say something like 2 gigs of memory, 2 vCPUs, and then you can see how much of that you have allocated to your containers. Does that make sense? Yeah, great, thanks. Yeah, we have it disabled in these sessions; you are able to use as much as you want, because physically we don't actually even know. So if you actually care about what the cluster looks like — not from the user perspective, but from the admin perspective — there is a command that I ran... oh, I lost it... somewhere here... oc describe node. So I am getting all the information about the nodes, using some selector, some label, where region is demo — this is the first line — and then I have a node, and I can see somewhere here how many pods are on that particular node, which pods are there, and there are the limits and the quotas — but I don't have them enabled at all, so I don't see them here. But this is the perspective of the sysadmin user: with a simple command I can see every single node and how many pods are running on that particular node — just a simple way to iterate over that list of information.
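Roughly what he ran as the admin — the label value matches the region label mentioned earlier; on older clients you may have to describe the nodes one by one:

    oc get nodes
    oc describe node -l region=demo    # capacity, conditions, and the pods running on each node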
Yes? Sorry, CI, you mean continuous integration? Jenkins? It's unfortunate that Michele went home, because he was working on that, he's the engineer who actually built it. It's not yet anywhere in the project, but I saw something in Origin, I think. There's a plugin for Jenkins to be able to talk to OpenShift; it's based on the Kubernetes plugin, but how it actually works, I'm not sure. I just know that you are able to use OpenShift to provision the build slaves for your builds, so you don't provision VMs, and then you can collect all this information and you can also build all these crazy workflows, like if something builds and the tests pass, then you do something or deploy something. But how it is implemented, it's not there yet, so I don't know.

For Ansible? We mentioned it before, I think. There is the openshift-ansible project, and that's where all the Ansible scripts for OpenShift live. You choose: just VMs, or if you want to use AWS or Google Compute Engine, you can also provision the VMs on those cloud providers, or you can build your own deployment, where you specify this is the IP address of my nodes, this is the IP address of my master, etcd should be there, etc., and then you run the playbooks and they do everything for you. So GitHub, if you are interested in that, that's the best place.

User management in OpenShift, it's not really easy. What? I don't really know, I've never tried much with the user management. Did you try to provision new users? So, there's an htpasswd file on the master that you can manually update, but we also have support for integrating with a variety of identity management solutions. There's Red Hat IdM, we have Active Directory integration, I think through IdM, and we can also integrate with Keystone from OpenStack, so that gives you a lot of options. With the IdM integration, can you define groups of users, like admins and viewers? I'm not sure how much is encoded in the platform and how much is outsourced to the identity management solution itself, but we do support role-based access control, so you can have a team of people that are on the deploy team, or only the release team can push to production, something like this. If you go to openshift.org and open the documentation, under Installation and Configuration there is Configuring Authentication, with the identity providers and how to work with them. Essentially there are different ways to do it, and they behave differently based on what they do. As well as that, somebody was asking about the metrics: there is Enabling Cluster Metrics, and there is Aggregating Container Logs, that's it, yes, and Routing from Edge Load Balancers, that's for switching to HAProxy, F5, etc. So all the different things that you need are there. Is that the recommendation? Yes, from the administrative perspective you can use Kibana to get to the logs in Elasticsearch; it's not integrated in the UI for the end users.
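As a rough illustration of the "bring your own hosts" path mentioned above, an inventory for the openshift-ansible playbooks looks roughly like this. The hostnames and labels are placeholders and the exact variable names depend on the release you are installing.

```ini
; inventory (sketch) – hosts and labels are placeholders
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com  openshift_node_labels="{'region': 'infra'}"
node1.example.com   openshift_node_labels="{'region': 'demo'}"
node2.example.com   openshift_node_labels="{'region': 'demo'}"
```

Running something like `ansible-playbook -i inventory playbooks/byo/config.yml` from a checkout of openshift-ansible then drives the whole installation.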
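And for the user management question, the simplest option mentioned above (the htpasswd file on the master) is wired up in the master configuration roughly like this. The provider name and file path are examples; the Configuring Authentication documentation lists the other providers (LDAP/Active Directory, Keystone, and so on), which follow the same structure.

```yaml
# /etc/origin/master/master-config.yaml (excerpt, sketch)
oauthConfig:
  identityProviders:
  - name: htpasswd_auth            # arbitrary provider name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/users.htpasswd   # maintained with the htpasswd utility
```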
Is there any limitation on how many pods I can run on one node? Because I saw 40, but I don't know how to increase it. Kubernetes actually had it hard-coded at one point, the limit was 40 per node, it was hard-coded in there. I know I've done 100 per node; it really depends on how much memory you have, and I wouldn't recommend doing more than 100 per node. How would you increase it if it's hard-coded? Oh, I think that was eight months ago; I think we've made some changes and allowed it to be configurable now. There has been some cluster with a thousand or something per node, and they are upstreaming the changes they made back into Kubernetes. From my experience, and from what I've heard from the operations team, Kubernetes can become somewhat unstable at more than 100 per node, so it's probably a limitation in Kubernetes that I'm sure we have people working on. I think the limitation is on the Docker level: you need to pull from Docker, it doesn't push the information, so you have to keep asking the Docker API to get all the information back. If you have too many containers, there is too much information flowing all the time. If Docker pushed the information it would be a bit simpler, but they don't want to, so that's politics. Can you pull only one image at a time? I'm not sure, I don't think so, I don't know what the limitations are on the Docker engine. You need to wait.

Yeah, I know about that. There is a container spec that many people are very excited about, and I have also heard that Rocket will allow better density, and that there are tools available for translating. I think there's something called go-aci that will translate, using a somewhat open spec, from Docker to Rocket. So theoretically we could allow you to ship a Docker image and then have it translated on the fly into Rocket internally, and get better density and perhaps a better runtime engine. I know we're looking into that; I don't know where it is on the roadmap. So there will be a chance to run Rocket in OpenShift? It's theoretically possible. I know we have a lot of R&D on a lot of different topics, and I'm not sure which of those topics actually get merged in; it requires testing and a lot of evaluation. But we have been running with 100 pods per node over here and it was semi-fine: we had one of the nodes failing for some reason, but the other four nodes are just fine. How do you configure that, do you have to rewrite the code in Kubernetes? No, this is configured from the Ansible script. This is all generated from the Ansible script, and it is probably passed in to Kubernetes as a variable where before it was hard-coded somewhere. So I think it's configurable now.

I think we are overflowing into the other session, right? The last one. So now it's the free Q&A session, so these are the three last questions. You showed that the pod sees all the environment variables inside the project. Inside the project, yes, the project is a namespace. How do you get the environment variables from another container, or information about the other container? No, it's just the IP addresses and the port, how to connect there, not the internal information of the container. And what if you want to push some environment variable into the container to start with? Exactly, since these images are meant to be stateless, any time you want to update the configuration inside the container, we have a config change event that fires, which triggers a new deployment, in order to provision the containers with the new environment variables. These environment variables are pushed when the application is deployed, so if you change the environment it will start a new deployment. It doesn't trigger a restart of the existing containers; it deploys new ones.
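Since the per-node pod limit came up a few times: in versions where it is configurable, it is exposed as a kubelet argument in the node configuration rather than being hard-coded. A sketch, with the value itself only an example; raise it only if the node actually has the memory for it.

```yaml
# /etc/origin/node/node-config.yaml (excerpt, sketch)
kubeletArguments:
  max-pods:
  - "100"    # example value; the old hard-coded default was much lower
```

The same setting can typically also be fed in from the openshift-ansible inventory, which matches the "configured from the Ansible script" answer above.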
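Finally, a small sketch of the environment-variable flow described at the end: you change the variable on the deployment config, and the config change trigger rolls out new pods with the new values instead of restarting the existing ones. The deployment config name and variables are placeholders.

```bash
# Set or update environment variables on the deployment config; with a
# ConfigChange trigger in place this rolls out a new deployment rather than
# restarting the running containers in place.
oc set env dc/myapp MONGODB_USER=demo MONGODB_PASSWORD=secret

# Inspect what the containers will receive.
oc set env dc/myapp --list

# If automatic triggers are disabled, start the deployment by hand
# (older clients used 'oc deploy dc/myapp --latest' for the same thing).
oc rollout latest dc/myapp
```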