All right, all right, all right. If you happen to be passing by the Red Hat booth, please welcome and join me. I'm going to do an OpenShift presentation. OpenShift is a container platform; I hope this will not bore you or take more than 25 minutes. Thank you very much. All right, very good. I'll be using two different OpenShift versions. One is the upstream version: you can just go to openshift.org, there's a VM, download that virtual machine and play in your own environment. No need to buy anything, just have fun. The other version I'll be using is called OpenShift Online, the Developer Preview version of OpenShift Online. It's the exact same version of OpenShift that you would install in your own environment, and we have it installed on Amazon, so that shows how much we trust the platform. We've been doing this model for a while: we run an online version of the product, and that's the same version our customers install in their own environments. OpenShift Online has had, I think, almost three million apps deployed, and today it gets around four billion requests a day. That shows just how much power the platform has, and again, the same thing that runs on AWS can run on any cloud provider and can run in your environment. So I'm going to do a quick demonstration of OpenShift here, no slides at all, because we're all tired of seeing slides. Everything in OpenShift starts with a project, and you should see a project as a kind of sandbox, a resource-constrained area for you to deploy things.
And when I say things: you may be deploying a database, you may be deploying a set of scripts, or you may be deploying an application platform like JBoss or WildFly; we even have people who deploy WebSphere and WebLogic on OpenShift. You can deploy all sorts of things. Essentially anything that runs on a Linux machine can run on OpenShift, because everything that runs on OpenShift is Linux containers wrapped in a Docker image. So we can run any Docker image; bring your own. If you have a Docker image already on Docker Hub that I could try, I'd be very happy to run your image on this environment. Very good. So I'm going to create a project. My project is going to be named Thomas. This is the Thomas project, and this environment runs on OpenShift Online, the Developer Preview running on AWS. A good thing about this project is that it is already resource constrained, so I can deploy only a certain number of applications, or use only, I think, two cores and two gigs of RAM. So imagine yourselves: you give a project to your developers and you say, you are free to do anything within this amount of resources: break things, play, delete, try again. It gives developers freedom. They don't need to come to you saying, "I'm going to need a VM," and then you say "open a ticket," and then two months pass, right? That's what happens. Some people would actually be happy to get it in just two weeks. So, as you guessed, I'm going to create a WildFly Java application.
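Before moving on: the resource-constrained project described above is implemented with quota objects on the project. A minimal sketch of what a quota like the one in the talk (two cores, two gigs) might look like; the names and the pod cap are illustrative, not the demo environment's actual settings:

```yaml
# Illustrative ResourceQuota: caps the whole "thomas" project at roughly
# the limits mentioned in the talk (2 cores, 2 GiB of RAM).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: thomas-quota
  namespace: thomas
spec:
  hard:
    cpu: "2"        # total CPU across all pods in the project
    memory: 2Gi     # total memory across all pods
    pods: "10"      # example cap on the number of running pods
```

Inside those limits, developers can create and delete whatever they want without opening a ticket.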
WildFly is the upstream version, the community version of our JBoss Enterprise Application Platform. We actually have some of the WildFly engineers around our booth, so if you have deeper questions about WildFly, we'll be happy to answer them. And to show that I'm not afraid of doing real demos, everything here is going to be real. I'm going to deploy a very simple Java application, just to show the platform. My Java application is going to come from this Git repo here; I hope you can see it. In fact, I'm going to clone this repo and deploy it from my own copy. So I'm going to go to GitHub right now, let's hope the conference Internet allows us, and fork this thing. Awesome. I have already forked this repo, so let me go here; this is my fork. Instead of using the original, I'm going to use my own. Okay, let's come back to OpenShift. So this is my repo, and the name of my application is going to be simple-app. The objective of OpenShift is to automate every single possible step between code and production. I know that some companies are regulated, so they have compliance constraints and cannot take code to production automatically; someone has to sign off, someone has to bless the image. But ideally, with OpenShift, you could take code to production with no manual interaction. And there are multiple ways we do this. As you can see here, I'm referencing a source code repo, which means I'm going to get the source code from that repo and compile it.
It's Java source code, so OpenShift will compile it and generate a WAR file. As I said in the beginning, everything on OpenShift runs in Docker, so it will create a Docker image with that compiled code and push that Docker image to a registry. And this is really good, because the registry is where you enforce your compliance checks: who can push images to the registry, whether you have a development registry and a production registry, who can access the production one. This is where we normally see people adding the compliance aspects, on the registry itself. And then we want to automate, so one thing I'm going to do is actually uncheck this box here and talk about it later. We're going to build the image, and, very important, I want OpenShift to configure a build webhook trigger. That means that whenever there's a source code change, it's going to build everything again, and remember, not only my WAR file: it's going to rebuild the actual Docker image that runs my application. And then I have another configuration that rebuilds the image whenever the builder image changes. This is very important, because once you start developing applications using the Docker model, your application artifact contains dependencies both from the application itself and from the operating system. You need to be very aware that your application now contains every single dependency. So if there is a vulnerability in glibc, like the one we had in March that affected something like a third of every website, or in OpenJDK, like last month, you have to update the whole image.
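The two triggers just described correspond to entries on a BuildConfig object behind the console. A hedged sketch of the kind of object that gets created; the repository URL, secret, and image names are placeholders, not the demo's actual values:

```yaml
# Illustrative BuildConfig: a source-to-image build with a GitHub webhook
# trigger (rebuild on code change) and an ImageChange trigger (rebuild
# when the WildFly builder image is updated).
apiVersion: v1
kind: BuildConfig
metadata:
  name: simple-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/simple-app.git  # your fork
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly:latest       # Red Hat-maintained builder image
  output:
    to:
      kind: ImageStreamTag
      name: simple-app:latest      # pushed to the internal registry
  triggers:
  - type: GitHub
    github:
      secret: "<webhook-secret>"   # placeholder
  - type: ImageChange              # rebuild when the builder image changes
    imageChange: {}
```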
That's why we have this build trigger for when the builder image changes. The builder is the image we use to layer your application, the WAR file, on top, and we at Red Hat maintain a set of builder images whose life cycle we manage. Red Hat has always been about life cycle management of open source software: making sure we deliver source code and binaries that have passed our tests. So you will be building your application on a base image that we keep updated with the latest security fixes, the same way we've been doing with Linux forever. Awesome. Then, after the image has been built, the next question is: should I deploy this image? Because building the image is one thing; you may want to send the image to another system to test it or to check it first. But here I'm just going to say: as soon as the image is ready, go ahead and deploy it. Okay, awesome. Another thing: most of the applications we see being developed today are essentially smaller applications. And if you think about a VM, what's the smallest amount of CPU you can assign to a VM? It's one core. You can overcommit cores, but you cannot tell a VM to use half a core. But since we are doing isolation and resource limiting at the process level, we can use millicores, and a millicore is one core divided by a thousand. This Java application is going to do some calculation and use some memory, but in the other environment I'm going to show, for the Node.js applications that I play with, I use one tenth of a core, 10% of a core, per application. That means I can get much more density, and density equals cost savings. Let's keep that in mind.
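The millicore idea maps directly onto a container's resource requests. A small sketch of how a tenth of a core is expressed, mirroring the Node.js example just mentioned; the pod name, image, and memory values are made up:

```yaml
# Illustrative pod fragment: 100m = 100 millicores = one tenth of a core,
# a granularity a VM-based model cannot express.
apiVersion: v1
kind: Pod
metadata:
  name: node-app            # hypothetical name
spec:
  containers:
  - name: node-app
    image: example/node-app:latest
    resources:
      requests:
        cpu: 100m           # one tenth of a core
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
```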
So I'm going to leave this as it is and click Create. Lots of things are happening, great things though. As I said, we go to the Thomas project, and OpenShift is now going to clone that repo, as you can see here in the logs. There it is: it cloned the repo right here, cloning source. And it's going to build that source. Because it's a Java application, OpenShift can actually guess how: we have a command called `oc new-app` that you can point at pretty much anything, and it figures out what it is. If it's PHP, we figure out how to build PHP; if it's Node, we figure out how to package it and run npm install; if it's Java, we figure out whether it's Maven or Gradle and act accordingly. So the platform is pretty smart about guessing. Of course, if your source code is very different, you may want to teach the platform how to build it. So it built my source code here, and this build was actually quick; we didn't have to download many Maven dependencies. In any enterprise environment you would already have a Nexus or a JFrog Artifactory, an artifact repository, to pull from. In this case we go out to the internet, because this is a cloud version, but in your own environment you would go to a blessed set of repositories, and it's very easy to set up: there's an environment variable for the Maven mirror URL, and the build will use that as a proxy. So right now it's pushing the image. While it pushes, I'm going to switch to my other environment, the one I mentioned in the beginning that runs on a virtual machine. Very important: this environment I'll be showing is running on this machine right here.
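The "blessed repository" setup mentioned above is just an environment variable on the build strategy. A sketch, assuming the Java S2I builder's Maven mirror variable; check your builder image's documentation for the exact variable name, and the Nexus URL here is a placeholder:

```yaml
# Illustrative BuildConfig fragment: point the S2I Maven build at an
# internal Nexus/Artifactory mirror instead of the public internet.
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: wildfly:latest
    env:
    - name: MAVEN_MIRROR_URL   # builder-specific variable (assumption)
      value: http://nexus.example.com/repository/maven-public/
```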
It's a full, complete OpenShift environment, the same environment that you'd be running in production. Now I ask you: how many of you have, on your developer machine, an environment as close as possible to production? You don't, because it's hard to replicate. OpenShift was rewritten in Go, a very modern language, and the OpenShift binary is small; everything in OpenShift is written in Go. We decided to rewrite it a little over two years ago, so it's very modern technology using a modern language, and that's why I can run everything on this small box here. To give you an example, I have here my Cockpit project. Cockpit is just an application that shows everything that's running on the platform, so it gives us an idea of how many things we have running. It shows that this single machine, my OpenShift installation, is my node; it's just one compute node. In any real environment you would have more compute nodes to run the applications. These are all my deployment pods, and a pod is another way to represent a Docker container, because sometimes you want more than one container sharing a life cycle. For example, you have an application server and a log scraping tool: they should always run together. It doesn't make sense to have the log scraping tool without the application server, and you don't want the application server without the log scraping tool. So a pod is a way to encapsulate multiple containers in one single atomic entity: if the pod goes down, everything goes down; when the pod comes up, everything comes up. So let me switch here while it pushes my image. Awesome. So this is just, and again, this is just an application,
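The application-server-plus-log-scraper example can be sketched as a two-container pod. Both containers share the pod's life cycle and network, and here a shared volume for the log files; all names and images below are hypothetical:

```yaml
# Illustrative pod: an app server and a log-scraping sidecar that live
# and die together, sharing the log directory via an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger          # hypothetical
spec:
  containers:
  - name: app-server
    image: example/wildfly-app:latest
    volumeMounts:
    - name: logs
      mountPath: /opt/app/logs   # the server writes its logs here
  - name: log-scraper
    image: example/log-scraper:latest
    volumeMounts:
    - name: logs
      mountPath: /logs
      readOnly: true             # the sidecar only reads the logs
  volumes:
  - name: logs
    emptyDir: {}                 # shared, lives as long as the pod does
```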
that comes with OpenShift and shows how things are running. Now let me switch to another application, one that my friend built. This is a microservices application written by my colleague Rafael; Rafael is around here somewhere. And it shows that even though OpenShift is Docker, and people say Docker is essentially for ephemeral things, this application has a MySQL database, and this MySQL database has what we call a persistent volume claim. People say, "I don't like containers, because if the container dies I lose my state." No, that is wrong. What we have for this MySQL database is a persistent volume attached to the container. So if the container goes down for any reason, when it comes back up it will have the exact same persistent volume attached, and the way it gets mapped is simply as a folder mounted into the container. So let's say I bring this MySQL down. Let me do that, and let me open the front end for us. Okay, I have two entries there, as you can see, the entries Rafael and Benevides. And what I'm going to do, because I'm brave enough to do this live, is bring MySQL down. Oh, and you saw how hard it is to bring things down, right? And then I'm going to bring it up. Let me see if anything comes up here. Okay, it's saying something went wrong; of course, I just brought down the database. I hope it comes back up; let me see how smart this is. There you go! So I brought the container down and brought it up again, and when it came up it read from my persistent volume claim, and my data is still there.
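The persistent-volume-claim setup behind that MySQL container looks roughly like this: a claim for storage, plus a mount of that claim at MySQL's data directory. Sizes and names here are illustrative sketches, not the demo's actual objects:

```yaml
# Illustrative PVC, plus (commented) the pod fragment that mounts it.
# When the container restarts, the same volume is reattached, which is
# why the data survived the restart in the demo.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# In the MySQL pod template (sketch):
# containers:
# - name: mysql
#   image: openshift/mysql-55-centos7     # example image
#   volumeMounts:
#   - name: data
#     mountPath: /var/lib/mysql/data      # mapped as a plain folder
# volumes:
# - name: data
#   persistentVolumeClaim:
#     claimName: mysql-data
```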
Awesome. Okay, so going back to the other environment: the image was successfully pushed, and here you can see that I do not have a route for my application yet. Now, how many of you wait a long time to get DNS routes created for applications, talking to the network or infrastructure guys? It's always a pain, because, speaking as a developer, they are not as responsive as developers would like them to be. I'm not saying they are bad people; they have lots of things to care about. But with OpenShift, you delegate a set of DNS addresses to OpenShift itself. For this cluster here, which is very small, there's a DNS entry delegated to the cluster, and if I want to create a route for my application I can just come here and create one. I can make it a secure route or an unsecured route; it's up to me. And then I create the route, and that's it; I already have a route for my application. That's how complicated it was to create a route. So if I access the application here, there you go. As you can see, I was doing this demo for another event, which I would rather not talk about. And there you go. So what I'm going to do now is simulate a code change, because I had configured the build trigger. Let me figure out how to do that. I go to my builds, I see that I have the simple-app build configuration, and I have the GitHub webhook URL. So I'm going to copy it, come to the GitHub repo settings, webhooks, add it with a one-two-three one-two-three secret, and update the webhook, and I should be all right. So my webhook should be fine. That means that if I make a code change, it's going to trigger a webhook on OpenShift, OpenShift is going to build the app and do the whole thing. Let's test it.
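As an aside, the route created through the console corresponds to an object like this; the hostname is generated under the DNS zone delegated to the cluster, and the one below is made up:

```yaml
# Illustrative Route: exposes the simple-app service on a hostname under
# the wildcard DNS entry delegated to the cluster. Adding a `tls` section
# would make it a secure route instead of an unsecured one.
apiVersion: v1
kind: Route
metadata:
  name: simple-app
spec:
  host: simple-app-thomas.apps.example.com  # generated from the delegated zone
  to:
    kind: Service
    name: simple-app
```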
Oops. All right, I think it's here. It should be here. So I'm going to edit this file right here, directly on GitHub, just to make it easier for us. Where is it? All right, so I'm going to change the title here. And as I'm a very good developer, I'm of course going to comment my code. But as I am also a very bad developer, I'm going to commit on the master branch. Don't do that. All right, so I have committed the code, and if I come back to OpenShift, back to my overview page, I see that there is already a build happening. That means the webhook was triggered, and again, this is cloud, but it would be the exact same behavior in your own environment. And it doesn't work only with GitHub: we also have generic webhooks, which is a URL that your build system, Jenkins, Travis CI, any build system, could trigger on OpenShift to say, "I changed the source code, go build that thing." So right now it's going to do the exact same thing and build my application. And while it does the build, I'm going to switch back to this environment here, which is very interesting: it's running a new version, OpenShift 3.3, which we will be launching in seven days, six in fact. And it has some very nice monitoring features as well. As you can see on this UI, I can already see how much memory and how much CPU my application is utilizing. And now I'm going to do something that normally takes time in your environments: I'm going to scale this. So, 12.
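A generic webhook is just another trigger entry on the BuildConfig; any CI system can then start a build with a plain HTTP POST. A sketch, with the secret as a placeholder and the URL shape given only roughly:

```yaml
# Illustrative generic webhook trigger. Jenkins, Travis CI, or any other
# system can start a build with a POST to the trigger URL, which in
# OpenShift 3 looks roughly like:
#   <master>/oapi/v1/namespaces/<project>/buildconfigs/simple-app/webhooks/<secret>/generic
triggers:
- type: Generic
  generic:
    secret: "<generic-secret>"   # placeholder
```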
What's happening here is that I'm saying: you know that container that has my application? I want 12 copies of it. So it's spinning up 12 copies of that container, and this is only one environment. But you can also have scheduling policies that say: this application here, which has the label "PCI compliant," always has to land on nodes that are PCI compliant; or this application here, which needs to access data on SSD storage, always lands on nodes that have SSDs. So here I have my application, and, very important, we have a built-in load balancer. That means that when I added those 12 containers, those 12 pods, their IPs were already added to the load balancer. I didn't need to do anything; I just added more pods and boom, load balancing right there. Now I'm going to do another very interesting test for us; I hope it works. I'm going to bring this guy down, the database. No, actually I'm not going to bring the database down. As you can see, we have the database here, so let me open a new window; I hope I can log in. Right, so I have two windows with the exact same content. And now I'm going to test another very cool thing that OpenShift does, which is auto-healing. Auto-healing is the capability to verify the state of things and to maintain that state. OpenShift is declarative: you don't tell OpenShift "go do this," you tell it "this is the state I want things to be in." So for this MySQL application there is a JSON file that says my application should have one copy, and it should be running. If anything breaks that state for some reason, OpenShift will bring it back. All right, so I'm actually going to simulate a failure in my MySQL database.
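Scaling to 12 copies and the label-based scheduling policies are both just declared state on the deployment configuration. A hedged sketch; the labels, name, and image are examples:

```yaml
# Illustrative DeploymentConfig fragment: 12 replicas of the pod, and a
# nodeSelector so the pods only land on nodes labeled as PCI compliant.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: simple-app
spec:
  replicas: 12               # "I want 12 copies of that container"
  template:
    spec:
      nodeSelector:
        compliance: pci      # example label; SSD could be storage: ssd
      containers:
      - name: simple-app
        image: simple-app:latest
```

Because the built-in router load-balances across all pods matching the service, the 12 new pod IPs join the rotation with no extra configuration.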
Right here on the left is a terminal that is running inside that container. I can only run a terminal inside because, one, the platform administrator allows me to, and two, there is a bash inside this container. So I run a `ps`, and as you can see there is a process, I think it's this one here, maybe PID 1, that is keeping the service running. So I'm going to kill the process. PID 1? I think it's another one; let me see, maybe it's the other one, 63. Yes! As you can see, after I killed that process my terminal window was killed too; everything that was keeping that terminal running died. Hmm, see here, for some reason I'm not killing the right process this time. It should be this one. Am I on the right one? Let me try with another container. It's not killing, I know, but let me see; maybe it's running in privileged mode, I don't know. It should be here. There are two more; it should be this one. Oops. Okay, now I finally did it: as you can see, it killed the process and created another one. Let's do it again. What did I do before? For some reason what I did before worked. But as you can see, the health check works. So I'm going to do the same with another application here, just because it's likely to work better: the guestbook service. Okay, there's a Java app running here, let me try this one. I'm not lucky enough today; it's not killing. Weird. Okay, I promise it kills; it did once. Yeah, I am running out of time. Okay, hmm. So while this one refuses to die, I want to show the trigger from before. This is the app that was built, very cool: as you can see, I even have the comment that I applied to my source code, and this is the change that I made. For some reason that environment was not as well behaved as this one. There you go. Questions? It should have killed... see? Yeah, that one is killing; you see, this one got killed.
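The auto-healing behavior in that demo comes from the declared replica count plus health checks on the container; when a check fails, the container is restarted to restore the declared state. A sketch of such a check, with the image, port, and command as examples:

```yaml
# Illustrative probes on a MySQL container: if the liveness check fails,
# OpenShift kills the container, and the declared state ("1 copy
# running") brings it back automatically.
containers:
- name: mysql
  image: openshift/mysql-55-centos7       # example image
  livenessProbe:
    tcpSocket:
      port: 3306                          # is MySQL still accepting connections?
    initialDelaySeconds: 30
    timeoutSeconds: 1
  readinessProbe:
    exec:
      command: ["mysqladmin", "ping"]     # example readiness check
    initialDelaySeconds: 5
```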
Yeah, sure. I think it will work with Oracle when Oracle says that Oracle is supported on Docker containers. The fact that you're running in a container does not change the support policy, because it's still the same operating system. Remember, from the operating system's perspective, a container does not exist. A container is a set of techniques that you apply to protect a process. You cannot say "new container" on a Linux machine; you can only apply techniques that limit which files, network resources, and system calls a specific process has access to. So it's a set of techniques, a set of protections, that you wrap around a process. From the operating system's perspective there is no such thing as a container. That's why I say Docker did a great job wrapping up all these techniques that protect an operating system process. Thank you. Thank you.