[inaudible audio while the room gets set up; the speaker and attendees troubleshoot the Wi-Fi and the projector] And if you try, I think that's enough. I will show you some cool stuff, I guess. OK, I have some slides for today. I'm trying to get them to display correctly, but this screen resolution is quite small. Let's see if I can fix this... Not far to go. This is what happens when you have a high-DPI display and you plug into a low-resolution monitor. [inaudible] While we wait, we're passing around USB sticks with copies of the client tools, since the Wi-Fi here is slow. [inaudible] If you go to the OpenShift Origin repository on GitHub, you'll see all these branches, and what's on everyone's screen is our Releases tab. We have just rolled out the 1.1.2 release for our upstream code.
This also has a lot of information about API changes and new features that have been introduced in this latest release. There are also some screenshots. These are Jacob's new features that are coming through. Good to see he's available. [inaudible] If you already have the CLI tools, you're in good shape; otherwise grab them from the USB stick or the download page. [inaudible exchange while the tools are handed around] OK, before we go further: I'm Ryan Jarvinen, and I'm with the OpenShift team at Red Hat. [inaudible] Today we'll be talking about containerization, about building container images, and about replication, mainly with Kubernetes. Who here has heard of Kubernetes?
Almost everyone, good. And do you know where Kubernetes came from, what it's based on? Anyone? Borg. Right, it's based on what Google runs internally. Everything they do, except maybe some Android. Everything from Google Search, Gmail, Google Apps, Docs. Everything at Google runs inside containers. It's not Docker containers internally. Docker containers are a little fatter than what's available inside Google; they need very high density per machine, so they are not currently using Docker. Borg is the internal project they've run for years, and Kubernetes is an open-source system built on the lessons learned from Borg. [inaudible] One thing this gives me is a clean development environment: I can run a development database for staging and testing without worrying that I'm touching whatever is running in production. [inaudible] To get started, everything here goes through the oc command-line tool. Who still needs the oc command-line tool? It's on github.com, under OpenShift Origin, in the Releases section. [inaudible exchange about downloads and the USB sticks going around the room]
I know this is an open-source community. You can solve this problem, I am confident. So let me show you how you can do it. Go to OpenShift Origin, look for the Releases tab. Yeah, the download is very slow for you; it may be faster to copy the tools from one of the USB sticks than to fight the Wi-Fi. There are client tools for 32-bit Linux, 64-bit Linux, Mac, and Windows. [inaudible exchange while the USB sticks circulate] OK, let's go try to log in. We have a different server name for today. I believe, let's see if this one works. I think we do. OK, so the server name for today is for DevConf.cz. You need HTTPS: openshift-master, devconfcz, openshift3roadshow.com is where we'll be working. Everyone will also need a user number, so let's count off around the room. We'll start from 0, right? So user 0, user 1, 2, 3, 4... [the room counts off user numbers] Oh, no laptop. OK.
So pick a user number. I'll be your user-0. OK. Is everyone able to reach this webpage here? Is this readable? Not quite. OK, let's zoom in up here: openshift-master. I could update my slides. Yeah, yeah. [inaudible exchange while attendees log in with their user numbers] Once you're logged in, you'll see projects. A project might hold a particular application and its related services, or a particular environment, development or QA, for example. Projects are also the unit for quotas and permissions: what you're allowed to deploy, and how much of the cluster you're allowed to use. [inaudible] Any questions so far about logging in or about the console? Okay, alright, I'll keep it moving.
Let me know if I'm going too fast. But I don't want to delay too long. I will get to this content. So I'll open up, here's a basic application that should be pre-deployed for each of you. How many people have heard of a smoke test? Smoke test? You familiar with this term? When I was traveling, I've done this, this is a term that's maybe more popular in the United States. The idea is that you plug the device in, and if smoke doesn't come out, it passed the test, right? It's a very minimal test. We've pre-deployed a very minimal application, one that just serves a page, so each of you can verify that your project is working. [inaudible] In Kubernetes, containers run in groups called pods. A pod is one or more containers that get scheduled together and share networking; if one of the containers in the pod dies, the whole pod gets replaced. How does that replacement happen? There's a piece called the replication controller that watches the group of pods and keeps the requested number of replicas running. If you look at its configuration, you'll see that it relies on a label selector; that's how it finds the pods it's responsible for. So this smoke-test application gives us something small to experiment with: we can kill a pod and watch how quickly it comes back. [inaudible]
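The replication controller and label selector just described can be sketched as a minimal Kubernetes object. This is an illustrative fragment, not taken from the workshop environment; all names, labels, and the image are hypothetical.

```yaml
# Sketch of a ReplicationController (Kubernetes v1 API); names are examples.
apiVersion: v1
kind: ReplicationController
metadata:
  name: smoke-test
spec:
  replicas: 1              # desired pod count; scaling changes this number
  selector:
    app: smoke-test        # label selector: how the RC finds its pods
  template:                # pod template used to create replacements
    metadata:
      labels:
        app: smoke-test    # must match the selector above
    spec:
      containers:
      - name: web
        image: example/smoke-test:latest
        ports:
        - containerPort: 8080
```

If a pod carrying the `app: smoke-test` label dies, the controller notices the replica count dropped below `replicas` and creates a new pod from the template.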
...we can get a quick look at... Here's a topology view here. ...so there's a couple of pieces that we have, one of these pieces here, this is called a route. ...and if you click on the route, you should see more information in the right-hand column here. A route is how we publish a service to the outside world: it maps a public hostname onto your service. That, I think, is mostly self-explanatory, but the route is the piece that owns the hostname. You can see which images the container is based on. You can see if configuration has been injected into these container environments. You can also, if you want a persistent disk, attach volumes. Because these container environments are meant to be stateless, easily destroyed, easily scaled, easily recreated, if you want persistent disk, you need to add it. OK, but that's really nice because it's all Docker concepts, right? All pure Docker concepts, Kubernetes concepts. And the extended types that we have, like the deployment config, we're actually adding as a new object type in Kubernetes. It's an experimental object type. After the Google team took a look at our deployment config, they said, wow, this is actually really great stuff. We'd like to merge this upstream. Red Hat is very, very actively contributing on Kubernetes; I think we're the second largest contributor as far as companies go. So very active on the upstream of Kubernetes, very active on the upstream of Docker. It's not just our open source, it's the community's open source, right? So the deployment config is becoming an upstream feature. I'm not sure if it will be called deployment config after it's merged into Google's code. It may get a name change, I'm not sure. Depends on, yes? Just deployment. Ah, OK, yeah. That's the upstream name, just deployment. Just deployment. So we may need to do a small rebase around that. Essentially it's the same. Yeah, essentially the same feature.
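The route object described above can be sketched like this. The hostname and names are made-up examples, not the workshop's configuration.

```yaml
# Sketch of an OpenShift route (v1 API); hostname and names are hypothetical.
apiVersion: v1
kind: Route
metadata:
  name: www
spec:
  host: www.apps.example.com   # the Host header the routing layer matches on
  to:
    kind: Service
    name: www                  # the service that receives matched traffic
```

Traffic arriving at the router with that Host header gets handed to the `www` service, which in turn balances it across the pods.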
And who knows this feature best? The Red Hat team, right? OK. So I'm going to go back to the overview and we'll scale up this environment. Please don't scale it up to 5,000; go ahead and try two or three. And you should be able to see our requested allocation, and then it should be fulfilled, hopefully. We've had a couple stability issues with this environment, but aha, OK, we're up to our three containers. Hopefully you all have as much success with this demo. Let me know if you have problems. We're eager to collect your feedback, and if you would like to pay us, we're eager to collect your money as well, with OpenShift Enterprise. We, of course, have the upstream code available, but you can get support from Red Hat, from the team who's working on the core concepts. So now, it's hard to see on this screen. Here, we'll see if I can change the zoom. Now you can see we have three pods managed by the replication controller. If I click on the replication controller, you'll now see the requested replica count is three. We have three available. And this is called a service; that's the Kubernetes term. This is basically a software load balancer that will distribute the load across these pods. Within OpenShift, we provide a flat network map across all of these pods. If I click on one of these, actually, I'm going to go to a different view here. This is not quite what I wanted. If I click on one of these pods, I should be able to see here that there's a particular IP address on the node, an internal IP address. This is a 10.4 here for this particular pod. And if I go check the pod next to it, this one is on a 10.9. This probably means it's on a separate node. It got placed randomly across our cluster of 10 machines. And you can kind of tell between these that here's a 10.5. So they each get internal IP addresses.
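The service, the software load balancer just mentioned, can be sketched as a minimal Kubernetes object. Again an illustrative fragment; names and ports are assumptions.

```yaml
# Sketch of a Kubernetes service (v1 API); names/ports are examples.
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  selector:
    app: www          # traffic is balanced across pods carrying this label
  ports:
  - port: 80          # stable port the service exposes inside the cluster
    targetPort: 8080  # port the pods actually listen on
```

Because the selector is label-based, pods can come and go, and change IPs, without anyone updating the service.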
The pods can communicate directly IP to IP, but since you don't know whether those IPs will stay the same when a pod is removed or rescheduled somewhere else, the preferable way to communicate with this group of pods is via the service, the load balancer. Let's see where I'm at in my slides here. Now that we've seen a little bit about the OpenShift web console, anyone still working on logging in or getting the command line tools? You need help? OK. Well, at least you know where to go and which user number. You're 32? 22? Sure, 22, all right. OK. You can use that online too. I'm using it, and it works. OK, good, good. So, good to know you're all ready to continue. We'll talk really quickly about how to dockerize an existing application. If you have a source repository today, you could add a Dockerfile. How many people have never used a Dockerfile? Only a few people. OK, good. Dockerfile, we'll take a look at an example really quickly. So, this... I wonder if I have a... This might be easier to read, slightly. So here the Dockerfile starts with a line that says FROM fedora:21. That's the base operating system. I have a MAINTAINER line, and you can see in here I'm doing a yum install, right? Is there anything... Can anyone point out... Are there any potential issues with... I don't know how to ask this correctly. I'll just tell you that one of the issues with doing a docker build is that there is either a yum install or an apt-get install as part of the build process. In order to carry out this command, what do you need? You need root, right? That means if you are running builds for other people, you're handing out root permission during the lifecycle of this build. And that may be a risky move. For OpenShift, we have a hosted version of our service called OpenShift Online where we say, hey internet, anyone with an email address, come at me. Bring your worst, right?
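A hedged reconstruction of the kind of Dockerfile the talk describes: FROM Fedora 21, a MAINTAINER line, and a `yum install` step. The package list, paths, and email are illustrative, not the speaker's actual file.

```dockerfile
# Reconstructed sketch of the Dockerfile discussed in the talk.
# Packages, paths, and the maintainer address are hypothetical.
FROM fedora:21
MAINTAINER Ryan Jarvinen <example@example.com>

# This is the step discussed above: yum needs root during the build,
# which is why hosted platforms may disable the docker build strategy.
RUN yum install -y nodejs npm && yum clean all

COPY . /opt/app
WORKDIR /opt/app
RUN npm install --production

EXPOSE 8080
CMD ["node", "server.js"]
```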
And so we have trolls, we have hackers, we have fraud, we have all kinds of people coming in and trying to take over our machines. And for that reason, when we run image builds on OpenShift Online, we probably will not have the docker build strategy enabled on our system. You can build straight from Dockerfiles on your own, outside of the system, and then push your resulting image into OpenShift. That will work. You could also use OpenShift to build using Dockerfiles if it's a deployment where you are managing OpenShift yourself. But when Red Hat manages a public OpenShift environment, this might be a build strategy that we disable for security reasons, right? And if you're working in a high security environment, it depends on how much you trust your developers. Do you trust every developer with root? Maybe, maybe not, right? Maybe it's just the operators who do this type of work. So along those lines, we actually do builds of the base images by the operations team. And since Docker uses a layered file system, you can commit new layers and then do a diff to see what's changed. We can add the application code on top of a known good base, and that's how we do something we call source-to-image. So we'll show a source-to-image build next. How many people use JavaScript? A couple people, all right. I'm wearing my Node Interactive shirt here. I do a lot of JavaScript programming. This is a tool that I wrote for helping auto-configure your JavaScript application to work in OpenShift v2, OpenShift v3, Heroku, and Modulus, which is a Node.js host. Basically, this gives you vendor-neutral configuration strings to help configure your application. If we get a chance to look at some of the source code later, I'll show you how I use this. But inside the container, this is one of the tools I use for auto-configuring my application. We talked about adding a Dockerfile to your existing repo. That's one way to get started. Also, yes?
I would love to hear. There might be, I'm not sure. The main thing that you need to know for v3 and for Docker, if you're really just targeting Docker, is that you want to expose your web service on port 8080. And also, when you bind to an IP address, you want to bind to 0.0.0.0. Then, regardless of which IP we assign, it'll work. So really, it's 8080. This is the result of any time I do that auto-configuration in a Docker-enabled environment. For Python, you could shorten it down to just this if you wanted to; this might work on Heroku as well, or other platforms. But if an auto-configuration library doesn't exist for Python already, maybe someone should make one. Another way to automate your builds, and to outsource that risk of handing out root permission, is to automate your builds on Docker Hub. If you have a Dockerfile in your repo on GitHub, you can set up an integration between GitHub and Docker Hub. Then any time you push to GitHub, they'll notify Docker Hub; Docker Hub will run your build and host a copy of the resulting container image. This is pretty good for a free solution, but my builds take about half an hour there. It's not quite fast enough to rapidly iterate on code. In OpenShift, my builds usually take a minute or less. So let's look at building and shipping on OpenShift. Here's one repository that you can, if you're going to follow along and do a full build and deploy, this is a repository that you can fork on GitHub. You can also just deploy this directly. It's not even a good repository; if you want a good repository, I've got better ones. But this is just a very basic repo that I forked from one of my co-workers. Slow network here. Some of our UI really relies on WebSockets in order to update the web console. So when the network is slow, it may seem not quite as responsive as what you would see on a better network. So here's this smoke test application.
This should be what you have deployed already. It's really just an index.php file with, I think, some text inside of it. It doesn't really execute any code or interpret any code. It's very simple. So not really worth writing down, but yeah, feel free to try it out. The slides will be available long term on a website, which I'm hosting in a Docker container. So you can fork this project as one way to get started. If you want to do Node.js, I would suggest ryanj slash http-base. This is the one that I will be using: github.com, ryanj, http-base. So if you hit fork on this project, you'll be able to configure an automatic deploy any time you push to this repo. So let's take a look at source-to-image. I'm going to use the Add to Project button. That should be in the top right-hand corner. And just in case, well, let's see about this network. Aha, okay, finally loaded. This looks kind of crummy because I have such a high DPI monitor, and as you can see from the browser tabs up top, usually these icons are a little bigger. This looks a little nicer. So hopefully, if you have your laptop open, you can confirm this is actually a very nice UI. So I'm going to find my Node.js base image. This has already had the yum install done. All the risk of handing out root permission has been handed to my operations team, who have already built a standard base. This allows them, throughout the entire workshop, to make sure everyone is using the right OS and the correct dependencies, and that everyone is patched up. We don't have a Shellshock or Heartbleed or any of these major low-level exploits in the wild within our network. Did you have a question? No, okay. So I'll use Node.js. I'm going to copy this http-base repository. I'm going to name the service dub-dub-dub (www), since it's a web service. I could just hit create here. It's really that simple. You tell us what source code you want.
We build an image and hand you back a host URL. Really simple, right? You can't really get much more simple than that. But let's look at the advanced options. If I wanted to start users on a particular branch, let's say I was developing a new feature, I could start them on a particular feature branch or tag. Or if I wanted to deploy master from last week, I could look up the commit and put the commit hash in here, right? Then I could build that particular commit and deploy that. Here is routing: do I want to expose a route? These routes work kind of like Apache virtual hosts. We look at the Host header of the incoming traffic, and based on that incoming Host header, we'll assign the traffic to a particular service, and the service will pass it on to the pods, right? If I had a specific hostname that I knew, where I've already pointed the IP address or the DNS at my cluster, I can sniff for that hostname and make sure it gets passed on to my service and then to my pods. If this was a back-end service, a database, I would make sure that this box is not selected. We only want to expose our front-end web services; we'll leave our protected services without a route, so they can only be contacted internally. And if you deploy that database, you would attach persistent storage to it? Exactly, yeah. Would you do that here? No, no, this is for builds, and we wouldn't be running a build for a database. We'd be deploying a pre-existing image. This service? Yes, yes, it's built into Kubernetes. In Kubernetes they called this load balancer a service. I don't like the term, because that's also what I call my web services, so I'm not too happy about the terminology. But the Red Hat team took a look at the service code in Kubernetes, figured there were some improvements we could make, and replaced it with an HAProxy-based load balancer.
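Choosing a branch, tag, or commit for the build, as described above, lands in the source section of a build config. A hedged sketch; the repository URL, ref, and base image name are hypothetical.

```yaml
# Sketch of an OpenShift BuildConfig source section (v1 API).
# URL, ref, and base image name are illustrative examples.
apiVersion: v1
kind: BuildConfig
metadata:
  name: www
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/http-base.git
      ref: my-feature-branch   # a branch, a tag, or a specific commit hash
  strategy:
    type: Source               # source-to-image, as described above
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest    # the operations-maintained base image
```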
So the service is an HAProxy instance that's running within Kubernetes. It is included by default in Kubernetes. I think these routes are... we'll see whether this was a Red Hat contribution. I have a list of which ones were Red Hat and which ones were Google; we'll see that in a minute. Build configuration. So I want to automatically rebuild whenever my code changes on GitHub. Wouldn't that be slick? As soon as I push, let's rebuild a Docker image and deploy it right away. Also, any time the operations team makes an update to our base image. Let's say they find out there's a new exploit, Heartbleed, Shellshock, one of these things, and we want to patch our whole cluster. All they have to do is push an update to the base image, and if I have this box selected, it'll rebuild my application container that has the base image as a dependency. Really slick. A dual trigger mechanism for rebuilding: either from the dev team or the ops team. Either one can trigger a rebuild, and for my dev environment I don't mind if the ops team rebuilds my containers. No problem. For a CI environment, the same is true. Also, since these images are stateless, any time we push new config to the image, usually we do that via a redeploy. It's probably the same image; we're not rebuilding, but we're repushing. Oh, actually, this is... yeah, this is build configuration. We can put in specific keys just for the build here, and in our deployment configuration we can say auto-deploy under certain conditions. So I leave this on auto. From a DevOps perspective, this is my continuous delivery checkbox. Check this box, continuous delivery done. At least for my dev stage. Once I have an image built, then it's simply a matter of promoting that image across each of the remaining stages in my deployment pipeline. I'd probably leave this unchecked in production; maybe it's only the release team that does that final promotion of the image. But for every other stage, usually I auto-promote or auto-deploy.
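The dual trigger mechanism just described, dev team via webhook, ops team via base image update, shows up as a triggers list on the build config. A sketch with placeholder secrets; the exact field layout is an assumption based on the v1 API.

```yaml
# Sketch of BuildConfig triggers (OpenShift v1 API): either the dev team
# (push -> webhook) or the ops team (new base image) can start a build.
triggers:
- type: GitHub
  github:
    secret: "<webhook-secret>"   # placeholder; generated per build config
- type: Generic
  generic:
    secret: "<webhook-secret>"   # e.g. for Jenkins or curl
- type: ImageChange              # rebuild when the base image is updated
  imageChange: {}
- type: ConfigChange             # rebuild when the build config itself changes
```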
We could also say, if this was a high-availability service, we may want a minimum of three right from the start; we can add that into our config there. I'll hit create. We'll get this one started. Here's our new service. The build is running. Let's take a look at the build log and see how well our network is holding up. Usually this page will stream in the results as the build happens. Let's see how streamy this is. Hey, it's moving. That's better than... Good, good, alright. Better than not moving, yeah. So we can watch the build as it happens. When this is done, we'll see that it will start pushing the resulting image into our internal Docker registry. OpenShift has an integrated Docker registry, and it should get there pretty soon. I'm not sure if the build has finished and I don't have the next message, or if... oh, here we go. We got some more messages. A lot more messages. If we had errors during the build process, these build logs are kept for a short amount of time that's configurable by the management team. They can say keep the last 3 builds or the last 10; it depends on how aggressive you want to be about garbage collection. Let's follow along. Usually this finishes in under a minute, in my experience, with Node.js. For other language types it may vary. For Java users we have something called binary deploys, so if you're already building WAR files or EAR files, you can use binary deploys to ship your resulting Java artifact and we'll wrap that in a Docker container. If you're doing a full Maven build on our system, that'll be a little bit slower, but we can also do a full Maven build. So you can do just a standard git push to GitHub; GitHub will fire off an event back to OpenShift and trigger a new build and a new deploy. The webhook, yeah, I'll show the webhook next. So I'm going to go back to my... it says the build is completed. Let's go back here, and we should be able to catch the deploy happening. Here's our hostname that's been added, or our route.
So if I click on this right now, I'll see an error message, because there aren't any pods available to service the request. My request is going into the cluster on this URL. It's hitting the routing layer, which then passes it on to my service. The service tries to contact the pods, but there are no pods yet. I was expecting a 503 error, but the network is so slow that it was able to deploy the pod before the request went through. Here's my resulting application that's been built and deployed, built into a Docker image from our source, using the standard base from our operations team. And then I can go in and scale this up right away. Hit the up arrow, the down arrow; scale this up and down. Take these guys down. So here we're up to two containers for our dub-dub-dub service, and I should be able to see all of this on our topology diagram here. This is our dub-dub-dub: our route, our service, the pods, the replication controller, and the deployment config. Any questions? All good so far? Okay. We talked about source-to-image. Okay, so we talked about, there's a couple pieces. These are two particularly interesting pieces. This one will be called deployment in the future, I've just recently learned. The build config is still something really unique to OpenShift, and it's part of what makes OpenShift a much more developer-facing project. Kubernetes is primarily a tool for operations teams who already have images and want to ship those images, but it doesn't give developers a whole lot to do until after the image is already built, and even then it's really more of a tool for operators. So I think our web UI and this build config really add a lot to make Kubernetes much more usable for real users. We have some change triggers that we'll see inside the deployment config. We'll redeploy on environment change or on image change.
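Those deployment config change triggers can be sketched as well. An illustrative fragment based on the OpenShift v1 API; the container name and image stream tag are hypothetical.

```yaml
# Sketch of deployment config triggers (OpenShift v1 API): redeploy when
# the config changes, or when a new image lands in the internal registry.
triggers:
- type: ConfigChange            # redeploy on environment/config change
- type: ImageChange
  imageChangeParams:
    automatic: true             # the "continuous delivery checkbox"
    containerNames:
    - www                       # hypothetical container name
    from:
      kind: ImageStreamTag
      name: www:latest          # redeploy when this tag gets a new image
```

Setting `automatic: false` for a production stage matches the advice above: let only the release team do the final promotion.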
So if a new latest image is available in our internal registry, that will fire something called an image change trigger, which notifies our deployment config; that's one of the hooks the deployment config uses to trigger those deployment events. Docker build is one of our build strategies. Source-to-image is much more secure, and I think faster and better for a variety of reasons. Webhooks. Next I'll set up a webhook, and then we'll make a git change in order to build and ship. So here's my http-base repository. I already have a fork of this. If you want to follow along with the webhook example, you will need to fork this repository. Or you could use the smoke-test one and then use a PHP base image, up to you. There are also some Python examples available if you prefer Python; I could give you a repo URL for that as well. But let's take a look at our builds. I'll go here to my www web service and take a look at the build configuration. Here is a generic build trigger. This could be used with Jenkins, or even with curl if I really wanted to trigger a build right from the command line or automate it. There's also a GitHub webhook. So I'm going to copy the GitHub webhook and go to the settings area of my fork. All right? Pay attention for this part. I should have more slides for this, because this is a little tricky. So if you're following along, go to your fork of the repo and click on Settings. Try clicking several times if you like. So in here you can see there's a webhooks and services section. Give it a second click and... cooperate. I was really counting on the hard line they promised me for the presenters, and it doesn't seem to be working. All right, here's our webhook. So I will remove my previous webhook and add a new one. Here we go, add webhook. I'm going to paste in the payload URL. This will link back to my OpenShift master server.
One thing to note here: since I generated this environment yesterday, and as you probably already noticed when you contacted the server, we're using a self-signed SSL certificate. That's because I generated this environment using our Ansible deployment scripts, and I haven't bothered to pay for SSL for this one-day workshop. I encourage everyone to have real SSL whenever possible, but I just generated it myself, or really the automation tools generated it for me. So, due to that, I'm going to click on "Disable SSL verification"; that's one of the steps you'll need. Make sure not to forget this step: disable SSL. Good question. So I went to Browse, then to Builds, I clicked on the build that I want to automate, and there should be a Configuration tab within that build; the GitHub webhook URL should be there. So I think I still need to add the webhook. You can leave the secret blank; all you need is the top line, plus disabling SSL. Otherwise leave everything else at the defaults. My webhook is created, and I should now be able to make a small change to this repo. I'm going to edit the index.html file, and so I don't have to git clone and pull the project down onto my laptop, I'll edit the file right on GitHub. This makes it very easy to commit a change, trigger our deployment automation, and watch the result. Next section. How much time?
25 minutes? Better hurry. Okay, I've got my editor here. I shouldn't be making a change like this to one of my... let's see, I'll make a more minor change: I'm going to change this to "HTTP base in a container". That'll be my new title, or new H1, on the page. Before, we had "Welcome to HTTP base"; now it will say "HTTP base in a container". So I'll commit these changes: like a good developer I'll add a commit message, and like a bad developer I'll commit directly to the master branch. If I were committing to a feature branch, you saw earlier how I can automate based on a feature branch. Okay, commit made; let's go back here and see if we can watch the result. That should automatically trigger. I'm going to reload the screen just in case the WebSocket connection has dropped. Looks like it works; oh good, very fast. As soon as you make that commit, GitHub makes that POST across to our service in Amazon; it doesn't rely on the slow network here, so it should go through right away, much faster. My build might be done by the time this page loads. Let me know if you see a failed build; in our workshop yesterday we had a couple of failed builds, not all the deploys happen super smoothly on Amazon. I often hear operations teams say they use Amazon and whole machines just drop out; as part of infrastructure-as-a-service, you're supposed to assume these things are easily destructible, but it's not always convenient when your workshop instances disappear halfway through. We really focus on the platform layer, so if containers disappear, we'll see that they automatically get replaced; but we're not really dealing with the infrastructure layer. We consider that a different problem, to be solved by a different team of people, and it really depends on what infrastructure you have: you could bring your own bare metal, you could use Amazon, you could
use Google Compute; you've got a variety of choices. Is there functionality in OpenShift to automatically provision nodes based on load? For example, spawning 10 new EC2 instances when needed, and, when load goes down, migrating containers to fewer nodes and shutting down the rest? That would be an Amazon-specific integration; I don't know if we have anything to automatically add nodes. That can automatically add a lot of cost if you get your script wrong, but you can do it yourself; there are APIs to spawn and kill machines. We do have auto-scaling based on CPU load and memory; I'm not sure if it's in the 3.2 release. Whether automatic node scaling is present, I'm not sure. Your initial build launch is all defaults; it doesn't keep any old instances running. What we have, and this is very configurable in Kubernetes, is something called a replication controller that manages the lifecycle of the pods. We create one replication controller for each deployment, and then we have a variety of strategies for how to migrate the traffic from the first replication controller to the second. So you can scale down one as you add one; I think that's the default if you have more than one pod, it should slowly switch from one replication controller to the next, but that's very configurable in your deployment config. Really not having much luck with the network; I tried reconnecting and it's still having a hard time. Yeah, if you want specific libraries available to PHP, or specific C libraries that your application uses, you would bake those into your base image. So I would make a request to my operations team: my standard base for PHP needs to have ImageMagick, some crypto libraries, and some extra things like that. Make a request to your team; give it a try. Okay, I'm going to
officially give up on the Wi-Fi for now and try the wired network again. See if this... oh yeah, I know, the thing is, when I click on auto Ethernet here it says "connection established", but then... hey, success! Okay, let's see if my deployment has happened by now; I should be well into "HTTP base in a container". That was amazingly fast: a split second after my network connection resumed, it was already deployed. So this should be very easy: make a change on GitHub, immediately get a new image and a new deployment, and you can see your result right away. And, as Steve suggests, you could take that environment and rerun the tests for the next stage of your pipeline; good tip, thanks Steve. Okay, now I'll do replication and healing with Kubernetes, really quickly; hopefully I can move through this part faster if the network holds up. I'm going to add some pods here; let's scale this up to six. Then I'll use the oc command-line tool. If I can find a terminal here, I'll do "oc login" against our server URL. I'm using a self-signed certificate, so I'll accept the additional risk. I'll be user double-zero; the password is devconf cz. I'll take a look at the list of containers; let me move this up a bit. I'll do "oc get pods" and we'll see all the containers that we have: my smoke build and my one smoke environment, my dub-dub-dub (www) build, a second build, and all of these have a dash-two; those are from my second deployment, the rebuild. If I were using the upstream Kubernetes tools, I could also run "kubectl get pods"; let's see if there's any difference in the output. Exact same output: we actually pass through to the Kubernetes command-line tools for anything that's relevant to Kubernetes. So now that I have a list of pods, I can potentially cause some damage to our cluster and see what happens. I'll do "oc delete pod" and pick a couple of these guys: this guy, this one, and maybe this unlucky pod. These three we'll delete, and let's see how quickly this recovers.
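The command-line portion of that walkthrough, collected in one place. This is a sketch: the server URL, credentials, and pod names are placeholders for whatever your own environment shows:

```shell
# Log in to the demo master (placeholder URL and credentials; oc will
# prompt you to accept the self-signed certificate).
oc login https://master.example.com:8443 -u user00 -p devconfcz

# List the running pods; `kubectl get pods` gives the same output,
# since oc passes straight through for core Kubernetes resources.
oc get pods

# Delete a few pods and watch the replication controller replace them
# (substitute real pod names from the listing above).
oc delete pod www-2-abcde www-2-fghij smoke-1-klmno
oc get pods -w    # watch the replacements get scheduled
```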
It looks like we're still waiting for one to scale up here; either that or the network dropped. Let's see what happens when we cause some damage to the environment: right away we recognize that there's an error, and we reallocate, auto-repairing our environment immediately. I'm not sure why this one's having some trouble scaling up; let me find out which one that is and kill it. Yes, I killed some pods; I could also do that from this view here. This guy is pending, so let's go in and delete him and see if that fixes us up. Okay, something happened; there was a reaction, that one was destroyed, and another one got scheduled. So the idea with these replication controllers is: you ask for a certain amount, and Kubernetes helps enforce that you always get what you asked for. Kubernetes is a very early project; it's evolving. I think they're out of beta now, but we're working really closely with them to help ensure the stability of the underlying Kubernetes platform. The Docker engine is another thing we're putting a lot of work into, looking at ways to make it really performant; it's been a challenge sometimes, but I think it's a great toolset to start with. Don't you have any quotas defined? This project does not have any quotas defined. If I did, I'd be able to click on Settings here, and we'd actually see graphs based on my allocation from the operations team: the operations team says I get 10 CPU cores and 20 gigs of memory, and I'd see how much of my quota has been used on this page. And yes, anywhere you have hardware, we can target it using Ansible. So that was replication and healing. We did a scale-up earlier; here's how you would scale from the command line, and here's how you can list the pods in order to demonstrate that auto-recovery. Here's a list of terminology for later. I know this is a lot of terms; it's a
lot to think about, especially if you're new to the topic, so I tried to make a nice follow-up list. I know we've covered a lot of this already, but it helps give you a standard idea of what we mean by a node; what we mean by an image (it's not a VM image, it's a container image); what we mean by containers, volumes, pods. I tried to detail all of this so that you'd have easy references, and this links through to our OpenShift documentation so you can learn more about each of these topics, build configs included. Here's a map of the pieces that were contributed by Google and the pieces contributed by the Red Hat team. Red Hat has really been doing a lot of work to make that image-notification trigger that updates our deployment config and causes a new replication controller to be created any time a new image is added to the Docker registry. We also have, from the developer side: any time the source code changes, our build config fires a trigger to create a new image and throw it into our Docker registry; that fires an image change trigger, which then deploys a new environment. Any questions about this? No? Makes sense. "What exactly is meant by deployment?" So, a question about terminology: what do we mean by "deployment" in OpenShift terminology? What we mean specifically is usually the creation of a new replication controller, which manages the list of pods. So you may deploy a microservice; usually it's a single microservice being deployed, but a deployment config can also mention multiple images or multiple containers and target a larger deployment. The first version of your application, you click deploy, and you get this running; it really is up to you to define what's in the deployment configuration. But this is an object; let's take a look at what I have here. I'll do "oc get dc", and the deployment config I want is for www. I can output JSON, or I could do
YAML; I like JSON better, so I'll use that, and we'll take a look at the deployment config. Here's the object: the type is DeploymentConfig. This is the standard Kubernetes-style object that will be stored in etcd. Every Kubernetes object, every single one, has a spec and a status; any time you request data from the API you get two parts: here is the requested state that you've asked for, and here is the actual state. So here's our spec, what we would like to deploy, and at some point we'll see the status of what is actually live. If there's a mismatch, usually the replication controller corrects for it, and the deployment config only triggers when there's a change from the integrated Docker registry. Composition: I'll do one last example where we deploy a more complicated application that has multiple pieces to it. There are a couple of ways to do composition. I know Docker has Swarm; I like these Kubernetes templates. An OpenShift template is basically a Kubernetes template with support for the new object types that we've added; deployment config is a new object type that you'll find in an OpenShift template but maybe not in a Kubernetes template yet. If you have an OpenShift template that only has replication controllers and pods, then you should be able to deploy it on any Kubernetes environment, not just an OpenShift environment; any Kubernetes environment should allow you to deploy one of our templates, as long as you don't mention one of our special, advanced object types, like a build config. They don't know how to do a build, so leave that out on plain Kubernetes; OpenShift gives you support for the additional object types. What's that? Yeah, I don't know very much; ask the Atomic team. I need something that works today, so I use the OpenShift stuff, because it's ready right now and it's based around Kubernetes. They try to have a more open spec that may target a variety of these environments; it depends, if you need to target multiple
platforms, you could look into their spec; I stick with the Kubernetes stuff for now. Yeah, CoreOS has another good spec to look into. This is the main one you want to know, though; this is the link to our templates. So here's an example app; this is what I'm going to deploy. We can take a look at the source code really quickly. Here is the template: inside it you can see there's a list of objects. The first is a service, a standard Kubernetes service; I could deploy it to any Kubernetes environment. You can also see some of these strings here: we're actually going to inject some variables into our template before we deploy it, or before we install it. You can see one of the containers I'll be adding is a MongoDB container; this is going to be my database. This one just uses an ephemeral disk, so if I delete MongoDB, no trouble; it's really just for demo purposes and for testing my development environment. In my next stage, when I move to staging, I'd have a different template that maybe points me to a larger database, probably with a persistent disk. So we can see that we're going to inject the database password, the database name, and a variety of other details; we're also going to pass these same credentials into our front-end environment so it knows how to authenticate against our database. Where are they stored? They're stored in etcd, in our build config or our deployment config. This is the template before it's actually executed, and at the end of it, or somewhere in this template, we'll see a list of parameters. These are going to be automatically injected; some of them predefine a default value, and some use an expression to generate one. So, for every user in this class, we could say: I want you to start with the name "user" and then have 0 through 9, three numbers, and we could generate users 0 through 999 if we wanted a specific range. I should probably have more than 3
characters here, but you can see how easy it is to generate a semi-random username and password. Here we've got a few more characters, 16 for our password, a little more secure; we could bump that up if we need to, maybe in our production stage we use 100 characters, I don't know. So that's the basics. I'm going to take this template and do "oc create -f" with a local copy of it; that installs the template and makes it available internally within OpenShift. Let's take a look at that. This lets me make really easy one-click installers; you don't even need to be a developer, this could really be anyone within my organization whom I want to spin up or deploy a specific solution, and now we have a way to iterate on and maintain those solutions. So I'll do "Add to project", and here, in this list next to the databases, is my new installer: Restify. This is a Node.js web service using MongoDB, and it'll show a map with some data on it. I'll click on this and we go right into this workflow that allows us to customize the source repository. Sorry, it's hard to read here because of the screen resolution. "You can make the font smaller; the old guy in the back can still see it." There you go. "You can go more if you want; I'm a good judge of not being able to see." Oh hey, thanks. We could change the web service name; I'm going to let it automatically generate the database username and password, since I think we have good defaults in our template, and I'm almost out of time, so I'm just going to hit Create and we'll see if we can launch the whole template. It says "application created", and we've got some information on how people can follow up and log in from the command line. I'll go back to my overview page: we can see MongoDB is already being provisioned; here we go, scaling up; we've got our MongoDB available; the build is running for
our front-end web service, and the route has already been defined. We could take a look at this log and watch the build, but since I'm low on time, I'll go through the last bit of my slides and then hop back, and hopefully we'll have a big reveal of the resulting app. So, ways to try OpenShift. Of course we'd love to have your money: sign up for OpenShift Enterprise, or OpenShift Dedicated if you want us to be your operations team; with Dedicated you can have the folks at Red Hat actually admin your project, and we set up a pretty good-sized cluster, I think it's ten machines or so. The upstream releases are available on the OpenShift Origin releases page; that's where you got the CLI tool. We also have an all-in-one VM if you want to run the whole thing on your laptop: a VirtualBox image and a Vagrantfile. Follow the instructions and just run "vagrant up", and you'll get the whole cluster on a single machine on your laptop. This is great for developers to experiment with, or as a way to demo this environment for other people. You won't be able to do webhooks here, since GitHub can't call you back at localhost, but it's a great way to demo all the rest of the pieces. You can also build your own cluster using openshift-ansible; I set one up this morning, 20 minutes to provision a group of ten machines, and this is an example of the commands I used. If you want to do specifically OpenShift Enterprise in Amazon, you want this other repo instead; it's some experimental stuff we'll be merging back down into openshift-ansible very soon. If you have specific questions about this, feel free to reach out; I'm Ryan J, and I'd be happy to help. Some free ebooks for you: if you want to learn more about Kubernetes, click on this link, we've got a free O'Reilly book, and another free O'Reilly book about Docker security. There are also the OpenShift docs. Diane here, from the OpenShift Commons effort, is a great resource if you have a company who likes to communicate, be
involved in the conversation. At Red Hat we really believe open source is about being involved in a conversation about technology; that's why it needs to be open. If I have to agree to a non-disclosure agreement, I'm cutting myself out of the conversation. So bring your company into the OpenShift Commons: participate, give us some feedback. We're not going to ask you to sign an NDA; we're not even going to ask you for money. It's a very open, great way to participate. We also have some official training courses from Red Hat, some extra white papers, and some references if you need to convince your boss. So, I'm out of time; let's go back and see if our build has finished and my environment has deployed... ah, it still says 503. Well, I think you got the basic idea, and you should be able to deploy a whole mapping application. Any follow-up questions? "If I wanted to deploy an application written in Go, is there a bare way to deploy it?" I think he's asking if there's a straight-up Docker way, where he can just drop a binary in. Yeah, we don't have any builder images, any base images, for adding your Go code on top of a known base, but if you're interested in contributing one, the source-to-image project has some information. Okay, sorry, I'm out of time; if you have any more questions, meet us at the OpenShift booth in the main area, we'd be happy to answer them. Thank you, guys.
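For anyone following up on that last question: builder images are driven by the source-to-image (s2i) tool, and a custom Go builder would be used the same way. A sketch only; the builder image and repository names below are made up for illustration:

```shell
# Hypothetical names: a custom Go builder image you have published,
# plus your application repo. s2i clones the repo, runs the builder's
# assemble script, and commits the result as a runnable image.
s2i build https://github.com/you/your-go-app yourorg/go-builder your-go-app:latest

# Run the resulting image locally to try it out.
docker run -p 8080:8080 your-go-app:latest
```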