So what I'm going to do is talk a bit about deploying Python web apps to OpenShift. I'm going to show some of the basics of how you can use the source-to-image strategy in particular to very quickly get a Python web app in. I've been playing around with that and I've learnt various things from it. The current support in OpenShift is still evolving and being improved, and what I've been doing is working on my own source-to-image builder images for Python, so I'm going to go into some of the things I've done with that, some of my own ideas of where I think OpenShift itself should go in that area, and how it can be improved further.

"Hey Graham, can you increase the font a little bit?" That's right, you're up the back, aren't you?

Graham is also a little modest: he created a "modest" WSGI server for Python. You may have heard of that. That's not the way to explain it. The way to explain it is that I killed this thing called mod_python. It still deserves to go away; people should not be using it. It's got big security issues in it which no one really understands or appreciates.

So, deploying applications to OpenShift. There are three primary ways one can deploy an application to OpenShift. The first is that you already have a Docker image created from somewhere. You could have created it on your own laptop and uploaded it to a registry such as Docker Hub, so you're able to tell OpenShift "I want to pull that image down" and it will just run it up like any other Docker image. If you're talking about the OpenShift products from Red Hat, there are certain restrictions you may find: we don't like people running as root, for various reasons, so not every single Docker image you find on Docker Hub will necessarily work. As long as the image is set up so it can run as a non-root user, it should be okay.
The second way is that rather than building a Docker image yourself, you can effectively point OpenShift at a Git repository which has the code required to build a Docker image, so it's going to have that Dockerfile in there, and you can get OpenShift to do the build on your behalf. That saves you having to push images around, which can be really annoying, or using automated builds on Docker Hub, which drive me absolutely nuts because they don't work half the time.

The third way is that rather than building a Docker image at all yourself, you use what's called source-to-image, also called the source strategy. The idea is that someone builds a base image for Docker which has all the bits and pieces required for programming in the particular language you're using; today we'll be talking about Python. OpenShift is pointed at your Git repository again, but in this case what's in your Git repository is just your web application code. When OpenShift runs up the builder, it will pull that Git repository down and essentially incorporate it: it creates a new Docker image for you, built from that source-to-image base image, so you don't have to know how to build a Docker image yourself. It will look in that source repository and, in the case of Python, install packages and so on, and it creates your final image for you. So what we're going to do is go through the source-to-image approach in this session; I'm not going to do the other ones. So, on with the Python source-to-image.
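As a sketch, the three strategies just described map onto oc new-app invocations roughly like this (the image and repository names here are placeholders, not from the talk):

```shell
# 1. Deploy an existing image pulled from a registry such as Docker Hub
#    (on OpenShift it should be able to run as a non-root user).
oc new-app myaccount/mywebapp:latest

# 2. Docker build strategy: point OpenShift at a Git repository that
#    contains a Dockerfile and let it build the image for you.
oc new-app https://github.com/myaccount/repo-with-dockerfile.git

# 3. Source strategy (source-to-image): combine a language builder image
#    with a repository containing only application source code.
oc new-app python~https://github.com/myaccount/django-hello-world.git
```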
Now, I've got various web applications up on my Git repository that I'm going to be using. I'm going to focus on a Django web application. I created that by essentially just creating a fresh repository, doing a Django startproject, and making a basic Hello World app; there's not much to it, and I've not done anything special for OpenShift. Most of the notes here talk about things in terms of doing it from the command line, but I'm actually going to jump first into the web UI and quickly show you how it would be done there.

So if I just copy the URL of my app... ooh, I need to create a project first. If you are following along you need to create one too. I'm using user00, so if you do have an existing account... sorry, we didn't talk about giving anyone else any user IDs.

These are all the source-to-image builders that exist in the OpenShift installation you're using, and maybe some quickstart apps in there as well. What I'm going to do is search for the Python one, select a particular version of Python here, Python 2.7, call this "django", and give it my Git URL. There are some other options you can set here, and one in particular I'll point out is this thing down here, which is the routing. There's actually a slight difference between doing things through the UI and doing them on the command line: if you do it through the UI, by default it will expose your web app automatically to the public internet, whereas if you do it from the command line that's a separate step. That used to confuse me no end; I would keep forgetting it.

So we're going to create this, and that's going to go off and start deploying. What it's doing now is pulling down that Git repository and creating a new Docker image in which it incorporates that source code. That's what we call the assemble phase, and it's controlled by an assemble script that's part of the source-to-image builder. It's
the thing that goes and looks at the requirements.txt file in your Git repository, which has a list of all the Python modules you need, and installs them. In the case of Django, it's going to realise you've got a Django app by looking at the code, and run python manage.py collectstatic, which gathers all those static files you use in a Django app for the admin interface and anything else you're doing.

That's going to take a little while, so we can look at what it's actually doing — we hope it's doing something. One thing which is really annoying for demos is that the first time you use an image in a cluster it also has to pull it down from Docker Hub. It shouldn't be doing that, because I've already done it in a separate project; maybe it's happening because I'm in a separate account — I did it in a different account last time. Anyway, while that's going on, let's instead do it on the command line.

On the command line, I'm already logged in, so I've got all these different projects here. I need to be in the correct project, because I'm most likely in the wrong one at the moment, since I just created it. What we can actually do here first off is look at what that other build is doing: it's still waiting, so because I did this in a different account when I was testing earlier, it looks like it's having to pull the image from somewhere first. Actually it's a built-in image, so it shouldn't be; hopefully there's not a problem going on here.

This time I'm going to create the app with the command line. Now, there's an interesting distinction here: when I did it from the UI I specifically chose that I wanted to use Python at a particular version, whereas here, when I've done an oc new-app, I have just given it the Git repository and nothing else — I haven't told it it's a Python app. What's happening is that oc new-app has some smarts in it: when it gets that Git repository down, it will actually look
inside the Git repository and try to determine what language you are using. In the case of Python, it looks for whether you have a requirements.txt file, which is a list of modules to be installed by pip, or a setup.py file, which is an older way that was used before pip became popular — a way of installing your application as an actual package itself, with your dependencies listed in there. In this case it knows, because I've got a requirements.txt file listing Django, that it's Python, and it's going to fire off the Python builder for me.

How does it find out whether it's Python 2 or Python 3? One thing to mention about this OpenShift installation is that it supports Python 2.7, 3.3 and 3.4 — no 3.5 yet — and the 3.4 image is tagged as "latest". So if you just do a deploy, it's going to use whatever the latest one is, in this case Python 3.4. The alternative, if you know you want a particular version, say Python 2.7, is — I've got it in my notes down here — yes, python:2.7. That's how you can select a particular version, and we're going to use that ability. Essentially what we're saying here is: here's our source code, and this is the particular builder we want to use, in this case Python 2.7. We'll see that later when I start talking about different builders and replacing the default one.

So let's see how this thing's going. Okay, our original one's up... oh no, that's interesting, our original one is still going for some reason, but the one we just created is fine. So let's look at the one we created, and here's what I meant: if you look at the top one here, this is going to be the actual URL. It's running very slowly because we had the previous workshop running; maybe someone's been doing some interesting things with scaling up a lot of instances of an app.
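The two ways of invoking oc new-app described above — letting it detect the language versus naming the builder — plus the separate expose step the command line needs, look roughly like this (the repository URL and the name "django" are placeholders based on the demo):

```shell
# Let oc new-app detect Python from requirements.txt / setup.py; it will
# use the image stream tagged "latest" (Python 3.4 on this installation):
oc new-app https://github.com/myaccount/django-hello-world.git

# Or pin a specific builder version with the builder~source form:
oc new-app python:2.7~https://github.com/myaccount/django-hello-world.git --name django

# Unlike the web UI, the command line does not route the app automatically:
oc expose service django
```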
It's all running a bit slow; we'll see how we go. That one had a URL exposed already; this one didn't, so because we did it from the command line we're going to need to expose it, and that's done using oc expose service. Let's see if we can get to this one — yes. And for a Django app, if we go to /admin you'll see that the admin works: we have style sheets working, so our static files are also being hosted automatically. (That other one must just be broken now.)

While I'm doing this, I want to talk about how application deployment works day to day, because it's all well and good that you can get an app up there, but what does that mean for the way you work? It doesn't change things too much relative to how you're used to doing development with Django. You've got the development server, which is built in, so nothing changes there: if you're going to work on this thing, you still make your changes, use the Django development server, and use the fact that you can do code reloading to automatically pick up changes. But we still need to get our changes up and deployed publicly.

So, publishing your code changes. We created this originally against the Git repo, so I'm going to check that out. The workflow is basically: do your local development with the Django development server, and when you're happy, commit your changes and push them up to the Git repository. Now, if you're familiar with OpenShift v2 — I actually found the way v2 worked a little bit confusing — you didn't just create your Git repository locally on your disk; OpenShift itself was running a Git repository for you, a Git server like a GitHub inside of OpenShift. When you created your app there were a few things that could happen, but generally you created your app in OpenShift, checked that out and then started making modifications in that; or alternatively you might already have an existing app, so you're
going to be messing around with changing it to then push to OpenShift, doing a forced push initially to update it. Anyway, the point is that to do deployments in v2 you would do all your code changes, commit them, and push them directly into OpenShift, and that would cause your web app changes to actually be deployed. In OpenShift v3, you've told it where your GitHub repository exists, or whichever other repository it's linked to, and instead of pushing your changes directly to OpenShift you're just pushing to your normal repository. At that point there is no automatic linkage between that repository and OpenShift, which means that if you do want to trigger a new deployment based on code changes, you have to trigger a build. That can be done on the command line with oc start-build, or through the UI.

Hmm, it really hasn't triggered anything... well, that's it, this one has failed, and I'm going to assume it's a timeout. Can you check that? I'm pretty sure it's a timeout; this is the problem we've been seeing occasionally on our own cluster, though we hadn't seen it on the one we're using for the demos. But let's use that fact: we can go in here and go to Builds. If you've made a change, I can go in here and hit Start Build, and that will pull down the latest version of the source code from the Git repository, build your new image, and deploy it.

So that's the manual process, but what we can also do is use what are called webhooks. We talked about this a little bit in the previous workshop, so I wasn't going to go into depth, but the idea is that you can get a webhook URL from the configuration of your project or application in OpenShift, and you can set up GitHub with that webhook. What that means is that every time you commit to the Git repository up on GitHub, it will automatically notify OpenShift that there's a new version of the code there, and that will automatically trigger a build.
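The manual rebuild, and finding the webhook URL so GitHub can trigger it for you, might look like this (the build config name "django" is assumed from the demo):

```shell
# Manually trigger a rebuild after pushing changes to the Git repository:
oc start-build django

# The GitHub/generic webhook URLs are part of the build configuration;
# oc describe prints them so you can paste one into GitHub's settings:
oc describe buildconfig django
```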
So you get the same effect as what you could do in OpenShift v2 when you did a push; the change is just going a different way — via wherever your Git repository is hosted, rather than directly into OpenShift. Let's see how our build's going. Yes, this time it works, so we've got two deployed apps now.

What about if you don't like the fact that to get something deployed you always have to commit your work and push it up to GitHub? You've presumably done all your work with the Django development server, but you get situations where your deployment environment is different from your local development environment, and sometimes you can't replicate an issue locally — you can only see it occurring in your actual deployment environment. In that situation, what's a pain is having to make a little change in your repository, do a commit, wait for the build to occur, find that it didn't quite work, make another change, do another commit... You end up with all this little commit history in your Git repository, and some people don't like that trail of commits. You could do rebasing and squashing and all that sort of nonsense, and I personally don't like that.

The alternative is what's called a binary build. You've got your application already set up, and whereas the oc start-build we did previously causes OpenShift to go back to your Git repository, what we're going to do this time is circumvent that process. We've got this command here, oc start-build; we're going to name the build we want, and we're going to say: take the source code from our current directory on our local box instead of from Git. So let's see, which one will we use... this one. I've got my hello world; let's make that change.
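The binary build being set up here is, roughly (build name assumed from the demo):

```shell
# One-off build that uploads the current directory as the source
# instead of cloning the Git repository; later builds revert to Git.
oc start-build django --from-dir=. --follow
```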
Now, I have not committed this. So this time what's happening is that rather than getting the source from the Git repository, it's bundling up the directory in which I ran the command and pushing that into OpenShift, and this time the build is coming from there. Let's see how long this one's going to take — you can see where the build's happening; let's just jump over to the UI, it's probably easier to watch over there. So that's already built, and it's spun up the new one.

Just a warning: the default at the moment is that this is using the Django development server, and I don't know what it is about the Django development server, but it doesn't respond to the TERM signal properly and takes 30 seconds to shut down — that's what we're seeing here. But the new one is already running and the old one should have been taken out of the routing, so we should now be able to go over here, and there's that change — and I never pushed that up to the Git repository. So that's one way you can do it. Note that doing it from the current directory only affects that one build; afterwards it goes back to using the Git repository, it's not a permanent linkage. So if you're happy you've done the right thing, you can commit that, push it up, start a new build, and it'll use the Git repository again.

The next thing we can do is live-sync source code changes. This time we're actually going to get into the running container. If you're familiar with, for example, Heroku, there was an ability with Heroku to get access into a dyno, but it was a special dyno run up just for your shell access — it wasn't the one where your web app was running, so you couldn't do live code changes. In OpenShift, you can actually find out which particular pods are running your app — in this case we've only got one; if we had more than one this gets a bit difficult — and we can get inside that running container and make code changes. So
here's our running app down here. So I'm inside my container — and for some reason I always have to set TERM — there's my modified file... I'll use cat... and there's my change. Now, that worked because it's using the Django development server inside the container as the default, with code reloading. That's one of the things I'm not really happy with in the default builder: if you do Django, you know the Django development server is not a production-grade server. You should never use it, for various reasons. It's obviously not going to be performant — it's a single process with a single thread — and yes, with OpenShift you could scale up and create more instances with one process each and get concurrency that way, but that's not a good situation.

Now, there's yet another way of working with this, and Grant touched on it in the previous workshop. We can do a build triggered from our current directory; we can get into the pod and do stuff; and the other option is oc rsync. It allows you to make all your changes locally on your box and just rsync the changed code directly into the running container. So let's get this one to go. What are we on — we're still on capitals... what letters will you use this time... yeah, zeds. So I take this subdirectory which has my app in it, and I need to specify the pod. Now, you do need to know where in the container the code goes — I'm bound to get that wrong, so let's just check it: /opt/app-root/src... I haven't got that right... you need to replace the "apt" with "app": /opt/app-root... go to the left, more left, stop. Okay, so it takes that directory and pushes it up there, because it's rsync — our images have rsync in them.

There was a question — oh, okay. In this case I've done an oc get — can you see that? Right at the bottom there are different types of pods: the ones which actually have your running app, and these other ones which are the build pods, and ones that pop up to deploy.
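The rsync step being demonstrated can be sketched like this (the pod name and local directory are placeholders; /opt/app-root/src is where the S2I Python images keep the application source):

```shell
# List the pods; pick the one actually running the app, not the
# build or deploy pods that also show up here.
oc get pods

# Push local changes straight into the running container:
oc rsync ./helloworld/ django-1-abcde:/opt/app-root/src
```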
You sort of learn over time which is which, but we've got that one pod there. I think there are other ways of finding it — oc describe... I'm doing this from memory, so I'll probably get it wrong... is it the service or the replication controller... anyway, there are other ways, but I'm doing it that way, so I'm just picking it from there.

Now, someone said that it failed, or appeared to fail. Have I got the wrong one? I must be giving it the wrong directory then; let me just cut and paste that bit. Okay, interesting — that was the one I created with the web app earlier, not the one I'm using now.

This is one thing about doing rsyncs that is important. Our builds are set up for this: you need an rsync command in the image, and all the permissions have to be set up so that when you make changes in the pod, you can actually write them. Our images are set up to allow that. If you take an arbitrary Docker image and try this, it may not work, because the permissions may be set up for root or something while you're running as a non-root user, so you might not be able to do it. I can't see exactly what the problem is there... I think it worked — oh yeah, okay, you're right, I've done it once before, and that first time we saw all the files changed. Yes, this warning is only on the directory — I need to talk to someone about why this occurs, but I have noticed it: it can't update the permissions on the directory itself; all the files get updated, but you get this strange message.

Grant gave an interesting example in the last talk: he sets up his editor so that whenever he saves a file locally, it automatically syncs it up like this. One of the questions is how you find the pod, and here is the way I came up with: I've got this little script that uses the oc get pods command with what's called a selector. I'm going to select just the pods which are for my particular application, and I'm going to ask just for
the name of the pod, on the basis that I only have one single pod. So I get my pod name like that, and then I feed that into the rsync command. If you work in an editor, you could set it up to run this little script — giving it the name of the app and your destination, however your editor works it out — and that way you could trigger the sync from the editor.

Can anyone tell me when this is meant to finish? About 10 more minutes, yes. So that was one of the key things I wanted to talk about: the different ways you can work with it. People look at OpenShift and a PaaS like that and think they have to change their workflow totally and do everything in a much, much stranger way. There are some things which are different, but the basic things still work, like the development server and committing and so on — you're still doing all that — but you can set up those webhooks to automate things, and you do have these ways of doing live changes. You aren't excluded from that, which a lot of people think.

Now, a bit more about the default Python builder, and we'll finish up after that. It does a lot of automatic things. Magic is good when it works, because you don't have to think too hard: you can just give it your repository, it'll hopefully just deploy it, and you don't have to worry about it. What the default builder currently does is go through a strategy with a few different steps to work out how it's going to deploy things. The first thing it looks for is whether your repository has an app.py file, in which case it presumes you're going to supply the whole Python web application yourself. In that case you might, for example, be using Tornado's basic server or something like that: you have your app.py importing the Tornado modules and setting up all your handlers.
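The little pod-lookup script just described — select by label, take only the name, feed it to rsync — might look something like this (the label app=django and the paths are assumptions based on the demo):

```shell
#!/bin/sh
# Select the single running pod for the app by label, strip the
# resource prefix from "-o name" output, and rsync local code into it.
POD=$(oc get pods --selector app=django -o name | head -n 1 | sed 's|.*/||')
oc rsync ./helloworld/ "$POD":/opt/app-root/src
```

An editor save-hook could invoke exactly this script to get Grant's auto-sync workflow.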
With Tornado you could even still be running a WSGI app, just using Tornado's WSGI container. The next thing the builder does is, if you've got gunicorn installed — if your requirements.txt file lists gunicorn as one of your packages — it looks to see whether you have a wsgi.py file somewhere in your repository, which is expected to contain the WSGI application entry point, and if it finds that, it runs gunicorn against it, so you haven't had to define how the server actually starts up. Finally, if you have a manage.py file, it assumes you're running Django and just uses Django's runserver.

That's one of my issues with the current default builder: it's a bit difficult to override anything. Your only options are basically gunicorn or an app.py file. That means if you want to run uWSGI, for example, you would have to have an app.py file which does an os.execl of the uWSGI binary, which is not a good start. The other thing is that it pushes you towards gunicorn, and — I'm very biased, because I wrote mod_wsgi — but I have very good reasons for thinking that gunicorn is not necessarily a good choice for containerized environments. It has a single request worker per process, which means if you need to scale up you have to create more processes, which means more memory. In a PaaS environment, yes, you've got to worry about Python's Global Interpreter Lock, but you've also got memory constraints, so having to scale by using multiple processes is not necessarily good; if you've got an I/O-bound app it's often better to scale using threads, because the GIL isn't a problem there. So that's one of the reasons I'm not too keen on gunicorn. I'll flick past all this.

So, alternative Python S2I builders. OpenShift does provide a default, but you don't have to use it; you can write your own S2I builder. Or you might just say, okay, I don't want anything to do with source-to-image images at all.
If you don't want source-to-image, you can just build your whole Docker image from scratch. But most people out there reckon source-to-image is good, especially if you're coming from a DevOps background, because you can say to the developer: you just need to worry about writing your app, here's the S2I builder you can use. It may be the default, or the ops people may have developed their own, and that lets them incorporate into that builder image all the things they want to see in there. That might be a particular Python WSGI server they want used; they might want monitoring such as New Relic embedded in there, and they can set that up automatically so you as a developer don't have to care.

Now, obviously I'm keen on mod_wsgi, so I've been playing around with my own Docker images for quite some time — since before I started with Red Hat in the middle of last year — and since I got into Red Hat I've taken my existing image, made it source-to-image enabled, and been playing with it. My latest incarnation — I'm just going to show you where I'm at with that. This is where I'm going with my image. I saw the same issues I've been mentioning: what if you want to take over control? I want to be able to do that, so I've provided a lot more options in my image. The idea is that, having done this, I'm talking to the other people in Red Hat who are developing the default, and I'm trying to get them to incorporate some of these changes so the default one gets better, while you still have this option externally. And I'm sort of hoping that with mine — because I want to do things in it that Red Hat would never want to do in theirs — I can interest the wider Python web community in the image, and perhaps get the community cooperating to manage one. That way we can come up with a best-of-breed one, and that'll be an option.

"In OpenShift, do you have
the possibility to choose a default? Do you have more defaults? Can you create profiles, for example, in OpenShift, and choose what defaults you would like to have for Python?"

One of the things that exists is that if you use oc new-app on the command line and don't tell it to use the Python image, it does that language detection, and once it sees it's Python, it looks for the image stream — essentially the Docker image — called "python". Now, if you wanted to use that language detection but didn't want the default, you could create an image stream local to your project which maps "python" to mine, for example, and that way you could override it. Hopefully that works — I found an issue with it in the last version, and I haven't tested whether the overriding works in the one we've just released, but that's the idea. Besides language detection, though, you're generally going to specify the builder you want to use anyway.

"It would be nice if in OpenShift you had the possibility to choose some of this."

Yeah, and overriding the "python" image stream is probably the only way I can see to do it.

So, in my image: if you want to use uWSGI, or just want total control and to run whatever command-line WSGI server you like, I allow you to provide an app.sh shell script and you can run whatever you want. It looks for an app.py; it looks for a paste .ini file if you're using Paste — with Pylons or Pyramid, which use that sort of thing — and it will just go and automatically run that up. I'm using mod_wsgi-express there to do it. If you're not familiar with it, it's a bit different from straight mod_wsgi, and there's been a little bit of confusion over it: to install mod_wsgi-express you basically go pip install mod_wsgi. It's not some totally new WSGI server I've written; it's still mod_wsgi, but what mod_wsgi-express does is generate all the Apache configuration for you on the fly, so you don't have to do it.
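Returning to the image-stream override mentioned in the Q&A above, one way to sketch it (the builder image name is a placeholder, and as noted in the talk this override behaviour may vary between versions) is:

```shell
# Create a project-local image stream named "python" that tracks a
# custom S2I builder image; oc new-app's language detection looks in
# the current project before the shared "openshift" namespace.
oc import-image python --from=docker.io/myaccount/custom-python-s2i:latest --confirm
```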
With mod_wsgi-express you're relying on the fact that I know how to set up Apache, and I'm probably going to do it better than the default config you'll find on your operating system, which is generally tuned for PHP; there's nothing special about it.

I also look for a wsgi.py file. And if you come from OpenShift v2, the default builder sort of doesn't have an upgrade path; the one I've been fiddling with will do some of the things that OpenShift v2 did. It knows that if you've got a wsgi/static directory, those are static files, and mounts them at /static; and if you've got a wsgi/application file, it treats it as the wsgi.py equivalent. So I'm trying to provide that equivalence so there's an upgrade path. A setup.py file is again looked for, and a manage.py — manage.py meaning Django. The important thing is that I'm not using the Django development server here at all: it will actually query, using manage.py, what the configuration of your Django app is, and automatically run it up under mod_wsgi-express, a production-grade server.

How are we going for time? One minute. Very quickly then, to get to that question of taking control. By default it works in an auto mode — it looks for app.sh, app.py, a paste config, Django — and generates all of this for you. If you want to take control, you can instead set, in a particular file, "I want to use gunicorn", or mod_wsgi, or waitress, or uWSGI, and in that case, since you're taking control, it's up to you to provide all the configuration. I've got various examples there, and I'll leave it at that — if you go to the page you can dig around. There's a particular file you set called the warpdrive server type — do I see it mentioned... no... okay, so there is this warpdrive server type, and that's where you specify what to use if you need to override and circumvent what the algorithm chooses. And there's a server
args file. What it will do is this: say you want to use gunicorn — you don't have to run gunicorn yourself; I'll run gunicorn for you, and I will supply the minimum options needed for gunicorn, or uWSGI, or waitress, to work properly inside a Docker container. It'll set up logging to ensure it goes to stdout, it'll bind on the correct port for listening for connections — all the minimal stuff will be done for you. After that, you only have to provide the options telling it where your app is and that sort of thing. So we'll leave it at that; you can dig around on the Git repo. There are lots of other things my builder can do which the others can't, which I think is more useful — being able to configure it through environment variables and change various things like that. And mod_wsgi-express still has an auto-reload mode, so you can enable that if you still want to do live editing.

Any questions? I talked too fast, didn't I? Sorry. I've had that issue since I came over here; my Australian accent does cause confusion for some people.

"Are the static files also handled through this WSGI server?" In my case with mod_wsgi-express, because it's actually starting Apache, and Apache can handle static files very well, then yes. What happens is that you can point it at a directory of static files, the Apache part will serve those, and your Python web app will be handled by mod_wsgi. If you're familiar with mod_wsgi, there are two modes: an embedded mode and a daemon mode. Daemon mode is always the one you should use, even though it's not the default, and mod_wsgi-express will always use daemon mode. That's the link up there, if you want to quickly write it down.

"So in that process you started another program — you said you're starting Apache. How does that happen?" I hadn't shown that before, but you'll be able to try it yourself.
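For flavour, going back to the server-options point above: the kind of minimal gunicorn invocation such a launcher would generate inside a container (module and callable names are placeholders) is roughly:

```shell
# Bind on the port the route expects, and send access/error logs to
# stdout/stderr so the container platform can collect them.
gunicorn --bind 0.0.0.0:8080 \
         --access-logfile - \
         --error-logfile - \
         wsgi:application
```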
means like you're leaving and you're leaving so actually I said Mogwitzki Express but it wasn't done before the software it was so it was sold all the way so I'm going to run Mogwitzki where I pick the options and use the static files so for you not bad a little bit tired that's always the case how long are you going to stay here well actually I came in Monday we were having the time so it's supposed to be actually it was all running so it's two weeks two weeks straight when are you leaving I've been there one day I went there and then I went to and then to and then to we didn't actually I'll call you to at least give me some places where I can eat see you around I think we should have a contest real quick what year was OpenShift first released which one the first one OpenShift what year did OpenShift online go GA any other guesses three years ago three years ago three years ago three years ago good enough what size do you wear large do you have any other trivia questions ask another question on the spot like that what is the open source virgin of OpenShift origin what size do you wear what size XL I think XL more t-shirts how do you know where I'm from do I know you I know you what is the national animal of Poland buffalo buffalo it's a bird eagle the color black eagle what are you talking about he doesn't get his shirt who said white eagle I heard someone say white eagle I can ask you Okay this is a mythical one. What is the national animal of Greece? It's mythical. Phoenix. Wrong. Go and get your shirt. What's the national mammal of Finland? No. National animal. The national mammal. I'm excluding the birds. I'm giving those people who don't like birds a chance here. No. Who would pick a reindeer as a national animal? I'm going to step all over us. They think of something strong. What is the national animal of Finland? Fair. Brown bears are long. Everywhere. I'll keep it in Europe just to make it easier for you. Although obviously this is not working. 
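To make the first talk's point concrete, here is a sketch of the kind of run script such a Python builder image might execute on container start. The module path, the port, and even the choice of gunicorn are illustrative assumptions, not the builder's actual defaults:

```shell
#!/bin/sh
# Hypothetical S2I run script (a sketch, not the real builder's script).
# The builder supplies the minimal container-friendly options so you
# don't have to: bind on the port the image exposes, log to stdout/stderr.
exec gunicorn \
    --bind 0.0.0.0:8080 \
    --access-logfile - \
    --error-logfile - \
    wsgi:application
```

With mod_wsgi-express the equivalent idea is something like `mod_wsgi-express start-server wsgi.py --port 8080 --log-to-terminal`, again with the builder picking the options for you.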
Where is the largest Red Hat office? [More trivia and crosstalk while the next speaker sets up.] National animal of the Netherlands? Can we do that during the break? I'm not using up my time; people want t-shirts more than me. Oh, we're starting? Okay, that's the next question.

So for the next 40 minutes — 30 minutes if I'm fast, and then you'll have time for questions — I'm going to talk to you about building your own Origin. Origin, as has been said, is the upstream for OpenShift. Who am I? I'm Jorge Morales. As somebody said, I'm from Spain. I don't know how he knew, but I'm from Spain. Once I start talking, you can guess it; until I start talking, probably not, because I could probably be from any other country. This is my picture, so if you reuse my presentation, you will have to modify at least that. Where do I work? I work at Red Hat, of course — this is a Red Hat conference, so most of us here are from Red Hat. And what do I do for a living? I work as a developer advocate for OpenShift. That's why we are here talking the whole day about OpenShift, and I just invite you to join all the other sessions around OpenShift. What do I mainly do? I do demos, workshops, talks, conferences, blogs, and travel. We travel a lot — every day on the road. But the good thing about traveling is that we drink beer, we meet nice people, and we are in countries with nice weather. Not this one, but with nice weather.

Okay, let's go to the talk. What I wanted was an OpenShift in a VM, for doing my talks, conferences, workshops, whatever. So I started doing some research on what was available when I joined the team, because I'm very new to the team — I've only been here for six months.
And I started doing some research on what was available at that time. There were a whole lot of options, and I went through all of them. The first place to look, of course, was the official Origin repository. They have a Vagrantfile that allows you to create a VM. It's the community version, and it's maintained continuously — they are working on it, so it's always up to date. But for me it had some cons. It only sets up the VM, so there were a lot of manual steps to do in order to have a full OpenShift Origin VM working. It was not easy to update. There are a lot of scripts hidden in a hack directory, and a lot of stuff that wasn't simple. You need some kind of redirection to access your application. And there wasn't much documentation. Why? Because engineers don't like to document: they already know how it works, they're chatting on IRC, they know each other, they see each other, so one way or the other they know how it works. But for external consumption it wasn't easy.

There is another effort to create a VM, also from Red Hat, which is the Container Development Kit. That is something we are creating to allow developers to interact with OpenShift. As we are developer evangelists, we want developers to work with OpenShift, so this seemed like the best option for us. On the pro side, it's a full image, it's ready to work, and it's usually up to date. But on the con side, it's the enterprise version, so we couldn't use it. Why? Because the enterprise version is not meant to be given away for free; it's just for customers who pay for it. That's not entirely true — they are working on changing this — but it's not yet there. So it didn't fit my needs, and I looked at the community version of the same. Why?
Because, as we are Red Hat, we develop everything upstream first and then we create the enterprise version out of it to sell it commercially. So the Atomic Developer Bundle was the upstream version of the CDK. As a pro, it was the community version — of course, that means it was more suitable for us to use. It was a full image, ready to work, up to date, maintained, documented. But it had some things I didn't like. It requires some Vagrant plugins to set up, for DNS, for setting up the VM. And it's also a multi-purpose project: it's not only for OpenShift, but for a set of projects that we are creating in Red Hat, like Atomic Enterprise Platform, Nulecule, and others. So I didn't think it was tailored only for OpenShift, and we kept digging.

There were more demo environments available out there, created probably by Red Hat consultants who, whenever they go to customers, have to create their own stuff. There were no pros to most of these. Why? Because they used the enterprise version, and they were not maintained — once the guy built it, when a new release came out, he didn't maintain it. So that wasn't an option.

And there was another thing that looked really cool at the beginning, which is OpenShift In My Container. It's developed by a Red Hat engineer, it's the community version, and it was really cool. Why? Because you run it as a single command: you do O-I-N-C — oinc — up or run, and then you have an OpenShift installation. But the con was that it wasn't using a VM, so it's something you test on your host, and I don't want people to pollute their host. That's one of the main reasons. And it didn't work on Windows, of course, because of that. It was written in Go, so it wasn't easy to understand. So that wasn't an option for us.

And then we have Fabric8, which is another upstream project from Red Hat. But this one, it was really cool.
It's a really good VM for getting started with OpenShift. It's the community version, and there is a lot of documentation. But then you have Fabric8 on top, which is not what we wanted to show. We wanted to show OpenShift, and Fabric8 adds a lot of stuff — cool stuff, to be honest — but that's not what I wanted to show.

So none of this worked for me. Well, really, when I say me, I mean it didn't work for the team I work on. And to be honest, it didn't work for one guy in particular, who is the guy that talks the most and always complains, and you just saw him talking: Steve. Did you want me to give you a shout-out? Yeah, but I'm promoting you. So if you want to follow him, he's very chatty on Twitter. Don't follow me — I only have 38 followers. I don't say anything interesting; that's why I promote my colleagues.

What we wanted. After doing this research, I had a pretty good understanding of what I was looking for in a VM for us. I wanted it to be based on the community version of OpenShift, because we are developer advocates and we want to promote the community version. I wanted the process that builds this VM to be open source, so it's easily accessible for everyone — if you want to change it and adapt it to your needs, we are Red Hat, we want it to be open source and allow you to modify whatever you have. I wanted to keep it up to date; that means that with every release, we wanted to be able to release one new version of this VM. And it's not only that the process we follow to build this VM is continually updated and kept up to date: out of this VM, we also create a packaged VM that we ship and promote on our openshift.org/vm site. So that's the official VM that we promote as the community version. We wanted it to be easy to understand, so that anybody who looked at the code could understand it — we didn't want to base it on Go or any other language that is really complex.
We wanted it to be flexible in its options, so that whenever you create the VM, it can suit your needs. If you want to do things like install a different branch of the code, you can do it easily. If you want to add additional capabilities to the VM, it's easy to do. So we created it — or at least we think we have — in a flexible way that allows you to provision more capabilities, or just the capabilities that you want, for the VM. It's packageable: we create the box that we ship on openshift.org with this very process. We wanted the ability to peek into features in progress. We talk constantly with the engineers who are developing Origin, but the only way to look at whatever they have done is to build the latest code in Origin, or even a branch they are working on and trying to merge as a pull request — you need to build that to create an OpenShift installation out of it, and that's a really hard process. So we wanted to be able to just look at what they have done, so that if we have any comment to make on the developer experience, we can make it before it really gets into the product. We wanted the ability to peek — that's the word. We wanted to be able to build it whenever we wanted: it's just a matter of "oh, I need to build a new version", you run the build commands — it's wrapped with Vagrant — and you have an updated version. Of course, if you are building OpenShift it will need a good internet connection to pull down a lot of things, so I don't recommend doing it here today, but you can build it anyway. That's why I will show you how it's built, but I will use a video, because the internet connection here sucks. Also it takes more time than it should; with a video I can fast-forward.
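The build flow shown in the video boils down to a couple of commands. The repository URL below is a placeholder for the one on the slide, not the real address:

```shell
# Build the all-in-one Origin VM from scratch (URL is a placeholder).
git clone https://github.com/<user>/<origin-vm-repo>.git
cd <origin-vm-repo>
vagrant up    # provisions OS, Docker, and Origin; builds Origin from source
```

The same wrapped process is what produces the packaged box shipped on openshift.org, so building locally and using the published box should give equivalent VMs.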
We wanted it to be usable on different hosts. Why? Because most developers usually use Windows — maybe not in this room, but usually Windows — so we wanted the VM to be usable on Windows, Mac, or Linux.

So here I am presenting our own version of the Origin Vagrant VM that we're creating. It is as easy as doing a vagrant up. Once you do a vagrant up and wait for a while, you get a full VM ready to work, with OpenShift in it. There are no additional plugins required: apart from Vagrant and, of course, the virtualization technology you are using, whether that's VirtualBox or libvirt, you don't require anything else. You don't need anything for DNS setup; you don't need anything for configuring different technologies or whatever — just Vagrant. There is no fancy redirection: we are using the service xip.io, which is a hosted online service, to be able to access the applications that you develop on OpenShift. That means it is not suitable for working fully disconnected: you will not be able to access the final application offline, though you will still be able to develop applications on top of OpenShift. But it works very easily, and there is no configuration — it's fully configured for you, you don't have to do anything. And it's fully maintained and supported by me and by what I hope will become a community.
We have a lot of provisioning options. That means that if you want to customize things like the Origin repo or branch to use — let's say there is an engineer who has done something and you see his pull request and want to test it — you just create the Vagrant image with that repo, and you test his pull request. You can set the VM IP: usually we ship with one IP, but if it conflicts with whatever network you are on, you can change it and adapt it to your needs, and then xip.io and all this stuff will get configured relative to the IP you configured. It also configures the OpenShift domain for the applications, which means you can provide whatever domain name you want for the applications. And it allows you to specify what additional capabilities you want to add to the image — we'll see what additional capabilities we're working on, but there is a whole set of things, so you can have a base image, or a base image created with a lot of stuff in it that's additional to OpenShift itself.

We have runtime options. Which runtime options? Memory and CPU, that's it. That means that if you create a VM, package it, and provide it to anybody, the VM will work just by itself: it will be a very small Vagrantfile, and they just need to tailor the memory and CPU to their needs. It works with libvirt and VirtualBox, because we want it to be able to run on different hosts, for Windows, Linux, and Mac users. And it's scripted in bash, which means it should be easy for anybody to understand, to tweak, to change, to modify, to adapt to whatever they need. I have a middleware background, and that's also one reason why I chose bash and not anything more complex like Perl, Python, Go, or whatever other scripting there is.

The creation of the VM follows four simple steps: configure the OS layer, configure the Docker layer, configure the Origin layer, and then provision anything additional on top of that. On the OS layer, you
can configure all the basic packages that you want. We provide the minimum; you can of course tailor it to your needs. We try to make the image as small as possible, so things that are tunable in size, or that can grow in size over time, we try to limit as we learn — like the journal size. We provide a very small journal size, because a developer usually will not look at the journal, and the thing is that once you have a VM working, the journal starts growing and your VM starts behaving slower. On the Docker layer, we just configure Docker and start it. One thing we do, because we are using this as a local environment, is use the loopback file system, because it's the easiest one to set up, and we limit the size of this loopback file system for the containers, so it doesn't grow as big as your VM and it doesn't get to a point where you cannot use the VM anymore. Then on the Origin layer, it's as easy as: go to the Origin repo — whatever you have configured there — check it out, or update it if you already have it, to the latest that's in the repo; build it; then configure Origin; then start it, so you have the full VM with it started; and then add some services on top, like the OpenShift registry, the router, and some templates — whatever is required to have a minimal installation ready to use by developers.

And then we have an additional script, which will grow over time, that provisions additional capabilities onto this base installation. Things like the metrics — maybe you don't need the metrics at all because you are a developer, so you can select whether you want to install it in your VM or not. The logging capabilities — there is a centralized logging capability that's using the ELK stack, and you may or may not want it in your installation, because if you add things to the installation, the VM gets bigger and the installation runs slower. Some templates — we have the enterprise version of the templates, so if you want to use it for demoing in the
enterprise, that's fine too. There is also some work in progress to add additional users with different capabilities and different roles, so you can test things with different roles. There is work to show persistent volume capabilities using NFS — it will set up an NFS server in the VM. And it also allows you to cache, or pull down, all the images, so that every time you want to work, everything is already in your VM and you don't have to wait for the different images to come down from the internet. And as a bonus, we have some scripts that we use to show OpenShift that we also put in there. So if you want to, let's say, show or try a Nexus installation ready to work in an OpenShift environment, there is a script — you just have to run it and it will install a Nexus instance in your environment. A lot of the demos and stuff that we do, we will put in there so you can just try it.

So now I have two videos. One shows how to create the VM from scratch: I just pull down my GitHub repo and I do a vagrant up — I don't have to do anything else — and it starts following the whole provisioning process. It's using libvirt for this one, and it's checking that I don't have anything running on that IP, and it's running all the scripts. You'll see that it updates Fedora, it installs Docker — this takes some time — then we see it executing; all these fancy numbers are the scripts that get run, in the order they are run. And then you'll see the execution of it: we are cloning Origin, we are building Origin, then we are creating the service, then we are creating the router and the base templates, and after that we are installing some additional capabilities. And after we install all the additional capabilities, the installation will be up and running — we'll have an Origin up and running in minutes. I have a decent internet connection at home, so this
for me takes something like six minutes to build, from nothing to a full OpenShift installation, building from source code. And to be honest, most of the time it takes is installing Docker and building Origin. Then this is the installation that you get. As you can see there, you have the IP I am using to access the VM, which is already preset by me, and I'm going to create an application. It's also showing the URL to access that application, using the domain for the applications that I provided, which is the default one. So this is creating a PHP sample application with the sample repository. It's building it — you can see in the logs how it builds the application — then you see that it's been built, and then we can access the application. And this here is where your applications will be exposed, and this is why you require an internet connection. But in the end, if you are building something on Origin, you will require an internet connection anyway, because you will probably be pulling down dependencies for your application. So that's something we assumed was required, and we preferred to do it this way. And now we see that we have the application itself in this VM, and what we see here is that it also has the metrics capability. That's something additional, usually not installed by default; if you want to see the metrics for the application you are developing, you can just add it. And then you can just log in with the regular command line to the VM, and it will just work with the oc client, as some of you have seen in the previous workshops this morning.

Then there is this other option: when you already have your VM created, but you just want to test some capabilities, you don't need to go and create the VM again from scratch, because that would take a lot of time. So you just run the provisioning
step against the existing VM. So this is another video that I have. What I did is, I was looking at this pull request from Fabiano Franz for a new feature that was going to go into the product. What this feature does is add an about box here that allows you to download the client from the web console. So I wanted to test it. How do I test it? I just re-provision the VM with the repo username of the person doing the commit, and the branch — this is information you get in the pull request whenever you are watching one — and you just do a vagrant provision instead of a vagrant up, because your box is already running: you just say, okay, let's re-provision this. It takes some time, which I will skip. Once you access the VM, you can see it's done. So this is a way of having the latest and greatest, or checking whatever you want to check of what the developers have done.

At the beginning I told you what requirements we had for the VM. We wanted it to be open source: it's hosted on GitHub, so you can use it, access it, tune it, and modify it. We wanted it to be based on the community version: it's using Origin. We wanted it to be maintained and up to date: it is, because we are using it to create our own VM, so we have to provide that level of quality. It's easy to understand, because it's written in bash. It's flexible: there are provisioning options — you don't have a lot of them, but you can do some provisioning. You can package it: once you create this installation, you can just do a vagrant package and create one VM that you can reuse, with everything that you have set, so it will just start as a fully configured VM. You can look into features in progress. You can build it whenever you want. And you can use it on different hosts. So check it out, please — that's the link to the repo — or use the package
version. Don't worry, there is a link to the slides at the end where you can get the whole presentation, links to the videos, and everything. Use the packaged version if you want — that's our version on the openshift.org site — and read the usage docs. For the VM that we provide on the Origin site, we also provide a lab, something similar to what Grant did during the morning, that can help you walk through all the usage of OpenShift: creating an application, linking one application with a database, deploying templates, and all this stuff. So don't kill me if you don't like it: send some pull requests, file issues, help us make it better, or just don't use it. Thank you for listening to me, and if you have questions, now is the time for them. If you want this presentation, it's there — this is the only link you really care about, because all the other links are in there. Thank you. Questions?

[Audience: I have a question. How do you do the libvirt provider? Because at that link there is only a VirtualBox provider published.] We don't yet provide the packaged VM for libvirt; we are working on that. But if you want to use the libvirt provider, just clone the GitHub repo and do a vagrant up — as long as you have libvirt configured, it will work. I think we have an issue filed for that particular thing; hopefully soon we'll have the VM published with libvirt on the Origin site. [Steve:] As the person who makes that VM — now that Jorge has made his script, basically all I do to make that VM, honestly, is a little bit of cleaning of the file system, writing it all to blank so that it compresses when it's packaged, and I make a new Vagrantfile, and that's it. There are instructions in the README file, so if you go to the repo there are instructions on how to create your own packaged version.

Another question: what is your maintenance policy? If I use your master, I will always get something relatively new, and you will continuously track the next release or the master of
OpenShift. But if I wanted something more stable, like OpenShift 3.2, do you have a branch? — So, this is Origin, not enterprise, but it relates to some version of Origin. The thing is, because we are maintaining it, we are keeping it up to date with the newest releases. So probably, over time, if you want to use, say, last year's Origin, you would have to go back in the commits for last year's version. We don't have branches or releases of our own. If you navigate to the Origin GitHub repo, you will see something like 32 releases; those correspond to tags. [Steve:] So basically, when I am building the all-in-one VM based on Origin, I go into his script, and in his script you can set an environment variable for the tag — you can set both the repo and the tag or branch. [Jorge:] What we will be doing — and that is something we got from this session, because we do get some ideas — is this: right now there is only one master branch, but for every release published on the Origin side, we will tag it, so you can go back to that official version and use, say, the 1.1.1 release of Origin. Right now, if you just clone the repo, you can run master, or you can specify, as I did for this branch — you can say origin branch, version 1.1.1 — and you will have the 1.1.1 version. The only thing is that it will not get it from a packaged version; it will build it for you out of that branch.

[Audience question about Ansible.] No, we are not using Ansible. [Audience: As far as I know, that is only for Fedora 23.] No, it is for [inaudible]. [Steve:] Okay, so here is the deal. His point was: we as the evangelists wanted to test features as they were going in, and your RPM solution does not work for us. It does not matter, because I cannot go and say, Mikhail has done something — he is testing a new feature — and I want it. Actually, I don't even want the tip; I want Mikhail's branch, where he is checking in all sorts of stuff that might be broken. This script is generic enough: it builds from source.
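The branch selection described above can be sketched like this. The variable names are assumptions for illustration; the script's README has the real ones:

```shell
# Test a specific fork/tag by re-provisioning the running VM.
# Variable names are illustrative; check the repo's README for the real ones.
export ORIGIN_REPO="someuser"      # GitHub user whose fork to build
export ORIGIN_BRANCH="v1.1.1"      # tag or branch to build from source
vagrant provision                  # re-runs the scripts against the running box
```

The same mechanism covers both cases from the talk: pointing at a stable release tag, or at an engineer's work-in-progress branch from a pull request.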
Right. So we don't want the RPMs, because we want to build from source. In my case, when I build the all-in-one VM, I build from a tagged version that was released, like v1.1.1, and then I can match that to the client. But when I am testing Mikhail's stuff, I want to build from his source tree to build an all-in-one Vagrant image. So we don't use RPMs or anything packaged, and we don't use Ansible — none of that stuff — because we had a need; this whole presentation was our need. Probably it's not the best way — we are not engineers, we are not doing the proper approach — but we had a solution that worked for us. If it works for you, which I just explained, then fine, use it. If it doesn't work, use whatever you want. If you do want notes on Ansible, we have a separate set of links for standing up a cluster of machines, and the repo is available. And the point of this is also not to set up a cluster: it's an all-in-one VM that runs everything, that you can easily just do stuff with. You don't get into all that DNS stuff; it's just one VM running, and it acts like an entire server. More questions? Then I'm relieved. Thank you.

[Post-talk crosstalk about lunch plans and the afternoon schedule.] I think we are going to eat somewhere. Not now — we have one hour, something like that. We made a break, because we move as a team. Well, I should say I made a break; I wanted a break.

[Hallway conversation.] I'm following your stuff. That's basically the Vagrant one, so I can use this for the test, and out of that, the next stage... Okay. This is the Ansible installer.
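The Ansible installer just mentioned drives everything from an inventory file. A minimal sketch follows — the IPs are made up, and a real openshift-ansible inventory needs more groups and variables than this:

```shell
# Write a toy inventory of the sort the Ansible installer consumes
# (IPs and group names are made up for illustration).
cat > inventory <<'EOF'
[masters]
10.0.0.10

[nodes]
10.0.0.10
10.0.0.11
10.0.0.12
EOF

# The installer SSHes into every listed IP (using your key) and provisions
# the whole cluster, along the lines of:
#   ansible-playbook -i inventory <path-to-cluster-playbook>.yml
grep -c '^10\.' inventory    # -> 4 host entries
```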
You can set up your own environment, on your own compute engine or on bare metal. If you have a cluster of machines that you've provisioned, you can have something called an inventory file that just has a list of IP addresses, and as long as you have a valid SSH key for access, it will log into all the IPs and use Ansible to provision a cluster of maybe 10 machines.

[Q: Is there any way for the local environment to communicate, like pushing all the changes?] If you've already built an image, then you just do a docker push, and then hopefully there's an image change trigger that fires: any time a new image arrives, the deployment listens to that event and possibly redeploys, depending on how it's configured. So for CI or staging I always auto-deploy; for production, I maybe wait until they press the button and manually deploy.

[Q: And for building the image, you have to do it locally?] Well, I'd use OpenShift to build it. Locally, I could use oc rsync into the environment, or there's a variety of ways to build locally. If I was using a hosted environment, I would just do a git push to GitHub; OpenShift would listen to that webhook, do a build in the cloud, and it should produce the same image, right? But on the other hand, I could have a base builder image on my laptop based on CentOS, and then maybe what they build in staging is on RHEL — maybe they don't want to pay for too many RHEL licenses, so I get CentOS here and RHEL in the cloud, right? They can still build Docker images: I can push and host Docker images based on CentOS, but when it goes to production, it's RHEL. So it's nice to have different ways of producing the images, and allowing people to enforce a standard base, for compliance and consistency across your environment. Thanks. Yeah, you bet.

Yeah, it's not going to be in my presentation about notification.
Like an iPad or something like that. Go ahead. Yeah, thank you. [Crosstalk about the next session.] I'm going to do the improvisation first. Should we prepare something before, or... No, no, we're going to just improvise. Yeah, just improvise. It's a crazy evening. I want you to learn how to improvise.