Thank you. Now obviously you were all here for Audrey and Danny's talk a few minutes ago, and I want to make a few comments about what they talked about. They gave some really good suggestions about getting involved in projects or creating your own projects. I am the best example there is of the total opposite of everything that Danny and Audrey said. If you remember, what they were saying is that if you've got something you need for yourself, go and write something little for it. I have a habit of picking projects which no one else will take on, and they're often infrastructure projects. With infrastructure projects it's often the case that you can't just do something little, put it out there, and hope people will come along and help you build it. The problem with infrastructure projects is that you actually have to do a hell of a lot of work before it becomes useful enough that people can actually use it. So I'm a really bad example for what they're talking about. One of the well-known examples of projects I've done is mod_wsgi, which gives you the ability to host Python web applications in Apache. That's been going for, what, seven to ten years or something, I can't remember how long. I am still the only contributor to it; I've never managed to attract anyone else to work on that project. I have another project called wrapt, which is one for decorators and monkey patching. Again, no other contributors, because it's a technical area which is really, really difficult. And this one, Warp Drive, is my latest one, and it will probably fall into the same category. Hopefully not. I have had a few other projects which have failed, where I've done talks like this before saying hey, here's a really good idea, how about we work on this, and tried to attract people, and I've never been able to get people interested. Hopefully this is not going to be another one.
So this talk is about Warp Drive. I've got this background in Python web application deployment and, to be frank, I think it's too hard. Now, who here has written web apps or tried to deploy them themselves? A fair few hands. Except for the experts over here, think back to the first time you did it. Was it easy to get your web application up and running? Yes or no? The answer was no. A lot of trouble, okay? It's not simple to take your first web application and get it deployed and running, and to me that's a big problem. I don't like that. I want to come up with a nice, simple way of doing things. I'm going for the pie in the sky, which again is not the little thing. So let's look at an example of what is involved in getting a web app deployed, and we'll use the example of Django. How many Django users? A lot, good. Okay, so you know a lot about what I'm going to talk about and what you have to go through. First off, Python virtual environments. If you're not using one, you should, okay? If you are in the habit of using system packages for Python, or installing Python packages into your system Python, please don't do that. Use Python virtual environments: either the virtualenv tool or, in Python 3, you've now got pyvenv. There are a lot of good reasons why, and one of the main ones is keeping you isolated from whatever system packages are already installed for Python, because if you don't, you can start to get conflicts. So rule one: use Python virtual environments. That's our first step, and we're going to create one for Django here. We're then going to install Django, along with all the other packages we're going to need. I'm just showing Django here, but obviously you're going to have other packages you need to install. And I'm not picking on Django by taking it as an example.
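Those first two steps look something like this (assuming Python 3; the environment name is arbitrary):

```shell
# Rule one: create an isolated virtual environment rather than
# installing into the system Python.
python3 -m venv django-env

# Activate it so that python and pip refer to the environment.
. django-env/bin/activate

# Install Django plus whatever other packages the app needs.
pip install Django
```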
Django is actually one of the web frameworks out there which provides a reasonably easy way of getting into things. Some of the others are a lot harder, a lot more piecemeal; you have to construct things yourself. One of the things Django provides is an inbuilt server for starting up your website. So we've created a virtual environment and we've installed our packages. Now, I've assumed we've already got our app here. I haven't gone into how you construct your app, but you could have created one with django-admin startproject. We're going to run it up. This is where things start to get a bit more complicated. The first warning there is that you have unapplied migrations. That's all to do with your website's use of a database, and when you see it, it means you have to have known to run python manage.py migrate. That is the thing that takes the database model information which is part of your Django app and creates all your database structure, and you need to do that. And that's not the only hidden thing you have to do; you can't just launch run server. There are a few other things you potentially need to do. You need to create a superuser: if you don't create a superuser, you will not be able to log into your admin interface. Another one is collecting together static files. Now, this is something that is not needed if you're running the Django development server, but if you are deploying to a production grade WSGI server, whatever it may be, then potentially you have to do that. You definitely had to do this in the past. Django now has a middleware for handling static files which, if you don't use a separate server to host static files, will come into play and serve up all your static files: your style sheets, your images, and so on.
Technically you're better off not relying on Django itself doing it as middleware. You should use a separate server if possible, because it's going to give you better performance, and there are a whole lot of things around that which you can do to increase performance if you're going to start scaling out your website and worrying about lots of users. Finally we can go back and do run server, and this time it works. Now, with run server here we've had to know what port we're going to listen on. If you want it to be accessible outside of your own box with the development server, you have to bind it to a different IP address; normally it will only allow you to connect from your local system. So that's part of the configuration you have to provide for that inbuilt server. And we have it running. We've got our admin sign in, we've created our superuser, we can log in. The styling looks correct because static files are being handled, in this case by the development server, but otherwise by the WSGI server. So what have we learned here? Things can be broken down into two major phases. There are the build steps, which were creating a virtual environment, installing required packages and then collecting the static file assets. Depending on your application you may have other things you need to do there; you may have a data set which you need to import to pre-populate your app. Then, when we come to deployment, you need to set up your database initially, or, if it's a redeploy, you might have to do further migrations, because you've changed your code and changed the database model, so you have to apply those migrations to make the changes and migrate the data. You have to configure the WSGI server for the environment, which in this case meant what port you want to run on and what interfaces you want to listen on.
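Gathered together, the hidden steps from the last few minutes come down to a handful of commands (run from the project directory, inside the virtual environment):

```shell
# Apply the database migrations so the schema matches the models.
python manage.py migrate

# Create an admin account, or you can't log in to /admin.
python manage.py createsuperuser

# Collect static assets into one place for a production server.
python manage.py collectstatic

# Start the development server, binding to all interfaces so it
# is reachable from outside your own box.
python manage.py runserver 0.0.0.0:8000
```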
And then finally you're going to start up the WSGI server. So you've actually got to make a choice of what WSGI server you're going to use. I used the development server here, which is not for production use; I believe it's still single process, single threaded, or is it now multi-threaded, I can't remember. But you should not be using it for production, so you should be looking at the alternatives. The alternatives are mod_wsgi, which is the one I've written, and I should point out mod_wsgi is tied to Apache. You have Gunicorn, which is a pure Python WSGI server, so it's a very simple one to run up, but it's not integrated into any existing web server environment your system may already have running, as is the case with Apache. And there's uWSGI, which does support different ways of running it; you can use it with Apache, but people very rarely do. They tend to pair it with the Nginx web server instead. Now, when we talk about trying to go to a production grade server like that, very quickly things turn into a mess. These are just some tweets I pulled from, I think, the last month of my Twitter feed, where I watch all these different servers. The problems people have are many and varied, and as time goes by it does not improve. Nothing gets better. Part of the problem has been that, historically, especially in the Python web community, people don't cooperate very well. This is why there are billions and billions of different web frameworks. Well, not billions, but there are a lot, and people keep creating more. We need millions, right? WSGI servers are sort of the same thing. I've tried to get people to agree on commonality and it's not been possible. Everyone goes off into their own little silos and likes to do it their own little way. And one of the problems is that once you make a decision about how to do something one particular way, you generally can't go back on it.
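To make that choice concrete, each of those alternatives is launched along these lines (assuming a project whose WSGI entry point lives at mysite/wsgi.py; the name is illustrative):

```shell
# Gunicorn: pure Python, standalone.
gunicorn --bind 0.0.0.0:8000 mysite.wsgi

# uWSGI: standalone here, though usually paired with Nginx.
uwsgi --http :8000 --module mysite.wsgi

# mod_wsgi-express: runs its own Apache instance for you.
mod_wsgi-express start-server mysite/wsgi.py --port 8000
```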
And so different WSGI servers have made decisions about working in a certain way, and they can't change that now, because people are depending on it. So you can't, at the WSGI server level, get anyone cooperating on anything anymore, and it's not a good situation. Some WSGI servers are focused, like Apache and mod_wsgi: my philosophy is that it's there to do one job, which is to work well with Apache, and there's only a certain amount of functionality you can implement within the confines of the WSGI specification, which is the specification for bridging your Python web app with the server. uWSGI, on the other hand, is like a Hydra. I think I'm using the right name: it's that mythical Greek beast which has many heads. It's sprouted out in so many different directions now. The common joke with uWSGI is that if you run uwsgi --help, you get 500 options or something. It's grown into this beast: it started out as Python, and it's now got plugins for Perl and PHP and God knows what other languages. The question is, why should I need to care about all this anyway? All these different options and all these different ways of configuring it. You just want it to work, right? Now, I'm picking on the WSGI servers here, but it's actually not entirely their fault either. There are lots of different hosting environments out there where you can run your web apps. Heroku; DigitalOcean, well, that's more a VPS one, where it's a case of you building it up yourself. But if I just pick on the PaaS ones, all those which try and help you: there's Heroku, there's WebFaction, there's OpenShift, there's Elastic Beanstalk and Google's. Rather than you having to build everything from scratch, these all at least provide a level of functionality where you can just dump a Python web app in.
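Since the WSGI specification keeps coming up: the whole bridge between server and app that it defines comes down to a single callable. A minimal sketch, which every server mentioned here (mod_wsgi, Gunicorn, uWSGI) knows how to call:

```python
# The WSGI interface: one callable, conventionally named
# "application", taking the request environ and a callable
# used to begin the HTTP response.
def application(environ, start_response):
    # environ: a dict of CGI-style request variables.
    body = b'Hello, world!'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    start_response('200 OK', headers)
    # The return value is an iterable of byte strings.
    return [body]
```

However differently the servers are configured, in the end they are all just looking for a callable like this.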
Again, they all behave differently, and it's just a nightmare for you to come to grips with how the different ones work; if you want to have portability from one to another, it's just impossible. There's a bit of a saviour, though, and I'm sure you've heard of this thing called Docker. Who has not heard of Docker? Not a single hand up, right? So Docker came along a few years ago, and I should make the point that it's a marketing buzzword, if you like, because what Docker represents is a standardized image format for containers. Containers have been around for a long time. Google's been using containers since about 2002 or 2003. Heroku, OpenShift, those sorts of platform-as-a-service web hosting offerings, they all use containers. So it's not a new concept, and it goes way, way back in time: Solaris Zones and IBM operating systems have had the concept of containers as well. But Docker has popularized the concept of containers, and the main thing it gave you was a common format for images: how you define the operating system image, if you like, which you're going to run. Why this is appealing, compared to what we had before, is that rather than having to deal with each web hosting infrastructure's way of doing things, or having to create everything from scratch on a virtual machine, here you can easily, with Docker, construct an image which contains your operating system, your Python language runtime and all those bits you need to host a web application, and then you put your web application on top. Now, I keep mentioning Docker, but I can't leave out this other one called rkt, or Rocket. Docker has certainly done a lot to popularize the concept of containerization, but they have upset various people along the way with the direction they're taking it, and that inevitably spawned other similar efforts, such as rkt. So again, it's another image format for doing containerization.
So these people went in two different directions, obviously, and now there's the Open Container Initiative, where they're trying to bring it back to one standard. Anyway, the summary is that containers are great, standardization is coming along for containers, and this is a good thing as far as I'm concerned for trying to make the whole story around deploying stuff a lot easier. Now, one example I can provide of this, and I'm coming from Red Hat, so here's my one marketing slide: OpenShift is a platform-as-a-service offering, and it's been around for about four years, I think. It's like Heroku: the idea is you give it source code, and it will run up your application and get it going. I just wanted to use it as an example of how you can make this experience a lot simpler. If you go into OpenShift, you can say I want to create a new application, and you can select the image, or the template, or whatever you're going to use to create it. What I can do here is go and search on warp drive, and I've got my template here, preloaded, for warp drive, which is the project I'm going to talk more about. I just give my application name and my git repo, I deploy it, and I get an application running. Now, there's nothing special in that repo. It's just my Django project; I haven't really done anything extra in there related to WSGI servers or anything, and I've got my app running already. So this is what I envisage of the simplicity you should be able to get to; you shouldn't have all these dramas that we have had in the past. Now, I've shown that through a web interface. Not everyone likes web interfaces, and if you're familiar with Heroku, you'll know it has a git based workflow where you can essentially take your git repo and push it up to Heroku. OpenShift also has a command line pipeline, and so, again, with one command I can deploy it.
I'm just going to give it where my git repo is, give it the name of the template, warp drive Python 3.4, which is the thing that says how to deploy my app, and set up a few things: the name and the repository location. And again I'll get my application running really quickly. Now, if you're familiar with Heroku and the OpenShift that existed in the past, I should point out that OpenShift has existed for four years and it did use containers all that time, but we've actually gone and rewritten it in the last year and a half, two years. It's all now Docker based, which is why I'm using it as an example. In the latest iteration of OpenShift we've got this thing called source-to-image. It's similar to what Heroku had with buildpacks, but the big difference is that source-to-image is Docker focused. It allows you to very easily take a git repository with all your application source code in it, take a Docker image which is your builder image, and say: point this at that, run my builder image against my application code, and merge them together into one Docker image. That gives me the thing I can then run. Even though I'm describing this in terms of being inside of OpenShift, it's actually a separate tool as well; there's a source-to-image tool you can run on the command line. If you're using Docker now and you're not interested in OpenShift, but you still want an easy way of taking a web app and getting it running, then you can use this source-to-image tool. You run it with my warp drive CentOS Python builder image, which is a Docker image whose source is up on GitHub, point it at the repo with my application, and say: build my image. Simple as that, and then I can run the image. Two steps: build, run. All the build and deployment steps have been managed for you.
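On the command line that works out to something like the following (the repository and image names here are illustrative placeholders, not the exact ones from the talk):

```shell
# Build: combine the builder image with the application source
# from a git repository to produce a runnable application image.
s2i build https://github.com/example/mydjangoapp \
    warpdrive-centos-python myapp-image

# Run: the result is just an ordinary Docker image.
docker run -p 8080:8080 myapp-image
```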
All that stuff we did before for Django: the virtual environment, installing packages, collecting static files, database initialization, migration, all these sorts of things are handled for you, and you don't have to worry about it. Warp Drive is the magic glue that makes all this possible. I set the scene for why, and this is where it all fits in. Now, if you're familiar with Docker, let's drop down even further. I've given a few examples: I can do a really simple deploy with OpenShift, and I can use source-to-image on the command line and get a Docker image. If you don't want to use either of those, you can also drop down further, because what I've got is just a Docker image with all my stuff in it, so if you're using Docker by itself, you can do that as well. If you're familiar with Docker and Dockerfiles, this would now be your Dockerfile. If you've used Docker before to try and create an image for your Python web app, you'll know that you've probably had to do all the installation of your packages yourself, possibly install extra system packages because they're not in the base image; you may have had to install Python. All those steps you don't need to do. It's as simple as: copy in the source code for your application, run a warp drive build, and set up a command for starting up the server. So if we step through those: we copy in the source code and we do a build. This is where you can start to see the steps of what is happening. We copy in the source code, then we run package installation. Warp drive is set up to look for a requirements.txt file in your repository; it will go and install all the packages that are listed in there. If you happen to be using the older way of doing things with a setup.py, it also handles that.
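The Dockerfile being described boils down to roughly this (the base image name here is a placeholder, not the real published one):

```dockerfile
# Base image provides the OS, the Python runtime and the
# warp drive tooling.
FROM warpdrive-python:latest

# Copy in the application source code.
COPY . /app

# Run the warp drive build: install requirements.txt packages,
# collect static files, and so on.
RUN warpdrive build

# Start the auto-configured WSGI server when the container runs.
CMD [ "warpdrive", "start" ]
```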
It will detect that you're running Django and will automatically go in and run the collection of static files for you and get it all into the right spot. It's actually a bit smart here, and I'll show you more about that later: it will run it even if you haven't set up your Django settings to say where to put the files. I'll explain why in a second. And the command we encoded means that when we run this, it's going to run up the server. In this case it has automatically gone and configured mod_wsgi for you. Who here has used mod_wsgi before? A few. And you probably found that really horrible, because you had to go in and modify Apache config files. If you're not familiar with it, I have another project called mod_wsgi-express. It's actually now possible to install mod_wsgi from PyPI: you can go pip install mod_wsgi, and as long as you've got Apache installed, and the Apache developer package installed, it will go and build mod_wsgi for you. When you install that, it provides a script called mod_wsgi-express. It's like a command line program: you just provide the arguments for how to run it, and it will run it, just as if you were running uWSGI or Gunicorn on the command line. Same thing. You don't have to go and configure Apache at all anymore. So what's happened here is that it has realized you've got a WSGI app in there, it has supplied all the command line options to mod_wsgi-express, which has auto-configured Apache for you, and it provides all those important options you need when running inside of a Docker container: logging to the terminal, because Docker likes logs on standard output to capture, the port to use, and it makes the call into the correct WSGI application entry point. All done for you. So, integration possibilities. We can do a docker build and docker run, as I just showed you. We can do an S2I build and feed that into docker run.
So S2I is useful, in comparison to a docker build, in that you don't even need to write a Dockerfile. That one's very useful if you've got a continuous integration, continuous deployment pipeline and you need application images automatically built all the time, because you don't have to manually create Dockerfiles. You could also use docker build or an S2I build to create an image and feed that into OpenShift, with the oc new-app command. This could all be integrated into other container platforms, by which I mean ones running Docker images or equivalent. But you also have the legacy PaaS environments, such as Heroku. Heroku has buildpacks, and I have had an older version of my warp drive stuff working inside of Heroku, so that you could install it there as well. What about local development? It's all well and good to have an easy way of deploying into a production system, but if you still have to do all these steps manually on your local development system, you've still made a lot of work for yourself. So warp drive can also be used locally. Not only can it do things inside of a Docker image, or integrated with a legacy PaaS environment, you can also use it on your own box. The first thing I do here is eval the output of warp drive activate. That's a little bit of magic: warp drive activate is going to create the virtual environment for your app if it doesn't already exist. We can then do a warp drive build; we're inside that virtual environment now, and that will do all those steps of installing packages and collecting static files. And then we run up the server: warp drive start. Now it's all on your local box, and you're using the exact same tooling as you would use in your production system.
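The local workflow just described is three commands (the spelling of the warp drive commands here follows the talk, so treat it as a sketch):

```shell
# Create (or re-enter) the project's virtual environment.
eval "$(warpdrive activate django)"

# Install packages, collect static files, etc.
warpdrive build

# Start the same auto-configured WSGI server as in production.
warpdrive start
```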
And it's all set up to ensure that your app is running under the same environment. This is one of the problems with doing stuff on your local box versus production environments: how you set up environment variables and things like that in production is usually a lot different to your local box, and you end up having to set environment variables manually and so on. In warp drive, you can have the environment variables which need to be set captured as part of your source repo, and they can be either just static values or generated dynamically on the fly as need be. When you say warp drive start, it's going to take all that environment information, set up the environment variables you need, and everything will work as close as possible to what's in your production environment. Now, we talked before about database initialization and database migration. Django has a command for that: you can run python manage.py migrate, and that will do both of those things. But you've also got create superuser, collect static and so on; there are all these special commands you have to run. Warp drive allows you to capture those into what are called action hooks. You basically create a setup script and you put the command in there: python manage.py migrate. And actually, in my example here, I've also got it so that if it's run in an interactive terminal, it will also run python manage.py createsuperuser, and I'll get prompted for my superuser password. Now that you've captured all these commands, and you may have others in there as well, you don't have to remember what they all are. The only command you need to know is warp drive setup, which you run the first time. If you've gone and modified your database models for Django, and you then need to do database migrations or any other extra things, again there's an action hook for migrate.
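Such a setup action hook might look like the following shell script (the file location in the comment is only indicative; where hooks live varies with the warp drive version):

```shell
#!/bin/sh
# Setup action hook, e.g. .warpdrive/action_hooks/setup
# (path shown is indicative only).

# Initialize or update the database schema.
python manage.py migrate

# Only prompt for an admin account when run interactively.
if [ -t 0 ]; then
    python manage.py createsuperuser
fi
```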
You put all your special commands in there, and the only command you need to know is warp drive migrate. Now, doesn't this sound good for onboarding users who know nothing about your existing app? I reckon it's good stuff, because otherwise people look at it like: ah, okay, over here in this obscure document there are all these steps you need to do. Instead, you can say: here's this one command. I mentioned environment variables before. Usually with these sorts of deployments there are all these special environment variables that need to be set up: the Django settings module, or other things like that. Usually they only get set for the app. But what if you need to actually debug the environment that exists at the time your WSGI server and app are run? Normally that's something that is actually a little bit hard. There are two ways you can do things with warp drive. You can go warp drive exec, and then a command such as env. It will set up the exact same environment variables that would exist when your app was started up, and then run that command. Or you can do it as a shell: I can go warp drive shell, and that gives me a shell with exactly the same set of environment variables as when my app starts up, and I can start to debug things. I could go in there and run warp drive start, and it would run up my server manually. So you could go in there, modify the environment variables to override something, and then start it up to see what happens; see if that fixes your problem. So yeah, I'm trying to make all these shortcuts and make sure that you've got the exact same environment, with simple to remember commands. Another thing when working locally: great, you've got your app running locally, but what if you now want to take the actual code sitting in your directory, which you haven't committed, and see if it's going to run properly in your production, or more production-like, environment?
You can go warp drive image and give it a name, django, and it will actually fire off this source-to-image tool and create you a Docker image. So, the principal commands: build for building it, start to start the server, setup for database initialization, migrate for database migration, image for creating an image, exec for running a command, and shell for creating a shell environment. Then there are all these action hooks, so it's not an environment where you totally lose control; you can add on special steps for various things. Pre-build, which runs before the pip install of your requirements, so you can install extra packages if you need to. Build, for extra steps run as part of the build. You can do stuff with setting up environment variables and configuration as part of the deploy, or other deploy steps. And obviously there are setup and migrate. And it's not just tied to one particular way of running up an app. This example used a Django app: in that case, it will look to see if you've got a manage.py, realize it's Django, and do special things for Django. If you've got a wsgi.py file, it will run it up as a normal WSGI app. If you've got an app.py containing a Tornado or Twisted application, it will just run the app.py. If you've got a shell script, because you want to run a Jupyter notebook, you dump an app.sh in there. It's flexible enough to detect all these different things, and it does it automatically. And you don't have to say how to run the WSGI server, because it will know: I'm going to run this WSGI server, and I'm going to configure it for you in the way that you need. In the case of Django, even if you haven't said in your Django settings where to generate static files, warp drive will go and do that: generate the static files to a particular location and tell Apache or uWSGI how to serve those static files using their inbuilt support.
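The detection being described amounts to checking for marker files in the application directory. A simplified sketch of the idea (my illustration, not warp drive's actual code):

```python
import os

# Marker files checked in priority order, per the talk:
# manage.py -> Django, wsgi.py -> plain WSGI app,
# app.py -> run as a script, app.sh -> shell script.
_MARKERS = [
    ('manage.py', 'django'),
    ('wsgi.py', 'wsgi'),
    ('app.py', 'script'),
    ('app.sh', 'shell'),
]

def detect_app_type(directory):
    """Guess how to launch the app from the files present in it."""
    for filename, app_type in _MARKERS:
        if os.path.exists(os.path.join(directory, filename)):
            return app_type
    return 'unknown'
```

The point is that the user never states the app type; the tooling infers it and then picks and configures the right server.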
Or, if it's Gunicorn, it will automatically wrap your Django app with WhiteNoise to serve up those static files. And it's not just mod_wsgi. This example used mod_wsgi, but you can tell it you want to use Gunicorn, uWSGI or Waitress instead, and it will do the same thing: auto-configure everything. So, my goals. I want to build a best-of-breed Docker image for Python web application deployment. I just don't like how I see everyone out there building one themselves. Even the ones that are out there, like the official Python base image from Docker themselves, are not, to my mind, done in a very good way. It's getting very bloated, it promotes the idea that you should run stuff as root, which is a really bad idea, and there are a lot of other issues. On this base image I want an integrated builder script, which is what warp drive is providing. It provides that easy way of taking your code and getting a final image, doing all the steps for you. And I want local development using the same scripts and workflow as you've got in production, so you've got a much closer match: if you're running, on your local box, the WSGI server that you will run in production, you're more likely to find a problem. And I want to make this work with all the major hosting services. Anything that's Docker based is quite simple; legacy ones like Heroku are a bit more work. But what do I need? Not a lot. This is where I get back to why I'm not a good example for Audrey and Danny, because obviously this is a technical project and I'm the subject matter expert on it, having been doing mod_wsgi for ten years. So all I'm after is for people to go back, look at what I'm doing, and give me feedback. I'm not at the point of expecting that anyone's going to jump in. But hey, if you want to jump in and help, that's even better. I'm just not relying on it. Okay? Now, the docs on this: I've said a bit about this project on my blog.
And if you go to getwarp.org, there's a bit of information there. I'm going to be writing a lot more on this over the next month, because I'm going to be on holidays for about three weeks; I'm going to sit up in KL, actually, writing docs. And you can message me on Twitter. You can at least tell me: this sounds like a great idea. It'll give me encouragement, because one of the big problems, which came up a little bit in Danny and Audrey's talk, is that fear of putting stuff out there. With an infrastructure project like this, especially, you worry a lot about putting it out there, because you know it's something where I have to create a lot to start with, and I'm more than likely going to be the main person, or only person, who has to support it down the track. So encouragement is great. Now, I did mention OpenShift before as one way of seeing how this works. I'm going to give you a freebie here: go to that URL, and there's a book there you can download for free. This is a book that I wrote with my boss; we sort of rush-wrote it in the last few months. It explains how you can run a version of OpenShift on your local box in a VM, using VirtualBox and Vagrant. And that image already has those warp drive Python templates loaded into it. So you can play around with this, see if it works, break it, and then go back and tell me: no, it doesn't work, or: I like this, and so on, if you want to try that. Was there anything else? No, that was my last slide. And that's it. So, another one of my pie in the sky projects. I've had a number of failures in the past, so this won't be the first if it fails again. I hope it will be interesting; I hope I can get some momentum on this and people find it interesting.
Because I think it's an area where Docker, and the whole containerized environment especially, gives us an opportunity to improve things for everyone. We can come up with one really good way that everyone uses, and everyone gets in and makes it better and better and better, whereas at the moment everyone's doing things themselves. And I think we're going to squander a really good opportunity if we don't get on the containerization bandwagon, whether it's Docker or rkt or whatever you're going to call it, make the best of this, and not repeat a lot of the mistakes we've made in the past. And that's it. And I expect a really good question from Danny or Audrey. Anyone got any questions?

So that was awesome, and a good talk. Do you support SSL certificates?

Okay. Where things are heading in platform-as-a-service environments is that I don't need to, because the platform-as-a-service environment handles that. It does the SSL. So my images don't support SSL themselves, because they don't usually need to: Heroku deals with SSL, or OpenShift deals with SSL. And even if you're using Docker yourself, generally you're going to need a routing layer in front of it to handle load balancing, such as HAProxy; put your cert in there.

I have a second part to this question. Right now with Cookiecutter Django we're making our way through getting the Docker integration to be pretty seamless, and we're duplicating a lot of the work you're doing. Would you consider putting in SSL setup, and then we could probably strip off all of our Docker stuff and just use warp drive? Would you be open to that?

I'd have to go and look at it, but knowing what I've put into this, I wouldn't be surprised if there's a huge amount of overlap, actually.

Yes, I can tell you there is overlap. Audrey and I are sitting here saying, wow, he's doing everything we're struggling with. We should have just used this.
But we need the SSL stuff, so that would be...

We'll talk about it; I'll have a look at what you've done. Any other questions?

On Heroku they can handle the images and the static files, but how about OpenShift?

Okay, so the question was whether on OpenShift you can handle serving up the static files. There are really two parts to this. The problem with Heroku serving static files is more that they give you limited options for what server you can run. Most of the time people will run Gunicorn, even though personally I don't think Gunicorn is a very good choice for Heroku, because it defaults to multi-process, single-threaded, which means you have to have a lot of copies of your process and use up more memory. You very quickly run out of your memory allowance on Heroku and have to start paying Heroku more money. Multi-threaded servers are better. But you can still run uWSGI easily on Heroku, and it provides a means of serving static files from uWSGI itself, so that your Python app doesn't need to do it. Now, the performance is not as good as if you use a full-on server like nginx, but you can do it. There isn't really a technical limitation on handling static files which are non-changing. Now, the other case, which maybe is what you're thinking of: take the example of Wagtail CMS. It's a content management system, but you can upload files: images or documents that you're then actually going to make available from that website. Heroku does not have any local filesystem persistence; OpenShift does. So if you were in that second scenario, then yes, OpenShift can help, because we support persistent volumes.

So the question everyone heard was: what are the benefits of using warp drive instead of existing tooling such as Ansible, or just doing it yourself?
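As an aside on the Gunicorn point above: the memory cost of its multi-process, single-threaded default can be traded for threads through the worker configuration. A minimal sketch of a `gunicorn.conf.py`, with purely illustrative values (Gunicorn config files are plain Python, so this is just an assumption of sensible numbers, not a recommendation):

```python
# gunicorn.conf.py -- hypothetical threaded-worker configuration.
# Fewer worker processes means fewer in-memory copies of the app;
# concurrency comes from a thread pool inside each process instead.
bind = "0.0.0.0:8000"     # illustrative listen address
workers = 2               # a couple of processes, not one per request
worker_class = "gthread"  # Gunicorn's threaded worker type
threads = 8               # threads per worker process
```

With settings along these lines, sixteen concurrent requests can be handled by two processes rather than sixteen, which matters under Heroku-style memory limits.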
The problem I see out there time and time again is people constructing Docker images themselves who are not following best practices. And this gets back to a comment I made before about projects: if you make a decision very early on and it's a wrong decision, you're stuck with it forever after. Docker the company has sort of done that a little bit, in that there are all these Docker images out there which expect you to run them as root. And seriously, you wouldn't run your web app on your own system as root, right? But they have all these images out there. So a best practice is not to run stuff as root, and that's just one of many. There are all these other things one can do to improve the security of stuff running inside a Docker image, and the best ways of setting up and using Python virtual environments, since we're talking about a Python app, and other things like that. And I just don't see people out there building Docker images for Python which follow all these best practices that exist. This is where, with warp drive, the belief is that you can come up with a Docker base image and tooling which incorporate all these best practices, so that people are in a good starting position, not in a position where they've possibly got insecure images. Now, Ansible as a mechanism for ensuring you're following best practices may well be a way of doing it. The question is whether the recipes exist to actually create those things with best practice, and I don't know. Any other questions? Okay, you can get me at the break.

Now, I mentioned that in the book. That one. If you're really, really keen to play around with this and see it while I'm here, I believe I still have a copy of the whole VirtualBox Vagrant image on my system here, which I can copy over to a USB drive for you. I've got some USB drives to give away as well. I would not download this over the network here.
The image, I think, is currently two gigabytes, and I don't think that'll go too well. Please don't all jump on and download it. Okay, so thank you. And we can talk later.