Well, hello everybody and welcome again to another OpenShift Commons briefing. This time Cloudsoft, a longtime member of the OpenShift Commons that hasn't given a briefing yet. I'm really pleased that they're doing one today and they're going to talk about keeping OpenShift evergreen using their Cloudsoft AMP. They've done a lot of work with Apache Brooklyn. I'm really pleased to have Andrew Kennedy online. I've seen him speak many times at conferences and he's going to give us a bit of an overview of Cloudsoft and then he's going to do some really good demos. So I'm really looking forward to it. The format for this is, as always, ask questions in the chat. I'm going to try and get Andrew to do most of his presentation and then I'll open up the lines after he's done. So we have a full hour. We'll see how long he takes to do the demos and everything, and we'll pray that the demo gods are with us. Andrew, I'm going to let you take it away with that. All right. Thank you very much, Diane. So what I'd like to talk about here is OpenShift, evergreen OpenShift, and how we can manage to do that using Cloudsoft AMP. So first off, a bit about myself. I work for a company called Cloudsoft. We are involved in the open source community. As mentioned, Apache Brooklyn is one of the main projects that we contribute to. So my work is as a software engineer developing software for Apache Brooklyn. And we also provide that as a commercial product, as Cloudsoft AMP, with support. So I'll begin with an introduction to Cloudsoft AMP, just explaining what it is, what it can do — it's application management software. So just showing a bit about how all of that works, what it's useful for, in case you've never met it before. And then I'll move over to how we can use Cloudsoft AMP with OpenShift and the various things that we can do there with the two pieces of software. 
Deploying OpenShift, managing OpenShift at runtime, how we do that management and how that lets us do the evergreening — managing to keep our OpenShift clusters running smoothly and up to date. So I'll begin with the introduction to Cloudsoft AMP. And it's, as I said, Apache Brooklyn; it's the commercial version of it. And Brooklyn and AMP were developed to solve various problems. So IT and software today — ecosystems are large, they're complex, and they're fast moving. So there are bits of software that are considered essential today that no one had heard of three, four years ago. Docker is the canonical example there. And although they're essential, they're complicated. They're not the simplest of things to get your head around sometimes. Because they're new and exciting, they're not perhaps fully documented yet. They're at version zero point something, but they're still useful. So people still want to get these things out there. And finally, there are instructions: here is how to install it, here's how to get it out onto your systems, into your data center. But then you come to the day-to-day operations, the runtime management. And often that's an afterthought. So Cloudsoft AMP is designed to solve some of these problems, make a few of these things easier. And the way it does that is by modeling applications. So it lets you describe what your application will look like, describe the components it's built up of, and capture those in a blueprint. It can then deploy that — so instantiate that blueprint into a cloud or into a container ecosystem somewhere: into Docker, into Amazon Web Services, into OpenStack, wherever. And then, once it's deployed, it can manage that deployed application. And this isn't based on some particular piece of software; it's a generic capability. So it understands how to sense properties: it can go and talk to REST APIs, it can run command-line commands over SSH and pull back attributes. 
Find out what's going on with your software and then make changes to that — so, manage it. Perhaps, you know, the CPU load is too high in a cluster: it can scale that out, or if things quieten down, scale them back in. It lets you make changes to software as it's running. And we can do this pretty much anywhere that we have a supported driver for. So we support most public clouds — there are examples there: OpenStack, Azure, Amazon Web Services — so we can deploy to public clouds. We can deploy to private clouds, so your VMwares, the OpenStacks and Cloud Foundries. They can be on physical infrastructure — so bare metal machines that you simply have credentials for — or virtualized, so again your VMwares and your OpenStacks. And this is completely separate to the application blueprint. So the application blueprint describes the app, and then we can send that off to any of these targets. And the things we can create there, we have built-in blueprints for — multi-tier apps, the ELK stack, all sorts of various useful open source components are available. And you can link those together and build them up into complex apps of your own. You can bring in your own line-of-business software — your stuff written in Python, in Go, in Java — and include those, link those into your blueprints. So once we have the application deployed using AMP, maybe, you know, just onto some public cloud somewhere, we want to manage that at runtime. And I mentioned policies and doing things like scaling out and scaling down. So we use metrics that we pull back from the app, and we can look at just properties of the machines that the application is deployed to, or actual application-specific things. Maybe we dig into your database, find transactions per second; maybe we look at a web service, find out the latency. And we can use these metrics to decide what sort of actions to perform on your deployed application. 
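As a concrete sketch of the kind of metric-driven policy being described, here is an illustrative Brooklyn/AMP-style blueprint fragment. The policy class is a real Apache Brooklyn one, but the sensor name and thresholds are invented for the example and the exact config keys depend on the Brooklyn/AMP version:

```yaml
# Hypothetical blueprint fragment: attach an autoscaling policy to a
# cluster entity. The sensor name and bounds are example values only.
brooklyn.policies:
  - type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
    brooklyn.config:
      metric: webapp.reqs.perSec.perNode   # assumed application sensor
      metricLowerBound: 10                 # scale in below this
      metricUpperBound: 100                # scale out above this
      minPoolSize: 2
      maxPoolSize: 10
```

The idea is exactly what the talk describes: sensors feed a metric back into the policy, and the policy resizes the cluster between the configured bounds.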
And these actions, again, there's a specific library of actions. So how to scale a cluster of things is not intrinsically dependent on what those things are. So you can build up a scalable application out of individual components that you simply pulled out of a catalog. So it's quite a powerful capability. And it lets you concentrate on designing the app rather than worrying about high availability policies and mechanisms and worrying about what sort of monitoring software. These are just components that can be added in to your blueprint. So this lets you, as I said, concentrate on the application more. It lets you think about how your business is using the app. So you're not worried about infrastructure. And that's, I think, the important takeaway. You design the app. You think about the app. You don't worry about the underlying cloud. You don't worry about the underlying bare metal machines. We take care of that. So as long as you have a blueprint, you're able to send that off to some cloud somewhere. And finally, as well as being able to deploy applications to the cloud — so that's all well and good for deploying something like OpenShift, which is the first thing that we would need to do. But then the applications you're going to deploy, they're no longer going to the cloud. They're actually going to OpenShift. They're containers. They're not software running on virtual machines. And Brooklyn and AMP are also able to deploy to container infrastructures. We're able to stand up a Kubernetes or a Docker Swarm or, as I'm going to show you, an OpenShift infrastructure — build that out, provision it. But then the applications that you want to run, we can also deploy those to OpenShift. So OpenShift is then not just the thing you deploy, but it's also a target. And you can even construct applications where some part of them is running as a pod inside your OpenShift environment. 
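As an illustrative sketch of what "OpenShift as a target" could look like in a blueprint — note the location type, endpoint and config keys here are assumptions for the example, not the exact Clocker/AMP schema:

```yaml
# Hypothetical: point a blueprint at a container platform rather than
# a cloud. Endpoint and credentials are placeholder example values.
location:
  kubernetes:
    endpoint: https://openshift-master.example.local:8443
    identity: admin
    credential: changeme
services:
  - type: docker:redis:3   # run a Redis container as a pod on the target
```

The point is that the same blueprint mechanism used to stand OpenShift up can also treat the running OpenShift as a deployment target.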
And other components — an Oracle database, say — are running on a beefy bare-metal machine sitting next to it in your data center. So we're able to deploy, and deploy to, containerized application environments. So I'd like to next show how we can apply all that to OpenShift: how we can deploy and manage an OpenShift cluster. So we want to deploy OpenShift into the cloud. We've got an account on AWS, or maybe we've got a private on-premise OpenStack environment. And it's pretty simple to work out what you need to do to stand up OpenShift because Red Hat provides Ansible playbooks. But if you recall back at the beginning, one of the things that it's quite easy to do is to install an application, but then runtime management is something that is added on perhaps as an afterthought. So AMP is able to use Ansible for what it's good at. We don't pretend to know better than Red Hat how to deploy OpenShift. The canonical and best practices are described in the Ansible playbooks that are provided on GitHub by Red Hat, so we simply use those. And we can configure that. AMP has quite a powerful configuration language, a powerful DSL, that lets us describe how we want to build our cluster, so we can configure all the component parts. But AMP is also then able to introspect into the deployed OpenShift and see what's happening to these virtual machines in real time — what's happening to the OpenShift processes that are running, the Kubernetes components, the Docker components. And we can build up some quite complex mechanisms to do some very useful things to this deployed cluster from some basic primitive actions, like increase the number of nodes in a cluster or execute this command on this node of this cluster. So we take something like this. This is a very simple blueprint that describes how to deploy an OpenShift Origin template. 
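The sort of blueprint being described might look roughly like this — an illustrative sketch only; the service type name and config keys here are assumptions, not the exact schema:

```yaml
# Hypothetical sketch of an OpenShift Origin blueprint: high-level
# configuration that AMP feeds into the Red Hat Ansible playbooks.
name: openshift-origin-cluster
location: my-openstack            # assumed location id
services:
  - type: openshift-origin-template
    brooklyn.config:
      openshift.version: "1.4"    # example version
      masters.size: 2
      etcd.size: 3
      nodes.size: 5
      provisioning.properties:    # machine spec for the new VMs
        minCores: 2
        minRam: 8gb
```

The blueprint stays at this level of intent; the nuts and bolts of installation live in the playbooks.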
And this doesn't contain the nuts and bolts of the deployment, but is rather the configuration that is then fed into the Ansible playbook. So you can see the OpenShift version. You can see the size of the master cluster — there are two. We have three etcd machines, five nodes. So we're going to build a five-node cluster, and some specifications for the machines that we're going to create for these OpenShift nodes, with the number of cores and the amount of RAM. So AMP takes this, and we have a nice graphical UI that lets us have a look at what we're deploying. So here you can see just exactly what was shown in that bit of YAML before, but perhaps in a slightly friendlier fashion. You can see our OpenShift cluster has been highlighted and you can see the table of properties down there to the left. And we would simply click deploy, or save it to a catalog, but in this case we're going to deploy that and we get a running application. So this is the part of the application that contains that five-node cluster. There are five OpenShift nodes there. You can see them with the little OpenShift logo. And we can also look at the whole cluster. So there's a tree structure here. You can see there's also the masters and the etcd part. We're not just creating a single entity that represents a cluster and that's it, an opaque black box. You can introspect right down into the individual machines. So I'm going to try and flip across and have a look at this live. I'm not going to show it deploying because, to be honest, that's quite boring. You've got to sit and wait while yum updates a whole bunch of packages. You've got to sit and wait while things are pulled down from remote repos. This is one that I've already deployed. And you can see the structure. It's easier to start here. So we can see we have OpenShift Origin. We have a little entity that manages our host files so that the individual machines can talk to each other. 
This could be replaced with a bind server if you wanted, but for purposes of simple demos, I'm just using /etc/hosts. And we have an Ansible server; it pulled down the playbooks from GitHub, and that's where we execute the commands to deploy and create this OpenShift Origin cluster. And you can see the main URI with our private OpenShift hostname — it's not actually in the public DNS. So if we drill down a little bit, we can see there's a load balancer, some masters, an etcd cluster and some nodes. etcd — we have three of these. The masters — it's a pair of master servers and a single load balancer. And we've already seen the five-node OpenShift node cluster. So going back here, if I look at the individual OpenShift nodes, we look at sensors. And sensors are the mechanism that AMP and Brooklyn use to pull data back from things they've deployed, so it can then use them for runtime management. So you can see it's got the host address — I'm doing this on a VPN, so they're all private 10-dot addresses rather than public. There's information on where it's installed software, and it'll pull back the whole YAML document that describes an OpenShift node status — and in fact, this can also be processed. So this is simply the first part. So this is a YAML document showing — we can read here the pod capacity of 20, the amount of memory — and that can then be fed into a policy that will extract out, perhaps, the memory status out of these messages, and the policy can then decide: really, I need to do something, this node is unhappy. I need to restart the service. I need to maybe delete it, shut down all the pods running on it and go and start a new node. But we can see here the OpenShift status: it's happy, it's running, services are up. It's a happy cluster; it's doing the right thing. 
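For reference, the node-status document a sensor like this pulls back is the standard Kubernetes node status reported by the master; abridged, it has roughly this shape (values here are examples):

```yaml
# Abridged Kubernetes/OpenShift node status. A policy can extract
# fields such as capacity.pods or the Ready condition and act on them.
status:
  capacity:
    cpu: "2"
    memory: 8010772Ki
    pods: "20"          # the pod capacity mentioned above
  conditions:
    - type: Ready
      status: "True"
      reason: KubeletReady
```

A policy watching `conditions` can notice an unhappy node and trigger a restart or a replacement, as described above.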
If we look at the master nodes, again, we can see the same sort of thing, and we're pulling back information about versions — which particular bit of software ended up being installed by that Ansible playbook. And the cluster itself — I'll just pop open the OpenShift console. So it's a pretty standard OpenShift. We're running a single Redis pod on this, and I'll come back to that, but we're just running a single pod. It's not doing anything particularly interesting at the moment. So that's our runtime-managed and deployed OpenShift cluster. AMP has created all of this. In fact, the final thing that I'll show is the activity going on, and show where it came from. We provisioned this on a private OpenStack environment. So it created the virtual machines — it provisioned them on OpenStack — and it's then executed a series of actions. It's installed some software, customised it and launched it. And then the same thing with our Ansible server: we provisioned an Ansible server and it then executed the playbook. And the playbook turned all these virtual machines into that running OpenShift cluster that you saw there on the console, the OpenShift console UI. So I'm going to leave that there for now. We'll come back to this runtime management and actually execute some runtime commands on it. But I'd like to explain a bit more about how that actually works first. So I'm going to move on to automation and describe how we can automate actions in OpenShift. So we're not just deploying; this is automating tasks that you might want to do while the cluster is running. So we want to monitor the cluster activity. I mentioned policies that would get, say, the number of pods. So suppose I have five nodes that are configured for a maximum of 10 pods each. 
I'd need to monitor, and perhaps once I hit some high watermark, say 40 pods, I know that actually I'm going to run out of space soon because I'm going to hit my limit of 50, so I need to do something about that. So the first thing we need to do is monitor the cluster — look at these sensors, look at the properties of the cluster and pull them back into policies so that we can do something useful. Then we need to actually do that useful thing. So, adding new nodes — and that's reasonably simple for AMP because, well, you saw it can deploy to the cloud. So to add a new node to the cluster, we simply provision a new virtual machine and, helpfully, Red Hat have provided further playbooks for Ansible that we can use to extend the cluster. So there's not just a playbook for deploying; there are playbooks to scale up and scale down. So AMP's able to orchestrate those and execute these further playbooks that will scale out when it needs to. Other things we might need to do: we might need to replace machines. Perhaps a process has failed, maybe the disk has failed, maybe there's a bad network connection. We don't know, but our cluster of 5 has become a cluster of 4. So to regain that capacity — again, it's the equivalent of adding a new node; it's just a question of when it happens. So we monitor for failures and we replace the failed machines, and either delete the failed ones or keep them and investigate later: what was it that happened? Was it someone hacking in? Was it just a dead virtual machine, or cosmic rays, something like that? We can grow the cluster, so adding new nodes when we need to because there's excessive load, but perhaps we also want to anticipate that. So we can provide a policy that follows the sun, so to speak: 9 till 5 we increase the size of our cluster because we know that's when most of the jobs are running. 
When five o'clock comes, we scale it back down to some minimal level, and we're not wasting money — our Amazon compute bill is not growing enormously when we're doing nothing with the cluster. And finally we might want to update the machines in the cluster. We might want to update the operating system, we might want to deploy patches or perform some kind of operational maintenance of the underlying infrastructure — and hopefully do this without upsetting customers that are using the cluster, without affecting the applications. We simply want to make sure we have the right security patches, we're up to date with the latest services running on our cluster. So these are all the things that we'd like to do, and the way we do this is with effectors. So the AMP blueprint creates that tree structure of entities that you saw: we have the cluster, it has children — five children, the nodes running in there — and each of those nodes has not just the sensors defined on it, but it also has effectors. So these effectors, they're ways to perform tasks. Each entity can have certain tasks performed on it, and which tasks you can perform is based on the type of entity. So there are some things it makes sense to do on an OpenShift node, there are some things it makes sense to do on the master, and other things that it only makes sense to do on, say, the Ansible server. 
So only the Ansible server understands how to run the scale-out playbook, whereas restarting the OpenShift process that's running on a node — well, that only makes sense to happen on the OpenShift node. Or management of pods — that only makes sense, well, it's the masters that would be able to do that, perhaps by running oc command-line commands or executing some HTTP REST API request against the master or against the load balancer. And resizing of the cluster as well — the thing that knows how to do that is the cluster component itself. It knows how to add another one of the things that it is managing a cluster of, so the master cluster understands how to add a new master, the node cluster can add a new OpenShift node, and so forth. So all these effectors are defined on these individual nodes, and they perform sometimes simple actions, sometimes a little bit more complex. And I'll show you an example here. This is the definition from the actual underlying blueprint, and it's just a YAML file. It describes the structure — that tree structure — but it also describes these effectors, these actions. And this one, it happens to be selecting the oldest nodes, but you see the underlying way it does that: it's simply an oc command, and we've worked out the correct parameters to pass in. We want the K oldest nodes, so we pass in this $K. And we can build up some quite complex things from just very simple components. So we can define on nodes all of these little effectors that do various quite simple things, but you then get a very powerful capability out of that because you can compose them together. So, select oldest nodes — once we've selected them, well, maybe we want to do something with them. And in fact the evergreening, the whole evergreening process, is simply a composition of a lot of these very small effectors, these primitive actions, pulled together and bundled together to perform useful operational actions on your cluster. And evergreening is just one of those; it's just an example. You could do 
pretty much anything you want. So we're able to combine these tasks, like select the oldest, and we can do different things. We could sequence some tasks: do A, then B, then C. We could compose them together: do A, take its output and feed that into B; take B's output, feed that into C. Looping: here's a list of nodes, just do X to each of them — maybe X is restart the service, so we loop through the list of nodes restarting the service. Choices: if the service is unhealthy then do X, otherwise do Y. Transformations and replacements: we can transform the content or the output of some command — maybe the output is an IP address and we want to find the host name. So all of these little compositional mechanisms can be used to build these useful tasks. In fact, that's how we build a task like repave. So we want to keep our cluster fresh — we use the phrase "evergreening the cluster". So we do that using simple primitive components, and AMP lets us build those up into a task like repave. So what exactly do I mean by repave and evergreening? If you're not familiar with it, evergreening is the process of making sure that your cluster doesn't get stale — that the operating systems, the patch sets, the security hot fixes that are deployed, that you're always up to date. So one way of evergreening your cluster is to repave nodes. Repaving, basically, is a simple set of steps laid out here. So we resize our cluster: we've got, say, five nodes, and we resize it so we've got six. And that sixth node is going to be created using a fresh operating system, and when we do the update — you know, we call an update command — it's going to pull down the latest security patches, it's going to pull down the latest packages. So we know that when we create a new virtual machine we get the fresh operating system, the fresh patch set and the fresh packages that we need to update the cluster. But that's just the virtual machine. So yes, we've resized the cluster, we've 
got a new fresh VM that's got the latest set of hot fixes on it. We then need to scale out OpenShift — scale out onto this new VM — and that's where Ansible comes into play. Ansible helps us by giving us the scale-out playbook; we kick that off and the new fresh virtual machine becomes an OpenShift node. But now we've got a sixth node — we've got a bigger cluster than we wanted. So that's where that select-oldest little task that I showed you comes in. We want to get rid of some of the old nodes because we don't need them any more; we've got a nice new fresh one. So we find the oldest and then we run the OpenShift commands, and we say: let's stop these nodes, take them out of the cluster, stop all the tasks on them. So they're running some pods, they're running some processes — we stop those processes, stop the pods, or rather tell OpenShift to do that, and OpenShift will helpfully stop them and restart them on some machine that actually has space for them. So OpenShift will drain tasks and move them over to, hopefully, our new virtual machine. And we can then get rid of these stopped nodes that aren't running anything anymore and are just taking up space, taking up CPU and memory in our cluster and adding to our monthly bill on AWS. So that lets us repave a single node, or perhaps a couple of nodes. And if we want to make sure our entire cluster is evergreen, we simply schedule this. We say: we've got an N-node cluster, let's run it N times. So, a five-node cluster — we'll run it five times, once a week. And each of those five times will clean up one node at a time, and every week we know our cluster is evergreen. And if there's a problem, well, we can stop after the first one. We can say, oh, okay, there's something wrong with this, and we can go back — the cluster is not going to crash and burn because we just changed everything all at once. It's a similar mechanism to, say, rolling deployments — the same sort of concept. We're doing one at a time and making sure that, yes, that was 
okay, and we can carry on. So I'm going to kick off a demo of this and show you how repave actually works. So first off, here are the effectors. We have repave cluster on the OpenShift Origin, but I'm not actually going to use that, because what this would do is repave all five of the machines, all five of these OpenShift nodes, and that's going to take a bit more time than we've got available to us. So instead we can execute the repave nodes, which is sitting here, and repave nodes simply repaves K, where K is some parameter, and we can repave a single node and that will happen a bit faster. And actually this is a good place to look, because you can see in this list of effectors things that an Ansible server can do. You can see that the Ansible server knows how to scale out nodes; it knows how to stop machines, how to update nodes. And if we look further at the master, we'll find that the other things I talked about — disabling scheduling, evacuating nodes, selecting oldest — they all live on the OpenShift master, because it's able to run OpenShift commands across the entire cluster. So before we go and start the repave, what I'd like to do — here's our OpenShift console, so we're going to flip to the OpenShift web console. I've got a demo project, as you saw before, and here's our Redis. So let's look at the deployments — there's only one replica there. What I'd like to do before this is to scale this up to five, because we've got five nodes. So now I go to Applications, Deployments... Applications, Pods, so we can see we had our initial running Redis, and we're going to have five Redis services running. And if I pop to the command line — it's a list of nodes, so we can see here's all our nodes that started this cluster an hour ago. And I'm also going to look at the list of pods, and we can see here's our one, two, three, four, five pods. So we can see the pods running, and you can see they're each running on a separate node. So we'll leave this up here and go back to our AMP. So, as I said, we'd like to repave, but we'd 
like to just repave a single node. So let's invoke this — I'm going to set that to one, and confirm. So that started the repave process, and if we look, there you go: it's scaling up, it's resizing. And what's happening with resize nodes: it's asking OpenStack for a new virtual machine, and it's then going to run through — it's going to run the Ansible playbook, and so on and so forth. But I think it's perhaps more instructive, rather than watching this, to use a sort of visual diagram that shows more clearly how exactly this is all happening. So we started with the initial OpenShift cluster. You can see there are nodes, each node is running Docker Engine and OpenShift, and we have pods — okay, there are a lot more pods running here than on my little demo cluster, but you see the light green ones; there's some application that is spread across all five nodes. So the first thing that happens is AMP resizes that cluster, and in this case it adds two more nodes. So we see those at the right, and these are what I talked about — the evergreen, the fresh nodes with the fresh security patches and so on; we know they're updated. Then we move on to Ansible. Ansible has the scale-out playbook, and AMP is telling Ansible: execute this playbook. That causes these fresh nodes to have Docker Engine and OpenShift put onto them. So we've now gone from five to seven nodes and are now running a seven-node OpenShift cluster. But we don't want two of these, so we need to find a couple of the oldest ones — and it so happens these are the ones with dotted lines around them. So AMP is able to run that little bit of shell command that I showed you, and it selected the two oldest OpenShift nodes. So next we want to stop anything further being deployed to those — they're the old ones, we're going to get rid of them — so we turn off scheduling, which is again a single command. And then after that we drain the pods from those nodes. So you can see the dotted lines: we've gotten rid of the pods running on these old 
nodes, and OpenShift has helpfully started them up on the two nodes that we created. And this is all possible — if you remember, the way OpenShift likes things to happen, you're running stateless pods, you're running stateless images, so we can stop and start and move things around and our end users are going to be none the wiser. They're going to see their application continue working. They still have capacity; there are no lost transactions, there's nothing to worry about like that. If your applications are built as stateless microservices, stateless applications, OpenShift is able to juggle them around as it sees fit and move these pods to the latest, freshest nodes that it finds in the system — because of course it can't schedule anything anymore to these old nodes with the dotted lines around them. AMP comes in again: AMP has told OpenShift, let's reschedule these pods to another node. We then turn off the OpenShift process — we don't need OpenShift anymore — so it SSHes in and stops the processes. There is nothing happening anymore; those processes are no longer of any interest to us. But we still have the virtual machines. So, last task: AMP needs to go and release those virtual machines, put them back into the pool — maybe it's OpenStack, maybe it's AWS — but essentially they were just taking up space, they're taking up CPU from our pool of CPU, they are increasing our Amazon Web Services bill, or Azure bill, whatever. So we release those virtual machines, they disappear, and we end up with a repaved OpenShift cluster. Now, this was simply two, and if you imagine we do this perhaps one at a time, or we do it once more but set the parameter to three, we end up with a fully rebuilt cluster. Every virtual machine has been renewed and replaced, and our applications are still running quite happily, our pods are still quite happy, end users are none the wiser — and yet they have a more secure, more up-to-date piece of underlying infrastructure. So we've been able to do all this to improve our infrastructure without 
inconveniencing the end users running applications there. I'm not sure how much clock time I took while I was running through that little animation, so let's see... okay, that's looking good. So we've resized the nodes, we then updated them, so our select-oldest happened; we did a bit of bookkeeping, converting the string to a list, and we then stopped the oldest ones. Let's go back — and then we actually stopped the machines those were running on. So, a bit of bookkeeping again: resize by delta. That resizes by minus one, and what that does is say: let's throw away the machines that are unwanted — that would be any machine that had OpenShift stopped on it. So that's happened. We've still got five nodes in our cluster; it's not actually immediately intuitively visible here what's happened, so I'm going to flip back across to our command line. So let's look at two things. I'm going to run that get pods command again, and we can see: there we go, seven minutes old, seven minutes old — three older pods — and we have the two-minute-old one. What did I run that with? Yes, I ran it with a size of one. So there was the initial pod that we started; then I scaled it out to five, which are these seven-minute-old pods; and then there is this brand new one which has just been started. So this is the pod that got moved onto the new evergreen node. And in fact, if we look at the list of nodes, we can see that yes, they're all one hour old except — here's our dot 146 — and yep, that's exactly right: that new node is what our brand new pod is running on. So you see that one node has vanished, a new one has been created two minutes ago, and the pods have been shuffled around, and our application still has five replicas; it's still quite happily running there. If we look at our console, in fact, yes, we can see — there's our two-minutes-old little Redis running here. Activities — and the only other thing that might be a bit useful to go into here is, yes, we can do that manually and that's 
great, but we want to do this automatically. So here's our list of things that happened, but if I look at policies, we see a periodic effector, and this periodic effector is running: it's repaving two nodes every day. Or we could repave the entire cluster; we could run this 'repave cluster' once a week. The principle is that we're able to execute these effectors automatically, and these effectors in turn run a sequence of further primitive commands (resizing, starting, stopping, scaling out), and that happens on some kind of schedule: scheduled policies, a scheduled program. So I'm going to flip back to the slides now and just summarise what we've seen here and why it's useful. First off, we talked about AMP and what AMP can do. AMP lets you manage your applications, grow your applications, and scale them dynamically depending on the workload. It gives you a common management platform, so the mechanism that we saw to manage OpenShift infrastructure is exactly the same front end, API and user interface that would be used to manage applications running on OpenShift: completely the same piece of tooling. It's completely agnostic across platforms, so although we ran it on OpenStack there, it could run on AWS, it could run on private or public cloud, it doesn't matter; it can also run on containerised platforms. And finally, it's a simple and consistent management mechanism, and it lets you build up from simple components into complex apps, using a library of effectors and actions and a library of entities and templates and blueprints. So you can orchestrate not just your application and the mechanism for deploying your application, but also the operations that happen to your application at run time. So we were able to deploy OpenShift Origin; we are then able (I didn't really show this) to deploy applications to OpenShift in the same way, so we can deploy OpenShift itself; we can manage a running OpenShift, so look at the running virtual machines, look at the properties of those
virtual machines; and we can orchestrate actions across the whole application, so we can make things happen, like the repave of an entire cluster, and use that to evergreen OpenShift. So with this mechanism we can deploy OpenShift and make sure that at all times we have the latest version of whatever security patches and fixes are required, and this all happens automatically and in the background. And I should just add that, of course, this is no solution or alternative for when the next Heartbleed comes out and you've got to fix everything immediately; we're not recommending that you wait for it to be repaved next week. But you can use the same mechanism: Heartbleed is out, we need to get the latest updates; well, I can just click on the repave effector and repave now, I don't need to wait a week. So we can either automate the management, so it happens on a schedule, or we can instigate those actions and execute them whenever we want as operators. So that's essentially it. Hopefully that made things a bit clearer; hopefully you understand how OpenShift evergreening can be accomplished, and how we can deploy and manage OpenShift using Cloudsoft AMP to talk to Ansible and talk to OpenShift. So, I guess, are there questions?

Well, I think I have a couple, maybe just clarifications. I'm thrilled that we finally captured all of this information, and I can't wait to get the feedback from people who are using it, and what they think of this approach; I think it's solving a lot of problems. I have to say, for me this was the first time I'd heard 'repave'; I've heard 'evergreen' before, but 'repave' was new to me. I think it was introduced by Pivotal, they first talked about it, if I recall correctly, so it's an interesting concept. My question is: you talked a lot about stateless apps, and OpenShift leverages Kubernetes' persistent volume (PV) framework to allow the provisioning of persistent storage for customers, and I'm wondering how all of this repaving works, or whether you've tried it, with things that have PVCs or for
volume claims? Right, so I don't know if you noticed, but I did, I'm not going to say cheat, but the Redis I deployed was the Redis with ephemeral storage. Essentially, you wouldn't want to use localised volumes, but you can repave. If you're using volumes that are coming from an external Ceph cluster or something like that, then there's no problem at all, because Kubernetes and OpenShift are smart enough to restart the pod somewhere else and reconnect that volume from your SAN. If you were using local storage, of course, you've got a problem: your app is going to have a bad time, because when it starts up it can't find any of its data. But as long as you follow best practices, everything should be okay. And that's one of the next steps that I'd like to work on with this OpenShift deployment. You'll notice that it didn't contain a deployment of a SAN; there wasn't a storage cluster deployed there. So I think that would be the next step: to modify the blueprint to deploy a storage cluster, and make sure that we can repave and redeploy the applications connected to volumes on that storage cluster, and also think about what to do to repave and update the machines that are actually running the storage cluster, because you have to worry about evergreening and the availability of your storage, and that's another exciting problem.

So there are a lot of exciting problems out there. I'm hoping that some of you guys will be coming to Red Hat Summit in Boston and will pop into the OpenShift Commons Gathering on the day before, May 1st, in Boston, because there'll be a lot of real-world, production-scale customers talking. I think we have 9 or 10 of them now who are going to talk about their stacks and their implementations of OpenShift, and it would be good to see, because I think that PVCs and persistent storage clusters are on the rise, and that's something that's going to be interesting for people that are using this approach, to make sure that they manage it correctly. There is one other question here.
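As a concrete sketch of the storage point above: a pod whose data lives behind a PersistentVolumeClaim, rather than on a node's local disk, can be rescheduled during a repave and reattach its volume. The names and sizes here are purely illustrative, and this assumes the cluster has network-backed persistent volumes available, which the demo deployment did not include.

```yaml
# Hypothetical claim backed by network storage (e.g. a Ceph- or
# SAN-backed persistent volume), so the data is not tied to any
# one node and survives that node being repaved.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The deployment then mounts the claim, for example with something like `oc set volume dc/redis --add --type=persistentVolumeClaim --claim-name=redis-data --mount-path=/data`; because the claim is decoupled from the node, OpenShift can restart the pod elsewhere and reconnect the volume, which is what makes repaving safe for stateful workloads.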
And I'm also interested in this too, and I think it's probably new to you too, but Dale is asking if you've attempted this with an OpenShift upgrade Ansible playbook, to run the upgrade playbook. No, but that is one of the things I had in mind as being one of the capabilities of these effectors. So yeah, absolutely, that's a further effector that would be quite simple to design based on the work that's been done here: instead of calling scale out, we call upgrade, and we move from one OpenShift Origin version to the next. Yeah, I think that would be infinitely useful to have, and to have it tested and implemented here, because with Kubernetes being on a three-month release cycle, and OpenShift Origin rapidly coming up on its heels, I think we're always one point behind, because we take three months to harden, scale and test our production versions of these. I think the upgrade process is something that is always top of mind for everyone who's in operations using OpenShift, pretty much anywhere. 100% agree; I mean, that's basically it, it is one of the next things on our to-do list with this. So yeah, we're looking in the right direction, that's good to know, and hopefully we can come back and demonstrate the upgrade once we've done that. Diane, we're also looking for people to work with us on this, so rather than us just going off, doing something, and coming back for, you know, another show and tell, if there are people out there, whoever, that want to come and work with Andrew, certainly we'd be very keen to support that. Yeah, I think that would be interesting, and that's kind of why I'm thinking that, if you're not already coming to Red Hat Summit or the OpenShift Gathering at Red Hat, you should, because there are, I think, over 80 sessions that are nearly all people who are using it in production, and their feedback would, I think, inform this, and they'll have some real-world uses for this approach, I believe. So I think it would be a good time
to get together and introduce you to some of these folks. Is there a link you can just post for us for the gathering, or am I just behind the curve here? That's okay. Sorry, I'm losing my voice a little bit, because I've been talking all morning too; not today though, I got to listen to Andrew for a change. But yeah, if you go to... I found it, I found the link, yes: commons.openshift.org. We do three of them a year: two at the KubeCons, and now one the day before every Red Hat Summit. And I really think you ought to be in the room and be part of that conversation, because that's where you're going to find people like Dale and others to help you out with this. I'll shoot you guys a note and a promo code so that you can figure this out, and if anybody else wants to come, I'm happy to get you in the door. If you could, that would be great, and we can talk internally about whether we can maybe parachute Andrew in, because I think it would be good; I also think it's the face-to-face time that really makes the difference with a lot of these briefings, and there are tons of people who watch them afterwards. To the people who've joined us today, I'm thankful for that. I wish you would ask a few more questions, but I think Andrew did a very good job today, so maybe that's why there aren't so many questions. I think we've got a lot of good synergies here, and this is something that's very useful. For me, I think today was the first time, Andrew, and I think you did a really good job explaining the value proposition for what you're doing at Cloudsoft, so it was a very, very good demo. So thanks. Thank you for having me, and I'll obviously send the slides to you, so that anyone who wants them will be able to get them at the relevant place at the relevant time. Yeah, and as I said at the beginning, all of these will be up on our YouTube channel, OpenShift Commons on YouTube. And I am behind a few, as some of you know; I usually post each of them as a blog, and I've been caught up in a few other things,
so I'm behind about three or four sessions, but I will get these up, hopefully by the end of the week, and they'll be up on blog.openshift.org, or .com rather; they'll be there soon. So thanks again, Andy, this was really a great demo, and I think very useful for getting a good overview of what you're doing at Cloudsoft and where you've gone with it since the last time I saw a demo; it's quite impressive. Well, thank you very much. Thank you.