Hi, everyone. I'm Steve Mulholland. I'm the technical authority for cloud native applications at UK Cloud, which basically means I'm responsible for OpenShift at UK Cloud. We offer OpenShift as a service for our customers and also use it internally.

So, just quickly about us: UK Cloud is a public-sector-focused cloud service provider. We offer secure cloud services on a variety of community networks, and we're a multi-cloud provider, so we offer services across OpenStack, vCloud, Azure Stack and, obviously, OpenShift. Typically, our customer-facing OpenShift deployment consists of a private OpenShift cluster for a customer that's built in an automated fashion, with automated scale-out and all those good things. We've built that with a fully automated approach, defining all of our infrastructure in code. So I'm just briefly going to go over why we do it that way, and also how we're doing it using OpenShift pipelines.

The why is probably fairly obvious for most of you. We reduce and remove the human-error side of things, and we can significantly increase the speed of our deployment process as well. When we started this process it was probably taking us the best part of a day to a day and a half to manually build a cluster; now we're down to 40 minutes end to end, and that's from no service to a fully running service. It enables us to do in the region of 20 to 30 customer builds a day if we need to. It also means we can keep up to date with the community releases, which our customers obviously want to make use of as much as possible. Defining things in code makes it much easier for us to integrate OpenShift with our other services as well, so things like scale-out become a lot easier for us to do on a per-customer basis. And the consistent baseline that we put in place across all of the deployments means that our confidence in our deployment is much greater. So now on to the how.
We're using OpenShift's integrated Jenkins pipelines to talk out to an OpenStack platform and to deploy OpenShift, so in effect OpenShift is deploying and testing its own deployment code. To do this, we merge our customer-specific deployment information with the Heat template for OpenStack, our own Ansible code to prepare the nodes, and the upstream OpenShift Ansible code. Then we run some tests to prove that the cluster is working as we'd expect.

The initial deployment is a pretty basic customer deployment: a deployment server where we run the actual deployment from, some load balancers in an HA configuration, and then master nodes in an HA configuration as well. Obviously, we deploy the number of nodes according to the cluster scale that the customer requires.

So, moving on to the CI pipeline itself: we use a pipeline to build our CI tooling as well, so the Jenkins slave that talks out to OpenShift and OpenStack is built inside OpenShift itself. We pull the upstream Jenkins slave image, add our own tooling on top, validate that, and then push it to the internal registry for use in subsequent deployments. The next pipeline pulls that slave, takes the Heat code we talked about for deploying the infrastructure, and the Ansible code to deploy on top of that. We've kept those two separate because, as a multi-cloud provider, we need to be able to deploy our platform on multiple different cloud platforms, so we don't want our infrastructure deployment and our application deployment to be too tightly coupled. Then, at the end of the deployment, we carry out the testing I mentioned before.

So this is the process end to end. We pull some CMDB information into secrets and config maps, we pull the Jenkins slave from the OpenShift registry, and then we pull our code from GitHub. The code gets merged and is then used to deploy the actual infrastructure onto OpenStack.
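The merge step described above can be sketched roughly as follows. This is a hypothetical illustration in Python, not UK Cloud's actual pipeline code: the parameter names, the shape of the CMDB data and the `deep_merge` helper are all assumptions about how per-customer values might override a shared baseline before the Heat template is deployed.

```python
# Hypothetical sketch: combine per-customer deployment information with a
# shared baseline of Heat parameters, customer values taking precedence.
# All names and values here are illustrative, not UK Cloud's real config.

def deep_merge(base, override):
    """Recursively merge `override` into `base`; override values win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Consistent baseline applied to every cluster build (made-up values).
base_parameters = {
    "masters": {"count": 3, "flavor": "m1.large"},
    "loadbalancers": {"count": 2},
    "nodes": {"count": 3, "flavor": "m1.xlarge"},
}

# Customer-specific overrides, e.g. pulled from a CMDB into the pipeline.
customer_parameters = {
    "nodes": {"count": 10},  # this customer wants a larger cluster
    "dns_suffix": "customer1.example.com",
}

# The merged result is what would feed the Heat stack deployment.
heat_parameters = deep_merge(base_parameters, customer_parameters)
```

Keeping the baseline in one place and overlaying only the customer deltas is what gives every build the same starting point, which is where the confidence in the deployment comes from.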
On the deployment server we then pull in our repos from GitHub again, along with the upstream OpenShift Ansible code. That puts our baseline config across the servers, and then we deploy OpenShift on top. There's some post-configuration done to the cluster, and then we do the deployment testing from outside the cluster.

That brings me to the end; it was a bit of a speedy run-through. Our code is on GitHub if anyone wants to check out what we're doing in more detail. Feel free to come and ask questions downstairs. Thanks.
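The outside-the-cluster deployment test could look something like the sketch below: query the cluster's public API and check that every node reports Ready. This is a minimal illustration, not UK Cloud's real test suite; the `fetch_node_list` helper, the bearer-token auth and the pass/fail criterion are assumptions, though the `/api/v1/nodes` endpoint and the `Ready` node condition are standard Kubernetes.

```python
# Minimal sketch of an external deployment test: from outside the cluster,
# ask the Kubernetes API for the node list and require every node to be
# Ready. Endpoint and condition type are standard Kubernetes; everything
# else (helper names, auth handling) is an illustrative assumption.

import json
import urllib.request


def fetch_node_list(api_url, token):
    """Query the cluster's API server for its nodes."""
    request = urllib.request.Request(
        f"{api_url}/api/v1/nodes",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def all_nodes_ready(node_list):
    """True only if the cluster has nodes and each one is Ready."""
    items = node_list.get("items", [])
    for node in items:
        conditions = node["status"]["conditions"]
        ready = [c for c in conditions if c["type"] == "Ready"]
        if not ready or ready[0]["status"] != "True":
            return False
    return bool(items)
```

Run as the final pipeline stage, a check like this fails the build if the freshly deployed cluster isn't actually serving, which is the "no service to fully running service" guarantee mentioned earlier.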