Good morning. My name is John Tonello. With me is Simon Briggs from SUSE. Hello, everyone. This morning we're going to take you through a SUSE OpenStack Cloud deployment of what we call CaaSP, our Containers as a Service Platform, which is our Kubernetes distribution. This will be a live demo; the Wi-Fi here has actually been pretty good, so we're being very brave. And if you stick around to the end, we'll give you some socks. SUSE socks. And don't forget to come to our booth later. But with that, I'm going to turn it over to Simon, who's going to take you through each one of these steps. Yeah, so initially what I'm going to do is log into a cloud I've got running over in Provo in the United States of America. If you know any history around SUSE, we were part of the Novell group back in the day, so that's the reason we've got so much resource sitting in or near Salt Lake City. That cloud is set up for me with a project, and into it I'm going to drive a standard Heat template that SUSE provides as part of its distribution, which lets organizations rapidly deploy Kubernetes clusters on top of their OpenStack infrastructure. And I'll show you how easy it is by driving that demo. It should take only a few minutes, and then I'll pass back to John. The deployment itself takes just a few minutes, but it takes a little while for the APIs to come up, et cetera, so John is going to take you onto a platform we've already built out and show you driving an application set into that Kubernetes environment. I'm assuming many of you have already used OpenStack and Horizon in your time, so I'm not showing you anything new. And do excuse me, my eyes are so old I need my glasses. There we go. So I'm just logging in as an admin user into my Horizon environment. That should only take a few seconds, trusting the internet in this wonderful establishment. I'll dance around, do some juggling. Oh, there we go.
The demo gods are with us. What I'm going to do is quickly go into my own personal project space as a tenant in this demo environment. There are many, many members of my team working in this environment, so I've got a tenant environment to work in, and in time that will come back. See, it always feels a lot quicker when you're preparing for these things; when you're stood on stage, it takes forever. So now you see there, I've already moved across to my environment, Cybriggs. And if we have a look at the resources that are running right now, I should have just a couple of security groups defined. So that is all the resource I'm using in my cloud environment, in my tenant group, right now. I want to rapidly deploy, with software definition, a Kubernetes environment to allow people to deploy into that containerized world. To do that, I just very quickly pick up this Heat template, which SUSE defines; we actually deliver it as an RPM within our product set. I've extracted that out already onto this machine, so it's under here, this CaaSP stack YAML file. I can then use an environment file; I don't particularly need one, but I can set one up. And then quickly I drive into there. For the sake of speed, since we only have a few minutes of your attention, I'm not going to go into the details of that YAML file, but we can make it available to you. Very easy: come to the stand and we can show it to you in detail. Here, I just put in my validation bits and give my stack a name, obviously. And there are going to be three elements to the virtual resources I spin up to deliver my Containers as a Service Platform: my administration node, which administers our Kubernetes environment; the brain of our Kubernetes environment, the master node; and some workers, workhorse nodes that will actually run containers for me when I deploy to this environment. And then I click Launch.
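[A Heat template along these lines gives a flavour of what the stack defines. This is a minimal sketch only; the parameter names, resource names, and flavors here are illustrative, not the contents of the template SUSE actually ships.]

```yaml
# Illustrative fragment only: admin node, master node, and a scalable
# group of workers, as described in the demo.
heat_template_version: 2016-10-14

parameters:
  image:
    type: string
    description: CaaSP image to boot
  worker_count:
    type: number
    default: 2

resources:
  admin_node:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.large

  master_node:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.large

  workers:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: worker_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: m1.large
```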
It takes that long, okay? In the background, obviously, OpenStack is doing its wonderful stuff with all its APIs; that stack is being built out and progressing, and we can keep up to date with what's happening in there. If we drill down now, we should see the components building out. I don't know why the graphics aren't presenting very well there; maybe it's the browser. This is a demo box and I've been testing on another box, so that's probably why. We're also getting some information back: I'm actually getting details of the private key output. I did hack the script a little bit to make it do that. You wouldn't normally do that, obviously; it's just a way of showing the private key if I want to SSH onto nodes. And, of course, events are taking place in the background as resources are being built out. I should have delayed you a little bit there, so now, if I go to the overview of my tenant environment, suddenly, miraculously, I should be using resources. You can see there I've taken up some of the vCPU and RAM and attached some floating IPs to the management nodes that I need to work on. So my master node, the brain of my Kubernetes cluster, has been delivered, and importantly, I've got my administration node from Containers as a Service Platform, which hosts what's called our Velum dashboard for deploying Kubernetes fully. I have a floating IP attached to that node, so over the internet I should be able to log in. But let me just confirm first that that node has got to the point of starting up. It's taking a little bit of time in the background, so I may have to tap dance again a little to keep your attention. But hopefully, in a little while, we will be able to see that that... I'm talking slowly now to try and keep your attention. But now, if we go to that environment... it's not quite up yet. Take a bit. Can you do the dance? Do the roll on the floor to try and distract everyone whilst my environment is coming up.
But it's the first time John has actually presented with me, so he didn't know I'd asked him to do a dance while we were presenting. I've lost my screen. John... ah, there we go. Right. If we refresh this, I'm very hopeful that it will have come back again. So it's not quite as rapid as we were hoping. The internet is not being quite as kind in the background. Maybe somebody's running a big load on my demo lab in America, even though they were warned that we would be doing this right now. And it's like one in the morning there too, so... Maybe they've automated it. It's wonderful, OpenStack; you can automate all these things to work in the background. I am now presently dying on stage in front of you, so enjoy this moment, all of you who present technology. It's always such fun when it starts playing games with you. I know I'll have your sympathy. Let me just see if I can get that baby to come up. As always, I tested this 45 minutes ago and it went seamlessly. So go on. There we go. Finally, my Velum dashboard is available for me to build out my Kubernetes environment. Obviously, it's only a demo environment, so I've not set up a certificate. And the services are still coming up in the background, so it will delay you a little while. But what you can see here is that, very rapidly, using a Heat template, a software definition that's an asset to your organization, you can use the technology SUSE makes available to its customers to deploy new-wave technologies out to the user base in your organization. And after that, maybe if I hit refresh, we might actually get to see a dashboard. It's starting to come together, guys. So first of all, on this Velum dashboard, I have to fill in some details on first use. I'm using an American keyboard and I'm not used to where the keys are; give me a second. The user here is just the administrator. Hold tight. OK, so now we get to the actual detail of deploying a cluster.
So, how many of you have deployed Kubernetes yourselves using the upstream? OK, then you'll know there are many different components. This dashboard allows us to bring them together and define what we need to, so that at the click of a button we get that Kubernetes environment ready to start Helm-deploying into it. So if I just fill out the details: I'm going to have a dashboard in this environment on my internal network. The 10. network is actually my software-defined network in the cloud. I need to quickly shift some of the networking for this particular cluster, so the overlay network needs to adjust a little bit. And then I quite simply hit Go. How many minutes have I danced in front of you? There we go. We should now be able to hit Go. Now, if you're doing bare-metal deployments of our Kubernetes management system, you can actually use an automatically generated AutoYaST file at this point, our automated deployment technology for bare metal. But as we're in a cloud, I don't need to worry about picking up that AutoYaST file. I can just go to the next stage, where those worker nodes have already registered with my dashboard and are available for me to commit and create Kubernetes resources upon. I validate that those are the nodes I want to deliver Kubernetes on top of. Obviously, in a cloud environment you'd expect more nodes to be available in that network. That should only take a few seconds. Again, I'll start dancing in front of you. They're starting to come up, although only two of them have arrived. Right. So here I can define very quickly what roles these virtual machines are going to play within my Kubernetes environment. As I said, I'm going to have one master. Obviously, we don't recommend that in production; we recommend you deliver a resilient control plane within your cluster, the control elements, and then the workers are defined. And I click Go.
I do a little bit more networking, because the dashboards need to be reachable and I need this stage to get it to deploy successfully. Come on, cloud. Play with me. Right, I need my external master node at this address. Obviously, Kubernetes much prefers a properly qualified domain structure, but in this case, because it's a demo, I've avoided that complication and made it easy for myself. I just fill out those elements and then I bootstrap my Kubernetes environment. So what you see there now is the software being driven down onto those virtual machines, and that will work. As I say, it takes a little while for those APIs to come up, so at this point I'm going to pass you back to John, who'll log into a previously installed environment, something we built earlier, to show you how you deliver some software assets into it using Helm. Thanks, Simon. Cheers. This will not take very long, and by the time I'm done, I'll be able to come back in here and show you that these are, in fact, live. It's being quite slow, so maybe not. It is being a bit slow. So, as soon as you finish the deployment, it takes you to a page where you download your kubeconfig file so that you can run kubectl proxy. That's what I'm going to do here. I've previously downloaded the file and it's sitting in the .kube folder in my home directory. You can see the config file right there. And I'm just going to grab the token out of it right now. Copy that. This keyboard also doesn't like me, and I'm not used to it. So I'm going to do a kubectl proxy. It started locally. So this is the dashboard that comes up from that, asking for my token; I paste it in. And this is the environment in real time. And again, I'm going to go to my personal namespace here. You can see I previously deployed Jenkins, but for our purposes I'm going to do a new deployment, just to show that it's alive, and create an app. In this case, I'm going to do GitLab.
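[The step just described, pulling the token out of the downloaded kubeconfig and starting the local proxy, looks roughly like this. This is a sketch, assuming the token sits under users[].user.token as in a typical kubeconfig; the sample file contents here are made up for illustration.]

```shell
# Sample kubeconfig fragment (made up for illustration).
cat > /tmp/kubeconfig.sample <<'EOF'
users:
- name: cluster-admin
  user:
    token: s3cretT0ken
EOF

# Pull the bearer token out so it can be pasted into the dashboard login.
TOKEN=$(grep 'token:' /tmp/kubeconfig.sample | awk '{print $2}')
echo "$TOKEN"

# With the real file in place you would then run:
#   kubectl proxy    # dashboard becomes reachable at http://127.0.0.1:8001/
```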
I'm just going to create one pod; again, we're in a demo environment. And I'm going to create the service. If you know GitLab, you know it runs on port 80. And this will create not only my instance but my service as well. So if I look at my deployments now, you can see that GitLab is now in place there. I can scroll down to my services and try this. And just to validate here, John, this environment is running on top of OpenStack in a project, with SUSE Enterprise Storage underneath as a Ceph storage plane, isn't it? Is it running out of Provo as well? That's right, yep. So here is the service that I created. Of course, I'm waiting for it to come up. I'm dancing again. I have another instance in here, just in case there's a conflict with another one. Still waiting for it to... I'll do the dance. This is your time to dance, yeah. Come on. You can do it. Bring us home. Start praying to the demo gods. Is it going to play? Do you want to start taking bets? Who bets on yes? It will play. And we have three minutes. Three minutes we've got. That's a lot of dancing. It shows as being up here. I might kill it and start again. Kill everything. I'm going to kill this. Again, that worked absolutely perfectly five minutes ago. Perfectly. Life has a way of kicking you. Great. That explains that. Okay, that's weird; not seen that one before. Just want to make sure it's... I'm going to blow this away. I think it has probably got confused with the previous deployment you did. Yeah, I hadn't backed it out. One of the joys of our production... existence. While that continues to shut itself down... Okay. There we go. Yeah, baby. Now we're back on track. Time's running out. Pressure's on. Countdown. I should have known when it was so fast. That much of a tease. Yeah, taking us through. Something's going on. In the meantime, do you want to flip back to Velum? See if that's deployed. It might not have done; things aren't really playing. No, it's still deploying.
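[The dashboard clicks above amount to a single-replica Deployment plus a Service on port 80, something like the manifest below. This is a sketch only; the image name, labels, and service type are illustrative assumptions, not what the demo actually used.]

```yaml
# Illustrative only: one GitLab pod and a service exposing port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ce
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
spec:
  type: NodePort        # exposes an external port, as in the demo
  selector:
    app: gitlab
  ports:
  - port: 80
```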
Still deploying over there. Can't even distract you with that. Waiting for our services. What I did there, too, is expose an external port, so this will be up and running quickly. And obviously this is something you can do, and we have done, from a Chromebook, because you can use kubectl from something like the Google Cloud Shell. So when you're using this environment, there are a lot of options. Flexibility. One of the things we can show you, too, while we're waiting for that to come up, if I can click on the right button, is the environment file that Simon used before and the actual stack file. This is the YAML file, and you can see here the descriptors of the environment. And importantly, that file is provided by SUSE Engineering, so it's fully supported. You can take that file and customize it, obviously, for your own environment, but if you have challenges with the components in it, the SUSE folks will actually help you with it as part of our support solution. And it's fairly straightforward, so there are no surprises inside. As I say, it's not rocket science; it's just a Heat template. And this does show my GitLab. Takes just a little while. I should have brought some juggling balls. What can we do here? I can't juggle socks, but I can throw socks. Oh, no, that would distract you. Socks, anyone? There we go, sir. This is the prize for watching our screen spin round. These are very high-quality socks, like our OpenStack solution; they will last a very long time. Oh, sorry, sir. Poor throw. Matthew, do you want to come and hand out? I'm getting worse and worse as we go further. Meet Matthew, our product marketing chief. He's actually sporting a Movember moustache, so if you want to donate any money to him, it won't go to his beer fund, I'm sure. In the meantime, I can dig inside. It shows it being alive, but it's... Oh, yeah.
But we're not getting through, are we? I'm out of socks. I know it's an excuse, but they have been making some engineering changes in our data centers, because SUSE is about to go independent as an organization from its parent company, and that engineering work is happening this week of all weeks. I think that might be biting us right now. But of course, that's an excuse. So, very rapidly, just to summarize: we've been able to show that you can drive out a Kubernetes environment for an individual tenant through a standard OpenStack environment, using a Heat template that is fully supported by SUSE. You can drive many of those very rapidly into your organization to allow people to start consuming containerized application sets, whether that's COTS software or software that your internal developers, if you have any, are providing to your organization. And it literally takes a few moments. That is GitLab working, is it not, sir? It is. Excellent. And you see the IP address there? That means it is working on our demo environment. So that's probably a good point to give in and wrap up. Yep. Thanks very much. Please join us on our stand and talk to us about this amazing technology. Thank you. Thank you.