All right, there we go. Hello, everyone. Come and sit down and hear about the awesomeness that is Ubuntu. Okay, so there are a few things I want to talk about today, but I also really wanted to speak about how great OpenStack has been for us so far. We've been to basically every OpenStack Summit, and we've had a very good relationship with OpenStack for a very long time. OpenStack was first in Ubuntu in 2010, I guess around August, so we've been following it for quite a while. And at every ODS, every summit, we've tried to do something quite exciting.

If you were here a year ago, Mark Shuttleworth did a deployment of OpenStack in, I guess, about 12 minutes. So we really proved that we could bring up an OpenStack cloud in a very short time. Now, this should not be underestimated, because it turns out OpenStack is complicated, and being able to do something reliably in that amount of time is high risk. I mean, you've all seen live demos. There's a joke about live demos: you just don't do them. But we managed to pull it off, and that's something I'm very proud of, actually. So we brought up that deployment a year ago. Then six months ago, we did another demo where we showed upgrading from Essex to Folsom, and we kept the cloud running the whole time. No one else was really doing that at that point, and to us it was a big leap forward in what people could actually do.

For the people who were there this morning, where we did another deployment, we demonstrated HA and rolling out a reboot. Now, I was actually sat there a little bit concerned that the complexity of what was involved was missed by many people. What actually happened is that every machine was rebooted and the cloud was still maintained. Now, imagine you've got a production cloud deployment: you can't just turn it off and turn it back on again and still keep everything going.
You can't just bring the whole cloud down and bring it back up; people still need to be able to do their work. But we managed to pull it off, and that was something really quite exciting. Thank you.

So I'm going to talk about Juju, OpenStack, and scaling out services. Now, Juju is service orchestration built with a cloud mentality: it allows you to configure, deploy, and scale out services automatically. And we've got a public repository of charms, the charm store. Charms are very much like recipes or manifests, a way of documenting how you actually want to do a deployment, and you can browse the charm store to see them. There's a video here that I'm not going to play now, but if you go over to the Ubuntu stand over there, they'll be more than happy to put it on. It explains many of the concepts in a very crisp manner, and I think you'd find it quite exciting.

Now, we started with just Amazon EC2 for Juju, and we quickly realized it made sense for us not to speak to just one provider, so we changed it to speak to many. In many ways, Juju has actually become the abstraction layer for deploying your workload onto a cloud or bare-metal provider. So you can see here we've got many different providers: OpenStack, Amazon, HP, and Rackspace. The OpenStack APIs are standard, but in many ways those clouds are different implementations of that standard, if I can say that. And there's MAAS as well, for speaking to bare metal; I'll talk a bit more about that later.

So, Juju is a command-line client, but many people wanted to be able to use it in a graphical manner, and that's what we're doing here with the Juju GUI. It's web browser-based, it's HTML5, and you can actually drag these services around.
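To make the command-line side concrete, a minimal Juju workflow looked something like the following. This is a sketch: the charm names are illustrative examples from the public charm store, and the exact command syntax is from the Juju CLI of this era.

```shell
# Minimal Juju workflow (illustrative charm names; syntax from the
# Juju CLI of this period).
juju bootstrap                      # stand up the environment's control node
juju deploy mysql                   # pull a charm from the charm store and deploy it
juju deploy wordpress
juju add-relation wordpress mysql   # tie the two services together via a relation
juju expose wordpress               # open the service to outside traffic
juju status                         # watch the deployment converge
```

The same `add-relation` step is what the GUI's drag-and-drop between service blocks performs under the hood.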
For people who came to our session six months ago, the keynote there, you would have been able to see it there as well. And this is actually quite a complicated deployment, which is why it looks kind of scary. But many people deploy much smaller workloads, where you might have a database and, say, a Django or Rails application, and you can pin them together and add relations. It's all about looking after the services and tying them together through relations. So it provides quite a lot of flexibility for the work we've been doing around OpenStack.

We actually support several different compute backends. KVM has been our default for, I guess, what, five years now? I wasn't sure I agreed with that decision when it first happened, but it turns out it was the right one. Thanks, Rick. Xen is another one we've supported for a long time. And VMware: you probably saw the announcement the day before yesterday, where we showed a very good relationship with VMware. So if you've got a VMware ESX deployment, you can actually make use of that and grow it out much further. Hyper-V is another one; we're talking to Microsoft at the moment about how we can best make use of it, and we're trying to work together on making the Juju charms work really well there.

And then there's LXC and Juju. LXC is a very thin, lightweight container. It's not quite a virtualization technology, but it allows you to deploy at no cost on your local machine and then scale your workload right out to full scale. That's an area we've been quite strong in, and for many people it's the way to get introduced to Juju: experimentation. What we actually found is that many people wanted to deploy OpenStack, and they realized how hard it was. We heard this morning from the NSA speaker that what he did was take a rack of machines from another department, deploy on there, and make use of it.
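The LXC path mentioned above is worth sketching, since it's the zero-cost way in. Assuming Juju and LXC are installed on an Ubuntu machine of this era (package names and subcommands varied slightly between Juju releases, so treat this as a sketch rather than exact instructions):

```shell
# Trying Juju locally with the LXC-backed "local" provider
# (a sketch; package names and provider setup are from this era
# of Juju on Ubuntu and may differ by release).
sudo apt-get install juju-core lxc
juju generate-config      # writes a template ~/.juju/environments.yaml
juju switch local         # select the LXC-backed local environment
juju bootstrap            # machines become containers, not cloud instances
juju deploy mysql         # same charms as in the cloud, zero hosting cost
```

The point of the local provider is that the charms and relations are identical to what you'd use against OpenStack or EC2, so an experiment under your desk scales out unchanged.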
And this is something that's resonating quite a lot with many of our users: they're starting off, they don't want to go all in, they want to just stick a foot in and see whether it makes sense for them. So what we've been hearing many people do is get some machines under a desk in their office and start a very small seed cloud. They grow that out, and then suddenly it becomes a resource that many people want to start using. And whilst they do have to make changes to the infrastructure, the networking and things like that, Juju actually helps along the way.

What they've also found, and this brings me to MAAS, Metal as a Service, is that we actually virtualized it. To some people this is perhaps a weird concept, virtualizing metal; it seems a bit backwards. We use it as an internal development resource, but we found people actually wanted to use it too: they wanted to use KVM to develop their metal workload and then grow it as their requirements soared.

For production, this last cycle we've done quite a lot of work around high availability. In fact, more than that: high availability plus one, plus N, so you can keep your cloud running whilst taking parts of it down. And this is really important, because you can't afford to have downtime at any point. Landscape is another area; I'll speak a bit more about that in a few moments, but Landscape has the smarts to help run your cloud and predict when you might need to add more hardware. In fact, I've got some screenshots here, and the people who were there this morning would have seen it in use, doing some of that work.
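On the "HA plus N" point, the charm-level pattern of this era looked roughly like the sketch below. The `hacluster` subordinate charm (which drives Pacemaker/Corosync) is real, but the service names and relation wiring here are illustrative, and exact charm configuration varied by release:

```shell
# Sketch of scaling an OpenStack control service for availability
# using the Ubuntu OpenStack charms of this era (service names and
# relation details are illustrative; check the charm's README).
juju add-unit -n 2 nova-cloud-controller        # grow to three units
juju deploy hacluster ncc-hacluster             # Pacemaker/Corosync subordinate
juju add-relation nova-cloud-controller ncc-hacluster
```

With units spread across machines and the cluster charm managing the virtual IP, individual machines can be rebooted one at a time, which is what made the rolling-reboot demo in the keynote possible.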
For the people who were at the Horizon session: we did some great work on Horizon and Ceilometer integration, to be able to do measurements of resource utilization. Because even in our own internal cloud, we found that we couldn't work out who was actually using the most resources. This is something that stemmed from an internal requirement, because we found the only way to get really smart at what you're trying to do is to actually use it internally. So we've been putting as much production workload as we can onto an internal cloud. And that's where, hopefully, we've made some mistakes and learned from them, and we're hoping we can help people avoid making the same mistakes we have.

Canonical also has the Ubuntu Advantage program: certification, partner agreements, engineering resources. But the real thing to take from it is that we have a wealth of knowledge about OpenStack, how to deploy it and use it in production. And typically, from what I've heard, we're something like a third to a half cheaper than many of our competitors. On cost alone, that's worth considering.

So, MAAS, or Metal as a Service. There's another really good video here. It uses standard technologies: DNS, DHCP, IPMI. We've actually done some quite clever work around that, and we're supporting it on 12.04, the LTS. I recommend going over and watching the video; it's something we're really very proud of. It makes deploying OpenStack, and other workloads that you want close to the metal, much easier. And obviously, Juju speaks to it, right? So Juju can do this deployment. This is a very pretty UI, and this graph, you should see it when you've got real workloads going up and down across it: these bars spin up and down. I could stare at that all day. I'd love to show you; one of the guys can. So come to the stand.
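The "Juju speaks to MAAS" step amounted to adding a MAAS environment to Juju's configuration. A rough sketch, assuming a running MAAS region; the server URL and API key below are placeholders, and the stanza shape is from the `environments.yaml` format of this period:

```shell
# Pointing Juju at a MAAS region (placeholder URL and key; the
# environments.yaml stanza shape is from Juju of this era).
cat >> ~/.juju/environments.yaml <<'EOF'
  maas:
    type: maas
    maas-server: 'http://maas.example.com/MAAS'
    maas-oauth: '<api-key-from-the-maas-web-ui>'
EOF
juju switch maas
juju bootstrap    # MAAS powers a node on via IPMI, PXE-boots and installs Ubuntu
```

From there, `juju deploy` allocates physical machines from the MAAS pool exactly the way it would allocate cloud instances.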
I might be able to get you a live demo of that. I'm not even going to try to explain all of this now, but a lot of thought has gone into the components we've put together there. If you really want to get into the detail of it, speak to me or one of the other people here, and we can go through some of the meat of what's actually in MAAS.

So, Landscape. As I said, Landscape speaks through Juju to do a lot of what you saw it doing: the predictions, and the heat maps, such as the networking one. It was really hard to work out utilization on a network, but the Landscape team did some great work there. I think this is one of the richest UIs for managing an OpenStack deployment out there, and it's well worth checking out. We're actually doing some demos on-site, and there are people from the Landscape team around that you should go and speak to about that. It's a shame you couldn't really see it in the keynote this morning, but the way it was rolling out the actual kernel update, bouncing machines up and down, it would have been good just to watch this page for the whole 15 minutes and see how it went through step by step. It's actually got the intelligence to roll that out while maintaining HA plus N.

Now, I've just got a few minutes left for questions, and if anyone has any, I'll be thrilled to field them now. So, any questions?

Is Landscape community-based? No. Landscape is, I think, our only closed-source project, actually. But it comes as part of Ubuntu Advantage, and Ubuntu Advantage is really important, because what it provides is an escalation path: we've got some of the smartest engineers at level three that you can speak to.
And you can actually get the issues you've hit resolved in a timely manner. Landscape comes as a no-cost bolt-on to that. I don't deal directly with the finance side of things, but there are people over there who do, and it's actually quite competitive; it is not an expensive service. Any other questions? No? OK. Well, thank you, everyone.