Good morning, everybody. It was so great yesterday to see everybody from this community again; we meet up every six months. But I was stopped a thousand times by people with the same two questions, so I thought I may as well just answer them here.

The first question, since everybody sees Canonical and Ubuntu as the company that's risen to challenge Red Hat in the enterprise, is: what do I think of IBM buying Red Hat? There are two big burly Texans behind the curtain. So let me say I wasn't surprised to see Red Hat sell, because over the last two years I've seen some of their largest users and customers opening up, saying they wanted new Linux options, and signing up to build open infrastructure with Ubuntu. But I was surprised at the amount of debt that IBM took on to close the deal. And I would be worried for IBM, except that the public cloud is a huge opportunity: if IBM can steer a large number of on-prem RHEL workloads to the IBM cloud, then that deal might make sense.

The second question that everybody seems to have is: what bet did I lose to have to grow this magnificent beard? And let me just say that each of us, in our own way, has to come to terms with the fact that winter is coming.

So what I wanted to talk about today is mastering the delivery of OpenStack. I think this is a really important topic, not just for Canonical but for the OpenStack community. This is an amazing community, and it attracts amazing technology, but that won't be meaningful if it doesn't deliver for everyday businesses. And I say that representing the company that doesn't just publish Ubuntu and the reference OpenStack distribution on Ubuntu; we actually manage more OpenStack clouds, across more industries and more architectures, than any other company. I think there are a couple of things that you have to get right.
For us, we have to support every single OpenStack release with upgrades. Five years ago, we announced that we'd support Icehouse for five years. And here we are, five years later, supporting Icehouse. What that means is that when we release Stein and Train, as part of our test process we will actually deploy Icehouse on Ubuntu 14.04 LTS (Trusty). We'll deploy workloads on Kubernetes on Icehouse. And then, with that running cloud, without losing a workload, we upgrade all the way to Mitaka. We take the running cloud, we upgrade to 16.04 under the hood, we then upgrade to Queens, upgrade to 18.04, and on to Rocky, Stein, and beyond. All of that is standard when you do the operations properly. In this light, you'll see that what matters isn't day two; what matters is day 1,500. Living with OpenStack, scaling it, upgrading it, growing it: that is what's important to master to really get the value for your business. In that entire process, we never allow more than one second of network downtime, which is required when we bounce OVS. To do that, we build the operations in. And amongst our customers, all of that operations code is open source. It is all shared. They all have the opportunity to use exactly the same operations code and to contribute to that code, even if they have wildly different architectures. So that's really important: operations built in.

The second thing that's really important is price-performance. The reason that's so important is that these giant public clouds are competing for your CIO's attention and for your CIO's business. So if you're going to build a private cloud, it has to be competitive. It has to make sense economically. It has to be that good. So I just want to walk you through the engagement process, the way that we would engage with companies to help them meet that challenge. First, we add a lot of value if we are able to advise you on the way you buy hardware. We're neutral to all of the architectures.
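As an aside, the release-by-release upgrade path described above can be sketched as data: each OpenStack hop paired with the Ubuntu LTS series underneath it. This is purely an illustrative sketch, not Canonical's actual tooling (real upgrades are driven by the OpenStack charms), and it lists only the hops named in the talk.

```shell
#!/bin/sh
# Illustrative only: the upgrade order from the talk, pairing each
# OpenStack release with the Ubuntu LTS series it runs on.
# (Intermediate releases between the named hops are elided.)
path="icehouse:trusty mitaka:xenial queens:bionic rocky:bionic stein:bionic"
for hop in $path; do
  release=${hop%%:*}   # OpenStack release name
  series=${hop##*:}    # Ubuntu series it runs on
  echo "upgrade to OpenStack $release on Ubuntu $series"
done
```

The point the loop makes is the one in the talk: you never jump; every release boundary, for both the cloud and the operating system, is crossed in order, on a running cloud.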
Ubuntu is the only operating system that supports OpenStack on all the major architectures. We're neutral to all of the server vendors. And, perhaps most importantly, we have hard data to inform your decision. Buying hardware for a cloud is different from buying hardware for the apps that you used to run and now want to run on the cloud. While the hardware is being ordered, we'll run a series of workshops with your architects to optimize the OpenStack design. There are good reasons to have a custom architecture while simultaneously preserving common operational frameworks. When the hardware is ready, we'll lead the deployment. Deployment should take no more than a week and need no more than two people, mainly to keep each other company. This has to be on rails, because in the long run OpenStack has to be easy for a small team to operate if you want the costs to be reasonable. And then we'll hand over that certified cloud, either to your operations team or to our operations team if you're looking for a fully managed OpenStack engagement. All of this is designed to help you manage the operating costs of OpenStack and keep your private cloud in line with the costs of public cloud, so that you have a successful multi-cloud strategy.

Now, a couple of years ago we started focusing on telcos, and I think everybody knows that Ubuntu is the platform for telco OpenStack. Last year, we said we were focused on adding financial services to that. And I'm very excited to tell you that over that period, six of the world's top 20 banks have signed up with Canonical to build open infrastructure on Ubuntu. As we've moved into these new industries, that really raises the bar, and the complexity and diversity of the things that we support. For those of you who are grappling with GDPR, you'll be delighted to know that the current release of Ubuntu OpenStack supports full disk encryption and bastions, and uses Vault for key storage.
I'm also delighted to announce that Ubuntu 18.04 LTS will be supported for a full 10 years, in part because of the very long time horizons in some of those industries, financial services and telecommunications, but also for IoT, where manufacturing lines, for example, are being deployed that will be in production for at least a decade.

So that's stuff that we're doing for big businesses. For developers, I think it's really important that we enable developers to engage easily with OpenStack. And so I'm very excited about this: the whole of OpenStack in one snap package. You can snap install it on any of about 45 different Linux distributions: Ubuntu, Fedora, CentOS, and more. snap install microstack will give you a full working OpenStack. You'll have Horizon at localhost, and your login credentials are over there: admin and keystone. There's more information online.

Moving up the stack to Kubernetes, we continue to work with the major public clouds. The official Microsoft Kubernetes on Azure is Ubuntu-based; the same is true for Google, where GKE is Ubuntu-based; IBM's is Ubuntu-based; and Ubuntu is a standard option with Amazon's Elastic Kubernetes Service. I'm delighted to announce that Canonical supports upstream Kubernetes built either with kubeadm or with the Charmed Distribution of Kubernetes, and we support that on OpenStack, on VMware, and on bare metal with MAAS.

MAAS and Kubernetes are key primitives for edge computing. You saw Airship earlier; it uses MAAS and Kubernetes. A bunch of companies have now announced reference architectures for edge computing, and again, I'm delighted to see Ubuntu and MAAS as common primitives, even if the architectures are wildly different based on different requirements. And continuing on the theme of MAAS: MAAS is a lightweight provisioning engine. It allows you to deploy operating systems onto servers remotely. MAAS 2.5 will add the ability to deploy VMware ESXi.
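To make that MAAS workflow concrete, here is a hypothetical sketch of the CLI flow for deploying an operating system onto a remote machine. The profile name and machine ID are placeholders, and the maas command is stubbed with a shell function so the sequence reads (and runs) without a MAAS server; in real use you'd remove the stub and authenticate with maas login against your region controller first.

```shell
#!/bin/sh
# Stub the real `maas` client so this sketch runs offline; in real use,
# delete this function and run `maas login` against your region controller.
maas() { echo "maas $*"; }

PROFILE=admin        # placeholder CLI profile name
SYSTEM_ID=abc123     # placeholder; normally returned by `machines allocate`
# Acquire a free machine from the pool, then deploy Ubuntu 18.04 onto it.
maas "$PROFILE" machines allocate
maas "$PROFILE" machine deploy "$SYSTEM_ID" distro_series=bionic
```

The allocate/deploy pair is the core of the remote-provisioning story: allocation reserves a machine out of the pool, and deploy lays the chosen operating system onto it over the network.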
So now you can run a remote data center: you can deploy VMware, Windows, RHEL, and Ubuntu using MAAS, and of course go from there to do whatever you want.

Wrapping up in the field of AI: almost all of the AI research and development that you've read about or seen is on Ubuntu. We work with these giants to enable them to publish that work efficiently to developers and to enterprises. And I'm most excited at the moment about the move of AI from the cloud, where training and analytics are done, out to the edge. This is Amazon's DeepLens. It's an Ubuntu-powered camera created specifically for a new class of image-recognition applications at the edge. So we're providing people with a seamless platform from the cloud all the way to the edge: perfectly portable AI, from your workstation, powered by NVIDIA, with MicroK8s, a snap of Kubernetes, through to your racks on OpenStack. We talked about that in Vancouver; it is now widely in production. And out to the public cloud, where that exact same stack, the exact same operating system, the exact same tools all work, giving you perfectly portable AI.

Thank you very much. Have a great week, and it's great to be here in Berlin.