OK. So I might just get started then. Good morning, everybody. My name is Anthony Davis. I work for Canonical, and I actually have an interesting role there: I'm in Global Alliances. I manage our strategic relationships with a couple of companies you might have heard of, specifically IBM and Dell. I was here in New Jersey about three weeks ago, and someone from the Postgres Conference asked me to present at this event. So that's why I'm here today. Now, this was all done at the last minute, so the subject that was on the door outside is a little bit deceiving, because I'm actually going to talk about much more than LXD, and I'm going to try and give you a bit of a demo. Now, I am a business guy, not a technical guy. And when business guys do technical demos, interesting things happen, I find. So, Ubuntu is a very popular distribution. Just to give you some idea of how popular it is, there's a public website called W3Techs. Does anybody know about this? W3Techs runs a bunch of crawlers that go out over the web and look at what software is running the most popular websites. So when I ask who uses Ubuntu, a few people put their hands up. But actually, every one of you in this room uses Ubuntu if you use Uber, Lyft, Instagram, PayPal, WebEx or any of these big applications on the web, because all of them are platformed on Ubuntu. About 70% of the Linux virtual machine images in Amazon are Ubuntu. And who are Amazon's biggest customers? Netflix, et cetera. So that's all based on Ubuntu. We've had a bit of an interesting time over the last few years: Ubuntu has overtaken everybody else to become the number one web-facing Linux operating system. We overtook Debian about six months ago.
Debian is quite high because it's installed on a lot of home routers, cameras and other web-connected things. So we've just overtaken Debian, and of course we're actually a Debian-based distribution. So RPMs are the minority now; Debs are the majority when it comes to application architecture. Let's remember that. To put that in perspective, Red Hat is at 3.4% and we are at 35.9%. So we're doing well. We've been a little bit lucky, because the market has moved in our favour. We've always focused on the cloud, whereas other Linux distribution companies have focused on enterprise apps like Oracle, dare I say that word here, and SAP and things like that: internally facing applications. Our operating system is usually platforming externally facing applications. So when I talk to customers, I say: your applications that drive revenue run on Ubuntu, and the applications that cost you a lot of money run on Red Hat. Which ones are more important? So, we're headquartered in the UK. I'm from the UK, you can probably tell; coincidentally, I actually live in Dallas. We're 700 people across 32 countries around the world. We're quite a big company now, Canonical, I'm talking about, the company behind Ubuntu. So you probably know about our version numbers. 16.04 is our current LTS release, long-term support. Every LTS release gets five years of support, and LTS releases come out every two years. Very predictable. That's what we're very proud of, our predictable release cycle. We release a new version of Ubuntu every six months, and a long-term support release every two years. So the long-term support release is 16.04. Our current release at the moment is actually 16.10, we're just about to come out with 17.04, and our next long-term release will be 18.04. Very predictable. Customers love this, developers love this. Who remembers when Steve Ballmer got on stage? Dare I say that word?
Steve Ballmer got on stage at some Microsoft conference and started shouting "developers". He walked up and down the stage going "developers, developers, developers, developers". Who remembers that? You guys are too young to remember that, I think. But anyway, he did. The company now talking more about developers and how important developers are is Canonical, because more developers use Ubuntu for developing applications than any other platform. We're very proud of that. Really it's developers who are under 30, which is probably most of you in the room. So there we go. OpenStack is built on Ubuntu. We always support the latest version of OpenStack with Ubuntu, and we actually have a release cycle that's very similar to OpenStack's release cycle; that's not a coincidence. And we have strange names for our versions of Ubuntu, mainly because the uniqueness helps with Google searches. You can find documentation on a particular version of the operating system, or search for scenarios or issues you've got, based on the actual name of the release, and find those very easily. So that's cool. So here's what we're going to talk about; I'm going to go through this, and I'm going to try and do a demo. What I want to do is compare Ubuntu with other Linux distributions, because when I ask people, not a lot of them know the difference between Ubuntu and other distributions. Ubuntu is one distribution, one build of software. It's a single build: when you have 16.04, whether it's for POWER, for IBM mainframe, for x86 or Raspberry Pi, it's the same Ubuntu. Same packages, same repo, just binaries compiled for the different platforms. You can download Ubuntu for free from our website for any platform and use it, and you don't have to tell us who you are. That makes doing proofs of concept really, really nice for you. When you try Red Hat, you've got to go and buy Red Hat.
And you've got to pay for a subscription. You don't have to pay for anything to use Ubuntu. We monetize our install base. If you want to know how we make money, because everybody does wonder how we make money: we monetize our install base, and it's much easier to monetize an install base that's using your operating system than someone else's. So that's what we do. A small percentage of customers pay us for support, and there are large organizations paying us for support, like Amazon, Uber, et cetera. So monetizing the install base is very important. Now, cadence: we release an LTS every two years. And our friends over at Red Hat, how often do they release their product? Very unpredictably. And they generally release their product for different platforms at different times. We're very predictable in our release cycle. We have a lot of packages. Our Main repository is the repository that we support: when we say we support customers, we support them when they're running a software configuration made up of what's in our Main repo. There's also another repo called Universe. We don't support what's in Universe. You can use what's in Universe, and it all does seem to work very well, but we don't support it, mainly because the projects in that repo are just very unpredictable. Lots of packages: 26,000 source packages. Compare that to Red Hat; it's quite a big number. What do developers need? They need architecture support. Developers like to have broad architecture support, so we have that. One of the things I do a lot at Canonical is hardware enablement, particularly on Dell and IBM products. So we're shipping, for example, 48 laptops now with Dell. You can go and buy a Dell laptop with Ubuntu pre-installed; we're the only operating system other than Windows that Dell pre-installs on laptops.
You can buy an IoT device from Dell, like a Dell gateway device, for example, and that's also running Ubuntu pre-loaded. And of course all their servers and storage systems. So, a very predictable release cycle. The other thing that's very important, I think, for the open source community is how free we are. We're free and simple: free to download, free to install, free to mirror, free to update, free to run, free to access the source code. We're very different from our competitors on the Linux distro side in this area, right? Red Hat licenses everything; they charge for absolutely everything, including scale-up workloads. And on support, we're about half the price: we offer the same levels of support as our competitors at about half the price. So when you're looking at how much a Linux distro costs you, we could probably save you around 50% if you move that workload to Ubuntu, if it will run on Ubuntu. So 70% of public cloud guests, as I said earlier, are running on Ubuntu when they're running on Linux, and most of them are running on Linux. If you wonder what infrastructure your Uber app is talking to when it's running on your smartphone, it's talking to an infrastructure running Ubuntu today. We're very proud of that. We tend to move customers from a very monolithic architecture to a very open source architecture. This is actually a trend that's happening in the industry anyway, so it's really helping us: moving from proprietary, closed source software to less proprietary, more open source architectures. We're definitely seeing that when it comes to what I call revenue-generating applications, which is where a lot of development money is going at the moment. If you think about the types of apps that big-name customers are using to simplify their business and give better customer service, it's those custom apps. So we're very prominent in the cloud.
We have lots of certified cloud partners offering standard Ubuntu images, or highly secure Ubuntu images, today. So you get choice of compute, scale-out storage, choice of network. I'm going to go through this a little bit quicker. One of the things that we offer with Ubuntu is a lot of very powerful tooling. Who knows about Landscape? Who knows about Satellite, on the Red Hat side? When you run a lot of Ubuntu, or a lot of any operating system, you need a really good management tool to manage the images that are running and to update those images at once. Many of the images our customers run are not internet-facing, so you might need some on-premise infrastructure to keep those images up to date. We have a product called Landscape. It's a great tool, and it's actually free up to a certain number of managed images. Basically, you can use it to manage a massive Ubuntu-based infrastructure running any type of application. What we tend to find is that when software pricing goes down, operational costs go up. You've probably noticed this with open source software. Open source software is fantastic, it takes a lot of licensing costs out of your business, but the OPEX cost of managing those different components goes up quite significantly, because you generally don't have the tooling that you have with proprietary software. So what we want to do is provide you the best of both worlds: the tooling you expect with proprietary software, for use against open source software like Postgres. We want to make things nice and simple, like Google is nice and simple, like the iPad is nice and simple. OpenStack is one of the most complex applications, so we use it as an example of a very complex infrastructure.
If you think about a big application, it's made up of lots and lots of microservices, and they're all supposed to work together, have no single point of failure, be able to scale out, and potentially scale between clouds and things like that. OpenStack is very much like that. Given that we have so many touchpoints with OpenStack, we talk about OpenStack all the time. So this is what OpenStack really looks like from an architecture perspective. You show an average customer that and they go: no thanks, I'll carry on using VMware. By the way, I used to work at VMware, so I used to use this actual slide against open source software. These are the packages that make up OpenStack, but the ones in red are the ones that are really used. We tend to look at Linux a little bit like this. What Linux is supposed to do is take the complexity of the hardware out of the equation, so that an application will run on pretty much any type of hardware. That's what an operating system is supposed to do. What an operating system is also supposed to do is take the complexity away from the user, the administrator or the developer, and that's what expert tooling is needed for. So here's another one. You all probably understand all of these diagrams; I do not. There are really three layers to an application, three segments in the infrastructure, and we need to manage the applications, the platform they run on, and the hardware they run on. So we put a lot of effort into making sure that our operating system works really well on all the different types of hardware configurations out there. So we have a product called Juju. Who's heard of Juju? Have you noticed that at Canonical we have really strange names for things? Not many people can say that really well. Juju apparently means magic in another language.
It's an online service modelling and orchestration tool where you can take open source software like Postgres and deploy it. I'm going to show you Juju. MAAS, if you're wondering, is Metal as a Service. What MAAS will do is provision bare metal hardware in a cloud-like way. So you've got lots of bare metal being provisioned into a data centre, and we can deploy Ubuntu and other operating systems, including Windows, to that hardware in a very efficient way. The reason that's interesting to customers is that a lot of them want their hardware to be doing different things at night than during the day, so they can repurpose it. This is particularly common in higher education, actually: they want to repurpose hardware to do other things at certain times of the day, or at weekends. That's what Metal as a Service can do for you; it can make your bare-metal infrastructure behave more cloud-like. So what I'm going to do is attempt one of those live demos I talked about. You can go into Juju online, set up an account, and start configuring an application. The first thing you want to do, of course, is find Postgres, and you get that from this menu here. It's all very live. And we want to add that to the model. I talked about Landscape a minute ago, our management product for managing your Ubuntu-based infrastructure, managing hundreds or even thousands of images. We can deploy that to the model, to the canvas. And then what Juju enables you to do very nicely is connect these together. So we actually need a database for Landscape. And if you think about how complicated it is to connect things like Postgres to other things, you spend a lot of time in configuration files, I would imagine, .conf files. And that's not a good use of your time.
So if you can use a nice, easy-to-use tool like this, then you can get a monolithic application working really well together. Let's pick another one; let's do some monitoring. Nagios. Nagios is a good one for demoing because I know its charm is always kept up to date. These are called charms: Juju is the framework, and charms are what I'm using to push these applications into my cloud. So I connect Nagios, of course, to Postgres. And there we go. Now Nagios and Landscape are using Postgres as their database. And I could keep going here: I could cluster up Postgres and add more and more applications. Now, if I was to connect Landscape or Nagios to Postgres without Juju in this presentation, you'd see me hacking away at the command line for quite a while. What I can then do is deploy these changes. If this was aware of a private cloud infrastructure that I have, that would be an option in this menu here. But I can deploy this straight into Amazon, straight into Microsoft Azure and into Google's cloud platform. If you have those available, you can push this configuration straight into one of those public cloud infrastructures. So this is really nice. Juju makes your life really easy; it magically makes things that are very complicated work really well. And that's what I was really referring to: when it comes to dealing with open source software, you need to have really good tooling. Now, this is actually not the slide that I wanted to present, I don't think, but anyway. Excuse me a second here. So, the title of the presentation when you walked in the door, which I'm sure you are sitting here waiting for me to talk about, is about containers. The other account that I have spent a lot of time with at Canonical is Docker. Who's using Docker for anything at the moment? Oh, good. Great. Now, Mark Shuttleworth is a much better presenter on containers than I am.
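For reference, the same model built on the canvas in this demo can also be expressed from the Juju command line. This is just a sketch: the charm names (postgresql, landscape-server, nagios) and the relations are assumptions based on the charm store of the time and may differ in practice.

```shell
# Sketch of the canvas demo from the Juju CLI (Juju 2.x era).
# Charm and relation names are illustrative; check the charm store.

# Bootstrap a controller on a public cloud you have credentials for.
juju bootstrap aws demo-controller

# Deploy the three applications shown on the canvas.
juju deploy postgresql
juju deploy landscape-server
juju deploy nagios

# Relate the applications so Landscape and Nagios use Postgres as their database.
juju add-relation landscape-server postgresql
juju add-relation nagios postgresql

# Watch everything come up.
juju status
```

The same commands work whether the backing cloud is AWS, Azure, Google or a private cloud Juju knows about; only the bootstrap target changes.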
And you can find his presentations on YouTube; they're really good. Docker is what we call a process container architecture. It's designed for apps. Apps are invariably processes, and a single app is multiple processes. Some of those processes are shared, like, for example, a Java virtual machine or Python or something like that; they're shared between applications. Then there are application-specific microservices. And then there's container orchestration; an example of container orchestration is something like Kubernetes. Who's using Kubernetes? Right, very good. So we actually distribute Kubernetes, the project from Google, and support it on Ubuntu today. We also support Docker's Swarm and CS Engine products that they distribute. What I talk to customers, and partners like Dell, a lot about is this: the great thing about containers is that you get virtual machine-like capability for your apps. They can move around the data center. They can scale out and scale back. You can upgrade them very easily. You can snapshot a process container really nicely, roll it back, roll it forward. When I worked at VMware for many years, one of the things I demonstrated to customers that totally wowed them was snapshotting and vMotion, being able to live migrate around your data center. A process container is like vMotion, but just for an app. The problem with process containers is that a lot of the time you do have to rearchitect your app to be container-like, or to support the Docker container architecture, for example. Who's running Linux on top of Linux with a hypervisor today, like KVM or something like that? OK, very good. So the other end of the spectrum, of course, is virtualization: you've got a hypervisor, a kernel, hypervisor, kernel, app. And that just seems like absolute overkill now, doesn't it, because you've got all these kernels running that you have to patch and manage.
You've got this massive amount of software consuming storage. So what Canonical has been working on for the last few years is a project called LXD. Docker process containers are based on an original project called LXC. LXD makes a process container look like a virtual machine, enabling you to get virtual machine-like capabilities from a container, enabling you to move to containers much more quickly, enabling you to run Postgres in a container. We know that databases are bad when they're running in hypervisor-based virtual machines because of the latency. When I was at VMware, I was always told not to run Microsoft SQL Server or Oracle in a virtual machine, because you've got this latency: there's an OS there and an OS at the bottom, and everything needs to pass through those OSs before it gets to the hardware, the network and the storage. That slows things down. Well, containers share the kernel, and machine containers share the kernel. You've got all these machine containers running, sharing the underlying kernel, which also means those containers can see hardware more easily than they can as virtual machines. I'm going to show you LXD, because this is one of the things I'm actually fairly capable of demonstrating. With LXC and LXD, we use the LXC API, which is really powerful. No, we don't have a GUI, and no, there isn't one. But we don't have GUIs, do we, at these types of conferences? I think GUI is a bit of a swear word at these conferences. So when I want to see the state of my machine containers, I type lxc list. Installing LXD is as simple as sudo apt-get install lxd. Bang! It's installed. Now, we have a very interesting configuration in our Linux distro when we're using machine containers, because we use a file system that supports things like snapshotting and rollback.
And it's not a standard Linux file system. We are the first distribution that fully supports ZFS. For those of you who have been around the industry for a while, you'll remember ZFS was originally developed by Sun. It's a 128-bit, copy-on-write file system, very powerful. It also supports things like unlimited numbers of snapshots and unlimited file sizes. And when you start dealing with virtualisation, you need a really powerful file system; this is one of the reasons why VMware built VMFS, the virtual machine file system, from scratch. So when you use LXD, you have the option of using a standard Linux file system, or of using ZFS on the partition you want to run your machine containers on. This enables me to support things like snapshots, and as you can see, I've got some snapshots here with these machines. So if I want to create and launch a new machine container, I type lxc launch and I'm going to call it mc, or machine4. Oop, nope, I won't do that. It's not as easy as that, because I've got to specify the OS. So, Ubuntu 16.04: lxc launch ubuntu:16.04 machine4. This will go and get the Ubuntu 16.04 image (I actually already have it cached locally, but otherwise it will download it from the cloud) and create the machine container from that image. I can just as easily create a CentOS image, an openSUSE image, a Debian image. Easy, right? It just goes and gets those images. All right, "could not find the requested image", I don't know. Okay. I don't know why that is; I think it might be because I'm not connected to the internet or something. This actually worked perfectly yesterday. So, all right. Luckily enough, I already have some images running here. When it does launch, you can then go into the image, machine3, which as you can see I've got running, and I'm now in that machine container.
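The install-and-launch flow just described looks roughly like this on Ubuntu 16.04. Treat it as a sketch: the container names are illustrative, and ZFS is chosen during the interactive setup rather than on the command line.

```shell
# Install LXD plus the ZFS tools on Ubuntu 16.04.
sudo apt-get install -y lxd zfsutils-linux

# Interactive setup; one of the prompts offers ZFS as the storage backend,
# which is what enables cheap snapshots and rollback.
sudo lxd init

# Launch a machine container from the Ubuntu 16.04 image
# (downloaded from the image server on first use, cached afterwards).
lxc launch ubuntu:16.04 machine4

# Other distros come from the public "images:" remote.
lxc launch images:centos/7 centos1

# List containers, their state and their IP addresses.
lxc list
```

The "could not find the requested image" error in the demo is what you see when the image server is unreachable, which fits the no-internet theory.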
I'm in that machine container, and as you can see, the container has its own IP address, ending in .38. This container environment I'm in is totally isolated from the host OS, completely isolated. I can do anything in here, I can snapshot it and roll it back, and it has no effect on the base OS whatsoever. What I can also do with this machine container is live migrate it between hosts that are running LXD, all right? And the tools I use to do this manipulation can run on Windows, on Mac, on Linux. You can go and get the tools that leverage the API, connect them to the Ubuntu instance running LXD, and then manipulate all of those machine containers. Really powerful stuff. So, you know, I can do apt-get update, for example, and it goes out to the cloud and updates all the package sources and things like that. Then what I can do is install Postgres; I'm not sure whether I have it installed in this container or not. I'll have a look in a second when this completes. So there we go: I'm now installing Postgres in this machine container, live. Now I've got Postgres running in a container, right? And it runs better in a container than it would ever run in a virtual machine. As a matter of fact, with machine containers we get about 20 times the density of virtual machines: you can run 20 times more machine containers on the same physical hardware than you could virtual machines. Powerful stuff. It enables you to get to containers much more quickly than doing something like Docker, for example. This takes a bit of time to complete the installation; we can go back to it in a minute. But as I tried to show, launching a new container is very quick. I'm just typing the command in wrong. I won't bore you with how long it might take me to learn the command correctly. So, things like OpenStack.
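The inside-the-container steps from this part of the demo map onto commands like these. The container name and snapshot name are illustrative, and the live migration at the end assumes a second LXD host is configured as a remote with CRIU available on both ends.

```shell
# Get a shell inside the running machine container.
lxc exec machine3 -- bash

# Inside the container: refresh package lists and install Postgres.
# This only touches the container, never the host OS.
apt-get update
apt-get install -y postgresql
exit

# Back on the host: snapshot the container, then roll back to that snapshot.
lxc snapshot machine3 before-changes
lxc restore machine3 before-changes

# Live-migrate the container to another LXD host
# ("otherhost" is a placeholder for a configured remote).
lxc move machine3 otherhost:machine3
```

Because the snapshot sits on ZFS, both the snapshot and the restore are near-instant copy-on-write operations rather than full copies.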
Now, if you think about OpenStack as a cloud platform, what do cloud platforms usually do? They provision multi-tenant instances of environments to users. And OpenStack clouds, we're seeing, are really used primarily for DevOps: provisioning machine instances to developers. And the machine instances being provisioned are now, more and more often, machine containers as opposed to virtual machines. So if you're using KVM, if you're using Linux on top of Linux, have a look at LXD; it is a much more efficient solution for you. More OpenStack runs on Ubuntu than on any other distro, and we are leading the way in pushing OpenStack to leverage machine containers over virtual machines more than before. So what I wanted to show you, going back to what I was showing you with Juju: here are some screenshots of Juju. Juju can provision these instances, these software components, really nicely into machine containers now. You set up Juju with an LXD-based environment that it's aware of, and then you can go to the canvas, just as we did in the live environment, get all this complex software, and push it into your private cloud or a public cloud. And it'll all work really nicely, configured as containers. So we're built for scale. Ubuntu is used, as I said, by the biggest names in the cloud because it scales really well. The whole concept, I think, when it comes to application architecture now is focused on scale-out, no-single-point-of-failure workloads. So you can take instances of your application and scale them out very quickly with the tooling that we offer. Often the organisations we're working with are using OpenStack to do that. We have a reference architecture for OpenStack, and a number of customers are using different types of OpenStack configurations. Some organisations are just using Ubuntu for their OpenStack, some big names here.
Then we have just the OpenStack packages in our repo. Then we have our Canonical OpenStack reference architecture. And then we have a service offering called BootStack, which is where Canonical will run your cloud for you. A lot of organisations start off with that, because they don't want to scale up their expertise until they understand that OpenStack is a good solution for them. So we're managing a lot of clouds, even for some big telcos like Tele2, in production today. BootStack stands for build, operate and optionally transfer: we build the cloud for the customer, we operate the cloud for the customer, and then we will optionally transfer that cloud back to the customer if they want to take over control of it. That's what BootStack is focused on. So, excuse the chaotic behaviour of my presentation here; I really expected that demo to work better than that. I'll tell you what else we do a lot of very interesting work on, and that's software-defined networking. We are one of the pioneers of what's called NFV. Is anybody using NFV and VNFs in production today? Things like Open vSwitch? What containers also make very easy is creating your own VNFs, or virtual network functions. If you think about network security, about how you do firewalling and proxy servers, how you deploy quite rigid network configurations, when it comes to virtualisation that becomes a very difficult thing to do. With machine containers, you can much more easily configure a very secure network infrastructure for your virtual cloud. Basically, what you're able to do with LXD is set up things like Squid and Open vSwitch and connect them, in a very automated way, to the various components of your application. We work with these vendors here to get NFV and VNFs working with LXD and with KVM on Ubuntu at scale, and we have a lot of successful deployments.
We're actually doing some very interesting work with Facebook, with some of their Wedge technology on the switching side. We're doing work with open switches from companies like Dell and Quanta. Top-of-rack switching is now going to do a lot more than just basic switching: it's going to do things like VNF configuration, compliance management and OS provisioning. Things like our Metal as a Service, for example, can run on the top-of-rack switch, meaning you can then provision operating system instances from the top-of-rack switch down to bare metal hardware. So that's a really nice thing to use. So, Juju: if you haven't used it already, have a go at it, because it's a really nice way of provisioning and configuring Postgres. The charms themselves are open source, so you can actually add more and more functionality into the charms, such as database snapshotting, quiescing, clustering, all the sorts of things that you would probably want to do with a database. And you can go to jujucharms.com and download, well, not really download, but deploy into the canvas, some very complex configurations that are actually used in production. I can go back to my Juju canvas here, search for, say, a web app and management bundle, which is a very complex application configuration, and deploy that into my canvas. I should probably remove what I deployed before. Now, these are configurations of live infrastructure; as you can see, it's extremely complex. You can go and get examples of what Uber's architecture looks like and deploy that into your canvas. This enables you to very easily deploy a very complex infrastructure into your cloud, to use as a reference architecture, for example, for some research purposes. Another thing I wanted to make you aware of is this: who's heard of BuiltWith? You have heard of it? Good.
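Those canvas bundles are just declarative descriptions of applications and relations, so a minimal one can be written by hand and deployed with a single command. This is a hedged sketch: the charm names and relation endpoints below are assumptions, not copied from a real bundle on jujucharms.com.

```yaml
# bundle.yaml: a tiny illustrative Juju bundle (Juju 2.x syntax).
series: xenial
applications:
  postgresql:
    charm: cs:postgresql
    num_units: 1
  landscape-server:
    charm: cs:landscape-server
    num_units: 1
relations:
  # Endpoint names are assumptions; check each charm's metadata.
  - ["landscape-server:db", "postgresql:db-admin"]
```

Running `juju deploy ./bundle.yaml` against a bootstrapped controller would recreate this model in one step, which is all the canvas is doing behind the scenes.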
What BuiltWith does is a little bit like W3Techs: you can go to builtwith.com and type in, for example, Airbnb, if you ever wonder what Airbnb's infrastructure looks like. Type in airbnb.com there and it will let you look at the detailed technology profile for Airbnb. You might even be able to find some people at Airbnb that you could potentially talk to, because somehow it finds out email addresses of people. So this is what Airbnb's infrastructure looks like: web servers, SSL certificates, and what we're very proud of, of course, is the operating system, which is Ubuntu. So if you're thinking about doing a startup, your own Airbnb competitor, it might be a good idea to have a quick look at what Airbnb are actually using, because they've obviously done a lot of work on choosing the right things for their business. So have a look at BuiltWith. We use this to prove that Ubuntu is the platform for many of the primary... Lyft is another one, right? So let's try Lyft: lyft.com. So, again, right? Now, the reason why I bring this up is not just to show that Ubuntu is running most of these big applications, but also to show you how complex they are, and to demonstrate that tooling like Juju can really be leveraged to deploy, manage and orchestrate these types of complex environments, where you've got so much open source software that needs to run really well together. And things like LXD can be used to run all of these instances of these components. And then, like I said before, you get all of the benefits of virtualisation without the downside of completely rearchitecting everything to run in a container. So that is the underlying message: Juju, MAAS and LXD. We do a lot of testing at Canonical, a lot of different hardware testing and software configuration testing.
We have a lab called OIL, which stands for OpenStack Interoperability Lab. Within the OpenStack configurations we test different databases, different hypervisors, and that sort of thing. So we come out with reference architectures all of the time, and the continuous integration we do as a service for many of our partners, but our customers can benefit from it too. We publicise the results of the testing we do at ubuntu.com, so you can just go and download that and see what configurations potentially work best for you. Obviously tooling is very important. Here's Landscape. Landscape looks at the utilisation of instances of Ubuntu that are running in your data centre. We also have a lot of customers that use Linux desktops. I've got one that's just about to deploy 20,000 Ubuntu desktops; they're getting rid of Windows, and they need a centralised management tool to manage that fleet. Landscape can do that for them. We also have a Livepatch service for live patching kernels. So where you are running Ubuntu in production, we can actually live patch kernels without rebooting. So this is pretty cool. Customers are a bit worried about us doing that during the day at peak times, so we tend to suggest that this sort of thing is not done then. But ultimately you never really have to reboot anything anymore, which is rather nice. And it's robust, across many different vendors. So who uses Windows 10 on the laptop today? Dare I say Windows at this conference? Good. So we have a very interesting relationship with Microsoft. We're the only one of the Linux community that really works closely with Microsoft, on various areas. Obviously we started off with Azure, where they push Ubuntu as their primary Linux distribution in Azure. But we also have Ubuntu pre-installed, or embedded, in Windows 10. So Ubuntu is in Windows 10. You just go to MSDN, get a free account, add a shim into Windows 10, and you can run a native Ubuntu bash shell.
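As an aside on the Livepatch service mentioned a moment ago: it is delivered as a snap, and enabling it is roughly this sketch. The token placeholder is yours to fill in from your Ubuntu account:

```shell
# Install the Livepatch client (delivered as a snap)
sudo snap install canonical-livepatch

# Enable it with your token (placeholder - obtain yours from ubuntu.com/livepatch)
sudo canonical-livepatch enable <YOUR-TOKEN>

# Check which kernel fixes have been applied without a reboot
canonical-livepatch status
```

From then on, eligible kernel security fixes are applied to the running kernel with no reboot required.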
What that enables you to do is run Postgres on Windows 10 without a virtual machine. So can you run native Postgres, the Linux version of Postgres, on your Windows machine without a virtual machine? The answer is yes. We're even thinking about talking to Microsoft about doing the same thing on Windows Server. So that might be nice; it gives you a choice of what you use. So it really is Microsoft and Linux and all the different distributions: public cloud, on-premise cloud, with containers. This is a true cloud approach to compute infrastructure. We also have a lot of customers that are thinking about moving off public cloud and bringing things on-premise. We all hear that Netflix is considering getting out of AWS. We know that Dropbox did it. Others are talking about it. Is the public cloud the solution for all organisations? Sometimes it is at the beginning. When you are a startup, the public cloud is really good because it gets you scale; it gets you all the things that are very expensive at a very low price. But when you are really established and you have a massive number of users and you use a huge amount of bandwidth and storage, is the public cloud the best option for you? Often, no. Often, it's too expensive. Once you are at a certain size, it's less expensive to bring things on-premise. So what you really need is a hybrid cloud approach. The tooling that I showed you, Juju, containers and things like that, enables you to move workloads around as you need to, giving you the full flexibility to bring things on-premise. So that is the end of my presentation. Any questions from anybody? Yes, sir. In general, actually: there was a very large amount of commentary, shall I say, in the open-source community about Canonical distributing ZFS. But we confidently do it; there's no licence infringement on our part. We've actually even spoken to Oracle about this, because they now own it through their acquisition of Sun.
We are shipping OpenZFS, and it works great. I actually use it natively as the file system on this laptop where I'm doing the demonstration. Obviously ZFS is very complex. There are things like the ARC that you have to configure in ZFS for caching, making sure the cache is sized right for the I/O that you have. ZFS is a much more complex, and more capable, file system than any standard Linux file system like ext4. But yeah, we love ZFS. And you can combine ZFS and Ceph together. Obviously more Ceph runs on Ubuntu than on any other Linux distribution. So we have a lot of reference architectures with that combination: ZFS for data protection, because it's a copy-on-write file system and it does RAID and all of this in software, and Ceph, which will do things like erasure coding and scale-out storage. So the combination of ZFS and Ceph is really nice. Yes, sir. I don't know the answer to that, but I can certainly find that out for you. Yes, that means... Yeah, is it really? OK, that's interesting to know. That might be the problem I have on this laptop, actually, because I have SSDs in this laptop, and I think the performance should be better given that it is SSD, right? OK, any other questions? So, yes, sir. So when you actually hit deploy, you get the choice of cloud where you want it to deploy to. So you can set up Juju so that it's aware of different cloud infrastructures to deploy to, so dev, test, production. There is. If you... You can build OpenStack clouds, connect them to Juju, have Juju deploy to those clouds, and have those clouds configured in the way that you want them to be configured for what they do, right? So it's not really Juju doing that. Juju is not a cloud management system like some of the components that you get with OpenStack, for example. It's a software provisioning, automation and management tool, basically.
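In Juju 2.x terms, that answer looks roughly like the following sketch; the cloud name `my-openstack` and the controller names are hypothetical:

```shell
# List the clouds Juju already knows about (AWS, Azure, GCE, etc.)
juju clouds

# Register a private OpenStack cloud (interactive; prompts for endpoint details)
juju add-cloud my-openstack

# Bootstrap one controller per environment
juju bootstrap aws dev
juju bootstrap my-openstack prod

# Target an environment, then deploy as usual
juju switch prod
juju deploy postgresql
```
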
So it will deploy to what you tell it to deploy to, and if you tell it to deploy to a certain environment that has a certain size or certain configuration associated with it, it will do that. But it won't reconfigure that environment. Very good. Well, thank you very much for your time. Apologies for the demo issue.