Throw it into the crowd, we'll cue the music. My name is Dustin Kirkland. I'm going to be assisted in this presentation today by Alessandro Pilotti. It's been a long day, I apologize already. That's setting the tone here. We're talking about guests in Ubuntu OpenStack. You know, a key theme of infrastructure as a service is that guests are what make the compute portion of infrastructure as a service interesting. And it's the variety of guests that keeps it interesting. If you've been to a house party, probably with that music playing right there, and everyone at that house party is exactly the same, it's no fun whatsoever. Who wants to go to that party? You want to go to the party that has all sorts of crazy guests. That's a good party. And in fact, if you're looking at a real-world data center, there is quite a bit of variety. There's no such thing as a 100% homogeneous environment in a data center. So let me ask you, as soon as I find my mouse and click to the next screen. Hello. Let me ask you a question. Who here is staying in an apartment or a bed and breakfast? Maybe through Airbnb? Anyone else? That's great. Airbnb is such a disruptive model, that HomeAway style of putting your house up for rent, where you rent a house instead of staying in a hotel. But if you're not in a B&B or an apartment, you're probably in a hotel, right? Anyone here in a hotel? Okay, excellent. And we're guests in this town, just like guests in an OpenStack cloud. Vancouver is fantastic. I took a seaplane tour last night. If you haven't done that already, I highly, highly recommend it. When you booked your hotel accommodations, and most people booked a hotel, what information did you provide? Very basic information, right? Billing information. You had to provide a name, the dates you were going to stay, how long you were going to stay, maybe how many people were going to stay, and most importantly, a credit card number, right?
But it was just basic information, right? You didn't have to say what you're doing in town or what you're doing in that accommodation. That's your own business. You're using that apartment or that hotel as infrastructure. Now, if something goes wrong in that hotel or that apartment, and you call the landlord or the owner or the lobby, the front desk, you know, the hot water has gone out or the electricity has gone out, you would expect support from your host, right? That's a basic guarantee. That's part of what you're paying for. And, you know, hotels, apartments, that's classic. That is the most classic example of infrastructure as a service. The brick-and-mortar infrastructure as a service has been around since Roman times, in fact. The first documented apartments were actually in Rome. And that hot water is gonna get fixed regardless of your personal background or your brand allegiance, that you normally stay at Hilton but you're in a Marriott this time. Who you are should never be part of the equation when you're obtaining support from your infrastructure provider. What you're doing inside of that guest is very separate from that infrastructure provider. And that's exactly how we operate in Ubuntu OpenStack. We have a very open platform, in fact, and we're very accepting of all guests. All guests, all types. Anyone recognize this sign? This is a classic sign from the 1950s. If you look up the history of the sign, it comes from the Amish culture. As the Amish were building their own handmade furniture and so forth, many people didn't realize that their goods were available for sale to non-Amish people. And so this very iconic sign was created in the 1950s. In fact, in real clouds, you're gonna find every type of guest in the world in there. Hopefully you see a lot of Ubuntu. That's part of my job, to convince you to run a lot of Ubuntu.
But again, in the real world, there's CentOS, there's Oracle Linux, there's Red Hat, there's SUSE, there's Debian, there's Windows, and lots, lots more, in fact. And it's full virtualization that makes that possible. We've talked in this room today, yesterday and the day before, quite a bit about Linux containers and why LXD is such an incredible technology for packing high-density guests into one system. Something like 14 times as many. But that's not to say that containers are a complete replacement for virtualization in all cases. Full virtualization, KVM, still has some very important advantages. The most important of which, I think, are the limitless possibilities of what you can do in that guest. It is an emulation of a full hardware platform. And perhaps the best example of that right now is Windows as a guest, and Alessandro is going to talk to us a little bit about that. So my company, Cloudbase Solutions, is responsible for quite a lot of work that we did together with Canonical and with the Ubuntu community. Actually, we are extremely pleased about what we did in the last two years. The result was published this week. Actually, on Monday, we went live with a big catalog of Juju charms, okay? It covers a very, very large set of typical Microsoft workloads, which I'm now going to introduce to you. So the first thing that we did with the OpenStack community was porting Nova Compute to Hyper-V. So we have full Hyper-V integration. What we always looked for were ways in which we could also ease the integration of these new Hyper-V compute nodes into a full OpenStack cloud, okay? Juju and MAAS provide an amazing way to do this. So the first thing that we did was to create a charm specifically for Nova Compute on Hyper-V, which integrates all of the open source work that we did with all the great deployment options that Juju can give you.
Here's how it actually looks. So if you guys are familiar with the Juju GUI, this is really a killer feature. I mean, there is no other deployment solution that can offer something like this, something in which you can really visually see what's going on. I mean, if you're a really hardcore DevOps person, of course you don't need a graphical user interface for this, okay? You can just go into a command-line shell and do everything, okay? So obviously you can use Puppet, Chef, whatever tools you want with it, or obviously you can also use Juju on the command line. That's what you need anyway for more complex scenarios, but there is nothing like this view to explain very clearly how your OpenStack deployment works. So this is basically the plain vanilla OpenStack charm bundle, the one that you can use with KVM and so on. Very important: we include here not only Hyper-V compute nodes, but also KVM compute nodes, okay? You have both of them included. On the top of this view, you can see that you have a failover cluster. You have an SMB file server and you have a Windows Cinder charm, okay? So this is saying that the Windows Cinder charm that we have here is using a full scale-out failover cluster, okay? That provides a great degree of fault tolerance and load balancing for storage. Using Microsoft technologies like this is actually one of the best practices for Hyper-V. Under it, you can see a very common component in every Microsoft infrastructure, which is an Active Directory domain controller. All those charms scale out automatically the moment you increase the number of nodes. So it's totally and absolutely transparent for you. The relationships between the charms do all the magic.
For example, when you tie a Cinder charm to Active Directory, or the Nova Hyper-V charm that you see under Active Directory, they will be automatically joined to the domain, and all the credentials, authentication, authorization and so on will be automatically handled. For example, the presence of that domain controller over there allows Hyper-V to perform live migration in a transparent way. So Hyper-V is probably the hypervisor in OpenStack that offers the easiest way to handle live migration. It's really transparent, easy, zero configuration issues, especially when done together with something like Juju, which hides, let's say, all the complicated configuration options that the operating system requires behind this. Okay, next. So one thing that we really like about Juju is that we can use it to deploy the actual OpenStack, but we can also use it to deploy the workloads on top of it, you know? It's just one single technology to rule them all. It's amazing. In particular, in terms of workloads, we have all the main Microsoft workloads here. And not only the ones on this slide; there were simply too many to put on a single slide, okay? Here we have some of the most important ones. For example, SMB, which is what we call the charm for the file server, which uses the SMB3 protocol. Windows Server 2012 R2, which is here as a placeholder for VDI. VDI is an excellent use case in OpenStack and works extremely well when used together with RemoteFX, which is a feature in Hyper-V. Then you have Exchange, SharePoint Server, SQL Server. SQL Server in particular comes in two flavors here. SQL Server Express, which is free for everybody, okay? SQL Server Express itself is free, not only the charm. And then you have SQL Server AlwaysOn, which is a full-fledged cluster, okay? Fully fault-tolerant, highly available, scalable, and so on, which is also completely deployed via our charms.
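To give a flavor of how those relations get wired up, a Juju bundle is just YAML: services on one side, relations on the other. Here's a minimal sketch; the service names and charm URLs are illustrative placeholders I've invented, not the exact charms from our catalog:

```yaml
# Hypothetical Juju bundle sketch. Service names and charm URLs are
# placeholders for illustration, not the published Cloudbase charms.
services:
  active-directory:
    charm: cs:win2012r2/active-directory
    num_units: 1
  nova-hyperv:
    charm: cs:win2012r2/nova-hyperv
    num_units: 2
  cinder-windows:
    charm: cs:win2012r2/cinder-smb
    num_units: 2
relations:
  # Adding these relations is what triggers the automatic domain join
  # and credential handling described above.
  - ["nova-hyperv", "active-directory"]
  - ["cinder-windows", "active-directory"]
```

Scaling out is then just a matter of increasing num_units; the relations take care of joining each new node to the domain.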
Here is an example deployment, in which you can see SQL Server, Active Directory, and a failover cluster. The failover cluster also uses SMB for handling the quorum, when you actually need an odd number of nodes, or rather an odd number of voters. So a complex technology requirement made extremely easy thanks to Juju. Okay, next, now for something completely different. We don't support only Windows; we support CentOS as well. So we contributed CentOS support to Juju. We are big friends with the CentOS community, and we believe that Juju is a great tool that needs to work on as many platforms as possible. So after working on the Windows part, we decided to contribute CentOS support as well. It's something that merged no more than three weeks ago, and it's ready to be used. We already have some charms developed for it, and we're glad to see what the community will bring here. Okay, all these goodies are available in the charm store, and also at cloudbase.it/juju. Okay. Thanks, Alessandro. Yep. So the other new guest that we wanted to talk about a little bit here today is something called Snappy Ubuntu, Snappy Ubuntu Core. Snappy Ubuntu is a new way of thinking about, and a new way of using, the same Ubuntu that hopefully you're well familiar with. It's binary-for-binary, hash-for-hash, the same compiled binary code as traditional Ubuntu, but it's rolled together and deployed and updated in a different way. Transactionally. Transactional updates are so important in a modern cloud world, and especially on embedded devices and smart appliances. So when we talk about Snappy Ubuntu, it's Ubuntu, but the way that the system boots and updates is done in this atomic, transactional manner. And the way that actually works is through something called A/B partitions, A/B versioning.
So, a Snappy system can run as a guest in OpenStack, that's obviously why I'm talking about it right now, as well as in other public clouds and on bare metal, like a Raspberry Pi or something like that. And in all of those cases, the root disk is carved up into multiple partitions. There's always an A partition and a B partition for the root operating system, as well as another partition for applications and read-write space. So the A and the B partitions actually contain the kernel and the OS. They're mounted at runtime in a read-only manner so that no application, no administrator, can even tamper with the root file system. And that's done for security and reliability and a number of different reasons. Anything that you need to write, any configuration information you need to modify, is actually put over in the read-write space on the side. Now, we call these bundles of software on a Snappy system snaps. So there's a kernel snap, and that includes of course the Linux kernel and any other key information that we need to boot the system; typically what you would normally think of as being in /boot. The rest of the user-space OS is handled in another snap, the OS snap. And then on top of that sit one to many application snaps, app snaps. Each of those app snaps we'll get to in a second, but the key point is that the kernel and the OS are the primary pieces that are provided by Canonical along with Ubuntu. And then every time a Snappy system runs, it's running out of either the A or the B partition, in that read-only manner. Now, it's running off of one of those two partitions. The other partition is actually the target for the next update. So the next update comes, and that happens in the background while the system is doing whatever it does, right? Maybe it's a web server, maybe it's a refrigerator.
But while that system is doing what it's supposed to do, updates might come down in the background, and they get written out directly to the other partition, the one you're not running in. And then, at an opportune moment, on the very next reboot, the system will boot from the other partition, which contains your update. And the update happens atomically, as a transaction. If for some reason that boot fails, we can roll back that transaction, right? We boot from the previous partition, which was known good and working before, and the next update will then happen to the other partition. We do the exact same thing with apps. So whereas for the OS we only store an A and a B, a one and a two, for apps we store multiple versions, up to many. You can configure how many different versions of the app you want. You can always roll back to a previous version of that app if you need to, or you can purge some of the unneeded ones. Snappy is minimal. It's the smallest Ubuntu that we've ever created. We call it Snappy Ubuntu Core because it's built on the core packages. Core is a special designation in Ubuntu for basically the most minimal system that we can still call Ubuntu, that we can still support as Ubuntu. Our goal with Snappy is not, as with, say, the Ubuntu Server that you find in the cloud or the Ubuntu desktop, to ship a small but broad set of packages that we find useful and that most administrators might expect to find on a system. Snappy, on the other hand, is intended to provide the smallest, core-est set of functionality that you need, on top of which you then build anything else you want. We're finding that to be an emerging trend and a fantastic best practice in the appliance space and in the microservice space in the cloud. So here's just sort of a comparison of what traditional Ubuntu and what Snappy Ubuntu might look like.
In traditional Ubuntu, the Debian-based Ubuntu, the Ubuntu you're probably running in your cloud or on your laptop, we've got a number of problems. One of which is that basically any package can write to any file on the system. Discretionary access controls mitigate that a little bit, but applications that might run as root can overwrite any other application's data as root on that system, and it makes for a mess. It makes for very difficult, painful upgrades. And while we've done, I think, a pretty good job of keeping Ubuntu upgradeable LTS after LTS after LTS, surely you've hit some situation on some server somewhere where an upgrade failed, and then you've got multiple packages in inconsistent states, and it's difficult, if not impossible, to actually roll that system back to a usable state. In the Snappy world, it's a much more crisply delineated set of interaction points between the kernel and the OS and the apps that sit on top. So this is sort of a pretty rudimentary diagram, but I think a fair diagram, of what Snappy looks like. So there's that kernel snap at the bottom, and any kernel config that might be required to boot a system. The OS is a little bit bigger. It's a little bit more than the kernel, but it's actually pretty small. And then the OS might have some writable files it needs. It has to write /etc/network/interfaces, for instance, and some other configuration, but it's very easy to see and to identify what's writable at that point. And then there are apps that sit on top of that, and each individual app has its own writable data. Snappy is also the most secure version of Ubuntu that we've ever created. Not to disparage traditional Ubuntu, but the way Snappy is architected, apps themselves are isolated. They're contained and they're isolated. And so we can use Docker, we can use Linux containers, LXC, LXD. That's one of the best practices around containing apps.
You can even put apps inside of virtual machines, inside of KVM, for instance. Individual snaps have their own writable area, and only that app can write to that writable area, in fact. And in doing that, we really ensure that apps are contained and isolated. Snappy is freely available. You can download it right now. There are 64-bit Intel versions. There are 32-bit ARM versions. We have more architectures coming online soon. You can run it on physical hardware. You can put it on a BeagleBone or Raspberry Pi or PandaBoard, on a whole slew of little small devices, or even Intel devices, an Intel NUC. But this is the OpenStack crowd. You certainly also can run Snappy in any OpenStack public cloud or private cloud. And it's a great way to get started and to understand this sort of new way of thinking about operating systems, a way of thinking that's coming in Ubuntu 16.04. I think you'll find a number of production web services, and there are already devices being produced, with Snappy Ubuntu on them. So from myself at Canonical working on Ubuntu, and speaking on behalf of Alessandro and Cloudbase: come in, we're open. We're very fortunate to have users like you, and we're at your service. That's the end of the presentation. I'm happy to take any questions about Snappy or anything else Ubuntu related, and I will yield to Alessandro on CentOS and Windows. Yes, sir. That's okay, I can hear you. I'll repeat the question: what if you need to change a config file outside of your own app? Fantastic question. It's almost as if you're a plant in the audience asking the best question you could possibly ask about Snappy. So the Snappy developer experience is very important to us. I should mention that Snappy is based on the work that we've done on the Ubuntu phone. If you're familiar with the Ubuntu phone, we've been working on that for almost three years.
What we realized as we set out to build a phone is that the traditional apt-get update, apt-get upgrade way of installing and updating and maintaining Ubuntu over time wasn't gonna scale for phones, for lots of reasons. It's hard to do, it's hard to get right, it's hard to keep track of things. So we had to create a whole new system called Click for the phone, and Snappy is basically Click 2.0. It's the next way of doing that. So what's it like as a developer? Snaps themselves, individual applications, contain all of their dependencies. They're totally self-contained. So from that perspective, it's a lot more like developing an app for Android or for iPhone, or maybe the classic way of developing a Java app, where you build a jar and all your dependencies go inside of that jar file. We also see a number of apps being developed in something like Go, a statically compiled language where, by default, everything, your entire chain of libraries, is compiled into a single binary; you can dynamically link as well now, but that's the default. So in Snappy, we've got a developer tool called snappy. Running snappy build . will build the current directory. All you need to build a Snappy app is a metadata YAML file that specifies a couple of lines of metadata about that app, what it is, and specifies what binaries to install. And then those binaries, or source files if it's scripts, anything you need, are in that same directory, and they all get sucked up into a snap. A snap is basically a tarball, okay? If you need to modify information outside of your application, first of all, the developer docs will really question what you're doing and what your motivation is, but if there is a good reason for that, there's another level of snaps that we call frameworks. To simplify, I didn't show it on this diagram, but to the side of the apps would be something called frameworks. Frameworks are special snaps.
They run privileged, as opposed to apps, which do not run privileged. Frameworks run privileged, and they're designed to mediate shared resources. So if there's a file or a device that needs to be shared amongst multiple apps, we create a framework that apps are then allowed to talk to, to do what they need. Maybe it's a shared file, maybe it's a database, maybe it's a USB device or something like that. Apps we expect to proliferate; there should be lots of them. Frameworks should be relatively few. For example, the first framework we created is Docker. Docker has to run as root for various reasons. It has to mediate shared resources, network ports for instance, and so forth. So for various reasons, Docker needs to run as a framework, and it does. But you can package your own snaps which are just Docker apps: docker pull whatever, docker run whatever, expose this port, voila. And you're an app running in a Docker container, and that's a snap, snappy build and so on. But it depends on a framework. So frameworks are the answer to the question, but I had to explain a bit about what frameworks are and why. Any other questions? Yes, sir. Sure you can; there's always a way to do that. The question I'd throw back at you is: why? Why would you want to do that? Why would you not want your system to be up to date and secure? It sounds like you want your updates off already. If it can't go out to the internet and ask for updates, then yeah, sure. I mean, you could put a firewall in place that says don't go out and look for updates, look for updates over here, and proxy it over here. That, yes, absolutely. The model that we're going for in Snappy, the motivation, the idea here, is that we're making it so safe to update that it just happens, right? It just happens; you're just always up to date, you're not susceptible to VENOM or Logjam or whatever the next branded security vulnerability might be.
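To make that A/B update-and-rollback cycle concrete, here is a tiny, purely illustrative Python model of the bookkeeping involved. All of the names here are my own invention for the sketch; this is not Snappy's actual code or API.

```python
# Toy model of Snappy's A/B transactional update scheme.
# All class, method, and field names are invented for illustration.

class ABSystem:
    def __init__(self, version):
        # Both root partitions start with the same known-good OS.
        self.partitions = {"a": version, "b": version}
        self.running = "a"      # partition we booted from (read-only root)
        self.pending = None     # partition holding a staged update, if any

    @property
    def other(self):
        """The partition we are NOT currently running from."""
        return "b" if self.running == "a" else "a"

    def stage_update(self, new_version):
        # Updates are written in the background to the inactive
        # partition; the running root is never touched.
        self.partitions[self.other] = new_version
        self.pending = self.other

    def reboot(self, boot_ok=True):
        """Return the OS version actually running after the reboot."""
        if self.pending is not None:
            if boot_ok:
                # Transaction commits: next boot uses the updated partition.
                self.running = self.pending
            else:
                # Boot failed: discard the staged copy and keep booting
                # the previous known-good partition.
                self.partitions[self.pending] = self.partitions[self.running]
            self.pending = None
        return self.partitions[self.running]


system = ABSystem("15.04.1")
system.stage_update("15.04.2")
print(system.reboot(boot_ok=True))    # prints 15.04.2, update applied

system.stage_update("15.04.3-broken")
print(system.reboot(boot_ok=False))   # prints 15.04.2, rolled back
```

The point of the sketch is that the running root is read-only and the staged copy only becomes "the system" at the moment of a successful reboot, which is what makes the update atomic.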
You see this, I'm gonna go back to my screen, and actually you can't see my launcher. My launcher on my desktop right now is screaming at me that I have unpatched updates that need to be applied. Give me one second and we'll see that. Mirror displays, that's fine for what we're doing here. Keep this configuration. You see this right here? No, that's language support I need to install, but it was yelling at me about updates just a minute ago. I haven't had a problem. I always want my updates. And I can automate that, I can turn on auto-update. What we're doing with Snappy is we're making that so safe. I don't auto-update my laptop, because I'm about to give a presentation to you guys here and I don't want something to go wrong. But in the Snappy world, it just works. Updates just work, and so I want them all the time. Thanks. It's well under 100 megabytes for the root file system, compressed. We can even go a little smaller, but we're talking two digits of megabytes. 44, I think; 44 megabytes is the number that sticks in my mind. Yeah, so it's small, it's minimal, but it's not MIPS. We're not looking at a tiny OS for a MIPS-based system, but it runs great on Intel, small Intel, big Intel, and runs great on ARM, 32-bit, 64-bit. Good question, thank you. Yeah, absolutely. So snappy info will tell you; the snappy command is the replacement for apt-get. And you can say snappy revert, and boom, you go back to the other one. You can revert the entire OS and go back to your previous one. You can snappy revert an app and take an app from version 1.5 to 1.4 to 1.3, whatever you need. snappy is your friend; man snappy, and that's your equivalent of apt-get, and it handles all of that very elegantly. Okay? Yeah, so, sort of. The update itself, the download, happens in the background, assuming you have internet connectivity. Part of IoT, the I in IoT, is internet connectivity, right?
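That per-app version stack that snappy revert walks back through can be modeled roughly like this. Again, every name here is invented for illustration; it is not Snappy's real on-disk layout or API, just the idea of a directory of versioned installs with a "current" pointer, like a symlink.

```python
# Toy model of per-app version stacks with a "current" pointer,
# loosely like versioned install directories plus a symlink.
# Invented names, for illustration only.

class AppStore:
    def __init__(self, keep=3):
        self.keep = keep        # how many installed versions to retain
        self.versions = []      # oldest .. newest
        self.current = None     # index of the active version

    def install(self, version):
        self.versions.append(version)
        # Purge the oldest installs beyond the retention limit.
        self.versions = self.versions[-self.keep:]
        # Point "current" at the newest version, like a symlink flip.
        self.current = len(self.versions) - 1

    def revert(self):
        # Step the pointer back one version, like repointing the
        # symlink at the previous install; nothing is re-downloaded.
        if self.current > 0:
            self.current -= 1
        return self.versions[self.current]


app = AppStore(keep=3)
for v in ["1.3", "1.4", "1.5"]:
    app.install(v)
print(app.revert())   # prints 1.4
print(app.revert())   # prints 1.3
```

Because the older versions are still on disk, a revert is just a pointer move, which is why it can be instant and safe.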
So we're talking about devices, and in many clouds, public clouds at least, going out to the internet is already happening; in the Amazon world certainly, maybe or maybe not in an OpenStack behind-the-firewall world. But the download happens automatically. The installation of that to the other partition also just happens in the background. When to trigger the update? So what we do in Snappy is we expose, basically, a flag; sorry, we expose the opportunity for apps, critical apps, to raise a flag that says: I'm in the middle of a critical operation, don't reboot, don't upgrade me right now, okay? The classic example of that, one we toss around anecdotally, but only sort of anecdotally because it really would piss me off: a device, a smart TV, running Snappy with the Netflix app, and I'm right at the end of the last episode of Game of Thrones, and I don't want my update happening right now, right? So in that case, the Netflix app would raise the flag that says I'm in the middle of a critical operation. There are dragons, there's some dragon stuff going down right now. Don't reboot me. And so it can raise that flag and hold that semaphore, basically, for a set period of time, a configurable period of time, at which point Snappy, the OS itself, will go and check; it'll scan and see if any of those semaphores are held. It'll expire them after some reasonable period of time. And as long as no one's holding the don't-update-me-right-now semaphore, the update happens. Snappy right now, 15.04 Snappy, is systemd-based; it boots in a second, a second and a half. It's extremely fast. The idea is that there might be a blip in downtime, but it comes right back up. All the services that were running before, all the data that was there before, is there. And if it needs to roll back, it can do that, and that might cost you two times a second and a half, right? Yeah. [Inaudible audience question about what raises that flag.]
No, so what I would say is that in that case, it would be the display manager. So maybe not the Netflix app, but the display manager itself says: sorry, the user's in the middle of something. Even though nothing's going on, it's just the guide, it's paused, nothing's happening, but the display manager's like: no, there's user interaction here. I'm raising the semaphore. I'm raising the flag. At which point, maybe you turn the TV off. There's no activity for 30, 40 minutes. The display manager notices the display is off, at which point it releases. It's a very valid question, and thank you, I'll update my anecdotal example. But yeah, maybe it's not the app itself, maybe it's the framework, the display manager framework, that sees that and holds it for a longer period of time. Certainly the goal is we don't wanna reboot someone's TV while they're right in the middle of watching it. Yep, yep. No, you do not have to go through B. Absolutely not. So if you haven't gotten around to your update, but another update comes, it gets splattered on top of that B. Obviously we're not gonna write it onto A, because you're running on A. We're not gonna force you to go through B to get to B-prime, as you called it. You go straight to that next update. And again, at 44 megs with a decent internet connection, it's pretty quick. dd'ing that to disk is pretty quick. We can do that atomically, and that's the atomic part: if we're in the middle of something, we'll wait for that something to cease, but we can download and write that very safely. Yeah, so delta-based, that's a great question. We've been doing a lot of work and research around delta-based updates for debs, and that's very interesting for us here for the same reasons. It's a bit of a science project, but there's active research and development work right now at Canonical around that. At the moment, it's the whole tarball, it's the whole image.
But yeah, I mean, if there's 50% of it that didn't change from one update to the next, let's grab only the 50% that did, right? Yeah. Oh yeah, for sure. Content distribution, yes, needs to be done as efficiently as possible. Good questions, thank you very much. Anything else? So those are managed by Snappy. So each app snap is a given version. Presumably you, the app developer, bump your version over time, and that sort of stack of apps there is a directory of a bunch of versioned apps with a symlink that's pointing to the current one. Yeah. If you're familiar with Homebrew on OS X, it's sort of similar in concept to that. I'd say we looked at that amongst a number of different ways of managing multiple versions of apps, and that one, from my perspective, seemed pretty elegant. Thank you. All right, I think we're done. Please enjoy the conference. Thank you very much for coming. Thank you.