Hi. I didn't bring sunblock today. So thanks very much for coming up. I'd love to talk to you today about one of the core technologies that we are building at Ubuntu to make deployment of large-scale data centers easy. That technology is called MAAS, and it's basically a bare-metal provisioning tool. My name is Christian Reis. I'm VP for Hyperscale and Storage at Canonical. I oversee basically all the next-generation server work and our storage portfolio. I'll be talking this afternoon at 3:30 about the storage announcement that we made this week. Very exciting, including multiple technologies from multiple vendors, all backed by our support services. But at this moment here, I'm going to talk to you about our bare-metal solution. And I'm going to start actually far away from bare metal, just talking at a high level about why cloud succeeded. I think talking about cloud succeeding gives you framing about the potential that you have in your bare-metal kit and how you can use it. So looking at cloud computing, there are two core drivers that I see as the reason why cloud was a successful technology transition. First, it reintroduced this idea that you could do pay-as-you-go computing infrastructure like you did in the 60s and 70s. When I went to university for the first time in the 90s, we were starting to phase out this idea of being able to pay as you go using terminals and batch computing. Cloud computing was interesting because it reintroduced this idea that you could start paying zero and then grow as you needed more infrastructure. But the second thing, I think, is more interesting: it solved a very common use case in a spectacular way, and addressing that use case is what led the cloud to actually grow. This use case is actually quite simple: give me a new server to run this thing that I'm working on.
So this is the typical dev-and-test use case, and it's primarily the reason why most of the OpenStack clouds deployed today are primarily for dev and test. Because this use case here is of fundamental importance. Let's look at Bob's use case in a little bit more detail. So he's asking, give me a new server machine, and if he's lucky, he gets to specify to his IT that he wants this much CPU, this much RAM, this much disk. He'll have a chosen operating system. Maybe it's what the enterprise determines that he's using, or if it's a bit more like the Wild West, he gets to choose anything he'd like to use, maybe even Ubuntu. He'll specify how he wants the drives configured, if there's caching, if he needs LVM or RAID, because that's important for durability reasons, or not. And on a specific network, because he wants to be on the DMZ or inside the QA lab. So he needs to specify where it is, with his credentials on it. So basically something which, once it's provisioned, the user can come and access and start using. So Bob's use case is fairly straightforward. He goes to IT and he says, okay, can I have this this week? Ah, okay, what about this month? What about in three months? That was the typical IT experience anywhere with more than 20 employees. You have to actually find someone in IT that wants to do the job for you in a hurry, if you needed it done in a hurry. So the cloud, and Amazon as the best example of that, completely nailed that use case. And I would also say that Ubuntu is part of that, by ensuring that you are able to deploy at zero or the lowest cost possible with the best experience possible. The work that we did in cloud-init early on made the Amazon experience for the developer, for this person in a test-and-dev capacity, such a great experience. So what are the other things that the cloud introduced? This idea of pay as you go, starting at zero, and that you had a new machine in minutes.
So you basically said, push a button, and in a few minutes you've got a full machine that you can deploy your software on and test it. And if you're running live, like Dropbox would do, you can have a new server up, servicing user requests, in minutes, which you would never be able to do with traditional IT. Now, the way this was done in infrastructure-as-a-service cloud was through virtualization. And the thing to ask yourself today is, what does that use case actually have to do with virtualization? Virtualization enables it because it makes setting up that server a trivial operation. You don't have to actually mess with any hardware. But also, because you're using Linux or a hypervisor to drive the cloud, it lets you, just through the command line or through standard Linux utilities, fire up a whole new machine that the user can use. Wouldn't it be great if you could treat bare metal just like the cloud? Because bare metal is trapped in the 1980s, right? It's IPMI, PXE, DHCP, DNS; that's where bare metal is today. But it would be fantastic if you could manage your bare metal just like in the cloud. This is the objective of MAAS, the technology that we deliver, open source, to users free of charge. MAAS stands for Metal as a Service, but today we just call it MAAS because that's actually what the brand name is. It provides you with a cloud experience on bare metal. In other words, you can treat your bare metal as if it were a cloud. Instead of saying, I want that specific server, well, you can say that, but you can also say, give me a server which has this much CPU or this much RAM or this much disk space. It offers you a RESTful API, a web UI, and a set of embeddable libraries that you can use in your own application. If you have a management system, or if you're offering a public service that wants to drive bare metal, you can embed MAAS in it and drive bare metal directly.
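To make the "give me a server with these characteristics" idea concrete, here is a toy Python sketch of constraint-based allocation. All names here (the `Node` class, the `acquire` function, the inventory) are invented for illustration; this is not the actual MAAS API, just the principle it exposes.

```python
# Cloud-style allocation: the caller states constraints, not a machine name.
# A provisioner keeps an inventory and picks any free node that satisfies them.

from dataclasses import dataclass

@dataclass
class Node:
    hostname: str
    cores: int
    ram_gb: int
    disk_gb: int
    allocated: bool = False

def acquire(inventory, min_cores=0, min_ram_gb=0, min_disk_gb=0):
    """Return the first free node meeting every constraint, marking it allocated."""
    for node in inventory:
        if (not node.allocated and node.cores >= min_cores
                and node.ram_gb >= min_ram_gb and node.disk_gb >= min_disk_gb):
            node.allocated = True
            return node
    return None  # nothing in the rack satisfies the request

inventory = [
    Node("rack1-n01", cores=4,  ram_gb=16,  disk_gb=500),
    Node("rack1-n02", cores=16, ram_gb=128, disk_gb=2000),
]

picked = acquire(inventory, min_cores=8, min_ram_gb=64)
print(picked.hostname)  # rack1-n02
```

The point is that the caller never names a machine; any node satisfying the constraints is acceptable, which is exactly what makes bare metal feel like a cloud.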
People like SoftLayer, OVH, and other bare-metal clouds have technology similar to MAAS. Ubuntu is now making that technology accessible to everyone through MAAS. We have an incredible provisioner, the work of our amazing server team, which does image-based installs, which means that we install the operating system much faster than the native operating system installers do it. With SSDs and Ubuntu, you can get two-minute provisioning from scratch, which is pretty impressive, I think. With 1.7, we've now announced the inclusion of multiple operating systems. We built this initially for Ubuntu because we had very important internal use-case drivers. OIL, which we talked about earlier today, is completely based on MAAS, with hundreds of nodes tracked in there. But we understand that, of course, people have lots of different operating systems deployed in their data centers. They have VMware, they have Windows, they have Red Hat, and we wanna make sure that MAAS as a provisioner is useful for all those use cases. And so we have added in 1.7 the capability to deploy any operating system on a MAAS cluster. We're using the same technology, basically writing out the image and doing configuration of it. Now's the time for me to check with my solution architect on how much trouble I'm in. Do you wanna just do a walkthrough of MAAS? You don't want to, do you? I'll give you a few more minutes, okay, all right. This is a node listing from MAAS, and this one is from OIL. So this is a node listing in MAAS of the kit that is currently registered in OIL that's running there. You can see in the listing that we've given you easy access to the number of cores, the amount of RAM, the amount of disk, and the MAC addresses. And it's also cut up into different zones. So in a way, the zoning, and the fact that you have all this metadata available there, the number of cores, the amount of RAM and disk, means you can treat this as a cloud.
And I'll talk more about why the API makes that even easier. And if we manage to get the setup here working, we'll actually walk you through and show how the provisioning works and the details there, yeah, sure. What is OIL? OIL is the OpenStack Interoperability Lab. It's one of the projects that Canonical runs internally that does continuous validation of OpenStack in multiple configurations. So we work with a variety of hardware vendors, and we work with a variety of software partners that are providing extensions to OpenStack. OIL is what lets us do the mixing and matching, the variation of those permutations, and testing on top of it. It's what lets us go back to vendors and guarantee that your kit works in OpenStack as deployed by Canonical, in combination with all these other technologies. So if you are a SAN vendor and your customer has an SDN installed, do you know if those are gonna work together? Well, OIL is a guarantee that Canonical provides that they will work together. And for any partner signed with Canonical, there's an OIL add-on to that engagement that they can get into, and that's really part of the offering that we provide. So what can MAAS do? What are the key functions? Well, first, it can discover new hardware. One of the things which is very annoying about racking kit is that you rack it up, and you're racking 10 servers at a time, and then you go back and you say, which one is which? Where's the MAC address for that? Sure, some people will receive nice CSV files with all the MAC addresses and everything, but the reality is that the data center is a dynamic place. You're racking your kit, you're moving it around. What MAAS can do for you is discover new hardware as it's plugged in.
So the first thing that MAAS does is boot up that server and run an ephemeral environment on it, which will detect all the hardware and set up IPMI credentials, basically management credentials that allow you to remotely control that server. So at the end of that first process, we know what the machine is and a little bit about its hardware, and we know we can now remotely power-control it, without having to use default IPMI passwords and other abominations that are standard in some data centers. MAAS can also configure the disks for you, so basically you can say how you want the disk layout, whether you want bcache on them, whether you want LVM set up. It will configure the networking for you, so if you want bonds, if you want jumbo frames, if you want specific VLAN IDs on the outgoing ports, you can do that as well. It will install an operating system; pretty much our objective is to install any relevant server operating system. It will set up admin credentials for you, so essentially it will let you, at the end of that provisioning cycle, SSH into the machine, or it will hand off to DevOps automation. You can then, from that point onwards, control that bare metal as part of your DevOps infrastructure. We work together with Chef on first-class support for MAAS and Chef together. This means that if you're a Chef user and you're using MAAS, you can drive bare metal using knife or using Chef Provisioning like you would any other cloud substrate. We have the Chef guys here, and I wanna call them up, but I need to check that I'm allowed to; if not, well, at the end they can talk more about what the work is and what they're seeing in terms of demand. How does MAAS actually do it?
Well, first of all, MAAS packs in quite a lot of functionality. If you are used to doing provisioning manually, in other words, setting up all the services yourself, well, you have to set up a DHCP server, a DNS server, offer up TFTP, offer iSCSI for the nodes that are mounting a block device, HTTP if you're offering an interface for people to be able to drive that infrastructure. You then have to figure out, how am I gonna manage these machines? If you've got a variety of server kit like we do at Canonical, you need to support IPMI, but possibly also iLO extensions, DRAC, Cisco UCS, things like Moonshot, even Intel AMT and ME. What MAAS does is probe and configure the BMCs and the PDUs, and it then remotely controls the machine power state, which is essential for you to be able to actually do the provisioning and then the deployment as well. We've got this image-based installer that I talked about, which is pretty unique, which writes out a full image to the disk and then does a resize at the end. And MAAS does one additional piece which makes it ultra valuable: it will detect hardware component details and will auto-tag machines based on capabilities. So if you've got a machine which has a GPU, if you've got a machine which has a different architecture, MAAS will know about that. It will tell you firmware revisions, it will tell you how many drives there are, whether they're attached to a hardware RAID controller. Fundamentally, MAAS will tell you everything about that machine and will keep that for you in a database. You can see, okay, this machine had this configuration today, and it evolved over time.
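The auto-tagging idea can be sketched roughly like this. The hardware-dictionary keys and tag names below are made up for illustration; MAAS actually derives tags from stored lshw output, but the principle, mapping probed hardware details to searchable capability tags, is the same.

```python
# Toy capability tagger: turn a probed-hardware description into tags
# that allocation requests can later match against (keys are invented).

def derive_tags(hw):
    """Map a probed-hardware dict to a set of capability tags."""
    tags = set()
    if hw.get("gpus"):                                  # any GPU present
        tags.add("gpu")
    if hw.get("arch") == "arm64":                       # alternative architecture
        tags.add("arm64")
    if any(d.get("ssd") for d in hw.get("disks", [])):  # at least one SSD
        tags.add("ssd")
    if hw.get("raid_controller"):                       # hardware RAID attached
        tags.add("hw-raid")
    return tags

probed = {"arch": "arm64", "gpus": [],
          "disks": [{"ssd": True, "size_gb": 480}], "raid_controller": None}
print(sorted(derive_tags(probed)))  # ['arm64', 'ssd']
```

Once every commissioned machine carries tags like these, "give me an ARM machine with an SSD" becomes a simple set-membership query instead of a manual inventory hunt.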
It models layer-2 and layer-3 networks and interfaces, so you can say, this machine has these interfaces, I want them on these networks over here, I want them configured as a bond with this VLAN ID on this interface, and so it's on this network. That allows you, through the MAAS UI, to do all the stuff which today you have to do after you've provisioned: logging into the machine, tying things together, and hoping it works. And the last piece that MAAS does, which is a great piece of work that we've done for one of our customers that is using MAAS in a multi-tenancy setting, is security commissioning. So when you decommission a machine, MAAS will also wipe the drives for you, ensuring that the next user that comes along will not have an issue, will not have access to existing data on the drives. We offer, as I said, a cloud-style RESTful API, and this is the thing, I think, which distinguishes MAAS from other provisioning systems at its core. The API is designed to operate like a cloud, so when you're talking to MAAS, you don't say, install this OS on that machine; you can say, find me a node with these characteristics. You can provide CPU count, memory, and architecture, but you can also provide arbitrary tags that we set up for you. So you can say, give me a machine with a GPU, or give me a machine with an FPGA, or give me an ARM machine with two cores that has an SSD attached to it. So it gives you the ability to find in the data center the machines you actually want. It can, of course, install the operating system on boot. You provide to the start command just a parameter which says what operating system version you want, and if it's one of the supported versions that we have, and the image is loaded inside MAAS, it will boot it up. It can get node hardware and LLDP detail, so this is how you expose the information that we detected during the commissioning step where we're doing hardware detection. And finally, it can do discovery of servers in a chassis.
So we have support for new hyper-converged or ultra-dense chassis like Moonshot, like UCS. Those machines normally have an interface which lets you query, what actual blades or nodes do I have installed inside this chassis? So with MAAS, you don't need to go and individually configure all of those. MAAS will probe the chassis controller, and it will tell you, these are all the machines that you have in that chassis. We do the same for Microsoft and their OCP kit, and so we can also query the Microsoft OCP chassis and say, give me back what machines we have, give me back what MAC addresses are there. From that point onwards, you can treat that as basically accessible infrastructure that you have. We did our first release in 2012. The first generally available release was 1.2. 1.7, which was released at the end of last year, is what brought in Windows, CentOS, and RHEL installation. 1.7 also brought full machine status tracking, and maybe I'll open a parenthesis here just to talk about why that's so important in a bare-metal provisioner. The reason why the cloud was successful was also because hypervisors are a reliable way of controlling VMs. You can ask a hypervisor to start a VM up, and it will start that VM up, and you can ask a hypervisor, is that VM running, and it will give you an answer. Anybody who has direct experience with a variety of BMCs in this room here? Okay, so you people here who are, like me, living in the 1980s understand exactly. BMCs can be single-threaded: you do a request, and, oh wait, I'm servicing somebody else's request, or my monitoring system has just gone through, and I can't power the machine because it's servicing that request over there. So BMCs are notoriously unreliable, and in the end, your provisioning tool is the gutter for all your problems in your data center.
If the BMC doesn't work, or if you have an issue with the installation, or the drives are dead, basically your provisioning tool has to tell you about it, because otherwise you have no idea what went wrong. I pressed this button and I have no machine back. So the work that we did in 1.7 was instrumental towards actually tracking what is happening with the machine. Did we manage to boot the first time? Did we manage to do hardware detection? Did it hang during an OS install? Did the memtest succeed? Did we manage the firmware upgrade successfully? So MAAS does not fire and forget. We run through all the machine states and we track what's actually happening. We give you basically a full dashboard which says, this is what the node state is, this is how far it's gone, it's hung here at this point, and we'll give you an alert if it takes too long. 1.7 was also the version that brought Chef integration as a fully supported system. So basically, if you're using Chef Provisioning or knife or Chef server, MAAS will integrate seamlessly into an environment where you can deploy any bare-metal machine and control it with Chef. And 1.9, which is planned for the second half of this year, will bring advanced networking and storage features and our redesigned, fully dynamic web UI. So 1.7 has one limitation, which is that the web UI in certain circumstances won't refresh to give you back live information. In some places you still have to reload. The main node listing and the node page do reload themselves, and so the individual elements will tell you, this machine is now booted, it's running; but with 1.9, everything is dynamic. So if you're looking at that network view, you'll see dynamically the nodes being added onto that network, and you'll see the network move or change as it goes. MAAS pricing: I want to have a pricing slide here because I'm a product manager. It is open source and free to use, and that includes the Chef and Juju integration.
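The "no fire and forget" tracking described above can be sketched as a small state tracker with per-state deadlines. The state names and timeout values here are illustrative, not MAAS's actual ones; the point is that each machine carries its current state plus the time it entered it, so a hung step can be flagged.

```python
# Toy provisioning-state tracker: record which step each machine is in
# and raise a flag when a step overruns its deadline (values invented).

import time

STATE_TIMEOUTS = {          # seconds each step may take before we alert
    "commissioning": 600,
    "installing": 900,
    "booting": 300,
}

class TrackedMachine:
    def __init__(self, hostname):
        self.hostname = hostname
        self.state = None
        self.entered_at = None

    def enter(self, state, now=None):
        """Record a state transition and when it happened."""
        self.state = state
        self.entered_at = now if now is not None else time.time()

    def overdue(self, now=None):
        """True if the machine has sat in its current state past the deadline."""
        now = now if now is not None else time.time()
        limit = STATE_TIMEOUTS.get(self.state)
        return limit is not None and (now - self.entered_at) > limit

m = TrackedMachine("rack2-n07")
m.enter("installing", now=0)
print(m.overdue(now=500))   # False: still within the 900 s install window
print(m.overdue(now=1000))  # True: hung install, raise an alert
```

A dashboard like the one described in the talk is then just a listing of each machine's current state plus any `overdue` flags.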
So if you're using Juju or Chef, MAAS is perfectly usable and fully supported by Canonical. It runs on all Ubuntu-certified hardware. We made an explicit decision inside Canonical to add MAAS certification as a requirement for certification testing. So if you're a hardware vendor that's working with Canonical, you'll have noticed that since 14.04, our requirements have changed. We now require your BMC to actually work. We now require your entire kit to be deployable with MAAS, which I think is the bare minimum in a cloudy era where everybody's racking up tons of kit, but that is now part of our certification requirements, and so we are fully assuring that you will be able to deploy basically any Ubuntu-certified machine with MAAS itself. What's available commercially? Well, you can get 24-by-7 support through our Ubuntu Advantage product, which covers basically the whole product portfolio. You can deploy RHEL, Windows, SLES, and ESXi as an Ubuntu Advantage subscriber. If you're interested in using MAAS inside your own solution that you are going to ship, talk to us about an embedding license, because MAAS's license requires you to have an agreement with us in order to provide a public service. And for any custom development: if you have a new piece of kit from a vendor we haven't worked with before, we can go out and talk to that vendor, and if you've got interesting use cases in networking and storage that need to happen, then we're also happy to drive that. What should you use MAAS for? So this is the tail end of the slide, but I actually want to just go back and think about what the actual primary use cases are. First, managing bare metal at scale. If you're here at the OpenStack Summit, it's because you care about bare metal at scale, and you may be hiding it behind OpenStack and using it through Horizon, but you still have to have bare metal deployed, and MAAS solves that problem perfectly.
If you're doing web, storage, and Hadoop and other performance-critical workloads which require bare-metal access, you don't want to run Hadoop in a VM and suffer the virtualization overhead. Well, MAAS is perfect for managing bare metal at scale, and it will provide you with a very easy path to running such systems. It will allow you to replace homegrown provisioners. So if you yourself have a bolted-together system like we had at Canonical, where you had PXE, TFTP, and iSCSI daemons running together, with no high availability, MAAS is a great solution to give you a robust provisioner in that data center. This is a way for you to get a fully supported PXE and operating-system install environment. If you're building your own bare-metal cloud, MAAS was built to be embedded. The API is, exactly for that reason, very simple. It handles exactly the embedding use cases, which are: give me a machine, let me control access to it, give me feedback on what the status transitions are. So if you've got a GUI, for instance, you can live-update how that commissioning or installation process is going. And at the end, if you want to deprovision, again, it will do secure wipes. So you'll have the guarantee that your disk is clean before you hand it to your next customer. And at a high level, keeping your operations team happy. Abstracting bare-metal complexity, as I said, is super valuable to people that have been working with unreliable hardware and unreliable protocols, as anyone who has used PXE, which is a combination, I guess, of DHCP and TFTP, has found. MAAS is a game changer in that respect, because we're providing a freely available, fully supported solution that owns and is responsible for hiding all that complexity. With that, I'm gonna pause again to ask our spectacular solution architect: can we go? Okay, all right.
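The decommission-time secure wipe mentioned above boils down to overwriting the device end to end before the next tenant gets it. This toy sketch uses an in-memory buffer as a stand-in for a block device; a real wipe works on the raw device (e.g. /dev/sdb) and may do multiple passes or an ATA secure erase, none of which is shown here.

```python
# Toy illustration of decommission-time disk wiping: overwrite a
# seekable "device" with zeros in fixed-size chunks, start to end.

import io

def wipe(device, chunk_size=4096):
    """Zero out a seekable file-like 'device' from start to end."""
    device.seek(0, io.SEEK_END)
    size = device.tell()          # find the device size
    device.seek(0)
    remaining = size
    while remaining > 0:
        n = min(chunk_size, remaining)
        device.write(b"\x00" * n)
        remaining -= n

disk = io.BytesIO(b"customer-secrets" * 1024)   # stand-in for a block device
wipe(disk)
disk.seek(0)
print(disk.read() == b"\x00" * (16 * 1024))  # True: no tenant data survives
```

Doing this automatically on release, rather than trusting an operator to remember, is what makes the multi-tenancy guarantee credible.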
So I can take some questions while we're there, and then we can have a look at MAAS together. Sure, go ahead. Okay, correct. Is that to what? Cipher Zero. I'm not sure what Cipher Zero is, but I can tell you how it's done. So when we boot up the machine for the first time, we are not doing power control. The first time the machine boots, for us to detect it, it has to be manually booted. It will do a PXE request, and we're not gonna install the operating system; we'll send an ephemeral image over. That ephemeral image boots on the machine, and through in-band IPMI, it will set up the user credentials. Does that make sense? Okay, so basically, at the end of that, we'll have added an additional user there. That's a very good question: do we support UEFI Secure Boot and Trusted Boot? We support UEFI out of the box, which is more of a task than that sentence would lead you to believe. Trusted Boot and Secure Boot, I think there are caveats, and we don't fully support them on all the hardware, and I'm not sure if anybody else actually does, because, at least for Secure Boot, there is a bit of a chicken-and-egg problem, and I don't actually know about hardware compatibility. The one thing I can tell you, though, is that we have not had any customer demand for it, because if we had, it would have been a first-class feature inside MAAS. UEFI we definitely do, because there's new server kit coming out that is UEFI by default. Even the stuff we have in the data center today is only transitioning slowly, but we see that UEFI is definitely a trend in the data center, and that's supported. Thanks for the questions. Sure, go ahead. Yeah? CentOS is freely available. So if you're using CentOS and MAAS, there's no commercial engagement required; you can use MAAS perfectly well. And we understand that people that are deploying CentOS are doing it because they don't want per-node friction, so we understand that perfectly, but it's fully included. Yeah? Yes.
That's a great question. The question is, does MAAS configure all my switches? No, it doesn't configure all your switches yet, and that is an exercise today left to the reader. So basically, you have to configure the switches yourself. Part of the reason is that you have to operate with a bunch of different switch technologies; there's no single API that you can use to talk to all switches, as we're all aware. Part of the reason is also that we don't want MAAS to become this thing which completely takes over your network the moment you put it in. We want it to be something which you can gradually put into the data center and then grow as you feel confident with it. Changing your provisioner is a big deal. So sure, go ahead. I have two questions. Yeah, okay. When you commission a machine, it tells you how much CPU, RAM, and disk it has, right? Yeah. But for some reason it only shows you one disk, the first disk. Yes. If your server has 10, it only shows you one. Yes. Is that for some specific reason? Well, part of it is just an information-management issue. We can't give you too much information back when you're looking at a glance. But we know all the disks. We run lshw and LLDP checks and other probes when we boot it up. So we have the full lshw output stored in the database and tracked. We do know everything which is on the machine; we just don't expose it to you in the listing and in the node view. We do a simplified version, first because in the node listing you can't give back too much information, it's overwhelming. But we do know, and if you request it when you're doing the provisioning, when you ask to acquire a machine and you specify, I want a machine with five disks, it will go out and find that machine for you. Okay. Second question is, MAAS has its own DNS and DHCP, right? Correct. And it works perfectly with the commissioned machines. Yes.
I use it, and I deploy OpenStack with the Autopilot from Ubuntu, and it works very well, but OpenStack doesn't have a functioning DNS. Is there any intention in the roadmap to integrate the MAAS DNS into OpenStack or something like that? Yes; however, I'll qualify that yes. Our intention is to interoperate with third-party DNS in general. Our philosophy with MAAS was functionality first and integration second. So we want to make sure that you are able to deploy at all, and integration adds a lot of complexity to it. Being able to interoperate with standalone or remote DHCP and DNS is definitely part of our requirements. But the OpenStack case is interesting: if you're using MAAS to deploy OpenStack, you have the same chicken-and-egg issue that you have with other bare-metal provisioners that are very tied to OpenStack, which is, is DNS running by the time you have the provisioner up? And so, even in the OpenStack case, we would still have to have a standalone DNS server and then switch over to using OpenStack once the cloud is provisioned. Oh, can MAAS offer DNS services into OpenStack? It doesn't today, but it could, and if people are interested in it, then I'd be happy to discuss it. Sorry, I had two questions there beforehand, so I just want to get to those. Yeah, sure. So the question is, is there an integration plan with OpenStack's bare metal as a service? There definitely is. We haven't done it yet because our immediate demand has been to lay down OpenStack clouds. I think people using OpenStack to drive bare metal is a use case which people are only considering now, so we haven't had a lot of demand for it. But if you are interested as a customer, come and talk to me; I want to make sure that we have that captured in the roadmap. So it is on the roadmap, we will do this work, but it's not there today. There was another question in the back.
Yeah, so that's a longer question to answer because, well, I have to ask you back: do you mean when first racked, or do you mean after the machine has been acknowledged as something that you want to manage? Because we treat those separately. Right, so let me explain how we treat hardware, and then I think that will answer the question. The first time you rack the machine, you have two options. You can boot the machine once, at which point we will know about it because it's just tried to boot from us, or you can create the machine manually. You can basically import a CSV file, and we'll create all the machines for you; no boot required if you're importing a CSV. We will boot once to find out everything about the hardware, and we'll store that in the database. We've got IPMI credentials set up at that point. We will boot once to install, and then we'll finally boot once into a live instance. So it is at most four boots, typically three boots. If the machine has hardware information inside MAAS already, in other words, if you're recommissioning, it's one boot to install and one boot to run, and that's it. Thanks. Sure, go ahead. This is a good question. We do support cloud-init, and you can provide cloud-init metadata that MAAS will run at the end. cloud-init is great in a way because it allows you the flexibility to do whatever you want, and the same team that did, for instance, the provisioner and the basic disk installer did cloud-init as well, so I'm sure it's well integrated. Yeah, go ahead. Do you need a different app? That's a great question. We'll call yours the last question. That's a great question. The OS stream is done either through TFTP or HTTP when you're doing the OS install. The ephemeral images are provided over iSCSI, so there are multiple protocols there.
You've called out the architecture, which I didn't highlight here because I wanna talk about high-level functionality, but MAAS is built with the idea that there is a highly available region that you set up, and that region contains all your server kit in that data center, basically. Then the separate concept is the cluster controller. The cluster controller is what actually provisions the machines themselves. The cluster controller is supposed to own a layer-2 segment, because it has to handle broadcasts to be able to do PXE boots. It will grow to as big as your layer-2 network can handle PXE broadcasts. Normally, I would say 50 machines, maybe 100 machines, in a layer-2 network; over that, it starts to become impractical to provision on it. So you would normally segment at that point. People have done architectures with one MAAS per rack; we have one MAAS per group of three racks in our data center. And when I say one MAAS, I'm sorry, I mean one MAAS cluster controller. Talk to me afterwards and I'll tell you more about the architecture; it's a bit more involved. I'll talk to you about how availability is done and how it's put together. Yeah, sure. You can actually provision VMs instead of the metal. So one thing that we offer you is virsh integration. There's a virsh power type, and if you give that virsh power type the configuration information to access a specific machine, it will launch VMs through virsh. In fact, that is a configuration that we use when we're doing demonstrations on the Orange Box, like Massimo is rushing here to let us see. Now, we didn't build MAAS to be an OpenStack replacement, and I really believe in being fanatical about scope, because you really wanna do one task really well. There are obvious use cases where you want to be able to deploy an additional VM or manage VMs within the same pane of glass. This is not an OpenStack replacement. It's not designed to give you all those services.
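The sizing rule above, roughly 50 to 100 machines per layer-2 segment per cluster controller, gives a quick back-of-envelope for how many controllers a deployment needs. The numbers are only the speaker's rule of thumb, and the function here is purely illustrative arithmetic.

```python
# Back-of-envelope: one cluster controller per layer-2 segment, each
# segment holding at most `per_segment` machines (PXE broadcast limit).

import math

def controllers_needed(total_machines, per_segment=100):
    """Minimum cluster controllers for a fleet, given machines per segment."""
    return math.ceil(total_machines / per_segment)

print(controllers_needed(450))       # 5 controllers at 100 machines each
print(controllers_needed(450, 50))   # 9 with the conservative 50-machine rule
```

So a 450-node data center would sit somewhere between 5 and 9 cluster controllers under this rule of thumb, which lines up with the one-controller-per-rack or per-few-racks layouts mentioned in the talk.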
So it allows you to do that out of the box. Yes, exactly, yeah. And you mean software configuration, right? Yes. Well, hardware configuration: I come from a hyperscale background, which means you can stitch together servers as you wish. We don't do that today; we don't basically compose hardware on the fly, but that's definitely something we have on the short-term roadmap, working with the people providing these disaggregated or hyper-converged infrastructures, where it's sort of one layer on top of the other and you can define that. Having said that, you can use the API to drive whatever you want on the machine, and it was designed that way, so you can build your own internal automation. OIL is basically built on this: today I'm deploying OpenStack 100 times, and every time I'm configuring it, it's a different number of machines; different machines participate, and so on. That's exactly what it was designed for. Yeah? You can deploy over TFTP as well, which is a UDP-based protocol that runs on the layer-2 network. Is Jason here in the room? Anyone from the OIL team from our side? Do you want to talk about that, or can you talk about that, Dan? The question is, how many concurrent installs can you do when you're driving MAAS? On OIL, how many do we do concurrently when we're doing installs there? Do we stagger, or do we do all the machines at once? We don't stagger? Let me connect you with the OIL team afterwards; they have the most knowledge of anybody I know who is continuously redeploying. One thing which is interesting: most customers are not redeploying continuously. Most customers are growing; once they have that first setup done, they grow organically. OIL is special, and it's basically the test and dev use case, where we're always re-imaging all the machines. So let's talk to that team afterwards and they'll be able to answer that question.
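The OIL-style "redeploy everything, every run" workflow drives MAAS entirely through its REST API. Here is a hedged sketch of the requests such automation would build; the endpoint paths follow the MAAS 1.0 API shape (`/MAAS/api/1.0/...`), but treat them as illustrative, check your MAAS version's API reference, and note that real calls need OAuth credentials:

```python
# A hedged sketch of the acquire/deploy/release cycle an OIL-style
# redeploy loop performs against the MAAS REST API. Endpoint paths are
# modelled on the MAAS 1.0 API; verify against your version's docs.

def acquire_request(maas_url: str) -> tuple:
    """Ask the region controller to allocate any ready machine to us."""
    return (maas_url + "/api/1.0/nodes/?op=acquire", {})

def deploy_request(maas_url: str, system_id: str, series: str = "trusty") -> tuple:
    """Power on an acquired machine and install the given Ubuntu series."""
    return (maas_url + "/api/1.0/nodes/" + system_id + "/?op=start",
            {"distro_series": series})

def release_request(maas_url: str, system_id: str) -> tuple:
    """Return the machine to the pool so the next run can re-image it."""
    return (maas_url + "/api/1.0/nodes/" + system_id + "/?op=release", {})

# An OIL-style loop would POST acquire, POST start, run its tests,
# then POST release -- for every machine, on every run.
```

The point is that the whole cycle is scriptable: nothing in the flow requires the web UI.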
16 is what you said, or 6-0? Yeah, 60. All right, I think Massimo is telling me that he's ready to go. All right, so let's show off the MAAS and Chef integration first. Let me hand this over to JJ, an amazing Austin engineer. Like me, he has a very newborn child in his house, so he's suffering from lack of sleep, and when he travels is when he actually gets to sleep seven hours straight. JJ did all the work to get Chef up and running on MAAS for the first time. We did this for ChefConf and announced it there, and I really want to thank you; you've been a fantastic partner to us. I'll let you drive the demo. Hey everybody, so there we go. Basically what we're going to show here, oh, it's really small in the corner over there, is us listing nodes from MAAS directly, leveraging the API. I'm blaming the Orange Box for the brokenness, just FYI. It's pretty straightforward. You can run bootstrap off of it, which will actually go ahead and install Chef after telling MAAS, hey look, MAAS, I want this machine to bootstrap. If you saw the Chef slide a little while ago, those were actually screenshots taken of my laptop running this stuff, so it was pretty cool. I love MAAS, I actually do. It works exactly how I would hope it would, and it does the thing. I wish I could do the bootstrap, except the machines are not running. Yeah, you can see what you're saying. Exactly, but basically the bootstrap is just one simple command to say, hey, I want a new machine out there, and it'll boot it, install the OS you're expecting to have, then SSH into it, or use WinRM if it's Windows, inject the Chef binary onto it, and check in with your Chef server using your knife.rb and your validation.pem. So it's one step, and you can do it.
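JJ's one-command bootstrap boils down to a fixed sequence of steps. The sketch below illustrates just the ordering from the talk; the function and step names are hypothetical stand-ins, not the real MAAS API or knife plumbing:

```python
# A toy sketch of the one-command Chef bootstrap flow described above.
# Step names are illustrative; only the order of steps comes from the talk.

def chef_bootstrap(machine: dict) -> list:
    """Return the steps the bootstrap performs for one machine."""
    steps = [
        "maas: deploy %s with %s" % (machine["hostname"], machine["os"]),
    ]
    # Windows targets are reached over WinRM, everything else over SSH.
    transport = "winrm" if machine["os"] == "windows" else "ssh"
    steps.append("%s: connect to %s" % (transport, machine["hostname"]))
    steps.append("inject chef binary")
    steps.append("register with chef server (knife.rb + validation.pem)")
    return steps
```

One command from the operator's point of view, four steps under the hood.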
I'm putting a project together to actually build the Chef OpenStack cookbooks with MAAS as the layer driving the bootstrap for these things. So if you're building OpenStack clusters over and over and over again, MAAS will be one of the drivers for that. That's great, that's great. Cool. If they want to see this running, can they stop by the booth and take a look? Absolutely, we'll have more than, you know, 20 minutes to talk about this, and please find me at the Chef booth. Also, my boss Matt is here. Do you have a booth? We have a booth down in, what is it, number? I don't even know; it's the one with the big Chef on it. Perfect. They're awesome. Yeah, any questions, or I can answer anything? It's also lunchtime. How many Chef users are here? Awesome. Are any of you using bare-metal automation today? Sorry, yeah. Are any of you using bare metal today? Okay, interesting. So take a look at Chef; if you're interested, come and talk to JJ, and he'll tell you how the integration was done. There's a YouTube video that we did for ChefConf, which we distributed there, where you can see it running in action, and downstairs we have hardware which is behaving a bit better, so we can show it off to you. Do you want to do a walkthrough of MAAS itself, Massimo? Excellent. Are we out of time? Can we keep going? Four minutes. Okay, four-minute walkthrough. Thanks, Vanna. So, I apologize, the hardware's not cooperating with us today, but I can give you the highlights, right? There are different tabs at the top, and we have a new version of MAAS that I can actually show if you would like, or not; you choose. So you decide, what's nicest to show? 1.7 or 1.8? We'll go with the old version for now. So basically you have the functionality on the right. You can commission your nodes, and that's basically gathering all the information.
If the nodes were cooperating and we had all the information, we'd have it listed at the bottom; there's a discovery process. You get down to the serial numbers of the drives and the firmware versions, so you can update them; there's a lot of information. MAAS has a command-line client and also an API, so you can query the API if you have existing auditing software that needs to grab the information, for whatever auditing purposes; you have full control again. You have node listings. Talking about scaling, MAAS is set up with a region controller and cluster controllers, and here we have the cluster controller. We only have one running, but you can set up multiple cluster controllers. They do the caching, the package caching, and you're really limited by disk speed and bandwidth at that point; you can just add as many cluster controllers as you need. There's an idea of zones, kind of like AWS availability zones, and the same concept exists in OpenStack. If you deploy multiple machines, it spreads them across the zones so you have a lower chance of correlated failure. The networking: we can define networks so MAAS understands them, in 1.8. I think you mentioned the IPAM functionality, where you can actually query MAAS and say, hey, I need a block of IPs; MAAS catalogs that and sends the response back. Images: we have multiple architectures, multiple versions, not only Ubuntu but custom images like Windows, SLES, CentOS, et cetera. So you can install multiple architectures and multiple operating systems; here we have ARM64 and also PowerPC along with x86, obviously. Any specific questions about the user interface that I can answer? That's a great question. Do you want to answer that? No, go ahead.
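The zone-spreading behaviour described above can be illustrated with a toy placement policy: when you ask for several machines, they're distributed across zones so that one failing zone takes out as few of them as possible. This round-robin sketch is an illustration of the idea, not MAAS's actual placement algorithm:

```python
from itertools import cycle

# Toy illustration of zone spreading: distribute requested machines
# round-robin across zones so a single zone failure affects as few
# machines as possible. Not MAAS's real placement logic.

def spread_across_zones(machines: list, zones: list) -> dict:
    """Assign each requested machine to a zone, round-robin."""
    return dict(zip(machines, cycle(zones)))

# Five machines over three zones: no zone ends up holding more than two.
```

With five machines over three zones, losing any one zone costs you at most two machines instead of, potentially, all five.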
The images have to be specially prepared to be deployable with curtin. We deliver these images through the same mechanism, simplestreams, which is how we deliver our Ubuntu cloud images today, but they're specially treated so that they're curtin-installable. There's an old legacy installer that uses debian-installer, which will use the original debian-installer bits, but the newer version uses a special image. Let's talk afterwards; there are two answers to that question and I'll explain. You can assemble the images yourself, or you can download them, but because these mechanics, the CentOS piece, were done as the last thing in the 1.7 cycle, we didn't do a lot of publicity on it; we basically shared it with customers that were interested in it. But let's talk afterwards. That was an example, yeah; for CentOS specifically, there are two ways of doing it, and I'll tell you afterwards how it works. This is not a fair rendition, and actually I have to apologize to Massimo, and I'm going to get yelled at by Dan Poler, Massimo's boss, who'll ask me why this demo came up at the last minute. I apologize, I didn't prepare for it. So this is not a fair demonstration of MAAS. Please stop by the booth; all the Orange Boxes are running MAAS underneath. If you want Massimo to walk you through the latest version of MAAS with all the features we talked about here, stop by there and talk to the team. Massimo will be there most of the day as well, and he'd be delighted to help you and walk you through how it works. Thanks very much for your time today.