All right. It's a packed house. Anybody want to see a demo of Nebula? Come on. Get over here. Hey, guys, you know Vance. I think they double-booked this. You guys got to come over, check it out. It wasn't advertised, so the people who get to see this are the people who happen to be hanging out here and not going to sessions. So you're in the wrong place. Leave, go to sessions. Just kidding. All right. So what I wanted to share with you guys today is something that we just announced last week, the Nebula One system. I'm going to take you through what we've built, and then I'm going to actually do a live demo of the system real quickly. And I only tweeted a little bit, so I don't have everybody from the exhibit hall here, because I'm not sure if the demo is going to work. I'm just kidding. So first of all, I want to introduce the Nebula One system. Obviously, we're all here building on OpenStack because we believe in what's on this slide. Everybody wants an environment where they can spin stuff up and spin stuff down. They're dealing with big data. They're dealing with all sorts of stuff. And they don't have any money to do it. This is a challenge for most enterprises, most service providers, hell, I have this challenge at my house with all the data I'm collecting every day from sensors and other things my wife doesn't approve of. So what do I have? I can upload stuff into Amazon and saturate my internet connection. I've got various companies here that will build a custom cloud, and some of them are really good at what they do. So if you've got to build something custom, if you're a service provider, call Mirantis. Call some of the folks here that are experts in taking OpenStack and tailoring it. Half a million people have downloaded it. If you need to tweak things, if you need to build a cloud, there are lots of great options here.
And then for the enterprises out there, there are a lot of companies selling cloud that are really just putting the word "cloud" on a bunch of products that have been around in the market for a decade. And so Nebula is a little bit different. What we've done is we've taken a systems approach to building a cloud. If you look at Amazon Web Services, if you look at a lot of the big clouds out there, including Rackspace, and you look at what they're talking about, they're talking about their entire infrastructure as a system, and they're involved in projects like the Facebook Open Compute Project in the corner here. They're looking at the designs of the servers, the networks, the storage, and they're trying to figure out how to build the most efficient, reliable system. The question is, how do we do that in an enterprise? How do we take that kind of thinking into an environment where you've got a team of storage people, a team of compute people, a team of network people, a team of security people, a bunch of people with opinions about how Mirantis should implement it? It's really challenging to bring a custom solution into an environment where people don't have the expertise of operating an Amazon-scale or Google-scale infrastructure, because you get a lot of opinions influencing the implementation that may or may not be correct. So what we've done is we've built a turnkey system that provides full infrastructure services, a turnkey installation, rock-solid stability, and high availability, everything from the ground up. Everything in this system is actually built from the ground up to provide this intuitive self-service experience. We're gonna see a demo of this in just a second, but what you get as an end user is EC2 APIs, OpenStack APIs, object store, block store, elastic networking, and elastic compute in a system that you can deploy in an enterprise data center in an hour.
You can plug this box into a rack full of Dell, HP, or IBM servers, power it up, and it literally boots that entire rack into a cloud. And then if you add another rack, and another, and another, each of those racks becomes part of a distributed system where the cloud gets more reliable and faster. And it's really cool. So what we've got here is a transition away from custom clouds, where every single OpenStack deployment, all half a million of them out there, is running on different hardware, different storage options, different network options; everything's different. Even if you follow a reference architecture, even if Red Hat says this is the reference architecture on IBM, you know as well as I do that the security guys at a big financial services company or a pharmaceutical company are gonna put their monitoring software on those boxes. They're gonna use their CMDB tools on those boxes. And every piece of software and hardware that you put into that environment creates a variable. The BIOS versions, the driver versions, the kernel versions, everything matters in an operational environment where you've gotta make the thing something a small number of people can manage. You can't have variability in every component of every implementation. So what we've done is we've built the building blocks of a cloud. One thing you're not gonna see is Nebula selling servers. We believe that large enterprises are already getting a great deal from IBM, Dell, HP, and other vendors for their servers. The business of wrapping spinning hard disks, solid-state disks, CPUs, and memory, along with the whole global infrastructure needed to support servers, is something we're never gonna win, right? It's not something that we wanna do. But by taking the Nebula cloud controller and plugging a rack full of industry-standard servers into that device, as long as those servers meet a specification that we've provided, we know exactly what's in that cloud.
So everything that we're able to do is driven by a systems approach. Whether you pick an IBM System x server, the Dell R720xd, or the DL380 from HP, they all have the exact same number of hard disks, the exact same number of memory sockets, the exact same number of CPU sockets. The ratios of all these things are exactly the same. So it really doesn't matter whether you've got racks of IBM servers or racks of Dell servers connected to this device. You can use the Nebula system with whatever servers you bring to the table. At that point, it all fits together, and you can literally use one of our controllers to power a rack of these industry-standard servers and then scale out from there. Plugging in additional servers takes a few minutes; they boot up into the cloud. The operating system, which is based on OpenStack, is called Cosmos. Cosmos is running across every single piece of hardware in this environment. And what you end up with is a single computer. Every single element of this system is part of a single logical computer that just gets bigger the more hardware you throw at it. It's kind of like disks in a storage array: every single disk that you plug in just makes the array bigger. Every server you plug in gives you more networking, more compute, and more storage in this cloud. And the moment you plug it in, every bit of compute, storage, and memory in every single one of those servers becomes part of the shared resource pool. So what you have when you deploy this is one system with one thing you can count on, and that's the control plane, the APIs. As your application needs more resources, you can count on the availability of the APIs. I need more compute, I need a network, I need a block volume, I need the cloud to do things. That's what Nebula worries about. What you worry about is capacity.
And how you worry about that is by using the showback tools, the self-service provisioning tools that I'm about to show you, so that you can scale out the size of this thing so that your end users have the resources they need. But everything else is turnkey. So you've got the OpenStack APIs, the Amazon APIs; you can use the same command-line tools or the rich user interface that I'm about to show you. Opscode Chef, Puppet, any number of platform services run out of the box on this system. So you don't have to worry about anything below infrastructure as a service. Essentially, what you're doing is taking raw physical servers and converting them into infrastructure as a service, delivered as a device. Just to give you a sense of what you'd see plugged into these systems: Dell, HP, and another vendor, ZT Systems, all have great servers that you can plug into this system at different price points. As you start to think about your cloud, you start to think about how important it is that someone else come in here and replace things as they fail, or whether you can just let servers fail and replace them many servers at a time. So you're now in control of what kind of SLA you need from that infrastructure. Nebula is, of course, one of the founding platinum members of the OpenStack Foundation. A bunch of the folks at Nebula actually wrote Nova, which is the compute core. We have been and will continue to be very involved in every OpenStack release. We've been among the top several contributors to each release. In the Grizzly release, which just shipped, we had the most lines contributed to the core OpenStack projects of any company; I think it was 157,000 lines of code. We were, I think, number four in terms of actual changesets committed and bugs patched, but Nebula is very involved in every piece of the core OpenStack technology.
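Since the system exposes EC2-compatible APIs alongside the OpenStack ones, existing Amazon tooling can be pointed at it instead of Amazon. A minimal sketch using euca2ools, assuming an illustrative endpoint and placeholder credentials (none of these values come from the talk):

```shell
# Point EC2-compatible tools at the cloud's endpoint instead of Amazon
# (hostname, port, and keys are placeholders).
export EC2_URL=http://nebula.example.com:8773/services/Cloud
export EC2_ACCESS_KEY=<access-key>
export EC2_SECRET_KEY=<secret-key>

# Drive the same instance lifecycle through the EC2 API.
euca-describe-images
euca-run-instances ami-00000001 -t m1.small
euca-describe-instances
```

The point is that scripts and tools already written against EC2 keep working; only the endpoint and credentials change.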
We're leading various sessions on the Nova compute project, the dashboard, Ceilometer, and security. We've worked with the NSA guys on a lot of the security patches that went into the Grizzly release. And what I wanna do now is actually show you the system, because we've been talking for two years as a company. We've been contributing code to OpenStack for two years as a company. And last week, we shipped a product. And I'm really proud of the product. If you want a really compressed version of what I just talked about, go to nebula.com. There's a video we created that has our customers actually talking about how this thing works. We had Patrick narrate it for us, and he's the voice of Nebula now. So we've got here, let me just find a browser window here. There we go. Sign out. When you first see Nebula, if you're not signed in, you see the sign-in page. But I'm not gonna start there. I'm actually gonna request an account, because a true cloud is something that you should be able to let people discover and sign up for on their own. So I'm gonna go in here and create a new user called OpenStack. At this point, I haven't actually been granted access to the cloud. But if I log in as an administrator, I can see that a request came in for a new account. So what you're able to do as an administrator, as people hear about this cloud, is see all these requests come in and approve them. Or they can be on a pre-approved list. Just by clicking this approve button, that end user gets an email: welcome to Nebula. I'm gonna log back out and log back in as that user. So at this point, I have a very simple experience in front of me. I see block storage, compute, object storage, support. And this is basically what a basic infrastructure cloud does today. It provides these foundational services. I wanna be able to start running stuff.
I wanna be able to store stuff, in either an object store or a block store. When I click on compute, I can see that I've got a bunch of options here. I've got pre-loaded images. I can run Fedora; I can scale the size of this. You can see my quota here being adjusted dynamically. And then I can name the instance. I'm just gonna call it OpenStack. And then I click launch. Within a few seconds, with just a few mouse clicks, I'm now running an instance in this cloud. So what we've really done is we've simplified the idea of cloud to the point where it's actually easier than running something in Parallels or VMware Workstation for an end user. The average consumer experience with virtualization technology is now more complicated than spinning something up in a cloud with this experience. And this experience is just the web GUI. You can also do exactly what I did with the OpenStack APIs, with the Amazon EC2 APIs, with command-line tools, or with third-party software or services that can spin up resources in a cloud, with all the richness of the OpenStack API support. I can see my instance is now running. It's been up for a few seconds. I can also do the same thing with everything else. So if I wanted to go in and allocate a floating IP address, boom, there it is. I can go back to my instance and associate that floating IP address with that instance. And all of this is based on the quotas that have been pre-established. In fact, what you didn't see happen here was that a sandbox was created for me. A project was created with a certain amount of resources: storage, compute, and instances. Everything was just created for me, with a self-service provisioning experience that makes the cloud accessible to end users. The same thing is true of everything else. I've got object store here, I've got block store; I can go in and create a 10-gigabyte volume, which uses up a portion of my terabyte quota here.
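The launch-and-floating-IP flow in the demo can also be driven from the OpenStack command line. A hedged sketch with the Grizzly-era python-novaclient, where the endpoint, credentials, image name, flavor, and IP address are all illustrative placeholders rather than values from the talk:

```shell
# Credentials and endpoint for the cloud (values are placeholders).
export OS_AUTH_URL=https://nebula.example.com:5000/v2.0
export OS_USERNAME=openstack
export OS_PASSWORD=<password>
export OS_TENANT_NAME=openstack

# Boot an instance from a pre-loaded image, like the launch button in the GUI.
nova boot --image fedora --flavor m1.small OpenStack

# Allocate a floating IP from the pool, then associate it with the instance.
nova floating-ip-create
nova add-floating-ip OpenStack 10.1.0.25
```

This is the same quota-governed request the GUI makes; releasing the address later returns it to the pool.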
I click create; I'm gonna call it OpenStackVol. And now I've got a volume. I scroll down, and I have a volume. I can then go back and attach that volume to that instance, right? And what this is really doing, because it's so easy to just click through here and do this stuff, is letting someone who doesn't have a lot of experience with cloud familiarize themselves with the principles of cloud, right? A network address is something I can just ask for, and it comes out of a pool. An instance is just something I can ask for, and it uses some quota. And when I release it, it goes back to the pool. I'm not gonna go in and show creating projects, but you can have different projects, and as an end user you can have resources and resource pools across all sorts of different projects. So when I get a job at a biotech company or a financial services company, they're gonna hand me a laptop. It might have a terabyte hard disk, or a couple of terabytes of storage. It might have eight gigs or 16 gigs of memory, maybe 32 gigs if you're really lucky. And that's the computer you're expected to do your job with in the age of big data. So what you can use a Nebula system for is to augment the computational resources that knowledge workers have at their disposal. You get a laptop with eight gigs of memory and an account on Nebula that has a terabyte of memory and 100 terabytes of storage. And now I've got a sandbox. I've got a workspace I can use personally, and my team might have even more storage; I might have a petabyte of storage for my team. What you end up with is a whole new kind of computer at your disposal. You've got your laptop or your tablet, and then you've got your cloud computer out in the cloud, where you've got all this capacity and all this performance.
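The volume workflow above maps onto the same command-line tools. An illustrative sketch with the Grizzly-era cinder and nova clients, assuming the credentials from the environment are already set; the volume name matches the demo, but the device path and IDs are assumptions you'd take from your own cloud:

```shell
# Create a 10 GB block-storage volume, like the demo's OpenStackVol.
cinder create --display-name OpenStackVol 10

# List volumes to find its ID, then attach it to the running instance
# as a block device the guest OS will see (device path is illustrative).
cinder list
nova volume-attach OpenStack <volume-id> /dev/vdb
```

Detaching the volume later returns that 10 GB to the terabyte quota, the same pool-in, pool-out principle the talk describes for addresses and instances.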
I haven't talked about the performance characteristics of this thing, but suffice it to say, even if you've got the fastest connection you can get into Amazon, which is about 10 gigs, and they charge you dearly for that, more than our product costs every month, you're still an order of magnitude slower than the speed at which a Nebula system connects to your network. So I'm running out of time. What I wanna do is wrap up with one quick thing, and then we've gotta cut this off here. If you guys wanna see a longer demo, we've got a booth right behind you, and I'm happy to do that. I'm just gonna quickly cruise in here and show you what the back end of this thing looks like, and then I'm happy to finish this up at our booth. I can see the entire health of this cloud: every single OpenStack service and how it's running. I can click on every single node. I can see the performance characteristics as it's running: information, notifications, the system log, BIOS information on every single device, load balancer stats, proxy stats. And every time you plug in more infrastructure, it just gets bigger and bigger and bigger. And you do absolutely nothing. So the punchline here is: let's all start using clouds and stop talking about building clouds. Let's check that box that says "build a private cloud" right now. You buy a Nebula system, and it's up and running in an hour.