Hi, good afternoon, everyone. We've got a great session lined up for you, and it's actually gonna be a two-part session. We wanted to have some hardware here, but it looks like they're gonna let us have the demo theater for half an hour after the booth crawl. So we have bourbon at our booth, and then we'll actually get up and touch the thing and bang on the Nebula hardware.

What I wanted to talk to you a little bit about is our journey over the past three years, which started when I started OpenStack back at NASA with a small team of folks at Anso Labs. The journey that led us to making OpenStack this incredible open source foundation for all these companies and all of these technologies and all these products. And kind of where Nebula is focused and what Nebula's been up to as we've been contributing, on certain things more than others. And I wanna focus on what we've built.

So we ended up building a cloud at NASA about five years ago that had a very primordial version of Nova. And it was for a bunch of rocket scientists. And we thought the rest of the world looked like NASA. We thought that everybody would quickly understand this cloud technology thing, would rapidly flock to any cloud that anybody bought in the enterprise, and would just be beating our doors down for this thing. And when we built this thing, we really built it for people that didn't care about the details. They wanted to build incredible things. That's the hyperwall. It's one of the largest computer displays ever built. And we built it because when you built something like this, when you built the supercomputer behind it, you had to wait in line for months to have access to a supercomputer at NASA. People would often buy computers and hide them. Literally, we found computers behind sheetrock walls with thermal cameras, computers that scientists would buy with money from their grants and hide from CIOs.
And so we set out to do what we did at NASA, but do it for everyone else. To, as I said when we started the company, democratize this idea of cloud computing so that every enterprise could have this thing that we built back at NASA five years ago. And what we built, frankly, was a community of folks that really liked that vision. And they took OpenStack and they took armies of people and they built a lot of custom private clouds, essentially what we did at NASA.

So what we ended up with was a lot of people that became experts in OpenStack. And OpenStack became a very complex piece of technology that integrated a lot of things together, and it was very difficult to bring up. Nova, I think, has 670 configuration options. And you have to know how they all work, or you might not have a cloud which is secure, or which is easy to use, easy to upgrade, or easy to bring up. But we built this thing for people that were animators and creators, people that were trying to figure out how to visualize gene sequencing, scientists and end users. We didn't intend to build something so complicated that it would take a whole team of people months or years to stand up a cloud.

So when we set out to build the Nebula product, we set out to build it for people that wanted to use clouds, not build clouds. And I think that's a really key distinction. If you're in the enterprise, you don't necessarily wanna build the technology that you consume. You wanna use the technology you consume. So in 2011, we set out to simplify private cloud. And our vision, everything we've done for three years, has been focused on making the cloud something that you can deploy easily. And so what I wanna introduce for the first time here at OpenStack is the Nebula One.
And we're about to see this thing and some really neat new features and technologies Vish is gonna dive into in a few minutes, but this is a truly remarkable piece of technology. We built what we call a cloud controller. And this is kind of the amalgamation of a really fast switch and a powerful compute platform. All the OpenStack stuff runs in this thing. So I can plug a rack full of servers into this device and bring up a cloud in about an hour and a half. And the idea is that anybody should be able to buy this thing, plug a whole bunch of off-the-shelf servers into it, and have an OpenStack cloud running in an hour and a half.

And it has a remarkably simple user interface. Building on the foundation of Horizon that we built, we've got a user experience that allows you to quickly log in, create accounts, and provision compute resources, and it's been designed for people who don't understand cloud or infrastructure, because frankly no one in the enterprise does. So to the extent that we can go out to millions of people that might be scientists or animators or creators, people that really have no business going out and buying a bunch of O'Reilly books, and introduce them to the power of cloud, this is the key to the enterprise. It's taking people from not understanding this technology, never having control of infrastructure, and getting them in the pilot seat as quickly as possible.

So what we've built is a product that you can actually deploy very quickly and that's actually very inexpensive. This cloud, which has five servers, a Nebula controller, over a terabyte of memory, and hundreds of terabytes of raw storage, you can buy for 100K, street price. And then you can scale it out. You can fill up the rack with a whole bunch of off-the-shelf, industry-standard servers. Vish is gonna show you a little bit more how this works in a minute, but this is the way it works. And then if you need it to be bigger, you can actually scale it out to multiple racks.
And this multi-rack system, with 95 servers, over four petabytes of storage, and 18 terabytes of memory, you can deploy for around a million dollars. And this has never been possible before. So what we're here to do today is show you this thing. So I thought, I could bring Devin Carlen up here, my co-founder, who wrote a lot of the Horizon code and Keystone. I could bring Gordon, our new CEO, up here. Trey Henry, who wrote most of the Horizon code. There are probably 15 people that I could have brought up from my company, but I'm honored to bring Vish Ishaya, the number one contributor to OpenStack, our CTO, to just dive in and really get to know this product. So thank you.

Thanks, Chris. So I'm gonna be showing you guys some of the new features that we've been working on, and some of the way the whole system works. First I wanna talk about the main goal of our system, which was to create something that is extremely easy to install. We want people to be up and running in a couple of hours. So this is just demonstrating unboxing the system and plugging it in. We've got an integrated 48-port, 10-gig switch. You cable up a few cables, cable up an external out-of-band management switch, and essentially in a very, very short period of time, you're up and running.

So this is going through the installation process. The first thing you do is enter an accessible IP address so that you can access the system, and then you go to that IP address and go through a few pages of configuration options. One of the main things we're trying to do here is make this ready for the enterprise. So there's things that enterprises need. You wanna be able to capture your syslog feed. You wanna be able to have SNMP traps. There are certain configuration options that you need to enter. And here's going through the configuration options. You can see you put your SMTP server in. You pick your network range, where it's gonna show up.
You put in your DNS server, and it actually shows up on your local DNS server. So you can basically hit proxy.nebula.yourcompany.com instead of having to remember some random IP address. And within just a few minutes, you've got a private cloud that's ready to use. And using the cloud is as simple as entering a username and password, selecting an image, picking a name, and hitting launch. That's the experience that we want everybody who wants to use the cloud to have. Not trolling through thousands of pages of documentation, figuring out all the configuration settings you need, spending two weeks just getting a demo cluster up, and then eventually getting a team of people and going into production. We want it to be just that simple to use the cloud. So that was our first goal.

And so one of the main things we're focusing on is making it really, really easy for enterprises. So what does that mean? That means, first of all, we wanna use industry-standard servers. You don't have to buy your servers from us. You get the controller, you buy a Dell server, you buy an HP server, you buy an IBM server, you buy a Cisco server, you buy a Super Micro server, plug them all in, and they all work. So you can leverage your existing relationships. Most enterprise customers have existing vendors that they wanna buy from. They don't want to have to go through a new process of verifying that they're allowed to buy servers from a new OEM they've never heard of; they wanna be able to get their standard support, et cetera. So we support standard industry servers.

We've done a lot of work improving the network experience. The whole goal is basically you take your servers, you plug them in, and then everything comes out through a single cable. Now obviously we have a full switch, so you can plug in multiple 10-gig links, but the idea is it should be as simple as: your cloud is at the end of one cable. That's all you should have to do.
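Pulling together the enterprise settings mentioned across the setup walkthrough (management IP, network range, DNS, SMTP, syslog, SNMP), the first-boot configuration amounts to a small handful of fields. This is a hypothetical illustration; every field name and value below is ours, not Nebula's actual schema:

```python
# Hypothetical first-boot configuration; field names and values are
# illustrative only, not Nebula's real settings schema.
first_boot_config = {
    "management_ip": "10.0.0.2",          # address you browse to for setup
    "network_range": "10.0.10.0/24",      # addresses handed out to instances
    "dns_server": "10.0.0.53",            # registers e.g. proxy.nebula.yourcompany.com
    "smtp_server": "mail.example.com",    # outbound notification mail
    "syslog_host": "logs.example.com",    # capture the syslog feed
    "snmp_trap_host": "nms.example.com",  # SNMP traps for monitoring
}
```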
Once you plug your servers in, you connect to that cable and you have a cloud. Now the process here is showing what it's like to log into the admin panel. Chris showed a little bit of this already, but essentially you have this beautiful interface, and it's very responsive. You can go in and edit your configuration options, and one of the main things we focused on here is all of those enterprise-style features that you really need: being able to change network addresses, being able to put in your syslog feed, all those configuration options that make it easy for an enterprise user to hook it in with their existing systems. You shouldn't have to create a whole new infrastructure strategy when you bring in cloud. Yes, you're gonna be changing all your development practices, that's understood, that's why cloud's so valuable, but you don't wanna have to change your infrastructure practices. So we're trying to bring cloud into the enterprise in an easily consumable way.

One of the other things we have is SSL certificates by default. That means that everything is HTTPS-secured, all of your endpoints. You put your own certificate in there; it can be a verified cert or a self-signed cert if that's what you want, but you can actually have a signed cert and have the little green lock that you see in your browser actually work, which is pretty neat. We have systems like remote support, so if there's ever any problem, you can open up a tunnel and we can go in and help you debug issues remotely. We have a view of the whole system where you can see your utilization, how much compute and how much storage you've used across the cloud. I mean, this is really valuable; I wanna know when it's time to buy new nodes. We actually provide a feed of metering data from the cloud so that you can see what's going on, and we have a nice little graph showing metrics, how much CPU is in use on each node, so you can find hotspots, et cetera.
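The per-node CPU graph described here is essentially a hotspot filter over a metering snapshot. A minimal sketch of that idea (the function and its names are ours, not Nebula's code):

```python
def hotspots(cpu_by_node, threshold=0.85):
    """Return node names whose CPU utilization exceeds the threshold,
    hottest first, from a {node: utilization} metering snapshot."""
    hot = [(util, node) for node, util in cpu_by_node.items() if util > threshold]
    return [node for util, node in sorted(hot, reverse=True)]
```

For example, `hotspots({"node-1": 0.97, "node-2": 0.41, "node-3": 0.90})` returns `["node-1", "node-3"]`, which is the "when is it time to buy new nodes" signal in miniature.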
So basically, we're trying to make OpenStack useful for operators in addition to just end users. You know, there's been so much effort put into making cloud a great experience for developers, and not a lot of thought has been put into those people that have to sit and maintain the cloud and run the cloud every day. That's where we're trying to really focus.

So what are some of the newer things that we're integrating? One of the new features coming in our next release of the product is Nebula Identity. What that means is basically we integrate with Windows Active Directory for logging into the cloud, and it's actually really simple. I showed you the configuration panel; it's as easy as going in, hitting edit on your directory configuration, and turning it on. By default, if you don't have it on, then you just create users and accounts like you normally would in an OpenStack cloud. But if you put in your Active Directory credentials, then within a few minutes, any of the users already existing in your enterprise can log into the cloud, which is actually really powerful. Now you can limit that to a specific Active Directory group, or you can just say everybody who has an Active Directory account can automatically log into the cloud. But you know, that's just a couple of minutes of configuration to add Active Directory, which is pretty powerful.

The next really big thing that we've been working on is something that we think is really important. Most enterprises have a huge investment in storage infrastructure already. They trust their storage, right? So we trust your storage too. One of the things that we've been working on in our last release is integration with NetApp. Now I'm gonna do something completely crazy here and do a live demo. People say you should never do a live demo on stage, but I don't believe them. So I wanna show you how simple it is to script.
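Going back to the Active Directory feature for a moment: the group-based restriction described above boils down to a one-line access filter. A hypothetical sketch (the function and parameter names are ours, for illustration only):

```python
def may_log_in(user_ad_groups, limit_to_group=None):
    """If a limiting Active Directory group is configured, only its members
    may log in; otherwise any authenticated directory user is allowed."""
    if limit_to_group is None:
        return True
    return limit_to_group in set(user_ad_groups)
```

So with no limiting group configured, every AD account gets in; configure `limit_to_group="CloudUsers"` and only members of that group do.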
The power of integrating existing storage devices with something like OpenStack is that you suddenly have a consistent set of APIs with which you can provision storage resources. In the same way that cloud computing gives you the option of not having a person clicking a button to give you a virtual machine, not having a person click a button to give you a LUN or an NFS share is a really powerful idea as well. So this is all the code that I'm gonna be running right now. You can see it's about three or four lines split out. I'm basically using the Cinder APIs to create two volumes and launching an instance that attaches those two volumes. Now the image that I've created is a special image that runs vdbench, which basically just puts a whole bunch of I/O load on the system. So I wanted to show you what I'm running. This is running against one of our dev clouds. I'm not getting error messages, which is a good sign.

So, over here, this is the NetApp UI. There's actually a NetApp plugged into the system directly. We have good 10-gig connections, and you can see the current I/O throughput up on the top right. So what's happening is that in the background, instances are being launched. If I log into the account, you can actually see that in the background, as these are going, I'm getting notifications telling me my instances are being created. This is another one of the cool little hidden features of our UI: you actually get notifications when other people in your project are launching instances. So I'm launching instances from another location, and I can see little notifications. All of them are spinning up as we speak. We can see them building down here. So what's gonna happen as this whole process goes on is, as you're familiar, the image is being downloaded onto the compute nodes. The volumes are being created on the NetApp and attached. It's launching.
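The volume-creation half of this roughly five-line script can be sketched against the Cinder v2 REST API. The endpoint, tenant, and token below are placeholders, and the demo's actual script may have used the Python client libraries instead, but the payload shape is the same idea:

```python
# Sketch of the demo's volume-creation step via the Cinder v2 REST API.
# CINDER_ENDPOINT and AUTH_TOKEN are placeholders, not a real cloud.
import json
import urllib.request

CINDER_ENDPOINT = "http://cloud.example.com:8776/v2/demo-tenant"
AUTH_TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"

def volume_body(size_gb, name):
    """JSON payload Cinder v2 expects for POST /v2/{tenant}/volumes."""
    return {"volume": {"size": size_gb, "name": name}}

def create_volume(size_gb, name):
    """Create a volume and return its ID (performs a network call)."""
    req = urllib.request.Request(
        CINDER_ENDPOINT + "/volumes",
        data=json.dumps(volume_body(size_gb, name)).encode(),
        headers={"X-Auth-Token": AUTH_TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["volume"]["id"]

# The demo creates two such volumes, then boots the vdbench image with
# both attached, e.g.:
#   vols = [create_volume(100, "bench-%d" % i) for i in range(2)]
```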
It's going through the boot process of the OS, starting up vdbench, and just running a whole bunch of I/O. So what we should start to see happening over in our NetApp UI is that our kilobytes-per-second of I/O throughput, which was down at about 100, or actually less than that, it's so far off the screen now you can't even see it, is all the way up to about 40,000 kilobytes per second going in and out of these volumes. And that was a five-line script that spun all of this up in a couple of minutes. So we're getting all of this data processing, using our existing storage resources and our existing compute resources, really, really quickly and really easily. All of that through an automated script.

So obviously this is a toy demo. I mean, all I'm doing is running a benchmark, but it really shows the power of being able to dynamically provision things with those APIs, and the power of being able to do it on infrastructure that you've already purchased and invested in. You already have a NetApp sitting in your data center. Wouldn't it be great to be able to use that to back your storage devices? This is the real power of the plug-in architecture of OpenStack. Being able to provide it in a way that makes it easy for existing customers of these various storage platforms and services to plug OpenStack in and start using those common APIs, while using their existing investment, is really, really important. So that's one of the main things that we focused on.

One of the other areas that we've put a lot of work into is security. During that initial boot process of provisioning a node, it actually takes about 12 and a half minutes, at least on one of the models that we have plugged into our demo rack that we're gonna have later this afternoon, for a node to boot. And during that process, essentially, there's no configuration. You don't buy a node, configure an operating system on it, and then plug it in.
You buy a node, you plug it in, and it's PXE booted. Cosmos, our software, is loaded down onto the node. And we do a lot of very interesting things around security, so it's a very hardened system. We use the TPM chips on the servers to cryptographically verify that the node image has not been modified since it was installed. So if it's ever modified, the node will fail to boot and it won't join the cluster. We do things like hardening the kernel and hardening QEMU, and we wrap everything in SELinux containers to make sure that if there ever is a hypervisor exploit, it's contained. Which, interestingly enough, was pretty much theoretical until a few weeks ago, when there was a really big one in the virtio layer. I don't know if you guys read about that, but everybody always said, well, what if someone breaks out of the hypervisor? And I was like, well, it never really happens. Well, it actually did. So this kind of thing does happen, and having that defense-in-depth security is a really, really important piece.

So all of that is streamed onto the node, and within 12 minutes you have those existing server resources added to the system, right? And essentially it's as easy as, and I lost my internet connection, of course. It's as easy as going to the admin page, and you'll see that new node just added to the cloud. It shows up on the front panel, there's a nice little animation you see, and then suddenly it's on the admin page. So we can make use of all these existing resources right away, with minimal effort, by wrapping them in a common API, which is the real power of OpenStack.

Okay, so the last thing I'd like to do is bring up our CEO, Gordon Stitt. Gordon has a lot of background in enterprise companies; he used to be the CEO of Extreme Networks, and he's gonna talk to you about OpenStack and Nebula in the enterprise. Thanks, Vish. Thank you.
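The TPM-based boot verification Vish described can be sketched conceptually as a digest comparison against a value recorded at install time. This is a deliberate simplification of ours (real TPM attestation involves PCR measurements and signed quotes, not a bare hash check):

```python
import hashlib

def node_may_boot(node_image: bytes, recorded_sha256: str) -> bool:
    """Conceptual stand-in for the TPM check: the node only continues
    booting (and joins the cluster) if its image hashes to the digest
    recorded when the software was installed."""
    return hashlib.sha256(node_image).hexdigest() == recorded_sha256

# Any modification to the image changes the digest, so a tampered node
# refuses to boot and never joins the cluster.
```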
So, being relatively new to OpenStack, it was interesting to me to look back a little bit at the history. The first wildly successful consumer-oriented public cloud was Amazon. And if you use the Amazon cloud, all you need is a browser and a credit card, and you connect the way they want you to connect. In effect, Amazon was the center of the universe. They didn't worry about anything else, as long as you had a valid credit card. The interesting thing is that, looking at some of the early OpenStack implementations, and in fact, when I got to Nebula, looking at how some of our early customers used it, it was exactly the same way. They'd set up the cloud and say, okay, we're the center of the universe, you connect to us. And in the early stages of the market, that worked great. People were really excited about having all these compute resources at their disposal, and they were excited because it was new, cool technology that was gonna solve all their problems, and they connected to it.

The challenge is that, as we move into what I'll call the next phase of this market, people within an enterprise are looking at this and saying, hey, this private cloud or this hybrid cloud model is really what I want. But I have all my data stored in my storage arrays or in existing data centers, and I don't wanna move all that onto a cloud. Vish talked to you a little bit about how we solve that problem. And then there's the networking side of it, which is: how do you connect this to your existing enterprise resources? The way people viewed OpenStack in the early stages of this market was as the center of the universe. But if you're an enterprise and you're using this as a private cloud, the center of the universe is not OpenStack. The center of the universe is your existing IT infrastructure.
And that includes existing compute; it includes a network which may be configured and used in many different ways, in terms of layer-three endpoints, in terms of VLANs; and of course the whole concept of identity. And so what's important now, as we look at putting these into production in real-life enterprises that are existing and ongoing, is making OpenStack an appendage. And it's not a step backward; it's really a step forward in connecting OpenStack to existing resources, so that analytics, perhaps, can be run on OpenStack and can access existing enterprise data. So when we look at this market and ask what the opportunity is, it's in this hybrid model, as an enterprise private cloud. And it's in producing all of the features and capabilities, hardened capabilities, necessary to connect to existing compute, to existing networks, and to existing identity systems. Those are the elements that are needed in this next phase of the market.

So with that, that's the end of our presentation. We'd like to invite you at 6:30 this evening to come down and see our live demo. We have brought a real live Nebula out here, mounted in a rack with a bunch of nodes from all the different vendors that we showed you, along with some NetApp storage, and we'll run these demos live for you later this evening. Thanks everyone for coming, and thanks to Chris and Vish for a great job.