Good afternoon, everyone. My name is Jonathan Lacour. I'm the Vice President of Cloud at DreamHost. I'm here today to talk to you about the all-new DreamCompute, our OpenStack public cloud. First, let me tell you a little bit about DreamHost. We were founded in 1997 in Southern California as a web hosting business. We have over 400,000 customers and over 1.5 million domains hosted. We provide managed hosting, web presence, and cloud services. We were a founding member of the OpenStack Foundation, and we're a gold member now. And we were a finalist, as you may have seen in the keynotes on Monday, for the 2016 OpenStack Superuser Award, which we're very proud of. So I want to talk to you a little bit about DreamCompute, which is our OpenStack public cloud. I'm going to start off by giving you some of the basics about DreamCompute, and then I'll get into the geeky aspects of it. We at DreamHost are very open about how we build our services, and I'm going to share all the details, all the dirty details, and I'm happy to answer questions as well. So DreamCompute is an OpenStack-powered public cloud for developers, creators, and businesses. We serve every customer, from a person who has one VM that they occasionally spin up and down to tinker with the latest programming languages, frameworks, whatever it may be, all the way up to businesses who have hundreds or thousands of VMs to deploy large-scale applications. It was announced in 2012 at the OpenStack Summit, and we've been carefully incubating it through alpha, beta, and public beta releases. It's taken us a while to get to this point, where we went into GA earlier this month. And the reason for that is we really wanted to test the waters of OpenStack and make sure we were configuring and designing things to really scale, delight customers, and be the best cloud on the market. So what makes DreamCompute different from other clouds? There are four things I want to talk about.
So the first thing I want to talk about is the disruptive pricing we've come out with, built around a unique feature we call PredictableBill. I also want to talk about our very performant, secure virtual networking, about VM creation performance and how quickly we can spin up VMs, and about our SSD-powered block storage. If you are out there right now shopping for cloud compute, you probably know how complicated pricing can be. If you go to Amazon and look at their pricing page for just EC2, it is miles of text and tables and complications. We're trying to simplify things for you and offer you the lowest-cost cloud service we can. So with our new PredictableBill pricing and our new DreamCompute cluster, you can get a one-gig VM for only $6 a month. That's kind of crazy. And if you have something that can run in a 512-meg instance, you can get that for as little as $4.50 a month. With PredictableBill, you pay for what you use. It's utility billing, like you're familiar with. The big difference is that once you hit 600 hours in a month, which is 25 days, you no longer pay for that VM in that month. So basically, you're getting a predictable price: if you have static workloads that run in VMs over the course of a month, you will only pay for what you use, up to a maximum price. And as you can see in that table, it's very competitive, about 30% lower than our closest competitor. With this, you also get 100 gigabytes of included SSD-accelerated block storage, and for $10 a month you can get more, another 100-gigabyte block. So that's PredictableBill. You'll never have to plug your information into a spreadsheet or a calculator again to figure out what the hell your bill is going to be next month. We also have performant and secure virtual networks. Every instance inside the DreamCompute public cloud gets a free public IPv4 address and an IPv6 address.
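The PredictableBill cap is simple enough to sketch in a few lines of code. This is an illustrative model of the billing rule described above, not DreamHost's actual billing logic; the 600-hour cap and the $6-a-month one-gig price come from the talk, and the function name is made up.

```python
def predictable_bill(hours_used: float, hourly_rate: float,
                     cap_hours: int = 600) -> float:
    """Utility billing with a monthly cap: you pay per hour used,
    but never for more than `cap_hours` in a given month."""
    return round(min(hours_used, cap_hours) * hourly_rate, 2)

# A one-gig VM priced to hit $6/month at the 600-hour cap:
rate_1gb = 6.00 / 600  # $0.01 per hour

print(predictable_bill(100, rate_1gb))  # 1.0  -> light tinkering costs $1
print(predictable_bill(744, rate_1gb))  # 6.0  -> a full 31-day month caps at $6
```

Running the VM for the full month (744 hours) costs the same as running it for 600: past the cap, the meter stops.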
If you need more sophisticated private networking capabilities, for $5 a month per network you can create L2-isolated private networks. This is all powered by Project Astara and Cumulus Linux under the hood, with VXLAN, and we're really proud of it. It works great. We have customers who have tens of private networks in their tenant, supporting hundreds of VMs. And it's all hardware-accelerated at the host and at the switch, so you can create sophisticated network topologies. Another thing we're really proud of in the new cluster is that we basically doubled or tripled the performance across the board, across compute, storage, and networking. One of the most fun metrics that we like to measure in a continuous fashion is boot time: the time from when you send a request to boot a VM until you can SSH into that virtual machine. With DreamCompute, we're getting it in about 25 seconds on average. We've seen it as low as 12 to 14 seconds. We're going to be working with our partners at Canonical to make this even faster with Ubuntu images, and we're also trying to optimize and tune our storage to make this even faster. I omitted the names here to protect the innocent, or the guilty in this case, but a lot of these other clouds that you see out there, especially these lightweight clouds, talk about how fast they are: "We have SSD-powered servers that boot in less than a minute." Well, that's cute. That's two and a half times slower than DreamCompute, and more expensive. A lot of this is due to very well-tuned, all-SSD Ceph storage. DreamHost was actually the creator of Ceph, so we have a lot of expertise around how to tune it just right and optimize it for our workload. So we're very proud of lightning-fast VM creation. And as I mentioned, it's all SSD-powered now. Our original DreamCompute beta cloud was using spinning disks, and we really wanted to see if we could deliver a more aggressive price point with more performance.
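The request-to-SSH boot metric mentioned above can be measured with a simple polling loop. This is a minimal sketch, assuming the instance is launched separately (e.g. through the OpenStack API) and that "ready" means port 22 accepts a TCP connection; the function name is illustrative, not our actual test harness.

```python
import socket
import time

def wait_for_ssh(host: str, port: int = 22, timeout: float = 120.0,
                 interval: float = 0.5) -> float:
    """Poll until a TCP connection to host:port succeeds; return the
    elapsed seconds. Raises TimeoutError if the deadline passes."""
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return time.monotonic() - start
        except OSError:
            time.sleep(interval)
    raise TimeoutError(f"{host}:{port} not reachable within {timeout}s")

# Usage: start the clock when you issue the boot request, e.g.
#   server = conn.create_server(...)   # openstacksdk call, elided
#   boot_to_ssh = wait_for_ssh(server_public_ip)
```

Measured this way, the metric includes everything the customer actually waits on: scheduling, image copy, kernel boot, and the guest's SSH daemon coming up.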
And thanks to the onward march of time and the decreasing prices of flash storage, we were able to get there. The Ceph cluster itself is also powered by fast Intel processors and 10-gig and 40-gig networking, and our operators really are Ceph experts. So if you are here at the OpenStack Summit, it's probably because you like to dig into the nitty-gritty details. You want to see all of the crazy stuff that makes this fly. So what I'm going to do is peel back the curtain a little bit and show you, if the clicker will work, a little bit about our network to start. I have a very high-level diagram here that you probably can't see. The critical thing is that we have two top-of-rack switches at the top of every rack in our DreamCompute cluster. Those are 10-gig switches with 40-gig uplinks to the spines, and those are 40-gig spines. We use white-box switches, and we work with our partners at Cumulus Networks; we run Cumulus Linux on those. All of our cloud nodes are dual-connected to both of the top-of-rack switches at 10 gig, so we get an effective 20 gig between every node in the cluster. And we are doing VXLAN overlay, with all the encap and decap done in hardware, either at the host in the NIC, in the top-of-rack switch, or at the spine, so we can support the acceleration all the way through. So we get very fast, accelerated networking. We do L3-plus services: if you want a private network, or you want to do load balancing, or anything like that, we can provide that through OpenStack Astara, which is a project we actually created and gave back to the OpenStack Foundation. It was developed as open source from the beginning, but now it's part of the Big Tent; it's an OpenStack project. So I encourage you to check that out. It actually provides these services within VMs inside the cloud itself.
So we deploy routers, load balancing, firewall, and VPN, those kinds of capabilities, down into VMs. That helps us scale it up very efficiently. There's no magic: no OVS to peg your hypervisor CPU when it goes out of control, no kernel modules, no proprietary SDN. In terms of storage and compute, each of the cloud nodes we have contains eight 960-gig SSDs for Ceph OSD storage, and also SSD boot drives configured with RAID. And we're using a converged architecture now. In our original cloud, we actually had separate hypervisors and storage nodes. We thought we would be able to tune the performance best that way, because the nodes would be specialized for their purpose. Turns out we can do a lot better if we converge things down. Some friends of mine told me that was a good idea. And we are using very high-performance Intel Xeon E5 processors at 2.4 gigahertz, with 128 gigs of memory on each node. And we're doing very minimal oversubscription on CPU, because we want to give you really good performance. So that's the overview, but I wanted to leave some time for questions. If anybody has questions, I'm very open; we want to share with the world how we're doing this. Please raise your hand and let me know. Yes, the gentleman in the front. The question is how long until we lower the boot time to under five seconds. That's a specialized use case. So we track continuously, in our tests, how long it takes to boot each operating system that we provide images for, and at each stage of that boot-to-SSH time, what's taking the most time. Currently, of that 20 seconds or so that we're doing on average, about half of it, a little bit more, is spent in cloud-init. So if you know someone who works on cloud-init, tell them to work harder. No, we're going to be working on that problem. I actually was just talking to, like I said, my friends over at Canonical yesterday about this.
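For context, "oversubscription" is just the ratio of vCPUs handed out to hardware threads available on a node. A quick sketch of that arithmetic; the core counts and ratios in the example are illustrative, not DreamCompute's actual configuration.

```python
def oversubscription_ratio(vcpus_allocated: int, physical_cores: int,
                           threads_per_core: int = 2) -> float:
    """Ratio of vCPUs sold to hardware threads available on a node."""
    return vcpus_allocated / (physical_cores * threads_per_core)

# A hypothetical dual-socket node with two 12-core Xeons, hyperthreading on:
print(oversubscription_ratio(48, 24))   # 1.0 -> no oversubscription
print(oversubscription_ratio(96, 24))   # 2.0 -> two vCPUs per hardware thread
```

The lower the ratio, the less noisy-neighbor contention customers see, at the cost of selling fewer vCPUs per node; "minimal oversubscription" keeps that ratio close to 1.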
I would love to see us work together to get a very small micro-distribution image up that does just barely anything in it and boots right up. From the perspective of all the things we can tune that aren't inside the OS image, we're doing pretty well. We actually have some tweaks going in next week that I think will make it go even faster. So we'll see. Other questions? All right, yes, one more. OK, if you are already a DreamHost customer and you want to get access to the new cluster, you can log in to your DreamHost panel, and in the section under DreamCompute, you are able to activate it. And this is actually true if you are not a customer: you can sign up, go into the panel, and activate DreamCompute. What you do is convert from our old pricing model to the new utility billing plan, and that gives you access to everything. We track it across both clusters: you keep the legacy cluster that we provide, at a lower price point because it is lower performance, and you instantly get access to US-East 2, which is what we're calling the new region. So, all right. Thank you very much. I'll be up here to answer any more questions if you have them, but thank you for your time. Come to the booth. Yes, we have awesome backpacks to give away, and all you have to do is tweet about how DreamHost is making cloud great again. So thank you.