Okay, hi everybody, sorry about the little difficulties getting started. My name is John Griffith, I work for SolidFire, and I also work on the OpenStack Cinder project. This is Rodney Peck; I'll let him introduce himself. Hi, I'm from eBay, I work with the PayPal team and the eBay team, and I'm a storage architect. So today in this session we wanted to go through and talk a little bit about some initial use cases for OpenStack and block storage in your cloud deployment, and a bit about some real-world examples of what people are doing, particularly what eBay's doing.

So my favorite slide, in case you haven't heard yet today: the data center is changing. It used to be that when you needed a resource, particularly as a dev, you submitted a ticket to the IT department and requested some resources, whether it be storage, compute, whatever it might be. We're going to focus mostly on storage today. That's the old way of doing it. The way it worked was a dev would usually plan out their project, estimate what resources they needed, and take a swag at what they needed in terms of servers, storage, and networking. And of course we always pad, because chances are we're not going to get what we asked for, so we always ask for more. Then we submit our ticket and we wait, and we wait, and we wait. Finally IT comes back, and maybe they'll actually give us what we asked for, or maybe they'll say, you're not going to get all of that, but here's what I can give you. So that's a tough way to work. It makes people unhappy, you don't get the resources you need to do your job, you're not effective, and it's extremely inefficient.

A slightly better model that we started to see a few years ago is devs actually being given a pool of resources of their own. Maybe they're running ESX Server, or maybe they have a storage array, and they get to carve that up, administer it, and use it on their own. That's slightly better, but still not great. So if we go to the next slide: the problem with both of these approaches is that no matter what, it's slow. There's a lot of lag time when you're dealing with tickets and submitting requests, and that's always a problem. It also relies on this thing called accurate prediction, and I don't know how many of you in the room are developers, but typically we're not the best at predicting exactly what things are going to look like. We're good at planning things out and starting to hack on things, but things tend to look significantly different by the time you're done. At least they do for me. It's also a very inflexible way to work: it's not agile, it's very rigid, and it introduces a lot of constraints. The other big problem is reproducibility. One of the big problems people run into on the development side is that as they iterate through their development, they discover something, or they find a problem, and they have trouble reproducing it because they've done all these manual steps to build all this stuff and put it together. So unless you take really, really good notes, which I for one don't, it's really hard to reproduce those things.
So that way of doing things, in my opinion, is absolutely ridiculous. It's a silly way to do it, especially given what we have today. With OpenStack and with Cinder, the idea now is we go from this chain of events on the left, where you submit a ticket, it goes to the IT rep, who calls the server admin, who calls the storage admin, who puts together some resources and responds back to your request, to: a user or a dev just uses the OpenStack API, goes to the resource pool, gets what they want, done, boom. Now it's self-serve, it's immediate, it's dynamic, it's on the fly. You cut out all the middlemen. The key values here are agility, scalability, automation, and predictability. Those are the key things about using OpenStack in this model.

The idea is I may go and say, hey, I need a compute instance with four vCPUs, eight gig of RAM, and 100 gig of storage. I may start working on something and realize, actually, I don't need all that storage, so I'm going to change it and only use 50 gig. Or I may decide I need more CPU. I have the ability to return all of that to the pool and just check out what I need, and I can do that dynamically, on the fly, in a matter of seconds as opposed to days or weeks. That's a big deal. The other thing is I can use the OpenStack APIs to automate all of this, so I can reproduce everything I've done really easily and really succinctly.

The end result, when you use OpenStack and Cinder in this model, is that you actually fully utilize your resources. You're not over-provisioning or padding your requests, you're not waiting, and you don't have things sitting idle, because now you have a dynamic pool that people can continually take things in and out of. So you get significantly higher resource utilization. The best part, in my opinion, is the self-service aspect. Instead of submitting requests and waiting for people to go off and do things, I just go get it myself. I know what I need, I know what I want, I go and get it, and when I'm done I return it to the pool, which goes back to the utilization point.

And then automation is king, right? Automation is the key to everything. It took me a while to get to the point where I realized that I would build these things, something would break, and I would spend all this time trying to fix it: what's wrong with this server, what's wrong with that? I finally realized that when I automate everything using OpenStack and its APIs, it's way more efficient and way easier for me to just throw it away and rebuild it. So instead of spending a day debugging a server or unwinding some setup I broke, I just build it again in a matter of five minutes and I'm done. It's a significantly better way to work.

A lot of what I'm touching on here is test/dev, and the things we're going to talk about are a test/dev environment. But the thing I want to point out is that test/dev is not the only thing to use OpenStack for. It's a great on-ramp, and I think it's a great platform for test/dev.
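Roughly, that self-service flow looks like this with the openstacksdk Python client. This is only a sketch of the idea described above; the cloud name, flavor, image, and network names are placeholders, not anything specific to this talk:

```python
# Minimal sketch of self-service provisioning via the openstacksdk cloud layer.
# "devcloud", "ubuntu-22.04", "m1.large", and "dev-net" are hypothetical names.
import openstack

conn = openstack.connect(cloud="devcloud")   # credentials come from clouds.yaml

# Check out a compute instance (a 4 vCPU / 8 GB flavor) plus a 100 GB volume.
server = conn.create_server(
    "dev-box",
    image="ubuntu-22.04",
    flavor="m1.large",
    network="dev-net",
    wait=True,
)
volume = conn.create_volume(size=100, name="dev-data", wait=True)
conn.attach_volume(server, volume, wait=True)

# Turns out 100 GB was too much: return it to the pool and check out 50 GB.
conn.detach_volume(server, volume, wait=True)
conn.delete_volume(volume.id)
smaller = conn.create_volume(size=50, name="dev-data", wait=True)
conn.attach_volume(server, smaller, wait=True)
```

Because the whole thing is a script rather than a ticket, tearing it down and rebuilding it from scratch is the same handful of calls, which is what makes the "throw it away and rebuild it" workflow practical.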
It's one of the best use cases I can think of. But the thing that's cool about it is that if you get your devs using OpenStack and the cloud, they're going to start playing, they're going to start experimenting, and they're going to start exposing new things that you can do in your business, things from your standard data center use that you can pull into the cloud and use OpenStack for as well. So it's a great gateway, a great on-ramp. It gets you some internal OpenStack experience and helps you grow your deployment. And the key thing to remember about OpenStack is that you can start small and seamlessly scale out. Start off with a 10-node deployment for your devs to use, and then as you grow and decide, hey, I actually want to start deploying some of my Salesforce workloads on here, just add more nodes, scale it out, add a new tenant. So, with that, I'm going to let Rodney talk a little bit about some specific use cases he's doing at eBay.

Thanks. Again, I'm Rodney Peck, I'm from eBay, and we're going to talk today about a system we call the stage system. Our dev and QA teams use stage hosts to develop and test all the new code. They're pretty big: 400 gigabytes of storage, 8 CPUs, and 100-and-something gigabytes of memory, so each one takes up a lot of the hypervisor. Building them in the past... you can go to the slide. Oh, sorry, there's a whole lot of numbers about eBay. Great place to work. So, back to my piece.

We've improved the development time significantly by using Cinder. In the past, it took more than three hours to build a PayPal stage. So we added OpenStack, we moved all our stuff into OpenStack, and, next slide: not much faster. The problem is all this data, 20 or 30 gigabytes of it, moving from Glance through the system onto the local disk. By moving most of the data off the local disk and into Cinder, we only have to move a couple of gigabytes onto the local disk to boot, and then we clone the Cinder volume. You can show that: that's what I was just telling you, and it only takes 14 minutes instead of three hours. So our developers think we're heroes for helping them so much, and it turns all of the developers into big OpenStack proponents. You can see the /x volume attached in only 14 minutes. There's not a lot to talk about in the details; it's all just normal OpenStack sort of stuff. We didn't do anything fancy: we just moved our data and clone it instead of copying it all over the 10 gigabit network. Next slide.

So how is that possible? We've minimized all the transfer, we're using fast volume cloning, and there's almost no network I/O now. One of the interesting things we could do in the future is boot entirely from volume instead of cloning anything out of Glance at all. That isn't quite as reliable, because you have to have a very high-quality network attachment, but doing that, you can boot a new machine in under a minute. For now we want the reliability.

So these are the steps it goes through; I'll just walk through it. In the three-hour version, you have to schedule a hypervisor, download all that data from Glance, which has the image, into the Glance cache on the hypervisor, and then copy it from the Glance cache onto the local disk. We're talking about 50 to 100 gigabytes of data, so that takes forever. Then the VM launches, it boots, and then you do a code push and all of that. That's where the three hours comes from.
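In practice the clone-and-attach step comes down to a couple of Cinder and Nova calls. Here's a rough sketch with the openstacksdk Python client; the names ("stagecloud", "stage-base", "stage-template", "stage.xlarge", "dev-net") are hypothetical, and whether the clone is fast and space-efficient depends on the Cinder backend (SolidFire in this case):

```python
# Sketch of the cloned-volume stage build: boot a small base image, then
# clone a pre-populated Cinder volume instead of copying ~100 GB from Glance.
import openstack

conn = openstack.connect(cloud="stagecloud")

# Boot the small base image from Glance: only a few gigabytes cross the network.
server = conn.create_server(
    "stage-042",
    image="stage-base",
    flavor="stage.xlarge",
    network="dev-net",
    wait=True,
)

# Clone the pre-populated /x data volume via Cinder's source-volume clone path.
template = conn.get_volume("stage-template")
clone = conn.block_storage.create_volume(
    name="stage-042-x",
    size=template.size,
    source_volume_id=template.id,
)
conn.block_storage.wait_for_status(clone, status="available", failures=["error"])

# Attach the clone as /x; Puppet then sets the hostname and small config bits.
clone = conn.get_volume(clone.id)
conn.attach_volume(server, clone, wait=True)
```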
In the new version, the code push has already been done and the data is already in /x, so all we have to do is copy it and then use Puppet to set the host name and make various small configuration changes. That's really all there is to it; it's pretty straightforward. And in summary, it used to take three hours, you have a sleepy kitty, and now it's done super fast. I don't know, do we have time for questions and answers? Any questions if you want to know more about how this is implemented and how it works?

So the key here is that some back ends in the Cinder project are going to give you things like fast cloning, and that's definitely something we're relying on here using SolidFire. There are some other products that may give you that as well. But even so, just not transferring the data, the cloning, is really great; it makes it super fast. And not moving the data out of Glance makes a huge difference. If you can imagine booting, say, 50 machines at once, Glance would fall over trying to download all that data at the same time. Those are the sorts of things we've run into in real-world applications.

Real-world question: why are you bothering moving the data? I'm sorry? Why are you bothering moving the data at all? Well, so it's a blank image, and a PayPal stage has hundreds of gigabytes of data that the applications expect to be on the machine. When we boot a new machine to run our regression tests against all of the code, it has to have that data on it. We'd have to fetch that data from somewhere, and that's why it takes three hours to fetch it all and move it. But if you can fetch it from somewhere, presumably you can connect to it? If we fetch it, we're connected. If you can fetch the data, then presumably you can connect to the data and avoid having to move it in the first place; I'm not understanding why you're still moving data. This is a PayPal-in-a-box sort of thing. It's a database, and the application talks to the database in the box itself, so we need to make a copy of the original database. That's what these are: fully self-contained dev/QA boxes. So yes, you could architect it differently, you could have test dev boxes that just talk to some servers over there, but then if those servers aren't available, none of the developers can do any work. eBay is all around the world, so we're very distributed. But yes, it's true, you don't want a dev box with 400 gigabytes on it in general. If you do, though, you need to make many copies of them, and this is how we've done it.

Another example of that is in our lab, where our development environment takes quite a bit of package installation: probably about 75 gig worth of stuff that you have to go out and fetch, apt-get, install, and configure. Instead of doing that every time, or building a mirror or anything like that, we just have a template volume. Any time a new developer comes online, or needs to spin up and do some work for the day in a fresh, clean environment, he just clones that volume template, boots it up, and uses it. And when he's done, he can throw it away. That's the idea: eliminate going out and fetching data. That's the whole efficiency you're getting. Any other questions?
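For the boot-from-volume variant mentioned a moment ago, where nothing is copied out of Glance at boot time at all, the flow might look roughly like this. Again this is only a sketch with the openstacksdk client, and "stage-boot-template" is a hypothetical bootable Cinder volume holding the root filesystem:

```python
# Sketch: boot directly from a cloned Cinder volume so Glance is never involved.
import openstack

conn = openstack.connect(cloud="stagecloud")

# Clone the bootable template so each stage host gets its own root volume.
boot_template = conn.get_volume("stage-boot-template")
root = conn.block_storage.create_volume(
    name="stage-043-root",
    size=boot_template.size,
    source_volume_id=boot_template.id,
)
conn.block_storage.wait_for_status(root, status="available", failures=["error"])

# Boot from the cloned volume; no image data is downloaded to the hypervisor.
server = conn.create_server(
    "stage-043",
    flavor="stage.xlarge",
    network="dev-net",
    boot_volume=root.id,
    terminate_volume=True,   # clean up the root volume when the server is deleted
    wait=True,
)
```

The trade-off is the one described above: every disk read now depends on the storage network, so you want a very reliable attachment before leaning on this for everything.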
We still have a little time. It supports all of our developers. We really can't talk too much about the detailed numbers, but it supports all of our developers, hundreds to thousands of VMs. It's very big. Like a deer in the headlights. Well, thank you, everybody, I guess.