All right, well, I guess we're going to get started. Thank you all for making it and not just staying at the beer area there on the way in. I'm sure that was tempting. So we've got a great panel. Matt and I did this last year: basically, how to build a case for OpenStack. My name is Ryan Floyd. I'm an early stage venture investor, and I spend a lot of time in the OpenStack community. I've invested in a bunch of companies. But there's always this issue inside of enterprises: how do you really build a case for OpenStack? How do you get the enterprise moving in a direction to do some deployments? Where do you get started? So we've got three great guests up here, and we're going to walk through how they got started, what problems they were trying to solve, how they found solutions, and how they overcame some of those obstacles, and we'll try to make it interactive. I'd love to take questions, so if you have questions while we're going through this, feel free to go up to the mic in the middle and jump in on the conversation. I've got a list of questions here we can go through, but I'm happy to take yours as well.

So let me quickly introduce the panelists. Brandon, immediately here to my left, is with HudsonAlpha. It's a genomics company that, if I remember right from what I was told, is generating something like 13 terabytes of storage every three days. Quite a volume, so it'll be interesting hearing about that. Then to his left is Matt from Time Warner Cable, who can talk about what they've done; I'm interested to hear what's happened over the last year, and hopefully things have gone better. And then Scott, to his left, from DreamWorks, who can talk about what they're doing and what they've implemented. So I thought a great place to start would be to go down the line and have you each set the context for what your infrastructure looks like today. Then we'll back into the problems and how you got to where you are, and then we'll talk about where you're going.

Perfect. So again, Brandon Crew from HudsonAlpha. We work in the genomics space, and we essentially offer genomics as a service, or sequencing as a service, so it's a little different from your normal SaaS play. But we have a tremendous amount of data that's been introduced since the cost of human genome sequencing has dropped an insane amount. Just to give you an idea: at the beginning of this year, roughly, it might have been $5,000 or $6,000 to sequence a whole human genome, where now it's down to around $2,000 and dropping. The main reason for this is a company called Illumina, which makes these sequencers, and they're chucking off a whole bunch of data. Maybe three years ago we had to deal with about 12 terabytes a year of growth, so using some of the big players and the big, expensive scale-out NASes really wasn't that bad, because we didn't have that much data. But now we're up to several petabytes a year of growth in this relatively small research facility, with an even smaller IT team of just a couple of people. That pushed us to come up with something more robust and something cost-effective to scale, which was the main driving factor.

And from an OpenStack standpoint, is it basically storage that you guys are focused on?
Or is it the compute side as well?

So right now we're focusing on storage, because that was the biggest immediate issue; that was the one chomping at us the most. But we're slowly getting to the compute side as well, which I can talk about a little bit later. The storage piece was just massive. The compute side stayed the same, because we were still doing about the same amount of throughput on that side. We had a lot of capacity built out, and we had a relatively good way to scale it, though even that is being tested now, so we're looking at OpenStack there too. But in terms of Swift and storage, it had become a huge problem. And adoption was another thing, which we'll talk about in a bit; that has been an interesting trial to overcome.

Excellent. Matt?

All right. So I'm Matt Haynes with Time Warner Cable, and we got into the OpenStack business about a year and a half ago. We really spent all of last year, 2014, building out our first production pair of regions at two of our national data centers, so we're now up and running in two national data centers. A bunch of Time Warner folks have been here speaking over the last day and a half. We run a full complement of the infrastructure services: compute, block and object storage, Neutron networking. What we've really built is a general purpose compute platform for subscriber business applications, the applications that power our IP video, broadband, and phone, all the services we deliver to customers. Right now we're in the process of expanding and maturing that platform. We're onboarding more and more production applications all the time, and it's going well.

Excellent. Scott?

My name's Scott Miller from DreamWorks Animation. We've actually been doing infrastructure as a service since before OpenStack was invented and before clouds were cool. Back in 2002 it was called utility computing, and we started doing that work as a way to mitigate having to spend money on data centers. Our data centers were full; power and cooling were maxed out. We were looking for ways to use somebody else's power and somebody else's facility. We worked with HP to do a 1,000-core infrastructure-as-a-service play for some of our compute needs. It was called utility computing, and it was based on some of what researchers were doing in the grid computing space, if anyone remembers that stuff. The intent was that the computers could be off-premise and you would manage them through an API. In 2002, that API included the telephone and our contracts department, so the latency was really high. But if you needed machines, you picked up the phone, you called them, and they built them, sent you an invoice and a contract, and you had machines. We had a guaranteed 30 days to add machines and 30 days to flex back down.

That set the stage for a later foray into compute as a service with a group in New Mexico. The New Mexico Supercomputing Center, it turns out, had some grants under which they built a supercomputer, but they didn't have any customers for it. So they realized that, with incentives to attract business into the state, they could offer compute services and networking services, using some of the university resources (a little bit fraudulently, maybe) at a pretty cheap rate. So we bought contract compute services from them for a couple of years, and realized that someone else's data center at the other end of a network wire totally makes sense.
The two things that were missing, and where OpenStack has kind of resolved this problem: there was no virtualization, so you couldn't do any multi-tenancy, and there was not enough internet bandwidth, so you couldn't really do a ton of data-intensive compute far away from where your data was. That stuff has all changed. Our first real OpenStack deployment that's in production has been with Swift. I know the topic of this talk is how do you make a case, and the biggest reason you have to make a case to your management is that there's risk involved. But if you do something additive like storage, and Brandon did this: you have a storage system that works, it's just expensive, so you stand up a storage system adjacent to it, move applications, move people's mindset, and move some of your operational skillset to it. You get OpenStack into the institution, and people start to realize, well, what else can it do? Can I do compute? Can I do dynamic networking? Our actual OpenStack use today is for dev/test and for experiments. It's really easy to experiment and fail in the cloud, and it's a lot cheaper than buying a bunch of infrastructure, provisioning it, and then failing. So small projects start, they get deployed, they succeed or they don't, and then we'll provision traditional infrastructure. Later this year, we're going to do our first production-facing application on OpenStack, using managed cloud as a service. So that's kind of our journey.

So Scott, just hold on to the mic there. What was that application you guys were using utility compute for, across the wire?

It was our HPC environment. People think, you're making movies, you have a lot of video. Well, it's not really true. We're more of a file-based, high-performance computing environment: lots of NFS servers, lots of machines reading files in, doing math, and writing out other files that happen to be pictures. The way that distance computing, utility computing, could work is that there was a pretty good fledgling industry around NFS caches. With object stores you'd use Squid or Nginx or some other proxy cache, but back then we used NFS caches. Put the caches adjacent to the compute, and that way you can move quite a bit of data. The HPC workflow is very read-mostly and reread-intensive, so that worked out for us.

So what put you on the path to Swift? I mean, there are a lot of choices for storage out there. There are proprietary object stores, there's open source, whether it's Ceph or Swift, there's newer stuff like Nexenta, and there are the traditional guys that will sell you an object store, like NetApp. What put you on the path to Swift, and how did that make sense for you in terms of starting to build that case internally?

Yeah, there were two things. Last year I called it the summer of object store. We looked at all of them; we POC'd a bunch of them. Our history, starting in 1999, is that we were an SGI shop from the early days. SGI made some questionable business decisions, so we switched to being an x86 shop; our code base was primarily Linux-friendly, we ported it all to Linux, and we became an x86 Linux shop. So we'd already embraced open source, we'd already done our own device drivers, and we understood the ecosystem and how flexible it could be. We looked at every commercial object store, and they all had something they were good at, and they all had a few things they weren't good at. Swift turned out to be good at everything we needed it to be.
It's got good multi-geography support, it's got good replication, and it runs on almost any hardware, with a very flexible bill of materials. We chose a Swift deployment using a managed controller from a company called SwiftStack, because with trunk OpenStack, you can do it as a science project and install it and run it yourself, but I don't advise it, right? These are enterprises that have to make money. It's hard to do on your own: either you build a deep staff like Time Warner has, or you'd best have a partner to help you build this stuff. So again, Swift was complementary. We had made changes to our application stack to support both NFS storage and object storage, so it was additive, and the risk was that if it blew up, we'd move the data set back.

Right.

And it hasn't blown up yet.

So it sounds like one of the key things that came out of this summer of object store technology was figuring out a way to implement something that presented a low-risk approach in the environment. I think this is something a lot of people don't appreciate inside an enterprise: these are high-risk infrastructure decisions, and if it fails, you're not going to have a great summer.

Yeah, well, and the other thing: we make pixels on the screen. I didn't want to be in the infrastructure-making business. I didn't want to be in the storage provisioning business. Having a partner do that work let us focus on our core competency. I didn't want to spend engineering labor on implementing, managing, and maintaining these systems; I want the engineers to engineer stuff. So the managed services partnership also looked like staff augmentation, in that I don't have to hire more staff, and I can use mine for real work.

So maybe, Matt, let's turn to you, since you're next to the mic. A couple of things. Compare this year to last year: how has your infrastructure grown? I don't know how much you can share with the group here, in terms of how many cores or how much storage you run. It sounds like some big applications in terms of subscriber provisioning, telephony. Maybe give us a sense of that and what's changed.

So a year ago, we were just getting ready to go to production; we were a couple of months from launching OpenStack. And to put it in context, my team runs cloud for Time Warner Cable across a lot of different technology stacks, and OpenStack is one of the primary ones we now have. We brought it in for its elastic computing, programmatic infrastructure capabilities. We were just getting it ready for primetime, which we did last July. Since then, we have more than doubled its initial footprint. We're into the hundreds of compute nodes, and I think we're close to a petabyte of storage, combined object and block; we use both Swift and Ceph for object and block.
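Since Swift's object interface keeps coming up on this panel, here is a minimal, hypothetical sketch of what application access to it looks like, using the python-swiftclient library. The endpoint and credentials are placeholders (tempauth-style v1 auth; a real deployment would more likely authenticate through Keystone):

```python
# Minimal sketch of the Swift object API from an application's point of view.
# Endpoint and credentials below are placeholders, not a real deployment.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",  # hypothetical endpoint
    user="account:appuser",
    key="secretkey",
)

conn.put_container("genomes")

# Objects are written and read as whole blobs. There is no open/seek/
# update-in-place as with a POSIX filesystem, which is why the panelists
# talk about reworking application IO when moving off NFS.
with open("sample.bam", "rb") as f:
    conn.put_object("genomes", "run42/sample.bam", contents=f,
                    content_type="application/octet-stream")

headers, body = conn.get_object("genomes", "run42/sample.bam")
print(headers["content-length"], "bytes retrieved")
```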
We have a pretty wide array of applications running on OpenStack. Again, we didn't build this for a specific workload, as is often done, right? We built this for general purpose computing. So we've got the folks who build the portals for Time Warner Cable, twc.com and the like, and they're on there; typical web application portals are on there. You've got folks in our video back office, which is this crazy glue of services that supports both the IP video and the linear cable video products. Not the inline actual delivery yet, but all of the setup and authentication and authorization and provisioning that goes on when you click on a channel, to say, do you have it, and all that. We've got folks looking to do offline VOD transcoding on our platform now, because it's a nice elastic workload that can be done in off-peak hours. So there's a pretty wide range of customers now.

And what we're finding, and I actually talked about this a little earlier in a talk on culture, is that as we've introduced this platform, we didn't do it with a stick, to say you have to move. We introduced it with carrots: hey, you get free VMs, and you can have them in two minutes, and we have all this automation, and you can have that for free. So people have come over and started to use it from an application perspective, and it's getting really sticky with them. They'll bring one application over, and then they'll say, great, now I want 10 VMs, and great, now I want this, and all of a sudden they're up against the quotas that we give people. So now they want bigger quotas, and then they'll ask for crazy stuff, like: I want to run this log aggregation platform, and I need 200 terabytes of SSD-based storage behind Ceph. I'm like, okay. Give us a minute.

I mean, that sounds like a victory, right?

No, no, it is, in terms of getting the use cases. I think it is, because behind the carrots we're providing, in terms of automation and ease of use and all this other stuff, there really is a motive. I mean, we are making them faster; they go faster when they come over. But there is a real cost savings here, and we can talk about that. For me, making it enjoyable for them to come over and get their applications on, and helping them with education and everything else, is about getting them into our footprint, so that we're getting them off of more expensive platforms that we'd otherwise have to support them on.

When you look back and you think about the case you made originally, before you deployed, when everybody was committed, and now you look at where you are today: what worked better than you thought? And what were some of the gaps, where maybe it didn't quite get there yet? Maybe it will, but you underestimated something. What learnings do you have there?

So I think the piece that worked well, or better than I expected, was some of the internal politics, to be honest. I came in from the outside world, and cable, as you can imagine, is not a flexible... it's kind of an old-world kind of company. And this was a brand new organization; cloud at Time Warner Cable didn't exist before I showed up. What worked a lot better than expected were the partnerships we made with the networking team and the data center team and all these other teams. And my pitch to them mostly was: hey, if you help me be successful, I'm taking 90% of your customers away. And they're kind of a pain in the ass for you, right?
So you're not going to have all those people coming to ask you to turn on one computer, or rack one system and wire one thing, because I'll do it in terms of rows of equipment, and then they'll be my customers. And they were like, yeah, that makes sense. So anyway, that worked well.

I think what took a little bit longer, and I think I was just optimistic, was the entire learning curve of the cloud model that OpenStack brings. Again, because we weren't targeting just object storage or something; it's the entire platform of building a cloud-aware application on top of OpenStack. And there's a tremendous amount of customer engagement, education, and customer service. We have Jira queues for the team, and we have a category of Jira ticket that's basically customer service questions, for when people get on and they're like, I don't know how to do this, or this isn't working. The team fields those, and to this day we probably field 50 of those a day, just questions like that. So I think that was one that caught me a little by surprise: there's a tremendous amount of customer engagement and training and help that you need to do initially. But once they get it, they get real sticky, real fast, and then they want a lot.

Yeah, that's good. Well, turning to Brandon: tell us about the crazy storage requirements that you guys have. You're right up there with, you know, Box, Dropbox, right?

Yeah, it's nuts. It's nuts. So, just to give you an idea of the research angle: research institutes really aren't tech shops, right? Just like DreamWorks was saying, we have no additional staff that we want to pour into doing something new. So of course, HudsonAlpha comes to me, and as a consultant, I wish I could be the safe person, right? You don't get fired for choosing EMC. I choose EMC, it blows up, people yell at EMC, it's okay. You choose Swift and it blows up, they yell at you, right? So that's the direction we decided to move, using SwiftStack kind of as a partner. But essentially we said: we have to handle 18 petabytes of data in the next three years. A ton of data. We have two people in IT, including myself, and we can devote 0.25 FTEs, full-time equivalents, a.k.a. 10 hours a week, to manage, deploy, and structure 18 petabytes' worth of data. And there was just no other way to do that. So we said we're going to get as near-line as possible. We got these super cheap Seagate drives, we got them in a rack, Seagate delivered them, and okay, now you've got four petabytes just staring you in the face. Now what do you do with it? So that's where SwiftStack came in, and we ended up deploying Swift.

And for the researchers, we had to do the same thing: we had to dangle the carrot. And the carrot for this part was interdepartmental chargebacks, a.k.a. you've got to start paying for this stuff since you're using it. Beforehand, we could never do that. With traditional scale-out NAS, we couldn't say, oh hey, Billy Bob, you're using a hundred terabytes, or a petabyte, of data. So instead of just having a general cost allocation across everybody, for you management folks in finance, we can actually allocate the cost of the associated service to each of the individual users.
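To make the chargeback idea concrete: Swift keeps per-account usage counters, so a monthly report can be driven from a single HEAD request per department account. A minimal sketch, assuming one Swift account per department, with placeholder credentials and a hypothetical internal rate:

```python
# Chargeback sketch: Swift tracks bytes stored per account, exposed as the
# x-account-bytes-used header on a HEAD of the account. Accounts, credentials,
# and the $/TB-month rate below are all assumptions for illustration.
from swiftclient.client import Connection

PRICE_PER_TB_MONTH = 15.00  # hypothetical internal rate

departments = {
    "genomics": ("genomics:reporter", "secret1"),
    "informatics": ("informatics:reporter", "secret2"),
}

for dept, (user, key) in departments.items():
    conn = Connection(
        authurl="https://swift.example.com/auth/v1.0",  # placeholder endpoint
        user=user,
        key=key,
    )
    usage = conn.head_account()  # account metadata comes back as a dict
    used_tb = int(usage["x-account-bytes-used"]) / 1e12
    print(f"{dept}: {used_tb:.2f} TB -> ${used_tb * PRICE_PER_TB_MONTH:,.2f}/month")
```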
So if you're using it, you have to pay for it, right?

So the problem you were solving: it was clearly the lack of resources, since you didn't have a ton of people and you've got to manage petabytes. It sounds like it was also a cost issue internally?

It was a huge cost. In the research institute field, the majority of the dollars go toward bringing in other research groups, which all have their own requirements, and IT is typically much smaller than normal. So three years ago it was fine, right? Any middle-of-the-road IT person could get away with an enterprise storage tier at 20 terabytes. You go out and you just pull something off the shelf that costs a lot of money, and it will probably work. But when you get into the petabyte realm, it's much more difficult, and it's way too expensive. Like, we looked at Amazon. My budget was 500K, right? For like three petabytes. So we looked at Amazon, plugged in a few numbers, and everybody's excited: do you like S3? Yeah, man, S3's awesome. So you plug in an S3 object store, and it's like $1.2 million a year per petabyte, because a lot of people don't factor in bandwidth. If you actually want to use your data, you need to factor that in; it's kind of important. I said, well, okay, I can't do that. Went to EMC, and EMC slapped me around a little bit, and I was like, okay, I can't do that either. So really, I had to come up with petabyte-level storage that doesn't require any people and that's cheaper than the maintenance contract on an EMC storage array. And that's where we ended up with Swift, and it worked out pretty well.
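A back-of-the-envelope version of the math Brandon is describing, with assumed list prices rather than his actual quotes, shows why bandwidth dominates the hosted number and why the raw-drive figure mentioned later in the panel looks so different:

```python
# Illustrative cost comparison for storing and actively using 1 PB.
# Every price below is an assumption for the sketch, not a quoted rate.
PB_IN_GB = 1_000_000

s3_storage_per_gb_month = 0.023   # assumed $/GB-month
s3_egress_per_gb = 0.09           # assumed $/GB transferred out

storage_cost = s3_storage_per_gb_month * PB_IN_GB * 12
# Assume analysis pipelines re-read roughly the full petabyte each month;
# that is what "actually using your data" means here.
egress_cost = s3_egress_per_gb * PB_IN_GB * 12

print(f"hosted storage: ${storage_cost:,.0f}/year")
print(f"hosted egress:  ${egress_cost:,.0f}/year")   # bandwidth dominates
print(f"hosted total:   ${storage_cost + egress_cost:,.0f}/year")

# Versus raw capacity at the $37/TB drive price quoted later, tripled for
# 3x replication (drives only; servers, power, and people are extra).
raw_drives = 37 * 1000 * 3
print(f"raw drives (3x replicas): ${raw_drives:,.0f} one-time")
```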
There must have been some skeptics internally who said, you're crazy to try and deploy this, and didn't believe you. Tell us a story. I mean, there must have been some consultant that someone brought in who said, this'll never work, right?

Well, yeah, one of the consultants was me, and I'm saying this'll work great while I'm sweating bullets, you know what I mean? I'm like, oh God, this is such a big infrastructure change from what you guys are currently used to, even with block storage, and that was a big issue, right? It's like teaching people about object storage. So what we did was hide it a little bit. We brought in the storage, and we showed everyone how we store replicas, so everyone feels safe and warm inside. But then we also went in, kind of DevOps style, to see where people were using the block storage, and changed things a little so they'd start using object storage, which was a task we had to undergo. But ultimately, you have to have the buy-in of a manager who's willing to take up the sword and run forward with you, or you're just going to get crushed, you know? For us, that buy-in was the IT director at HudsonAlpha saying, hey kid, I think this is a good idea, let's give it a shot. And just like Scott was mentioning, we were able to run the storage systems in parallel. So we were able to take virtually no risk while we vetted things out and got other people as confident as we were. And building their confidence also helped us figure out what we wanted to do going forward.

Scott, was it similar in your case? Was there a lot of political pushback and friction in trying to adopt this, or not, because it was a lower-risk, dev/test kind of environment?

It was really just the financial sort of, hey, do we have budget for it? The Swift deployment was interesting. My CTO said: no more name-your-vendor, no more scale-out NAS. The budgets are too high, the maintenance contracts are too high. So you had senior support for doing things differently. We weren't spending money that year. If you read the trades, we had a couple of films that the Wall Street analysts didn't think were good; they'd become film critics. And if the film critics don't like your movies, your stock price goes down, which is bizarre. Don't go public.

That's a tough way to manage your IT budget.

It's ridiculous. Because it seems very disconnected.

It is somewhat disconnected. So normally we do a pretty decent year-over-year storage refresh, and the intent, starting in the summer of last year and running through this year, was to try to hold capital flat and try to reduce opex. So what we tried to do is supplement. But the problem is, you usually can't subtract until you add first. You have to add, run things in parallel, then subtract. The cool part about adding Swift is that it's a pretty cheap spend to get a proof of concept running. SwiftStack has a managed controller in the cloud; I stood up a couple of boxes full of disks, and I had a cluster running inside of about 30 minutes. Aim the developers at it, they use it, they're happy. It helped that we had some public cloud Swift experience and we understood object storage as a way to store things. It also helped that we had been re-architecting our applications to get rid of direct file system access for a lot of them. We put a piece of middleware in place that abstracted away storage locations and storage protocols, and that way the applications don't really know they're talking to an object store, except that it behaves differently. You can't do random access. You don't open, seek, update in place, and close; that file-system abstraction is gone. So we also had the education challenge of, you've got to do your IO differently, and application developers have to think differently. And we got the "why can't I just use a file system?" thing. Again, because we had corporate sponsorship, our CTO said: no, the software developers will fix their stuff, we'll use this middleware tier, and the applications will start using the object store. So I can't emphasize this enough: find yourself a corporate sponsor who really wants to make a difference, and you can both leave the company together when it fails, and you can both celebrate when it does well.

So Matt, I'm curious in your case. Now you've got what seems to be a pretty successful year behind you, under your belt, and you've still got a ways to go. Do you still have to build a case internally for why you're continuing on this path? For example, for these new applications you're talking about, do you find yourself wrestling with different folks about whether this is the right way to go? Or, hey, we've got to go deploy containers now, so you've got to go deal with that? How does it work now that you've had some success? Is it easier, or does it get harder?

So I think it's always a little easier once you've been successful; there's a little bit of the pressure off, right?

This guy's not a total idiot.
I came in and I said, you know what, I've got a year before somebody's looking at their watch going, what the hell is he doing? So we made good progress in that year. I not only wanted to stand it up in six months; I wanted, by the end of the year, to actually have a production workload on there, a viable customer using it for production. So it's gone well. And where we find ourselves now at Time Warner Cable is, again, I have a lot of different customers with tons of different kinds of requirements and needs for their workloads and storage, and I now have a fairly mature array of cloud services to offer them. Some of those are OpenStack and some of those are other platforms. But what I can do now, rather than fight with an application person about what they should do, is just say: well, what are you trying to do, right? And what's your application built for? And if your application has a certain set of characteristics, I think you'd live well over here, so you should go over there. We have a common single sign-on provisioning for all of our application people, and we provision them resources out of the different cloud services. So for them, they just sort of shop where it makes the most sense, and they keep going.

I think what I'm looking at now is how to get over some of the next hurdles. What I've really stood up is infrastructure as a service, and as flexible and as programmatic as that can be, it's a little too low-level for a lot of application developers. So we're looking at platform services, not just PaaS like Cloud Foundry, but actual, straight-up platforms. I can't tell you how many people on the IT side of my shop come to me and say: I want a VM. I'm like, okay, you can have a VM. I want another one. Okay. You want another one? Okay, there's three. I want them to be big. Sure, you can have big VMs. I want some really, really fast storage. Okay, fast storage, here you go. Oh, and I want Windows. Okay, you can have Windows. Are we good? We're good, okay. Then they go away, and they come back a week later and say: I can't get my SQL clustering to work. SQL Server clustering won't work on this. I'm like, well, why didn't you tell me that's what you were trying to do? That's not how you should go about doing it. So I'm standing up SQL as a service, because Microsoft SQL Server is a bit finicky about how it stands up in a virtualized environment, and, not shockingly, it works best out of something like an Azure platform; you can host your own Azure. Offering my customers SQL as a service makes everybody happy, right? They're not trying to figure out how to stand up VMs and stand up storage and hook all this together. They just ask for a SQL Server that's got replication and clustering on it, and I hand it back to them. So that's all at that platform level. There are PaaSes like Cloud Foundry, but there are also these higher-order services that are closer to what people are really trying to do. So our real next step is delivering services at that level, closer to what they want.

That makes sense. Just to remind everybody, there's a mic there. If you have any questions, please get up and jump in; we'd love to hear from you. So this is for anybody: what kind of criteria?
I mean, there are some things we've talked about here: low risk, not having enough budget to do whatever you want to do, trying to focus on the right application workloads. What sort of criteria do you think maybe aren't as obvious to everyone, that you ought to be thinking about when you're trying to build this case internally and ultimately trying to be successful?

So one of the things we ran into, kind of like what Matt was saying: we had a lot of people where IT is way over here and biology is way over here, and IT just says, okay, here's a whole bunch of servers, we've done our job. And the biologists say, here's a whole bunch of Perl code, we've done our job, right? And then there's this big gap in the middle of what they actually need to run services. So what happens, I think a lot like what Matt was describing (and it's funny seeing this in a whole different sector), is people say: I'm just going to Google and figure out the requirements for the awesome, badass SQL server. Then they come back to me and say, I want three VMs, and I want them super fast, they have to be on SSD, without really understanding the workload too much. So there's a really interesting need for IT to get into that space, and I think OpenStack enables that well. And you see other abstraction layers, too, that are really getting down to abstracting the compute resources out to the individuals.

But for us, one of the primary driving factors was total cost of ownership, right? And of course, when we came in, we said: okay, total cost of ownership. Our capex, our capital expenditure, what we're spending to buy the hard drives and infrastructure, is cheaper than our opex would be if we magically already had an EMC infrastructure. We got a lot of buy-in that way. For us, budget constraint was kind of the main one. So when we came in and said, we can buy all this stuff for less than we're currently paying just to maintain what we have, they said, well, then it's worth some kind of risk. So you have to figure out where your pivotal angle is. So, total cost of ownership, and then integration with the current software was somewhere we had to step up. Luckily for us, there were a lot of workloads that were already kind of using their block store as an object store: they were pulling data down, doing something, and then pushing it back. So that was pretty easy to work around.

In terms of cost, I think you'll be very hard pressed to find online hosted, hybrid, or on-site storage systems anywhere near the cost associated with this. But you'll also probably be presented with the same challenge: we want all this, but we don't want to hire anyone new. So there's some really, really cool middle ground there. But the middle ground that people talk about hides a huge actual discrepancy, right? And I mentioned this in my talk: $37 a terabyte for raw drives, and $1,000 a terabyte for enterprise, EMC-type stuff. So why is there that big gap? And if we can close a little bit of that gap, and somehow not lose the $950 or so per terabyte in the mix, then we should be okay.

It's interesting; I'm glad you actually brought up TCO, because it's a kind of amorphous concept. It means different things to different people: what do you include in those costs, and so forth. And it's interesting that in your case, you were able to do it with relatively low headcount, because I think one of the knocks on OpenStack is: yeah, it's great to use, but it takes a lot of people to stand it up.
That's a really good point, right? And that's something that a lot of people have to look at. I think that's extremely common when you look at any open source software, and this is my favorite argument. Luckily, since we're in the academic space, a lot of the actual software we run the whole institute on is open source. So instead of "oh, we need enterprise stuff," there's already that adoption of open source, which is really neat. But the downside of that is the open source guys themselves. I was part of an open source firm before, which made the open source PBX Asterisk, and whenever someone came with a problem of something not working, the answer would be: oh, it's open source, you can fix it, right? And of course, it's funny, because the finance department, the people making decisions at HudsonAlpha, kind of said the same thing, and it was a little bit scary. The Swift folks are introducing erasure coding soon, which basically gives you the same durability of data, so it'll always be there, but with less overall space. And they were like, well, if they don't figure it out, you guys can. And I'm like, oh, me? Oh yeah, okay, I'll work on that. So in looking at your total cost of ownership, you have to look at the engineering side. But what's so cool is that every day, as OpenStack matures and grows, and the different side products mature and grow, there are people filling that very real need. And in storage, I think SwiftStack is one of the players in that space: if you just look at the cost of buying SwiftStack versus the cost of hiring just one engineer, it makes sense. So you have to do those relative comparisons for what we're doing.
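For context on the erasure coding Brandon mentions: instead of keeping three full replicas, an erasure-coded policy splits each object into data and parity fragments, so the raw-space overhead drops while durability stays comparable. A quick sketch of the arithmetic, with an assumed fragment layout:

```python
# Space overhead of 3x replication vs. an erasure-coded layout.
# The 10+4 split is an assumed example policy, not a recommendation.
data_tb = 1000.0                      # 1 PB of logical data

replicated_raw = data_tb * 3          # 3000 TB on disk for 3 replicas

k, m = 10, 4                          # k data fragments + m parity fragments
ec_raw = data_tb * (k + m) / k        # 1400 TB on disk

print(f"3x replication: {replicated_raw:,.0f} TB raw")
print(f"EC {k}+{m}: {ec_raw:,.0f} TB raw, tolerating the loss of any {m} fragments")
```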
Question? Yeah, I think there's a question there. Thank you.

Each of you has made a fairly strong point that you must have executive sponsorship of some kind or another to be successful, to drive the change that you're describing, which is very exciting. But how did you obtain that executive support? Because executives are not ones to say, oh yeah, let me risk my career and my company for the sake of change.

That's a very good question. So I can answer that first and pass it down the line. For me, and I think in a lot of situations, especially now, it was born out of a very real problem. So I wasn't artificially saying, hey, let's just use open source because I love it and it's cool. It was because they said: hey, Brandon, we want you to handle petabytes' worth of data, and even though you went from 200 terabytes to three petabytes' worth of data, you have the exact same budget as you did last year. So I said, well, okay, here's the cost analysis, just what it costs to buy all these different solutions. And that's where I got finance and the executives to say: you know what, we really need to look at this and at least give it a shot, because the risk associated with giving it the shot, and a lot of executives understand risk versus reward, was actually really small. Like Scott was mentioning with running it in parallel, it was very easy for us to make the case of: what the heck, let's just try this. Because if it works out, we're going to save millions of dollars a year on storage, and if it doesn't, it's no big deal. We'll just go down another path, and there's not much time wasted.

Executive sponsorship, yeah. So in my case, it was our CTO, Mike LaJoie, who retired last year, who made the decision to basically move into a cloud infrastructure organization, and he brought me on board to run that. So I kind of had that mandate, and I played it judiciously. But when I needed to, I'd pull out the Mike LaJoie card: Mike hired me to build an organization to do this; I guess I'll go tell him you're not really behind this. Oh, no, no, we're there. So yeah, you definitely have to have some of that support to do this.

In our case, it was similar to Brandon's. It was financially motivated, but it was also strategic. We had made decisions earlier in the decade that we weren't going to build physical infrastructure: we weren't going to change the buildings, add power, add cooling. Fortunately, the power per core and the power footprint of hardware had gotten so good that we could increase the size of our on-premise footprint without having to add power; Intel bought us some time. But the net result is that at some point you're going to fill your physical infrastructure, and you have to go someplace else. So we had been looking at agility as a way to save money, and it was a strategic initiative to go look for ways we could use other people's compute, other people's resources. Also, our industry is very bursty. We release a movie twice a year. We consume a whole ton of compute in the few months right before the film, and then very little right after. So it's a very spiky business, and the idea of being able to burst into the public cloud is very appealing when you do the math and realize that owning your infrastructure only makes sense if you can keep it utilized. And if you can't keep it utilized, then even though it costs more to rent from Amazon for a couple of months, you're only renting it for a couple of months. So it's easy to justify: if your workload is bursty, if your staff is limited, and if you're willing to take a little bit of risk, the rewards are huge. And then the storage play was the same thing. The goal was to replace capex with not having to buy maintenance. And it's not just a little cheaper; it's substantially cheaper to use an object store and invest once in reworking your software stack than to continue to buy enterprise, old-school NAS-type products and block storage.

Great. Well, listen, we're just about at 5:10. So let's thank the panelists; give them a big round of applause. Thank you very much for sharing all your insights. Thank you all for coming, and have a great rest of the show.