Good afternoon. Thanks for joining us. Winding down the week here. I'm Jesse Proudman, CTO of Blue Box. And I'm Ed Tsai, Office of the CTO at Blue Box. We're here today to talk to you about hyperscale technologies: what they are, and why you probably shouldn't be using them. So: hyperscale you are not. Well, to start, we should probably define what hyperscale technologies are. Unthinkably large systems, fully automated systems, commodity everything, cloud native. We hear these terms all the time, but really they look like this. Does anybody know what this is? Everybody knows what this is. It's Google's data center. Google obviously has an awesome designer coordinating the paint colors on their pipes. My data center looks just like this, back at home. But they're building physical infrastructure with such care and such innovation in the design that it's truly unique. Or it's like this. Anybody know what this is? Yep, it's Facebook. This is Facebook's Prineville data center. Prineville is in Oregon, just about seven hours from Seattle, where we're from. Actually, a friend of mine was the data center manager for this site when they built it. And it is a marvel. They've got all kinds of interesting things going on in this facility, including using essentially no air conditioning, just forced air from outside. You can see here they've got custom-engineered motherboards, raw motherboards sitting in their racks, similar to what Google's doing. They've got an incredible PUE rating, which measures the energy efficiency of the facility. All kinds of innovation here. In fact, Intel pops up everywhere, right? Intel helped them custom-engineer these boards for this site. This is probably not at all what your data center looks like. Anybody see this article come out a few weeks ago?
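For reference, PUE (Power Usage Effectiveness) is simply the ratio of everything the facility draws to the power the IT gear alone consumes. A quick sketch of the arithmetic, with illustrative numbers only (Facebook has publicly reported figures near 1.07 for Prineville; the "conventional facility" figure below is a rough typical value, not a measurement):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power (cooling, lighting, power conversion
    losses, plus IT load) divided by IT equipment power alone.
    A perfect facility scores 1.0."""
    return total_facility_kw / it_equipment_kw

# A conventional facility: 1,700 kW total to run 1,000 kW of servers.
print(round(pue(1700, 1000), 2))  # 1.7
# A hyperscale facility using outside-air cooling for the same IT load.
print(round(pue(1070, 1000), 2))  # 1.07
```

The whole point of tricks like forced outside air is to shrink the numerator while the denominator stays fixed.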
So, Azure essentially decided, in building their data centers, which I believe started on Force10 networking gear, that that wasn't going to work for them, and they built what they call the Azure Cloud Switch, which is essentially a full switching platform that sits on ODM white-box switches. And they used that in their facilities to re-architect their entire network design. They had to do that because of the scale at which those sites are operating and the scale of the traffic those sites are passing. Amazon's done the same thing. Facebook's done the same thing. Facebook's switch is actually blue. They released it at Velocity a couple of years ago into the Open Compute initiative. But who in this room is spending their time writing switch firmware? It's not what you're doing. Nobody's doing this outside of these companies. Anybody know this one? So, Netflix. Netflix has an entire team of engineers that writes software solely designed to destroy their infrastructure. It started with Chaos Monkey, which I think a lot of people have heard of. As I was putting this talk together, I found there are like seven different services now, all with wonderful names, that do all kinds of things. Chaos Monkey essentially just stops VMs across the entire fleet to make sure that the infrastructure is fully resilient and can heal itself. There's a security testing engine among them. So there's a whole team of people whose sole job it is to write software that forces the other teams to make better software. It's a different scale than what many of us are operating at. And then, I work at IBM, so I've got to throw this one in. Here's SoftLayer. We've got 27 data centers around the globe, all connected to one of the world's largest IP transit networks. It's actually one of the things that attracted me to IBM when we were in the M&A conversation earlier this year.
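The core idea behind Chaos Monkey is simple enough to sketch: periodically pick a random running instance and terminate it, so that any weakness in self-healing surfaces while engineers are watching. This is a hypothetical illustration, not Netflix's code; `terminate_instance` stands in for whatever cloud API call you would actually use:

```python
import random

def chaos_monkey_step(instances, terminate_instance, kill_probability=0.1):
    """One round of chaos-monkey-style fault injection: with some
    probability, pick a random instance from the fleet and kill it.
    Resilient infrastructure should detect the loss and replace the
    instance automatically; if it doesn't, you just found a bug.

    `terminate_instance` is a hypothetical callback standing in for a
    real cloud API call (an EC2 or OpenStack terminate request, say).
    """
    if instances and random.random() < kill_probability:
        victim = random.choice(instances)
        terminate_instance(victim)
        return victim
    return None

# Usage sketch against a toy fleet, forcing a kill every round.
fleet = ["web-01", "web-02", "worker-01"]
killed = []
chaos_monkey_step(fleet, killed.append, kill_probability=1.0)
print(killed)  # one randomly chosen instance name from the fleet
```

In production the interesting part is everything around this loop: scheduling it during business hours, scoping it to opted-in groups, and auditing what it killed.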
It's this global footprint, the scale of it, the number of points of presence around the globe, and that IP backbone. It's an enormous investment in infrastructure, and it's global. As independent Blue Box, we had four data centers. Now we've got access to 27. So when we think of hyperscale, it's definitely not this. And I laugh, I found this photo on a stock site, but back when I started Blue Box in 2003, well, around 2006, this actually looked quite a bit like the data center that I had wired up. We had a bunch of Cisco Catalyst 4500 switches. We had them racked up, and I basically wove the cables together nicely in a colo facility. And so the point here is that what the largest providers are doing is so fundamentally different from what everybody else is typically doing, whether it be from a data center design perspective, building the actual facilities; whether it be from a hardware perspective, architecting the motherboards, the cooling, the disks, whatever it is in the facility itself; whether it be in the software, in the way things are automated or built or deployed with containers; or whether it be in points of presence, just having that global footprint. That is not necessarily something that every organization has access to, or should have access to. Okay, so why does it matter? Why do we care? Well, the future is software defined. We hear this all the time. Jonathan Bryce talks about it at least once a year in one of the keynotes. Every industry is being disrupted by companies that are reinventing that industry in software. You look at Airbnb: it has a higher valuation than Marriott, and they own no physical hotels. You look at Uber. We hear these examples all the time. We're all familiar with them. But the reality is that software has to run on hardware, and that hardware is likely a cloud. And this sticker, whoever made this sticker is my favorite person in the world.
That cloud is really just somebody else's computer. So the work that you're doing, the applications that you're building, whether they're in-house in your data center or on an external cloud, they're running on the same set of hardware, or a similar set of hardware, as everybody else. We're all kind of doing the same things to some extent. Did you have a story on this you'd like to share? I do. Good. So in my journeys over the last couple of years of talking to both large enterprise companies and web-scale companies like eBay and PayPal, there's one thing that really resonates throughout these conversations. You may be familiar with the fact that eBay and PayPal are heavily invested in OpenStack. But even companies like those still have conversations about not wanting to continue to be in the data center business. Even operating at large, large scale, they get to the point where they have to make some decisions about what their business is. So even though a lot of people started in the cloud, or have used large service providers before, they come back in and say insourcing is something we can do, because we can build it cheaper, do it better, we've got really talented people. But even those companies start to reevaluate and say, look, it's going to be a hybrid of both of those things. Jesse and I were talking about this earlier: especially in the enterprise space, at large scale, it's going to be a mix of both. And we'll bring up some points later about how you have to decide what you want to build and what you want to buy. Building it all yourself is not necessarily the best thing to do. And so, at the end of the day, we've got to think about what we can learn from the hyperscale providers.
How can we leverage them in our business? How can we take lessons from what they've done and adapt them to the things that we're all doing? So we were sitting around talking about this and thinking about what to boil it down to. What is hyperscale? What are they actually trying to do? What do you get by architecting your own hardware or changing the data center design? At the end of the day, it's an efficiency game. It's about power efficiency. It's about server density. It's about cost. It's about time to market, time to value, right? Everybody is trying to squeeze out as much margin as possible, or drive costs as low as possible, to make the operations teams they have as small, or as efficient, as possible. Each decision that those organizations are making is focused on that goal. Yeah, and another point: think about the units of compute in terms of scale. We think of a normal enterprise being on a three-year hardware refresh cycle, so you'll have a few racks go away after three years and be replaced by others. The people who are in the hyperscale game are in a data-center-scale game. They will build out an entire section of a data center and essentially run that gear for three years, then fill out another space with new gear very quickly. They're acquiring this equipment within three to six months, deploying it, and standing it up. A single roll-over for them is what, to most enterprises, would be moving an entire data center. I mean, that's the entire infrastructure. So it's operating at a completely different scale than what we're used to. But you can still take principles from how they're doing it and bring them into your organization. So can you, as a general enterprise user, hyperscale? Well, absolutely. But it takes an enormous investment across the entire organization. It has to be a structural part of what you're doing as your business.
It needs to be a differentiator for your business. This is why companies like Google and Facebook make that decision: because they believe that those aspects of their business really are fundamentally different, that they will succeed because of the work they're doing in that realm. The alternative answer is no, you absolutely can't hyperscale yourself, because you don't have the resources, you don't have the capacity, you don't have the team, you don't have the expertise. You could certainly go join Google or Facebook and work there, and then you could hyperscale. But as an independent enterprise, you have to think about where you're spending your time and energy, and what the team you have has the ability to do. It's not necessarily accessible to everybody. So I wanted to step back and talk a little bit about what we've learned at Blue Box, because we're nowhere close to being hyperscale, so I have no credibility to talk about doing it ourselves. But I can talk about what we've actually done in our organization, which leveraged a lot of the lessons and learnings from some of these bigger companies. We're all trying to figure out how to make our individual organizations a little bit more efficient. If I step back and look at the history of Blue Box, we started in 2003 as a managed hosting company, pretty similar to Rackspace, doing Ruby application hosting. We were working with customers and really hand-building environments, back when doing that for Ruby was difficult. Over the years that business grew, but around 2010 we started to see a trend, with the rise of Amazon from a public cloud perspective, when we looked at the customers we had. We started to see that those customers didn't necessarily need or want that hand-built environment. They wanted to begin to bring some of those skills in-house.
With the evolution of DevOps, the engineers that were writing that code wanted to be able to deploy and operate it themselves, versus the managed hosting model where somebody else is doing that work for them. They felt that they could gain more value from understanding how their application was deployed and operated than from having somebody else do it. So in 2011, we made the decision to go look at private clouds. We had customers in our data center that had essentially taken the technology we'd built and deployed it in a private cloud capacity, and we started to see a need for a private-cloud-as-a-service product. That's the offering we launched in 2012. There's a big difference going from hand-building Ruby environments to launching a product. A huge difference, right? They're two fundamentally different things. We had grown up as a company essentially building snowflakes over and over and over; every time we built an environment, it was a very different setup. The Blue Box Cloud offering we wanted to launch, we wanted that to be consistent. We wanted it to be a product. And as a product, we needed every installation that we had deployed to be identical. Think about what makes a public cloud provider able to operate their business with such efficiency: it's that they have consistency in their deployment. With private clouds, historically, that had been a challenge. Everybody was doing private cloud deployments a little bit differently. That's one of the benefits of OpenStack, that there are so many options, but it's also one of its challenges. So when we went down this path, we really took a step back and said, we've got to approach this problem in a unique way. We've got to do something different than how we've done everything else in the past. And at the end of the day, it came down to this sense of ruthless automation.
As a product, you can't have anybody changing anything in the environment. The environment needs to be fully managed through configuration management. And it sounds obvious, like we all talk about it, but from a service provider perspective, that's what we baked into the offering from day one. We had to eliminate manual work; we had to take people out of every deployment. So how do we make standing up an environment, from the order process to online, as simple as possible with as few people in the mix? We needed that consistency. We needed every environment to be the same. And as we added features into the offering, we looked at capabilities like feature flagging: instead of having one deployment carry code specific to that deployment, we could build the capability into the product itself and use a feature flag to turn it on and off on a customer-by-customer basis. So there were a lot of pieces that went into building that original offering. Another thing that was interesting for us is that we decided to make the whole initiative open source. The code that runs all of our production customers today is up on GitHub. Anybody can go download it and use that code to do whatever they want with OpenStack on their own. And we learned that, in the hyperscale economy, differentiation through software becomes a losing battle, because that software is available everywhere. So we felt that the differentiation we could deliver was through the service. By making that software available in an open manner, we're able to get contributions from our customers and from other people in the community, and it made the offering generally more approachable to the customers we were talking to. So that was a bunch of lessons from Blue Box. So what can you do? I'll let you tell some stories. So I think some of the principles, again, around ruthless automation hold true here.
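The feature-flag approach described above can be sketched in a few lines: one codebase serves every installation, and a per-customer flag table, not forked code, decides what each deployment enables. The names and data below are illustrative, not Blue Box's actual implementation:

```python
# Minimal per-customer feature-flag sketch. Behavior differences live in
# data, so every deployment runs identical code.
FLAGS = {
    # customer_id -> set of enabled feature names (hypothetical data)
    "acme": {"object_storage", "lbaas"},
    "globex": {"object_storage"},
}

def is_enabled(customer_id: str, feature: str) -> bool:
    """True if the feature is flagged on for this customer."""
    return feature in FLAGS.get(customer_id, set())

def provision(customer_id: str) -> list:
    """Build the service list for one deployment from flags alone."""
    services = ["compute", "networking"]  # baseline in every deployment
    if is_enabled(customer_id, "object_storage"):
        services.append("object_storage")
    if is_enabled(customer_id, "lbaas"):
        services.append("lbaas")
    return services

print(provision("acme"))    # ['compute', 'networking', 'object_storage', 'lbaas']
print(provision("globex"))  # ['compute', 'networking', 'object_storage']
```

The payoff is exactly the consistency argument from the talk: turning a feature off for one customer is a data change, not a divergent code branch you have to maintain forever.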
The other thing is, I said it a little bit before, but I used to work at Inktank. I did a lot with Ceph and saw some customers in their early days going through the journey of building OpenStack clouds, and there were very large organizations that did this. One thing that I think still rings true today is that if you look at the ecosystem around OpenStack, there are a lot of companies where you can use their network stack, storage, and so on and so forth. It is still a very deliberate decision, as you build things at scale, to pick the bits and parts that you want. The other thing is that you have to do things like pay attention to hardware burn-in, hardware configuration, hardware deployment, and be able to rapidly iterate on those processes, just like people do when they build software. Because we're not just talking about a software stack and what you end up doing with it; we're talking about the entire lifecycle of all the gear that you end up deploying. You also want homogeneity. You want consistency. So you pick the base units that you can use and keep them consistent. That's how the hyperscale folks are able, like I said, to operate at data center scale, rolling an entirely consistent data center over to new infrastructure. You pick those bits and parts so you don't have to maintain a lot of heterogeneous stuff, which is a challenge in the enterprise. I don't know if you talk to those customers, or you are enterprise customers, but it tends to be a little bit of a disaster. You generally want to keep things simple. And the keep-it-simple piece applies to more than just hardware, right? Think about the applications that actually sit in enterprises. We've seen examples like the calendar reservation tool. The calendar reservation tool does not need to be fully automated, or necessarily highly available. If that tool goes down, what's the worst thing that could happen?
Well, somebody fixes it or brings it back up. So do you need to make the investment to get a Chaos Monkey set up to kill your calendar reservation tool over and over and over? There's a business value question here that you've got to decide: is the outage that I might suffer on this program, this tool, worth the investment I'll make to automate it, or to make the environment so highly available that it will never go down? It's that counterbalancing question. And many of us in this room are software developers, right? We like to work with new technologies. We like to work with the latest technologies. And we have a tendency to over-architect things and build big Rube Goldberg messes. We don't need to do that, and that complexity in many cases actually contributes to the failure of these applications. It's interesting. I think also, since a lot of us do come from software backgrounds, we see a lot of convergence of software dev and ops. Dallas is in the audience; I mean, we remember when Inktank first got started, and all the Ceph developers, those guys were Linux systems guys. By the way, I came from EMC prior to that. So it was a night-and-day difference when you start to think about how you have to marry these two concepts: hey, I'm used to systems that I can build and tinker with, and then I have to build something that's consistent and scales in software. Completely different. Some of the other components here that I think are important to note: focus. Choosing what you're going to work on, and then centralizing the talent to work on those components. We've seen a lot of customer organizations where there are multiple teams solving the same problem in slightly different ways across the organization. You can centralize that talent and have one team solving a problem one way, and provide that as a service to the organization. It allows you to learn from the mistakes that you're making, which ties into fail fast.
We see this all the time in enterprise organizations; the planned private cloud is a traditional example of this. We're going to go build a private cloud, and then 18 months from now we finally have the thing online, because it took so long to do the planning, the build, the design, the implementation. And 18 months later, the thing we built, the thing we were designing 18 months previously, is no longer relevant. Being able to break work up into small chunks and get them deployed quickly, into the field quickly, is critical. So, I mean, that goes back to MVP, right? You essentially should be building things that are minimum viable products. The interesting thing that happens in the enterprise, though, is that teams go off on this journey of, okay, we're going to get into things like automation. So one team starts to work with Chef. Another group, whose function is not infrastructure, says, oh, well, we've decided to automate some of these things with Puppet. Nobody ever really said, hey, we're taking a bite of the larger automation pie together and we're going to focus on one thing. These snowflakes end up becoming problems over time; it grows into a bunch of different silos doing different things, each maturing, and then you have to start to kill your children. So, at the end of the day, you've got to ask yourself: where do you want to invest your time? Time is the most expensive resource for anybody in any organization; whatever your budget, time is in the shortest supply. You can invest your time trying to build the infrastructure. You can invest your time trying to build the data center. Or you can invest your time building differentiation in software, building something that actually makes a material impact for your business. And that might mean you have a private cloud on premises. That might mean you use a public cloud provider.
There's no one right answer to any of this, but it's important to actually step back and ask that question: where am I spending my time? What am I doing? That's good. And then lastly, I think the question is: what is the business that I'm in? We were working on these slides on the flight over, and the flight attendant walked up and she said, I'm in the airline business. I was like, great, you know the business you're in. You are in the business of delivering value to your customer, and there are many components that make up doing that. Pick the ones that actually make a difference to that customer experience. Drive the best customer experience possible. If running the data center isn't a core part of that business, why are you running the data center? Now, that's easier said than done. Some organizations, many organizations, have existing investments. You've got to think about how to leverage what you have today and build for where you're headed in the future. And that's where we think these on-premises private clouds that are being developed create some interesting opportunities: you can take that data, those systems of record, those applications that are running today, leave them where they are, don't move them, and build your next-generation applications adjacent to those existing platforms. Now you've got something that is cloud-native. You've got an application that is ready to go speak to a public cloud if and when you make that decision. It gives you a lot more flexibility versus continuing down the path you've always been on.
Yeah, and I would say too that, again, the maturity of OpenStack has been great as we've seen it evolve over the years, but now I think we're starting to see some real cases of running a true hybrid type of environment while leveraging things like OpenStack, which is completely disruptive if you compare it back to things like VMware, right? You can actually truly find some middle ground between what you do in public, private off-premises, and private on-premises. So with that, we'll close and open it up for questions. I'm Blue Box Jesse on Twitter. And I'm Ed Tsai on Twitter. Thanks for spending your afternoon with us. And if there are no questions, we can get on with our day. Or drink beer. Get on with my day. Awesome, thanks everybody. Thank you.