Live from the San Jose McEnery Convention Center, it's theCUBE at Open Compute Project US Summit 2015. Okay, welcome back everyone. We are live in Silicon Valley for theCUBE, special presentation of the Open Compute Project Summit 2015. I'm John Furrier, my co-host is Jeff Frick. We're here at theCUBE to extract the signal from the noise. Our next guest is Kushagra Vaid, GM of Server Engineering, Microsoft Cloud and Enterprise. Welcome to theCUBE. So Microsoft is now pumping on all cylinders with the new CEO. He's been in theCUBE, actually; I stepped away and Jeff Kelly did the interview, I had to take a bio break, and I didn't know he was going to be promoted to CEO. I would have definitely interviewed Satya a little bit better, but you guys are really amazing right now. You guys last year here at Open Compute essentially laid down and open sourced a lot of the key jewels of Microsoft Cloud, Azure and all the technology, and people out in the world might not know that you guys have a massive infrastructure: MSN, search, full infrastructure for all your products, full-on cloud, et cetera, et cetera. So take us through what's going on with Microsoft's current cloud offering, and what did you guys do this year to build on that kind of donation, or investment, or open source? Whatever we're going to call it, the goodies. So last year around this time of year, we joined OCP, and essentially all the hardware we designed for our internal applications, like Azure, Bing, Xbox, Office, we took all that hardware and we contributed everything to OCP. Everything. Everything, yeah, so it's the same. So the belief is that we want to accelerate the adoption of cloud computing, and we believe we can do that if there is consistency between what we use in the public cloud and what is being used in the enterprise. So to have that consistent platform between the two environments is kind of our goal, which will help people move to cloud computing faster. 
So with that spirit, we contributed all the hardware specifications to OCP last year. Let's drill into that for a second, because a lot of people talk about this in other environments like, yeah, the project's not working out, so I'm just going to donate it and open source it and hope something happens. That wasn't the case with Microsoft. Again, that's a generalization. I know it's a generalization. It's not working out, just open source it and see what happens. Give it a good college try. Startup didn't get enough cash. But that wasn't what was happening with you guys. You actually had some real IP. What were the core things that you guys brought to the table, and what was the impact this year? The big thing is we operate a global set of services across 70 countries or so. There's about a million servers in production, across tens of data centers. So what we learn from operating these global data centers, the experiences we have, how to operate at scale, that is captured in the hardware design, and that's what we contributed. So the idea is that we take these hardware designs and we make them open so everybody else can benefit from the experiences that we have had. So what learnings were magnified from this? Because obviously you're bringing some serious goodies to the table. So people who are tinkering, and moving from tinkering to actual development and prototyping, even the little baby we saw today from, say, HP and Foxconn, those guys have a little one; it's not elegant, but it's first generation. They've got to get up and running, they're not at scale. So it's like in the old programming days: localhost, push to the cloud. Now, design a prototype to large scale. So what were the learnings you guys had on the large-scale side that are now built into the ecosystem? It's about how do you manage the servers? How do you operate the servers? How do you deal with failures when they happen? 
How do you have software and hardware interoperate with each other at cloud scale? All those learnings come only when you get to run a public cloud yourself. So those were the things that we baked into our specification, and we donated that to OCP. Now this year, what we're doing is continuing the contributions. There are quite a few new technologies that we developed over the past year, and we are now contributing those to OCP as well. So it's basically the same theme: whatever we are driving as innovation in Microsoft's cloud for the hardware, we want to bring it to OCP, make sure it's available to everybody else. So this year there's a couple of big contributions we are making. The first one is what we call Local Energy Storage, or LES. And the big contribution there is it's a radically new way to design data center power backup systems. In the classic model, you have a big battery room, full of the lead-acid batteries you have in cars; think about a whole room full of those. When the power goes out, the batteries pick up the load and you make sure that your servers keep running. Of course, it costs a lot to have a room that big, and to operate it, maintain it, service it. So what we've done is we got rid of that whole thing and we moved the batteries inside the server. The power supply that the server has has the batteries built in. So this is a completely radical way to do data center design. We estimated it will save 25% of the footprint of the facility. 25%? 25%, yes. Big number. Yeah. That's an issue; density is a problem for many customers. 25%. Yeah, and the efficiency of power delivery should get better by 15%, and then here's the big one: the cost benefits are five X better than traditional designs. So, if you're a- Mainly driven by the power. 
Yeah, so the amount of money you invest in building a traditional data center, the amount of money that goes into the traditional power backup, you can cut that by five X by going to this new design. Wow. So we are deploying this in volume at Microsoft. So there are some benefits to getting up and running, and those are the obvious benefits, but there's this other intangible one, which is foreclosing growth if you make a design decision. I mean, let's unpack that a little bit, because we've heard this from entrepreneurs and also developers. They don't have the resources, so they have to make difficult decisions around some design stuff. They might not have the capability. So in addition to the downstream benefits, what are some of those things that those developers are facing that you guys saw, architecturally, where you guys say, hey, you don't have to worry about these architectural decisions? So essentially, if you think about the hardware specifications, we contributed them as sort of an API. If you make an analogy to software, there are software APIs, and then you can build on top of that and focus on your application work. It's the same thing with hardware. You have a set of specifications, you just pick those up and then you focus on adding the value where it makes a difference to your environment. That way you don't have to go reinvent the wheel and start doing all the designs all over again. And that's, I think, where OCP makes a big difference, because you don't have to worry about redesigning hardware. The big guys have done it: Facebook, Microsoft, they've contributed the specifications. You can just take that and run with it, and then go make changes specific to your respective environments. So are you getting the benefits back? As in the classic open model, one of the main reasons to open source, right, is to put the innovation out to a broader community so you get innovation benefits back. 
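The LES savings quoted above can be sketched as back-of-envelope arithmetic. Only the 25% footprint, 15% efficiency, and five-X cost ratios come from the interview; the dollar and square-footage inputs below are illustrative assumptions, not Microsoft's figures.

```python
def backup_cost_comparison(traditional_backup_cost, facility_footprint_sqft):
    """Compare a traditional battery-room UPS against Local Energy Storage
    (LES), using the ratios quoted in the interview: ~25% of facility
    footprint saved, ~15% better power-delivery efficiency, ~5x lower
    backup cost."""
    les_backup_cost = traditional_backup_cost / 5      # "five X better than traditional designs"
    footprint_saved = facility_footprint_sqft * 0.25   # battery room eliminated
    efficiency_gain = 0.15                             # fewer power-conversion stages
    return {
        "les_backup_cost": les_backup_cost,
        "footprint_saved_sqft": footprint_saved,
        "efficiency_gain": efficiency_gain,
    }

# Illustrative only: a $10M traditional backup system in a 100,000 sq ft facility.
result = backup_cost_comparison(10_000_000, 100_000)
print(result["les_backup_cost"])       # 2000000.0
print(result["footprint_saved_sqft"])  # 25000.0
```

The point of the sketch is the shape of the trade: the backup capital cost divides by five, and a quarter of the facility comes back as usable floor space.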
Are you guys getting innovation benefits back, or is it pretty much, you're a big guy, you've done huge scale, so it's really more kind of a downward push? Yeah, it goes both ways. We have seen a lot of cases where the contributions we made, people have taken those, they've enhanced them, made some modifications, and they've contributed them back into OCP. For the majority part, at this point, it's the contributions Microsoft's making, but we're seeing momentum around an ecosystem that's building. One thing we announced this time is we have Canonical, who has become a Microsoft partner, and they built on top of the specifications we contributed. So now you can have the Linux distribution, and you can have a third-party, neutral way to provision servers. So that's a good example of how people have taken the specifications and built on top of them. Right. Now I wonder if you could talk about the business discussion around open sourcing what some would probably consider part of your competitive advantage around running the Microsoft cloud, the Azure cloud. Take us through that kind of conversation as to why you would open source a big chunk of what was clearly a competitive advantage, as opposed to Facebook, where it really wasn't their core business; it was kind of an execution detail. So our view is that our competitive advantage is really in the services that we offer. If you think about Bing, about Azure, about Office, those are the services. That's where our competitive advantage is. How we run the infrastructure is an area where we would like to share with others, so that we can drive a common platform between the public cloud, private cloud, enterprise. Okay. So it's a different view of how we view the competitive advantage versus what is common. Yeah. Interesting. So as a GM now, you've been a tech athlete, as we say, working at Intel. Intel just doesn't hire guys who aren't strong technically. With processor and CPU designs, getting stuff into silicon is a big deal. 
That's a big trend right now. So you guys are donating a lot of the open source stuff; that's a great best practice, and the way you kill in the market with kindness, it's really good. So now the next wave is in-processor, system-on-chip stuff. Intel's talking about this, Facebook's talking about it. Software native on silicon is going to be huge. What are you guys doing with that? How are you seeing that? You're a software company, but you also have hardware expertise, so it's interesting. Can you share your perspective as a GM: investment strategies, how you're looking at the market, what you're looking at? Big picture, you don't have to be specific on the numbers, I know you're a public company, but mindset-wise. Yeah. So companies like Intel, they're doing a fantastic job at the silicon aspects. But I think the key thing is, how do you take that silicon innovation that Intel and others are driving, and how do you integrate it into a bigger environment, like a cloud-scale environment? So when it comes to, how do you do the systems management? How do you use power features? How do you take advantage of instruction set extensions? How do you offer new services based on silicon features? That's where we go and start adding customization around the silicon that exists. So it helps to bring new features to market faster by working with folks like Intel. And then that ends up in the open source community. But now the business model's shifting. So I always love the conversation about the race to zero, because it doesn't really mean anything. In a way, you can argue race to zero, commoditization, value will shift. Exactly. So value's shifting. Value's shifting all over the stack, right? Up and down: SDN, a lot of network action going on, and also at the top of the stack. How are you guys going to look at playing in that, and what are you enabling, and what is your ecosystem focused in on? So the value addition that we have is essentially on the services side. 
So the way I think about it is, there is something that we need to do in hardware that will eventually result in a differentiated offering at the service level. It should end up making Azure better, or it should end up making Office better. It should either do that, or it should reduce the cost equation. Maybe if we spend a dollar doing something today, then we should spend 50 cents doing it tomorrow. So the question then becomes, what can we do in those aspects to drive innovation at the hardware level? And like I said, the hardware pieces, we are open sourcing those, because that's what we want to provide commonality for. We were talking at the Big Data SV event we had here in conjunction with Hadoop World about in-memory analytics. Of course there's flash, which is not technically memory. It's flash memory, it's persistent, but then you've got memory, DRAM, then you've got now in-processor. So analytics are moving from tape, to spinning rust and disk, to flash, to in-memory, to in-processor. So now you're going to have speed, and then you've got virtualization on top of that. So the next question comes: okay, are we truly now living in an SOA world, service-oriented architecture, where the dream 10 years ago of SOA is actually playing out? In the sense of a web services model, it is playing out. I mean, that's what offerings like Azure are. You have a service that's available to customers, behind a web API. Machine learning is a good example. It used to take such a huge amount of effort to do statistical analysis, find patterns in data, and make predictions on top. In Azure, we offered a machine learning service; it went GA a couple weeks back. Now, if you're a credit card company, you can upload all your credit card records, you can feed in some patterns, and you can detect fraud right away, just like that, all through a simple web API. So it's, yeah, the world is changing. 
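The "simple web API" pattern described above can be sketched in a few lines. This is a hypothetical sketch: the column names and rows are invented, and the batch-of-rows JSON shape below follows the general style of hosted scoring services of that era; it is not the exact Azure ML schema, and the endpoint and key an actual call would need are omitted as placeholders.

```python
import json

def build_scoring_request(transactions):
    """Build a JSON body for a scoring call against a published ML web
    service endpoint. The field names ("Inputs", "ColumnNames", "Values")
    and the transaction columns are illustrative assumptions."""
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": ["amount", "merchant", "hour"],
                "Values": [[t["amount"], t["merchant"], t["hour"]]
                           for t in transactions],
            }
        }
    }

# Illustrative usage: score two card transactions for fraud.
body = build_scoring_request([
    {"amount": 42.50, "merchant": "grocery", "hour": 14},
    {"amount": 9000.00, "merchant": "electronics", "hour": 3},
])
payload = json.dumps(body)
# A real call would POST `payload` with a Bearer API key to the service
# URL (e.g. via urllib.request); endpoint and key are not shown here.
```

The point Vaid is making is that all the statistical machinery sits behind that one POST: the caller only assembles rows and reads back scores.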
The convergence, you talk about systems of record, systems of engagement, and then now getting insights out of it. So the speeds are critical. I mean, you've got to have the performance. You couldn't do the machine learning and that kind of stuff 15 years ago. You didn't have flash, you had DRAM, and now you have an amazing amount of resources. So how has that changed the game? Certainly a lot, but as a GM, you've got all this innovation going on around you. How do you prioritize? Where's the key enabler for you right now? I think it's about driving R&D. In the classic model, the challenge always was that you had what you had, and you had to figure out how to use it. Now the model has shifted to, you want to keep offering new services, new models, and then you look at that and you think about what the hardware needs to look like. Okay, I have what I have today, but what should it look like three years out, five years out? And then you start thinking about what innovations you want to drive, whether it's in silicon, whether it's in systems, whether it's in power. And that's the big thing happening in the cloud space: it's completely reinventing how hardware is designed. I almost hate the word replatforming, but in a way there's some replatforming going on. So that's coming out of the hardware ecosystem. And now, here at OCP, it's really amazing to see the collision course of hardware and software, and open source coming together. It's coming together. Open source was more of a hacker, Homebrew kind of mentality: tinkering, pretty geeky, firmware, programming. Now you have, in essence, kind of a software development model: lifecycle, DevOps, Agile. And it's getting tremendous adoption from the industry. In the switch ecosystem, for example, OCP has done a fantastic job at breaking up the different monolithic layers in the switch. 
And now you have companies on the show floor here who are demonstrating different solutions for how to disaggregate the switch. So the whole ecosystem is completely changing. And that gives you the ability to do agile innovation and drive the features that are needed. So we always love to ask this question on theCUBE. We'll ask a couple more; I want to get to them, because it'd be great to have you on theCUBE again, by the way, because you have good visibility on a lot of legacy and/or cutting-edge stuff. The future, right? So you have a future generation of developers out there. You have old-school developers my age; I'll be 50 this year. I've lived through the early generations of, I guess, computer software. Whatever generation of open source we're in now is changing. So I've got to ask you, what is the key open source computer science skillset that's really needed from a young gun and/or an old-school systems and compiler person? And what kind of degrees in computer science are cutting edge out there? And don't say machine learning, because that's kind of been overused. Machine learning is standard in my mind right now; machine learning is kind of out there, cutting edge. But what is really going on? Is it compiler design? Is it virtualization? Virtual compilers? Is it the science and the hardware? Can you share your opinion of the landscape for the folks who are thinking about careers and developing? I think that's a great question. You know, when I was graduating, the point used to be about what new things you can innovate from the ground up. And now the conversation has shifted to, there's a lot of work already done by the open source community, whether it's hardware or software. And the conversation now is, what can I reuse, so that I can build value on top of that by assembling the blocks that are already out there? So that requires a whole different skill set. 
It requires more of an integration skill set and a solution-based skill set, versus, let me go write a new compiler. There's a ton of compilers out there. Let me go write a new database. Well, there's a ton of databases out there. So the question then- We used to build our own graphics library from scratch, with pixels on the screen. Remember back in the 80s? Right, back in the 80s. The GUI was homegrown. So the innovation I see happening in Silicon Valley is, the really good developers are saying, well, I have a good understanding of everything that's out there that people have worked on; I just need to figure out a way to stitch it together. And you think a different way, right? It's kind of like going to a restaurant. You want the fast food, your In-N-Out Burger, you want the fast breakfast meal, or you want the fine dining experience. It's architectural; it's a Lego block. It's a Lego block of food. It's a really cooked meal. The outcome is the outcome, right? So, the right tool for the job. And the faster we can do it, the faster we can get you a solution to market, and that's what differentiates you. Well, security, final question. Security, updates, thoughts? Security, well, it's a- Besides being sucky right now. It's an ongoing challenge. It's always this one-step, two-step thing. Whatever the good guys do, the bad guys catch up, and then the good guys do it again, and the bad guys do it again. Well, you're open sourcing things, so the notion is you've got more eyes on it. But you also now eliminate perimeters, right? So there's no more perimeter-based security. It's all open, so people can review the code, and they can sign off on it, so that helps. All right, final prediction. This community here is still very small, but it's successful, thanks to you guys and others. What's next for this growth? What has to happen next? What's the evolution of the ecosystem? What's the sunlight on this organic soil here? 
I think we're already seeing signs where the industry is getting disrupted, and the rest of the industry is adapting to it. HP announcing this morning their new product, Cloudline: that was a very interesting announcement that helps to bring HP more in line with where the industry is going. I expect to see more along those lines. In the networking space, in the server space, in the storage space, basically OCP will become the driver for disrupting the industry and making it more in line with open systems that can be composable. Disaggregation is not a bad thing. Disaggregation, yeah. Open source, this is theCUBE. We're out in the open. We are in the trenches, out on the edge of the network, low-latency data here from Microsoft, extracting the signal from the noise. I'm John Furrier with Jeff Frick. We'll be right back with the next guest. Live here in Silicon Valley, Open Compute Project Summit 2015. We'll be right back. All right, thank you, John. Thanks a lot. Thanks.