Computer Museum in the heart of Silicon Valley, extracting the signal from the noise. It's theCUBE, covering OpenStack Silicon Valley 2015, brought to you by Mirantis. Now your hosts, John Furrier and Jeff Frick. Okay, welcome back everyone. We are here live broadcasting. This is SiliconANGLE Media's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, my co-host Jeff Frick this week. Two days of wall-to-wall coverage live in Silicon Valley for OpenStack Silicon Valley, hashtag OpenStackSV, or the hashtag for this event today, hashtag OSSV15. Join the conversation, join our crowd chat, crowdchat.net slash OSSV15. Our next guest is Craig McLuckie, who's with Google, he's on the Google Cloud team. Cube alum, welcome back to theCUBE. Got a keynote there, welcome back. Thank you so much, great to be with you. So Silicon Valley, obviously the center of the innovation engine. There's a lot of investment capital here, a lot of big players, you guys, Facebook, VMware, Intel, you name it, it's the giants of the technology industry. And there's a bubble of conversations happening, China's going down in terms of economics, as seen in the stock market crash there, but yet underlying infrastructure change is happening. Cloud certainly is floating that wealth creation engine. You guys are a big part of it here in Silicon Valley. Let's talk about the state of the cloud. OpenStack has momentum. You have some stability in the core compute side with OpenStack, virtualization is not going away, new things like Kubernetes containers, fast on the scene, rising very fast. What's your take on this innovation engine in the cloud? So I think there's a couple of things that are really exciting and interesting that are happening right now, as we speak. The first is a transition to open, and it's a way of rethinking how you evaluate, acquire, and integrate your software.
And I think that OpenStack has established a legitimacy as a technology that's really bringing the value proposition of traditional infrastructure as a service to everyone everywhere. And we're really starting to see a convergence of that community, a set of technologies that are consistent, high semantic consistency, and it's really becoming a thing, which is phenomenal. At the same time, we're also seeing another disruption happening. And it was really a disruption that was triggered by the emergence of Docker as a technology to support a new way of thinking about packaging and deployment. And it's really part of a bigger story around a move towards cloud native computing. This is a set of computing patterns that were really inspired by the internet giants, by the Googles, the Facebooks, the Twitters. But it's really been cracked open and made accessible by folks like Docker who have opened up those container technologies. And now we're seeing a lot of the players start to really focus on this and look at bringing the value proposition of that new style of computing to enterprises everywhere. You know you start to see maturity in a market, especially when platforms are involved, platform wars, whatever the bloggers want to put in the headline out there. When you see abstraction layers develop. And one of the things that you talked about in your keynote I'd like you to elaborate on is ending the distinction between what's under the hood. Containers, as you mentioned, bring up this notion that, you know, as a developer I want interoperability, I want cross-platform APIs, this is the API economy. So I want you to explain that. What does this disruption with containers and Kubernetes do for this abstraction? Do we care about the features anymore? And that's one of the signals of maturity, that you're not talking speeds and feeds and infrastructure as a service, platform as a service. That when those conversations go away, you know things are moving. Or is that true?
What's your take on all that? I think that's a very good observation. I think that one of the things we as a community have looked for for a while is a separation between the world of tools and infrastructure that people interact with on a day-to-day basis to build applications, and then the systems that actually take those built applications and run them for you. And a big part of our focus has been to make the set of subsystems that are actually responsible for the operations of applications transparent to the end developer. And we're looking to formalize that interface that exists between, you know, how you create an application, how you package up its dependencies, how you offer it up to infrastructure, and then how you run it. And one of the most exciting and energizing things for me is, you know, to see the emergence of a standard set of abstractions that interface between these two worlds. So it creates massive opportunities for innovation. By standardizing that interface, you have incredible innovation in the tooling area with technologies like Docker, or continuous integration and delivery frameworks, you know, new development environments that are producing an artifact that can be universally consumed everywhere else. And then on the infrastructure side, you have a lot of innovation around running that artifact for the developer, for the enterprise, efficiently and intelligently, whether it's being deployed into a virtual machine on OpenStack, whether it's being deployed into a Mesos cluster running on the metal, whether it's being deployed into a next generation Kubernetes cluster running in one of those environments or somewhere else. We're looking to create this common abstraction and it's going to drive a lot of innovation at every level of the stack.
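The separation McLuckie describes — tooling produces one artifact, any infrastructure consumes it — can be sketched in miniature. This is a minimal, hypothetical Python sketch, not any real API; the class names, image name, and URI formats are all invented for illustration:

```python
from abc import ABC, abstractmethod

class Runtime(ABC):
    """Anything that can consume a container image and run it: a VM on
    OpenStack, a Mesos cluster, a Kubernetes cluster, and so on."""
    @abstractmethod
    def run(self, image: str) -> str:
        ...

class KubernetesRuntime(Runtime):
    def run(self, image: str) -> str:
        # A real implementation would call the cluster's API; here we
        # just report where the artifact would be scheduled.
        return f"kubernetes://default/{image}"

class OpenStackVMRuntime(Runtime):
    def run(self, image: str) -> str:
        return f"openstack-vm://nova/{image}"

def deploy(image: str, runtime: Runtime) -> str:
    """The tooling side produces `image` once; any runtime consumes it
    unchanged — that is the standardized interface between the two worlds."""
    return runtime.run(image)

# The same artifact is deployable to either backend without modification.
artifact = "registry.example.com/myapp:1.0"
print(deploy(artifact, KubernetesRuntime()))
print(deploy(artifact, OpenStackVMRuntime()))
```

The point of the sketch is only that innovation on either side of the `Runtime` interface is independent: new tooling can produce `artifact` differently, and new infrastructure can run it differently, without either side knowing about the other.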
You know, at Wikibon Research, one of the things that they're putting out is some cutting-edge research on the innovation happening in some of the technologies under the hood, converged infrastructure, cloud technologies, flash storage, software-defined networking, all that stuff under the hood is evolving just as fast. So you have underlying core technology and tooling exploding. So some really good stuff coming out at Wikibon.com. And with that and your comment, I want to ask kind of the pointed question, which is, does hybrid cloud really exist? Is it a concept or is it a category? Do people buy hybrid cloud? Do they buy into it? It seems to be that's the conversation people are talking about now, but I just don't see hybrid cloud existing other than being part of private and public. And it's a great question. I love that question. It exists, but not the way that people think of it existing, right? So you could think about it this way. When you are building an application on your laptop and deploying it into a cloud, it's kind of hybrid cloud, right? But it's not the way that people think about hybrid cloud. When you want to run a continuous integration server for your company and have it hosted in the cloud and have it create artifacts that are deployed into your on-prem production clusters, that's hybrid cloud. But it's not the way people have come to think about it. And so what I think about is really the ecosystem, about establishing a common set of tools and capabilities. So first and foremost, people can choose the destination for an application based solely on the technical merits of the infrastructure that they're running on. Google offers some very high quality, robust, fast, affordable cloud infrastructure. But we recognize and embrace the fact that for some customers you have very legitimate regional requirements. For some of the applications you might really want to run them on-premises.
And so the first step towards achieving legitimacy for hybrid cloud is establishing a common set of patterns and tools and capabilities that exist in both places. The next step is going to be around creating a common services abstraction that lets you start to access things from other environments. And then over time, you might actually see people deploy these sort of cloud bursting scenarios, et cetera. But the path to get there is really through infrastructure, like a common set of abstractions, a common set of tools, a common set of patterns, and making those available to people everywhere. And then over time, we will start building these fused together, legitimately hybrid solutions. So hybrid cloud then is a paradigm. It's a concept that highlights the common tooling, the interoperability, so developers can actually work in these environments without having to do anything. That's where Docker comes in. That's where Kubernetes comes in. And it's really, hybrid needs to be first and foremost about being able to use a common set of technologies to build an application for A or B. Okay, so let's take it forward. So let's put the brainstorming head on. Let's talk about the future and let's kind of play with some scenarios. Internet of Things opens up a huge can of worms and challenges, engineering challenges around, how do I manage the data? How do I drive workloads to these devices, whether they're wearables or cars or stacks or devices. Anything that's on the edge of the network is now considered a device, PC, mobile, Internet of Things. So for a developer to work in that kind of environment, they need these tools. Is that how you see it? Absolutely, I think that's a great way to think about it. You know, it's an interesting thing you raise because if you think about it, Cloud Native has really been the domain of Internet companies, right? It's really been something that Google's done because it's the only way to practically achieve a certain level of scale. 
We've seen co-evolution of these patterns inside Twitter, eBay, Facebook, Netflix. Everyone's been doing it on their own terms. Now the reality is when IoT happens, every enterprise has to kind of become an Internet company, right? And what we've seen consistently across all of the Internet companies that exist today is there's one pattern that really works well to actually deploy computational infrastructure at scale efficiently. And that's this pattern around container-packaged, dynamically scheduled, microservices-oriented computing. And so our mission is really to bring these technologies in a democratized way to enterprises so that they can actually tackle problems that were previously only really solved by the Internet giants, without having to make Google-level investments or Facebook-level investments in technology. So when we hear that term, Internet company, just to clarify, that's like a hyperscaler, like what Yahoo or Google did, building large-scale systems in a seamless way that's kind of abstracted from the user. Just pure performance, everything's running, and it's kind of a brilliant concept, because that brings up the point of Google envy. And we hear this all the time in the enterprise. I want to be more like Google. I want to be more like Facebook. And what they really are saying is, I want DevOps, right? So DevOps, cloud native, do you hear that often? And when you hear that, I want to be more like Google, what does that really mean from your standpoint? How do you guys internalize that? And how do you talk back to customers? So I think, you know, when someone says, I want to be more like Google, I think there's a lot of different sort of angles that you might have there. I've heard people coin the phrase GIFEE to describe what we're trying to do: Google infrastructure for everyone else. But I think the heart of it is really this. If you're a Google engineer, it's like you have a superpower, right?
You have access to this amazing, almost unlimited mass of infrastructure that's just at your disposal immediately at very little cost or overhead. And you don't have to worry about the mechanics of actually where the thing I built is run, right? Operations is just a function of the platform. The developer gets to focus on their application and their application operations. And what they get for free is this cluster environment where cluster operations is handled for you. The process of actually mapping an item of code into a distributed systems environment. The ability to use some very powerful services that make it trivial to build distributed systems. The fact that I'm not paged all the time, because what I deploy is understandable by some very smart subsystems. They can watch it. They know what it's supposed to be doing. They can tell when it's not doing that, and they know how to fix it, right? And so traditionally, when you go out of operating parameters in a traditional system, you get paged. And for me, a lot of what this, you know, operate like Google really means is, one, I want to be able to access compute at an unprecedented level easily. And two, I don't want to get paged by my applications that are doing that. Yeah, so that brings up the API economy. Let's bring this to the next level. Today applications are either legacy or they're cloud native. So I ask everyone the question, even our own Wikibon team, we have a debate. You know, I asked Dave Vellante. Dave, Dave, name the cloud native apps that are out there. Any cloud native apps out there? I mean, who has a cloud native app? Now it's a trick question because he goes, Amazon's an app, Google has cloud native. Well, they're already hyperscaled. So the question is, where are the cloud native apps? Where are the examples? Now Facebook's a cloud native app because they built it from the ground up to be cloud native, Google's the same way.
So as an enterprise, what is the cloud native app to the enterprise, and how do they get there? And what legacy do they have to throw away, because asynchronous and API interactions are fundamental? How do you tease that out? So this is actually a fascinating topic. And I think one of the most dangerous things people assume is that to accomplish cloud native, you have to go fully along the API-ification path, right? Now the reality is, if you look at the way that people access data today, the vast majority of business data is stored in relational databases. People have great tools to access data in relational databases. They want to be able to move that forward. And to me, if you force API-ification, if you force a protocol-specific approach to integration, if you force people to use a specific authentication scheme, you're going to alienate a very broad array of your customers and you're going to create this cognitive hurdle that is very hard for people to get over. So when I think about cloud native, I think about it as providing a different paradigm for deployment, management, activation, et cetera. But it has to make allowances for integration with your existing systems. And so I think at the forefront of this is the notion of a service or a microservice. And a microservice has to be a minimal atom of software consumption, the easiest way to find and consume something. And you can't force an opinion around how people project that, right? So if you want to build something that runs in a cluster, you should be able to access an Oracle database as if it were a microservice running inside your cluster. You should be able to access a Salesforce SaaS endpoint as if it were a microservice running inside your cluster. And so as I think about my mission and Google's mission around the move towards cloud native computing, you can't create these experiential cliffs. You can't create these artificial boundaries to your system.
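The idea of consuming an external Oracle database or a Salesforce endpoint "as if it were a microservice running inside your cluster" comes down to stable-name indirection. Here is a minimal, hypothetical Python sketch of that pattern — the registry class, service names, and hostnames are all invented for illustration, not taken from any real system:

```python
class ServiceRegistry:
    """A stable name-to-endpoint mapping. Consumers resolve a service
    name; they never know (or care) whether the backend lives inside
    the cluster or outside it."""

    def __init__(self):
        self._endpoints = {}

    def register(self, name: str, host: str, port: int) -> None:
        self._endpoints[name] = (host, port)

    def resolve(self, name: str) -> tuple:
        return self._endpoints[name]

registry = ServiceRegistry()

# An in-cluster microservice and an external database get the same
# kind of entry, so the consuming code is identical for both.
registry.register("orders", "orders.default.svc", 8080)
registry.register("oracle-db", "db01.corp.example.com", 1521)

host, port = registry.resolve("oracle-db")
print(f"connecting to {host}:{port}")
```

This is roughly what Kubernetes later formalized as services without selectors: the name stays stable while the endpoint behind it can be in-cluster, on-premises, or a SaaS endpoint, which is exactly the "no artificial boundaries" property being argued for.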
You have to make natural allowances where, look, there's some stuff that just works better in a vertically scalable VM. If you want to run a big database with a tuned kernel and a few other things, by all means put it in a VM. And we are absolutely committed to the idea of creating a natural set of experiences when you want to go from that to some portion of the application that's doing stateless front-end serving, or a portion of the application that's running in a cloud-friendly, distributed, scaled-out database. You shouldn't have to take the plunge and be stuck in one world. You should be able to mix these kinds of things. So you're saying it's dangerous to force API-ification, if that's the term. I can't even spell it. It's too many i's at the end there. I like to have a hyphen in there. But if you force the API-ification as a movement, you can foreclose future performance and functionality by alienating existing systems. By alienating existing systems, yes. It's very dangerous. It's very attractive to drive API-ification, but you have to create this pressure gradient that attracts people up it by adding value at every stage of the game. And you can't build your management systems around a predicated, sort of opinionated API framework. We saw this in the world of SOA. I mean, I don't know if you remember the SOAP and SOA stuff from way back when. That was just another way of describing API-ification. And we saw where it went. The problem was that- It wasn't ready. The market wasn't ready for web services at that time. But it was beyond that. It was like, no one's willing to make a massive infrastructure investment to get you to ground zero, where you can actually start building. So let's look at web services back in 2000, 2001, when you saw SOAP, XML, SAML, those things emerging. At that time, who did take advantage of that? It was the hyperscalers. It was the internet companies, because they needed it, right?
So the mainstream market now is adopting that kind of concept around microservices. Explain that. But the interesting thing is when you look at what the adoption was around microservices, it wasn't around interoperable SOAP. It was around discrete, highly optimized RPC protocols. It was around relatively closed systems at that time. And it worked well, right? The challenge was- It was controlled. It was controlled and it worked well inside a closed ecosystem. Now, the thing that really held people back is that to get there, you had to do a big ESB deployment. You had to then go and SOA-ify a bunch of your components. And it required a huge investment in terms of sort of infrastructure and capabilities before you started realizing value. And it was inaccessible to most people and it alienated technologies that didn't fit well into that model, right? Like how do you take your database and fit it into that model? It was purely optimized around a certain portion of it. And so now we're in a world where we make it available to everyone. We reduce barriers to entry and you get immediate value without having to make huge investments. So let's take microservices and unpack that for the audience out there. You're seeing DockerCon, ContainerCon, KubeCon, MesosCon, all these conferences around developers, and this is all about scale, right? Operating at scale, abstraction layers. I think we need to be careful not to pigeonhole this as being about operating at scale. It is the only practical way to operate at internet scale, but the value proposition is just as applicable if you're running something in five virtual machines at a more humble scale. So let's talk about development versus operations teams. Where does Kubernetes, where does the microservices model fit in, and how do companies avoid the trap of alienating existing apps? How do they get this up and running? What is the roadmap, and differentiate from a dev standpoint and an ops standpoint?
So I think one of the most important things you're going to start seeing is a specialization of the operations function. Today, it's all kind of glommed together, and if you ask a developer to actually run an application they have to be cognizant of which virtual machine it's in. You force them into the ugly world of infrastructure ops and sort of common services ops. And what we're going to start seeing, and what I hope to help companies achieve, is a specialization of the operations function. So infrastructure ops should be delegated to a set of people that actually understand the physical infrastructure. They will create an optimal physical environment to run your application. There'll be a small number of specialized people that know how to do that, and they will rack and stack and wire and configure and do whatever needs to be done to tune the infrastructure. Above that, you're going to see this cluster operations or common services operations team that provides a basic operational platform and common services to everyone. So these are a highly specialized set of people that provide you the tools you need to be able to autonomously run a distributed system. They are unlikely to be involved in the day-to-day operations, because most of these systems will be autonomous, but they're there to answer the call if something happens in that system. So it becomes a very specialized function, and Google does this with our SRE folks that actually manage the Borg clusters that run all of our infrastructure. Small number, highly specialized people providing a very valuable service to a lot of folks. And then at the top level, you're going to have application operations, and that really just becomes the developer's function. And it should really be about understanding and managing your code, and you should never have to think about where it's running, how it's running. You should never have to SSH into an instance to try to debug it.
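The "autonomous" behavior described earlier — subsystems that know what an application should be doing, notice when it isn't, and fix it without paging anyone — is, at its core, a reconciliation loop. A minimal Python sketch of one pass of such a loop, with all data shapes and replica names invented for illustration:

```python
def reconcile(desired_replicas, running):
    """One pass of a control loop: compare observed state against
    desired state and emit repair actions, instead of paging a human
    to SSH into an instance and fix things by hand."""
    # Restart anything unhealthy; it still counts toward the total.
    actions = [("restart", r["id"]) for r in running if not r["healthy"]]
    # Start replacements for replicas that are missing entirely.
    for i in range(desired_replicas - len(running)):
        actions.append(("start", f"replica-{len(running) + i}"))
    return actions

# Three replicas desired; one has crashed and one is missing entirely.
observed = [{"id": "replica-0", "healthy": True},
            {"id": "replica-1", "healthy": False}]
print(reconcile(3, observed))
# → [('restart', 'replica-1'), ('start', 'replica-2')]
```

Real systems (Borg's controllers, and later Kubernetes controllers) run loops like this continuously, so the "application operations" layer is handled by software and the developer only ever states the desired state.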
All of that should be presented to you through your tools. So the developer's experience becomes one of using logical infrastructure. And so I think what we're going to start seeing is companies making investments in these clustering technologies, offering up these simple clustered service environments for their departments, and then having portfolios of container-packaged applications that can be easily taken, adjusted and run in these environments. And we'll naturally see the specialization of operations emerge. So we're running out of time. Jeff didn't get one question in, but maybe next time. He has a role in that. Brendan Burns, Brendan Burns I think is on your team. So he brought up something, he brought up that hybrid cloud is kind of the way, meaning the way you described it, not as a category. We also brought up the different aspects of Google Cloud in our last crowd chat last month. How do customers mix and match with the cloud? I mean, you guys offer Linux, you guys offer Windows. I mean, if I want to work with Google Cloud, what are the touch points? How do people integrate in? How do they engage with Google? What are some of the use cases? Can you share, just to put the plug in for Google Cloud, what you guys have up and running that's mature, stable, shipping, and how do customers get into the Google Cloud? So we've really seen that Google Cloud needs to be all of the above. In terms of operating characteristics, the thing that makes Google Cloud unique is the quality of the basic infrastructure. We offer by far the most price-performant basic infrastructure out there. It's an innovative cloud. It's driving and active in a lot of the disruptions we're seeing around the container space. It's an open cloud. It's a cloud that's invested in making sure that we engage and connect with the open source community. So if you want to work with Google Cloud, there's a lot of different ways to do it.
One is you can go and just buy beautiful, clean, pristine, powerful, affordable infrastructure in large chunks through Google Compute Engine. And we're seeing a tremendous amount of adoption. You don't have to make massive capex down payments to get our best price. We really focus on doing that. You can also come in if you just want to write a bit of code and have it run. We have a wonderful PaaS product called Google App Engine that's becoming very naturally integrated into the container ecosystem and is a natural sort of path. It's a great entry point for people that just want to operate at a higher level and want to take some code and have it easily deployed and run on your behalf. And then another entry point that isn't obvious to people is you can help us build the Google Cloud. What we're building with our next generation set of offerings, with technologies like Google Container Engine, is an open source cloud. It's being built in public. Come join our community, work with us, try it out, give us feedback and be part of actually building the next generation. So a question I have for you is, let's just say I'm an Amazon customer and I want to go to Google Cloud. Do you have like an Elastic Beanstalk application container, is that App Engine? How do I get in there? I mean, there's some things that Amazon has. You might have some things. How do you talk to that? Beanstalk in particular. That's a great question. So Beanstalk provides the ability to deploy and run applications. The closest analogy is App Engine. So Beanstalk traditionally was a Java-based platform where you could provide your Java code and it would run for you. App Engine gives you that equivalent capability. And with the new generation of App Engine, we're actually providing the ability to deploy directly into VMs. So it feels a lot like the Beanstalk experience, but it comes with a lot of other high value services. And so that's a natural starting point.
And App Engine itself is being re-based on a lot of the Kubernetes concepts so that you have this immediate, easy, accessible experience for code. But when you reach an edge and you want to actually integrate it naturally with a vertically scaled database that runs in a VM, we have Compute Engine waiting for you, and it will feel very natural to actually just integrate those two things together and snap together these more holistic solutions. One final question. I know you guys have a long track record with developers. Certainly Google's history in open source, everything's great. But other competitors, more commercial, IBM and Amazon, they're providing marketplaces for distribution where people can make some cash and some cabbage. What's the plan? Is there anything there? How do I make money if I'm a developer with Google? Or are there plans there? What's the latest? No, it's a great question. And obviously we have aspirations in that space. I can't go into all the details right now, but we are obviously investing in that area. And one of the things that we really like, though, is looking at containers as a standard distribution framework that you plug into everyone's marketplaces. So one of the things that I see around marketplaces historically is that they offer immediate value in connecting a producer and consumer of software, but they're not offering steady-state value. So once those two have been connected, the marketplace isn't adding significant ongoing value. And so when you think about what we want to do, we want to make sure that, one, we become a market maker and we let lots of different marketplaces emerge and support those. But then in our own efforts, we actually add legitimate value to both the producer and consumer of the software, and we're not just taking a cut off the top. But that'll all become much more clear in the months ahead. Craig, thanks for spending some time, and congratulations, great keynote.
Good to see you again. Thanks for jumping in and sharing the data here on theCUBE, really appreciate it. We are live here in Silicon Valley. It's theCUBE at OpenStack SV. Join the conversation, hashtag OSSV15. We'll be right back after this short break.