Live from Boston, Massachusetts, it's theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support.

Hi, welcome back. I'm Stu Miniman, joined by my co-host John Troyer, and happy to welcome back to the program Brian Stevens, who's the CTO of Google Cloud. Brian, thanks for joining us.

Glad to, it's been a few years.

All right, I want to bounce something off you. We always talk about open source. You worked in the past for what is considered the most successful open source company at monetizing open source, which is Red Hat. We have posited at Wikibon that it's not only the companies that sell a product or solution that make money off open source; I've said if it wasn't for things like Linux and open source in general, we wouldn't have a company like Google. Do you agree with that? Look at the market cap of a Google: if we didn't have Linux and we didn't have open source, Google probably couldn't exist.

Yeah, I don't think any of the hyperscale cloud companies would exist without open source, and Linux, and Intel.

Right, I think it's a big part of the stack, absolutely. All right, you made a comment at the beginning about what it means to be an open source person working at Google. The joke we all used to make was that the rest of us are using what Google did 10 years ago; eventually it goes from that white paper all the way down to some product that was used internally, and then maybe it gets spun off. I mean, we wouldn't have Hadoop if it wasn't for Google. Just some of the amazing things that have come out of the people at Google. But what does it mean to be open source at Google and with Google?

Well, you get both, right?
Because I think that's the fun part: I don't think a week goes by where I don't get to discover something coming out of a research group somewhere. Now the latest is machine learning, or Spanner, because they learned how to do distributed time synchronization across geo data centers. Like, who does that, right? But Google has both the people and the desire and the ability to invest on the research side. And then you marry that innovation with everything that's happening in open source. It's really a perfect combination. And so instead of building these proprietary systems, it's all about how do we not just contribute to open source, but how do we actually build that interoperability framework, because you don't want cloud to be an island. You want it to be really integrated into developer tools, databases, infrastructure, et cetera. Right?

Yeah, and a lot of that sounds like it plays into the Kubernetes story: Kubernetes is a piece that allows some similarities between wherever you place your data.

Yeah, yeah, yeah.

And maybe give us a little bit more about how Google decides what's internal. I think about the Spanner program; there are some other open source pieces coming up that look like they read the white paper and are trying to do some pieces. You said less white papers, more code coming out of Google. What does that mean?

It's not that we'll do less white papers. White papers are great for research, and Google's definitely a research-strong, academically oriented company. It's just that you need to go further as well. And so that was what I was talking about, like with gRPC, and creating an Apache project for streaming analytics. That was the first time, I think, that Google's done that.
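The Spanner remark about "distributed time synchronization across geo data centers" refers to the idea behind TrueTime: clocks report an uncertainty interval rather than a single instant, and a transaction waits out that uncertainty before its commit timestamp becomes visible. A minimal sketch of that idea, not Google's actual API; the names and the fixed 7 ms uncertainty bound are illustrative assumptions:

```python
import time

# Sketch of a TrueTime-style clock: instead of one timestamp, the clock
# returns an uncertainty interval [earliest, latest]. A commit picks `latest`
# as its timestamp, then "commit-waits" until that moment has definitely
# passed, so timestamp order matches real-time order across machines.

CLOCK_UNCERTAINTY = 0.007  # assume clocks are synced to within +/- 7 ms

def now_interval():
    """Return (earliest, latest) bounds on the true current time."""
    t = time.time()
    return (t - CLOCK_UNCERTAINTY, t + CLOCK_UNCERTAINTY)

def commit(row):
    """Assign a commit timestamp and wait out the clock uncertainty.

    The write itself is elided; only the timestamp logic is shown.
    """
    _, latest = now_interval()
    commit_ts = latest
    # Commit wait: block until the earliest possible true time exceeds
    # commit_ts, so no later transaction can get an earlier timestamp.
    while now_interval()[0] < commit_ts:
        time.sleep(0.001)
    return commit_ts

ts1 = commit("row-a")
ts2 = commit("row-b")
assert ts2 > ts1  # later commits get strictly later timestamps
```

The cost of the approach is visible in the sketch: every commit pays roughly twice the clock uncertainty in latency, which is why tight time synchronization is the enabling trick.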
We've obviously been involved for years in the Linux kernel, compilers, et cetera. So I think it's more around what developers need, and where we can actually contribute. Because what you don't want, what we don't want, is that you're on-premises using one type of system, and then you move to Google Cloud and it feels like there's impedance. You're really trying to get rid of the impedance mismatch all the way across the stack. And one of the best ways you can do that is by contributing new systems designs. There's a little bit less of that happening in the analytics space now, though. I think the new ground for that is everything that's happening in machine learning, with TensorFlow, et cetera.

Yeah, absolutely. It was mentioned in the keynote this morning, all of the AI and ML. Google with TensorFlow, even Amazon themselves getting involved more with open source. You said you couldn't build the hyperscalers without it, but do they start with open source, do you see?

Well, I think that most people are running on a Linux backplane, right? It's a little bit different at Google because we've got an underlying provisioning system called Borg, and that just works. Some things work; don't change them. The areas where you really want to be open source first are areas that are under active evolution, because then you can actually join that movement of active evolution. Developer tools are kind of like that. Even machine learning: machine learning is super strategic to just about every company out there.
But what Google did by actually open sourcing TensorFlow is create a canvas for that community, and we talk about that here, for data scientists to collaborate. These are people that didn't do much in open source prior, but you've given them the ability to come out with the best ideas and to innovate in code.

I wanted to ask a little bit about the enterprise. We could all make jokes about enterprise-iness, you know, what everybody should have been doing 10 years ago and is finally getting to. But on the other hand, Red Hat is a very enterprise-focused company, and OpenStack is service-provider- and enterprise-focused. The criticism of Google Cloud has typically been: how does Google, as a company and as a culture and as a cloud, focus on the enterprise, especially bringing advanced topics like machine learning, which to a traditional IT person are a little foreign? So I'm interested in how you're viewing that.

How do we approach the needs of the enterprise and meet them where they are today, while giving them access to a whole set of services and tools that are actually going to take them into a business transformation stance? As a public cloud provider working with the enterprise, you end up having multiple conversations, right? One of your primary audiences is the IT team, so you have to earn trust and help them understand the tools and your strategy and your commitment to enterprise. And then you have CISOs, right? The CISO that's worried about everything, security and risk and compliance, which is a little bit different than the IT department.
And then with machine learning and some of the higher-level services, you're actually building solutions for lines of business. So you're not talking to the IT teams with machine learning, and you're not talking to the CISOs; you're really talking about business transformation, right? And if you're going into healthcare or into financial, it's a whole different team when you're talking about machine learning. So what happens is you've really got three segmented, sort of discrete conversations that happen at separate points in time. But all of them are enterprise focused, because they all have to marry together: even though there may be interesting machine learning, if you don't wrap that in an enterprise security model, in a way that IT can sustain and enable and deal with identity and all the other aspects, then you'll come up short.

Yeah, building on that, one of the critiques of OpenStack for years has been, it's tough. And one of the critiques of Google is, oh, Google builds stuff for Google engineers, and we're not Google engineers. Google's got the smartest people, and therefore we're not worthy to handle some of that. What's your response to that? How do you put those together?

I mean, of course Google is really smart, but there are smart people everywhere. I don't think that's it. I think the issue is, Google had to build it for themselves, right? They built it for search, and built it for apps, and built it for YouTube. And OpenStack's got a harder problem in a way, when you think about it, because they're building it for everybody, right? And that was the Red Hat model as well. It's not just about building it for Goldman Sachs. It's building it for every vertical. And so it's supposed to be hard, right?
It isn't just about building a technology stack and saying we're done, we're going to move on. This community has to make sure that it works across the industry. And that doesn't happen in six years, right? It takes a longer period of time to do that. It just means keeping your focus on it, and then you deal with all the use cases over time. And that's what getting to a unified, commoditized platform delivers.

Yeah, I love that. Absolutely. We tend to oversimplify things, and building from the ground up an infrastructure stack that can live in any data center is a big challenge, and similarly deploying it and training the world. I wrote an article years ago about how Amazon hyper-optimizes: they only have to build for one data center, it's theirs. At Google, you understand what set of applications you're going to be running; you build your applications, and the infrastructure supports them underneath. What are some of the big challenges you're working on, some of the meaty things that are exciting you in the technology space today?

In a way, it's similar, it's just that at least our stack is our own. But what happens is we then have to marry that into the operational environments, not just for a niche of customers, but for every enterprise segment that's out there. And what you end up realizing is that it becomes more of a competency challenge than a technology issue, because the public cloud is still really new, right? I mean, it's consolidating, but it's still relatively new when you start to think about these journeys that happen in the IT world. So a lot of it for us is really the technical enablement of customers that want to get to Google Cloud: how do you actually help them, right?
And so it's really a people-and-processes kind of conversation over how fast is your virtual machine.

One of the things I think is interesting about Google Cloud is the role of the SRE. Google invented that, wrote the book on it, literally, and is training others, has partnerships to help train others, with their SREs and the CRE program. So many of the people formerly known as sysadmins, in this new cloud world, some of them are architects, but some of them will end up being operators and SREs. How do you see the balance in this upskilling between the architecture, the traditional infrastructure capacities, and app dev versus operations? How important is operations in our new world?

It's everything. And what's funny is that if you do this code handoff, where the software developers build code and then hand it to a team to run and deploy, the developers never become great at building systems that can be operationally managed and maintained. And so I think the aha moment, as best I understand the SRE model at Google, is that until you can actually deliver code that can be maintained and reliable, the software developer owns that problem. The SRE organization only comes in at the point where that handoff happens, and they're software developers. They're every bit as skilled software developers as the engineers that are building the code. It's just that's the problem they want to solve, which I think is actually a harder problem than writing the code. Because when you think about it, for a public cloud it's: how do you actually make change, but keep the plane flying, and make sure that it works with everything in an ecosystem, at a period of time where you never really had a validation stage?
Because in the land of delivering ISV software, you always have the six-month, nine-month validation phase to bring in a new operating system or something else, and all the ecosystem tests around it. Cloud's harder. The magic of cloud is you don't have that window, right, but you still have to guarantee the same results. And one of the things that we did around that was take a page out of the SRE playbook: how does Google do it? And what we realized is that even though public cloud moves the layers up, enterprises still have the same issue, because they're deploying critical applications and workloads on top. How do they do that? How do they keep those workloads running? And what are their mechanisms for managing availability, service level objectives, shared-fate dashboards? And that's why we created the CRE team, right? Customer reliability engineering, which is a playbook off of SRE, but they work directly with end users. And that's part of the how do we help them get to Google Cloud: part of it is really understanding their application stacks and helping them build those operational procedures, so they become SREs, if you will.

Brian, if you look at OpenStack, it's really the infrastructure layer that it handles. When I think about Google Cloud, the area where you're strongest, and you're welcome to correct me, is really when we talk about data: how you use data, analytics, the leadership you're taking in the machine learning space. Is it okay for OpenStack to just handle those lower levels and let other projects sit on top of it? And I'm curious where Google Cloud sits in that.

I think that was a little bit of an aha moment for me. Even prior to Google, I did have a lens that it was all about infrastructure.
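The service level objectives the CRE conversation turns on come down to simple arithmetic: an availability SLO implies an "error budget" of allowed downtime that a team can spend and track. A minimal sketch of that bookkeeping, assuming a 30-day window; the function names and numbers are illustrative, not Google's:

```python
# Minimal error-budget arithmetic, as used in SRE/CRE-style SLO conversations.
# An availability SLO implies a budget of allowed downtime per window; a
# shared-fate dashboard would track how much of it a service has burned.

WINDOW_MINUTES = 30 * 24 * 60  # a 30-day rolling window

def error_budget_minutes(slo: float) -> float:
    """Minutes of downtime allowed per window for a given availability SLO."""
    return WINDOW_MINUTES * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo)
    return (budget - downtime_minutes) / budget

# A 99.9% ("three nines") SLO allows about 43.2 minutes of downtime in 30 days.
assert round(error_budget_minutes(0.999), 1) == 43.2
# After a 20-minute outage, a bit over half the budget remains.
assert 0.5 < budget_remaining(0.999, 20) < 0.6
```

The useful property is that the budget gives developers and operators a shared number: releases can keep shipping while budget remains, and slow down once it is spent.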
And I think the infrastructure is every bit as important as it ever was. But some of these services that don't exist in the on-premises world, that live in Google Cloud, are the ones that are transformative change. As opposed to just easing the operational burden or easing the security burden, it's some of these add-on services that really bring about business transformation. The reason people have been moving away from Hadoop, as an example, not entirely, is just because Hadoop's a batch-oriented application.

You can go to Spark, Flink, everything beyond that.

Sure, and also now, when you can get to real-time and streaming, it means you can have ingested data pipelines with data coming from multiple sources, and you can act on that data instantly. A lot of it certainly doesn't require that, but I think a lot of our customers' businesses do: the time-to-action really matters. And those are the types of services that, at least at scale, don't really exist anywhere else. And machine learning, the ability of our custom ASICs to support machine learning. But I don't think it's one versus the other. I think it's about how you allow enterprises to have both, right? And not have to choose between public cloud and on-premises, or between one set of services and another. Because if you ask them, the best thing they could have is actually marrying the two environments together, so they don't have, again, those impedance differences.

Yeah, and I think that's a great point. We've talked about OpenStack fitting into that hybrid or multi-cloud world a bunch. The challenge, I guess, is those really cool features that are game changers that I have in public cloud but can't do in my own data center. How do we bridge that?
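The batch-versus-streaming contrast Stevens draws can be sketched in a few lines. A toy illustration over an in-memory event list, not a real pipeline (a production system would use something like Beam, Spark, or Flink); all names and thresholds here are illustrative:

```python
from collections import deque

# Contrast between batch and streaming, in the spirit of the Hadoop vs
# real-time point above. Batch waits for the whole dataset before answering;
# streaming acts on each event as it arrives, here via a sliding-window
# average that could trigger action immediately.

def batch_average(events):
    """Batch style: one answer, only after the full dataset is collected."""
    return sum(events) / len(events)

def streaming_averages(events, window=3):
    """Streaming style: emit a rolling average after every event."""
    recent = deque(maxlen=window)
    for value in events:
        recent.append(value)
        yield sum(recent) / len(recent)  # actionable the moment it's computed

latencies = [10, 12, 11, 50, 52]

print(batch_average(latencies))            # one answer, after the fact
for avg in streaming_averages(latencies):  # an answer per event
    if avg > 20:
        print("alert: rolling average", avg)
```

In the batch version the spike at the end is invisible until the whole run is summarized; in the streaming version the rolling average crosses the threshold as soon as the fourth event lands, which is the "time-to-action" difference being described.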
We're starting to see the reach, or the APIs, that do that, but how do you see that playing out?

You don't have to bring them in, because if you think about the fabric of IT, Google's data center just becomes an extension of the data center that a large enterprise is already using anyway. So it's theirs, right? They aren't going to see the lines of distinction; only we, on the IT side, see that. As long as they have a consistent platform and can take advantage of those services, it doesn't mean the workload has to be portable and the services have to exist in both places. It's just a data center extension with some pretty compelling services.

I think back: Hadoop was, let me bring the compute to the data, because the data's big and can't be moved. Look at edge computing now. I'm not going to be able to move all that data from the edge; I don't have the networking connectivity. There are certain pieces which will come back to a core public cloud, but I wonder if you could comment on some of those edge pieces and how you see that fitting in. We've talked a little bit about it here at OpenStack, but for Google?

I think that's the evolution. We see it even for the edge of our own network. The edge of our network is in 173 countries and regions globally, and that edge of the network is full compute and caching. So even for us, we're looking at what sort of compute services you bring to the edge of the network, where low latency really matters and proximity matters. The easiest obvious examples are gaming, but there are other ones as well, in trading. But still, if you want to take advantage of that foundation, it shouldn't be one where you have to dive into the specificities of a single provider. You'd really want that abstraction layer across the edge, whether that's Docker and a defined set of APIs around data management and delivery and security.
That probably gives you that edge computing sort of shell. And then you can build or run that on Google's edge, or run that on a telco's edge. So I don't think it's really about whether it's centralized or at the edge; it's really, what's the architecture to deliver it?

All right, Brian, I want to give you the opportunity for a final word: things, either from OpenStack retrospectively or Google looking forward, that you'd like to leave our audience with.

Wow, closing remarks. You know, I think the continuity here is open source, right? And I know the backdrop of this is OpenStack, but it's really that open source is the accepted foundation and substrate for IT computing up the stack. So I think that's not changing; the faces may change, and what we call these projects may change. But that's the evolution, and I think there's really no turning back on that now.

Brian Stevens, always a pleasure to catch up with you. We'll be back with lots more coverage here with theCUBE. Thanks for watching.