Hi, from Copenhagen, Denmark, it's theCUBE. Covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. Well, welcome back. This is theCUBE's exclusive coverage of the Linux Foundation's Cloud Native Computing Foundation, KubeCon 2018 in Europe. I'm John Furrier, co-host of theCUBE, and we're here with two Google folks: JD Velasquez, who's the product manager for Stackdriver, and we've got some news on that we're going to cover, and David Aronchick, who's the co-founder of Kubeflow, also with Google, with news here on that. Guys, welcome to theCUBE. Thanks for coming on. Thank you very much. So we've got Google Next coming up. theCUBE will be there this summer, looking forward to digging into all the enterprise traction you guys have, and we had some good briefings at Google, a ton of movement on the cloud for Google. So congratulations. Thank you. Open source is not new to Google. This is a big show for you guys. What's the focus? You've got some news on Stackdriver and Kubeflow. Kubeflow, not CUBE flow, that's our flow. David, share some of the news and then we'll get into Stackdriver. Absolutely. So Kubeflow is a brand new project. We launched it in December, and it is basically how to make machine learning stacks easy to use, deploy, and maintain on Kubernetes. So we're not launching anything new. We support TensorFlow and PyTorch, Caffe, all the tools that you're familiar with today, but we use all the native APIs and constructs that Kubernetes provides to make it very easy, and to let data scientists and researchers focus on what they do great and let the IT ops people deploy and manage these stacks. So simplifying the interactions and cross-functionality of the apps using Kubernetes. Exactly. When you go and talk to any researcher or data scientist out there, what you'll find is that while the model, TensorFlow or PyTorch or whatever, gets a little bit of the attention,
95% of the time is spent in all the other elements of the pipeline: transforming your data, ingesting it, experimenting, visualizing, and then rolling it out to production. What we want to do with Kubeflow is give everyone a standard way to interact with all those components and give them a great workflow for doing so. That's great. And the Stackdriver news, what's the news we've got going on? We're excited. We just announced the beta release of Stackdriver Kubernetes monitoring, which provides very rich and comprehensive observability for Kubernetes. So this is essentially simplifying operations for developers and operators. It's a very cool solution. It integrates many signals across your Kubernetes environment, including metrics, logs, events, as well as metadata. And so it allows you to really inspect your Kubernetes environment, regardless of your role and regardless of where your deployment is running. I mean, David's bringing up the use cases, and my mind's exploding, because you think about what TensorFlow is to a developer and all the goodness that's going on at the app layer. The monitoring and the instrumentation is a critical piece, because what Kubernetes is going to bring people is thousands and thousands of new services. So how do you instrument that? I mean, you've got to know. I'm going to provision a service dynamically that didn't exist. How do you measure that? I mean, is this the challenge you guys are trying to figure out here? Yeah, for sure, John. And the great thing here is that at Google, many of our SRE practices go beyond monitoring. It really is about observability, which I would describe more as a property of a system: how are you able to collect all these many signals to help you diagnose a production failure and to get information about usage and so forth. So we do all of that for you in your Kubernetes environment, right?
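As a rough sketch of the kind of open, Prometheus-style instrumentation being described here (the annotation keys follow the common Prometheus scrape-hint convention; the app name, port, and image are made-up placeholders, not anything from the interview), a pod might advertise its metrics endpoint like this:

```yaml
# Hypothetical pod spec fragment. The prometheus.io/* annotations are the
# conventional scrape hints a Prometheus server can be configured to honor;
# everything named here is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api                 # illustrative workload name
  annotations:
    prometheus.io/scrape: "true"     # opt this pod in to scraping
    prometheus.io/port: "9102"       # assumed metrics port
    prometheus.io/path: "/metrics"   # standard metrics path
spec:
  containers:
  - name: payments-api
    image: example.com/payments-api:1.0   # placeholder image
    ports:
    - containerPort: 9102
```

Because the instrumentation stays in this open form, the same metrics can be scraped by a self-hosted Prometheus or picked up by a managed integration like the Stackdriver one described in the interview.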
We take that toil away from the developer or the operator. Now, a cool thing is that you can also instrument your application in open source. You can use Prometheus, and we have an integration for that. So anything you've done in a Prometheus instrumentation, you can now bring into the cloud as needed. Talk about this notion, because everyone goes, oh my God, Google's huge, but you guys are very open, you're integrating well. Talk about the guiding principles you guys have. When you think about Prometheus as an example, integrating with these other projects, how are you guys treating these other projects? What's the standard practice? API-based? Are there integration plans? How do you guys address that question? Yeah, at a high level, I would say at Google we really believe in contributing to and helping grow open communities. I think that the best way to keep a community open and portable is to help it grow. And Prometheus, particularly in Kubernetes, of course, is a very vibrant community in that sense. So from the start, we design our systems to be able to have integration via APIs and so on, but we also contribute directly to the projects. And just leveraging off that exact point, you know, we realize what the world looks like. There are literally zero customers out there that are like, well, I want to be all in on one cloud. You know, that $25 million data center I spent last year building, yeah, I'll toss that out so that I can get some special thing. The reality is people are multi-cloud. And the only way to solve any problem is with these very open standards that work wherever people are. And that's very much core to our philosophy. Well, I mean, I've been critical of multi-cloud by the definition. I mean, statistically, if I'm on Azure with Office 365, that's Azure. If I'm running something on Amazon, those are two clouds.
They're not multi-cloud by my definition, which brings up where this is going, which is latency and portability, which you guys are really behind. How are you guys looking at that? Because you mentioned observability. Let's talk about the observability of clouds, because that's what people are talking about. When are we going to get to the future state, which is, I need to have workload portability in real time? If I want to move something from Azure to AWS or Google Cloud, that would be cool. Can we do that today? That is actually the core of what we did around Kubeflow. What we are able to do is describe in code all the layers of your pipeline, all the steps of your pipeline, in a way that works on any conformant Kubernetes cluster. So you can have a conformant Kubernetes cluster on Azure, or on AWS, or on Google Cloud, or on your laptop, or in your private data center. That's great. And to be clear, I totally agree. I don't think having single workloads spread across clouds is realistic, because of all the things you identified: latency, variability, unknown failures. The CAP theorem is a thing, and it's well known. What people want to do is take advantage of different clouds for the offerings they provide. Maybe my data is here, maybe I have a legal reason. Maybe this particular cloud has a unique chip or unique service. Use cases can drive it. Exactly. And then I can take my workload, which has been described in code, and deploy it to the place where it makes sense, keeping it within a single cloud, but as an organization, I'll use multiple clouds together. Yeah, and the data is key, because if you can have data moving between clouds, I think that's something I would like to see, because the metadata you mentioned is a real critical piece of all these apps, whether it's instrumentation, logging, and/or provisioning new services.
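A minimal sketch of what "describing the pipeline in code" can look like with Kubeflow: the TFJob custom resource below follows the layout of Kubeflow's tf-operator CRD, but the API version, names, image, and replica counts are illustrative assumptions, not details from the interview.

```yaml
# Hypothetical TFJob manifest for a distributed TensorFlow training step.
# The kind and tfReplicaSpecs structure come from Kubeflow's tf-operator;
# every concrete value here is a placeholder.
apiVersion: kubeflow.org/v1alpha2
kind: TFJob
metadata:
  name: mnist-train                  # illustrative job name
spec:
  tfReplicaSpecs:
    Worker:                          # training workers
      replicas: 2
      template:
        spec:
          containers:
          - name: tensorflow
            image: example.com/mnist-train:latest  # placeholder image
    PS:                              # parameter server
      replicas: 1
      template:
        spec:
          containers:
          - name: tensorflow
            image: example.com/mnist-train:latest
```

Because this is plain Kubernetes API machinery, the same manifest can be applied to any conformant cluster, whether on a laptop, in a private data center, or on a managed cloud, which is the portability point being made above.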
Yeah, and as soon as you have that, as David is mentioning, if you have deployments on multiple public or private clouds, then the difficult part is the observability we were talking about before. Because now you're trying to stitch together data and tools to help you get those diagnostic signals when you need them. That's precisely what we're doing with Stackdriver Kubernetes monitoring. You know, it's early days in the cloud. Even though we're 10 years in, it still feels that way, and a lot of people are only now coming to realize cloud native. So I'm not a big fan of the whole "Amazon's winning" narrative, although they are doing quite well with the cloud, because they're a cloud. It's early days. And you guys are doing some really specific good things with the cloud, but you don't have the breadth of services, say, Amazon has. And you guys are above board about that. You're like, hey, we're not trying to match them service for service, but you're doing certain things really, really well. You mentioned SRE, Site Reliability Engineers. This is a scale best practice that you guys are bringing to the table, but yet customers are still learning about Kubernetes. Some people have never heard of it before. They say, hey, what's this Kubernetes thing? What is your perspective on the relevance of Kubernetes at this point in history? Because it really feels like a critical-mass, de facto standard movement, where everyone's getting behind Kubernetes for all the right reasons. It feels a lot like interoperability is here. Thoughts on Kubernetes' relevance? Well, I think that Alexis Richardson, the chairperson of the Technical Oversight Committee, summed it up great today. The reality is that what we're looking for, what operators and software engineers have been looking for forever, is clean lines between the various concerns.
And so as you think about the underlying infrastructure, and then you think about the applications that run on top of that, potentially services that run on top of that, then you think about how that shows up to end users. Before, if you're old like me, you remember that you'd buy a $50,000 machine, stick it in the corner, and you'd stack everything on there, right? That never works, right? The power supply goes out, the memory goes out, this particular database goes out. Failure will happen. The only way to actually build a system that is reliable, that can meet your business needs, is by adopting something more cloud native, where if any particular component fails, your system can recover, and if you have business requirements that change, you can move very quickly and adapt. Kubernetes provides a rich, portable, common set of APIs that do work everywhere. And as a result, you're starting to see a lot of adoption, because it gives people that opportunity. But I think, and let me hand off to JD here, the next layer up is about observability, because without observing what's going on in each of those stacks, you're not going to have any kind of reliability. Well, programmability comes behind it, to your point. Talk about that, that's a huge point. Yeah, and just to build on what David is saying, the thing that is unique about Google is that for more than a decade now, we've been very good at providing innovative services without compromising reliability, right? And so in that commitment, and you see that with Kubernetes and Istio, we're externalizing many of our opinionated infrastructure pieces and platforms. But it's not just the platforms; you need the methodologies and best practices, and now the toolset. So that's what we're doing now, precisely. This is a commitment to externalizing.
And you guys are making great strides, just to point that out to the folks watching in the enterprise. I know you've got a lot more work to do, and you're pedaling as fast as you can. I want to ask you specifically around this because, again, we're still early days in the cloud. When you think about it, there are now table stakes that you've got to get done. Check boxes, if you will. Certainly on the government side there are compliance issues, and you guys are now checking those boxes. What is the key thing? Because you guys are operating at a scale that I think enterprises can't even fathom. I mean, millions of services going on at huge scale. That's going to be helpful for them down the road, no doubt about it. But today, what are the Google table stakes that are done, and what table stakes do enterprises need to have to do cloud native right, from your perspective? Well, I think more than anything, I agree with you. The reality is all the hyperscale cloud providers have the same table stakes; all the check boxes are checked, we're ready to go. I think what will really differentiate and move the ball forward for so many people is this adoption of cloud native. And really, how cloud native is your cloud, right? How much do you need to spin up an entire SRE team like Netflix in order to operate in the Netflix model of complete automation and building your own services and things like that? Does your cloud help you get cloud native? And I think that's where we really want to lean in. It's not about IaaS anymore. It's about whether your cloud supports the reliability, supports the distribution, all the various services, in order to help you move even faster and achieve higher velocity. And standing that up is critical, because now these applications are the business model of companies when you talk about digital. So I tweeted something yesterday, and I want to get your reaction to it, a quote I overheard from a person here in the hallways.
I need to get away from VPNs and firewalls. I need user application layer security with un-phishable access, otherwise I'm never safe. Again, this speaks to the perimeterless cloud. Spear phishing is really hot right now. You've got people getting killed by security concerns. So if I'm an enterprise, I'm going to stop. I'm going to say, hold on, you know, I'm going to proceed with caution. What are you guys doing to take away that fear? And on the reality that you provision and stand up all these infrastructure services for customers, what are you guys doing to prevent phishing attacks from happening, to address security concerns? What's the Google story? So I think that more than anything, what we're trying to do is exactly what JD just said, which is externalize all the practices that we have. So for example, you know, at Google we have all sorts of internal tools and internal practices that we've used. For example, we just published a white paper about our security practices, where you need to have two vulnerabilities in order to break out of any system. We have all that written up there. We just published a white paper about encryption, and how to do encryption by default, encryption between machines, and so on. But I think what we're really doing is helping people operate like Google without having to spin up an entire SRE team as big as Google's to do it. An example: internally, we have something called BeyondCorp. It's a non-firewall, non-VPN-based way to authenticate against any Google system, using two-factor authentication for our internal employees. Externally, we just released it. It's called Identity-Aware Proxy. You can use it with literally any service that you have. You can provision a domain name. You can integrate with OAuth, including Google OAuth or your own private OAuth, all those various things. It's simply a service that we offer.
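As a sketch of how a service might be put behind the Identity-Aware Proxy on GKE: the BackendConfig resource shown below is a GKE-specific CRD that references an OAuth client stored in a Kubernetes secret. The API version, names, and secret are assumptions for illustration, not details given in the interview.

```yaml
# Hypothetical GKE BackendConfig enabling Cloud IAP in front of a backend
# service; requests must pass Google sign-in before reaching the pods.
# All names here are placeholders.
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: iap-config                   # illustrative name
spec:
  iap:
    enabled: true                    # turn on Identity-Aware Proxy
    oauthclientCredentials:
      secretName: my-oauth-secret    # assumed secret holding the OAuth client ID/secret
```

A Service exposed through a GKE ingress would then reference this config via a backend-config annotation, so the proxy, rather than a VPN or firewall perimeter, gates access to the application.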
And so really, you know, I think that... And there's also more than two-factor coming down the road, right? Exactly. Well, actually Identity-Aware Proxy already supports two-factor. But I will say, one of the things that I always tell people is a lot of enterprises say exactly what you said: geez, this new world looks very scary to me, I'm going to slow down. The problem is, they're under the mistaken impression that they're secure today. More than likely, they're not. They already have a firewall, they already have a VPN, and it's not great. In many ways, the enterprises that are going to win are the ones that lean in and move faster to the new world, where... Well, they have to, otherwise they're going to die. With IoT and all these benefits, they're exposed even as they are, just operationally, just to support it. Okay, I want to get your thoughts, guys, on Google's role here at the Linux Foundation CNCF KubeCon event. You guys do a lot of work in open source. You've got a great fan base. I'm a fan of what you guys do. Love the tech Google brings to the table. How do people get involved? What are you guys connecting with here? What's going on at the show? And how does someone get on board with the Google train? Certainly TensorFlow has been great, great open source goodness. Developers are loving it. What's going on? We have almost 200 people from Google here at the show, helping and connecting with people. We have a Google booth, which I invite people to stop by to talk about the different projects we have. Yeah, and exactly like you said, we have entire repos on GitHub. Anyone can jump in. All our things are open source and available for everyone to use, no matter where they are. Obviously, I've been on Kubernetes for a while. The Kubernetes project is on fire. TensorFlow is on fire. Kubeflow, that we mentioned earlier, is completely open source. We work a great deal with Prometheus, which is a CNCF project.
We are huge fans of these open source foundations, and we think that's the direction most software projects are heading. Well, congratulations. I know you guys invest a lot. I just want to highlight that. And again, to show my age, these younger generations have no idea how hard open source was in the early days. I call it the open bar of open source: you guys are bringing so much, everyone's drunk on all this goodness. Just the libraries you guys are bringing to the table. I mean, TensorFlow is the classic poster child example. You're bringing a lot of stuff to the table. I mean, you invented Kubernetes. There's so much good stuff coming in. Yeah, I couldn't agree more. I hesitate to say we invented it; it really was a community effort, but yeah, absolutely. Well, you opened it up, and you did it right and did a good job. Congratulations. Thanks for coming on theCUBE. I'll see you at Google Next. theCUBE will be broadcasting live at Google Next in July. Of course, we'll do a big drill-down on Google Cloud Platform at that show. It's theCUBE here at KubeCon 2018 in Copenhagen, Denmark. More live coverage after this short break. Stay with us.