 Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. Okay, welcome back everyone. This is theCUBE live in Austin, Texas for our exclusive coverage of CloudNativeCon and KubeCon, with Kubernetes, not theCUBE. TheCUBE we're live, eight years running. I'm John Furrier, the founder of SiliconANGLE Media, with my colleague, Stu Miniman, and excited to have CUBE alumni and distinguished industry legend, Lew Tucker, Vice President and CTO of Cloud Computing at Cisco Systems. Welcome back to theCUBE. Great to see you. Great to be back, it's one of my favorite shows. Lew, we've had many conversations over the years and it's always great to have you on because you have a cutting-edge perspective but you have a historical view as well. You've seen many waves of innovation. And obviously you have intellectual property in the Computer History Museum. I mean, your resume goes on and on, but you've got to admire this community. Three years old. It was you, me, and JJ sitting around at OpenStack in Vancouver three and a half years ago, having a beer after the event one of those days. We were talking about Kubernetes and we were really riffing on orchestration and kind of shooting the arrow forward, kind of reading the tea leaves, and we were predicting interclouding, like internetworking, Cisco core competency, the notion of application developers wanting infrastructure as code. We didn't actually say microservices; we were kind of describing a world that would be microservices and this awesomeness that's going on with the cloud. What a- You were right. You were right. It wasn't me, it was the community. This is how communities operate. It is, it is. I think that what we're seeing, particularly in these open source communities, is you're getting the best ideas. 
And therefore a lot of people are looking at this futures space, and then we bring together the community, get the projects that we work together on, and that's how we move it forward. You've been a great leader in the community. I just want to give you some props for that. You deserve it. But more importantly is just the momentum going on right now. I want to get your take. You know, you're squinting through the growth, you're looking at the innovation, looking at the big picture, certainly from a Cisco perspective but also as an industry participant. Where's the action? I mean, obviously containers grew, that tide came in, a lot of boats floated up. We saw microservices, boom. And now Kubernetes is getting better and better, multiple versions. Some say commoditized, some would say more interoperable. Really, it's sort of the connective tissue for multicloud. Exactly right. Do you see the same thing? Where's the action? So cloud computing is going everywhere now. And so it's natural that one of the next phases of this is in the area of multicloud. The customers, they are in public cloud. They have private data centers where they want to run similar applications. They don't want to have completely different environments; they really want a consistent environment across which they can deploy applications. And that consistent environment also has to have security policies and authentication services and a lot of these things. And to really drive the innovation, what I find interesting is that the services that are coming now out of public cloud, whether it be in AI or serverless, event-driven kinds of programming models, enterprises want to connect into that. And so one of the things I think that leads to is that you're beginning to hear talk now, just beginning to hear it, about this project called Istio, which is a service mesh, because what it really allows- What's the project name? It's called Istio. I-S-T-I-O, dot IO. 
Everything is open source. It's a project contributed to by Google and IBM and Lyft, and now Cisco is getting involved in it as well. And what it really plays into is this world of multicloud. That now we can actually access services in the public cloud from your own private data center, or from applications running in a public cloud you can access services that are back in your data center. So it's really about this kind of application-level networking stack, which means that application developers can now offload all of that heavy work to a service mesh, and therefore that will accelerate application development. So it was interesting, I heard some talk about things like Envoy, edge and service proxies, and service proxies have been a nice tool to kind of cobble together old legacy stuff, but now you're seeing stuff go to the next level. A stat I heard in the keynote, I want to get your reaction to this because it kind of jumps out at me: Lyft had created a mesh over hundreds of thousands of services, over millions of transactions per second. Lyft, Uber's got some stuff on the monitoring side, Google's donating. This is high scale, large scale cloud guys who had to build their own stuff with open source, now contributing all this stuff back. This is the mesh you're talking about, correct? This is exactly right, yeah. Because what we're seeing is, we've talked about microservices, and Kubernetes is about orchestration of containers. And that has accelerated application development and deployment. But now each one of those services still has all of this networking stuff it has to deal with. They have to deal with load balancing. They have to deal with retries. They have to deal with authentication. So instead, what is happening now is we're recognizing these common patterns. This is what the community does: you see a common pattern, you abstract it, and you push that out into what are known as sidecars now. 
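To make the point concrete, here is a minimal, hypothetical sketch of the load-balancing and retry boilerplate each microservice had to carry itself before a sidecar-based mesh absorbed it. The class and function names are illustrative, not from any real mesh or the interview; `send` stands in for whatever RPC call the service makes.

```python
class MeshlessClient:
    """Illustrative client-side networking logic that a service mesh
    sidecar (e.g. Envoy) would otherwise handle for the application."""

    def __init__(self, endpoints, max_retries=3):
        self.endpoints = list(endpoints)
        self.max_retries = max_retries
        self._next = 0

    def pick_endpoint(self):
        # Simple round-robin load balancing across replicas.
        ep = self.endpoints[self._next % len(self.endpoints)]
        self._next += 1
        return ep

    def call(self, request, send):
        # Retry against successive replicas until one answers.
        last_err = None
        for _ in range(self.max_retries):
            ep = self.pick_endpoint()
            try:
                return send(ep, request)
            except ConnectionError as err:
                last_err = err  # that replica was down; try the next one
        raise last_err
```

With a mesh, this entire class disappears from the application: the sidecar proxy intercepts outbound traffic and applies the same balancing and retry policy, configured declaratively.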
So the application developer doesn't have to, the application doesn't get changed when you need to change things, like bring up a couple more services over here, or put this on a different cloud. The individual components now aren't affected by that, because all of that work has been offloaded into a service mesh. Bring us inside a little bit when we dig into that next level of networking, because you used to have the network administrator running around the data center doing everything from pulling cables to zoning and everything like that. Now it's multi-cloud, multi-service, everything's faster. The role of the architect, the person running it, automation, we don't have an hour, but give us a little bit about what it means to be a networking person these days. Well, it's interesting, because one of the things that we know application developers did not want to become is a network engineer. And yet to do a lot of what they had to do, they had to learn a lot of those skills, when instead they would rather set things up by policy. For example, they would like to be able to say, if I'm deploying version two of my application, it's a classic thing we talk about in this space, for the next version we want to just direct 5% of the traffic to it. Make sure it's okay before we turn over the whole thing. You should be able to do that at the application level, and through a service mesh that has built-in networking at the application level, the application guys can do it. Now the role of the network engineer is still the same. They have to provide the basic infrastructure to allow that to happen. And, for example, a lot of the infrastructure now is extending the cloud from public cloud, through the cloud VPN services they have, back into the data center. So Cisco, for example, is putting technologies that run at AWS and at Google and Azure that allow that to come back into the data center. 
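The 5% canary policy mentioned a moment ago boils down to weighted routing. A mesh like Istio expresses this declaratively in configuration; the tiny sketch below hand-rolls the same decision purely to illustrate the idea (the version labels and weight are made up for the example).

```python
def route(request, rng, canary_weight=0.05):
    # Send ~5% of traffic to the new version (v2) and the rest to the
    # stable version (v1). `rng` is a callable returning a float in
    # [0, 1); injecting it keeps the routing decision testable.
    return ("v2", request) if rng() < canary_weight else ("v1", request)
```

The point of the mesh is that application teams declare the weight as policy and the proxies enforce it, rather than every service embedding logic like this.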
So we can run Cisco virtual routers in the cloud, connected back up into the data center. So the standard networking policy that the network engineers really want to see enforced, they can be assured is enforced, and that's the layer on top of it. And that's decoupled from the application. This is what we've been talking about since 2010, our eighth year of theCUBE: infrastructure as code. This is what DevOps was all about and now it's evolving mainstream. Absolutely right. You really want infrastructure to be as boring as possible, and capable and secure, and now give a lot more control over to the application developer. And we also know that, I mean, right now it's really based largely on Kubernetes. That's a great example, but that will connect into virtual machines. It'll connect into legacy services. So all of this has to do with connecting all of those pieces that are today in an enterprise moving to a public cloud, and that transition doesn't happen wholesale. You move a couple over. One thing, I want you to look back. John talked about how we interviewed for a bunch of years at OpenStack. What's your take on the role of OpenStack today? Is there still a role for OpenStack? And how does that kind of compare and contrast to what we're doing here? Happy to answer, because I actually am on both boards. I'm on the CNCF board and I'm on the OpenStack board, and I have contributors on my teams to both efforts across the board. And I think that the role that we're seeing for OpenStack is, OpenStack is evolving also, and it's becoming more embracing, becoming about open infrastructure. And it's really about how you create these open infrastructure plays. So it is about virtual machines and containers and bare metal, and setting up for those services. So Kubernetes works just great on top of OpenStack. 
And so now people get to have a choice, because one of the hard things, I think, for mostly enterprise developers and everything else is that the pace of change is so fast. So how do they try out some of the newer technologies that can still be connected back into the existing legacy systems? And that's why I think the role for OpenStack is to make it so that with virtual machines, you can stand them up in there and then you can have the same virtual machines essentially running in the cloud. So virtual machines versus other approaches has come up as a trade-off. We heard in the keynote about the trade-off between speed and security. Security is super important. So I want to get your thoughts on how that plays out, because we've got pluggable architectures, another big theme we heard in the keynote, which essentially just means having a very focused, leverageable piece of code that can be connected into Kubernetes. But with VMs now, some are saying VMs are slow when you try to do security, but you want slow and boring when you need it, and you want speed and secure when you need it too. How do you get both out of that? Yeah, well, without being too geeky, a virtual machine is emulating an entire computer. And so it looks like a computer, so you're running your traditional applications on top of a virtual machine the same as they would run on what we call a bare metal machine. So therefore, that is by necessity much heavier. You bring around a whole operating system and things like that. Containers, there's a role for that too. There's absolutely a role for that. Now containers. But containers are really much more of an application packaging exercise, so that you can say, I'm going to run this application, I just want all its dependencies packaged up. I'll assume there's an operating system there. 
I'm going to count on the fact that there's a single operating system, so you can spin up containers much more lightweight, much more quickly. And now there are even things such as Kata Containers, which is now merging those technologies. The Clear Containers. Kata Containers came originally from Intel's Clear Containers. And now it's merging, because we're saying we want the security and the protection that you get with a virtual machine, tied into things like the VT-x instruction set in the hardware, so you can get that level of security assurance, but now you get the speed of containers. So I think we're continuing to see the whole community evolving in this direction of making things easier for application developers, faster to do it. They're increasing in scale. So management and orchestration, we talked about that three years ago, that that would be a big issue. And guess what? Of course it is. That's exactly what Kubernetes is. And the role of the data is going to be critical. And this is where a lot of people in the enterprise that we talk to love the story. They love the narrative, but they're hearing things that they've never heard before and they kind of slow down. So I'd like you to take a minute, Lew, and explain to the person watching, CIO, chief architect, network guy, whatever: what the hell is this Kubernetes hubbub about? What is Kubernetes from your perspective? How would you wrap that up and describe what it is and the impact to the customer? Yep, so formally it's orchestration of containers. So what that means is that when you're developing an application, if you want it to be resilient, you want several instances of that application running, and you want traffic to be load balanced across them. Kubernetes provides that level of orchestration, to make sure there are always, say, three running. If one fails, it can bring up another one, and it can do that completely automated. 
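That keep-three-running behavior is a reconciliation loop: compare the desired replica count with what is actually running and converge. The sketch below is a simplified illustration of the pattern, not Kubernetes code; `start` and `stop` stand in for whatever actually launches or tears down a container.

```python
def reconcile(desired, running, start, stop):
    """Converge the set of running replica ids toward `desired` count.
    `start()` launches a replica and returns its id; `stop(id)` tears
    one down. A real controller loops on this continuously."""
    running = set(running)
    while len(running) < desired:
        running.add(start())       # a replica is missing: bring one up
    while len(running) > desired:
        victim = next(iter(running))
        stop(victim)               # too many replicas: tear one down
        running.discard(victim)
    return running
```

Run this every time the observed state changes and you get the self-healing property described: kill a replica, and the next pass of the loop replaces it automatically.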
So it's a layer that really manages the deployment of containers. As an application developer, you still write your application, you package it up into a container, maybe a Docker container, and then you deploy it using Kubernetes. What was interesting, and I think this is what we recognized in this last year, is that Kubernetes has a very simple networking model, which is basically having a way to load balance across multiple containers and keep them running. If you have anything more complicated, with different services that you want to talk to from those containers that may be in different places in the universe, there was no mechanism for doing that and everybody was having to write their own. So again, that's where the idea of a service mesh comes in. That's where the mesh comes in. That's where the mesh. Hundreds and hundreds of services. Linkerd has been doing it for a while. Envoy. And Lyft and Uber, they had to do it because they had a massive explosion of devices. Right, right, exactly right. And so that's why taking the code from Lyft, Envoy, and adding a control plane to it, which is what Istio really is about, brings that out to everybody. So that sounds like an operating system to me, but I have one more question for you. You mentioned, as you describe Kubernetes, isn't that auto-scaling, if I'm familiar with AWS? Isn't that just auto-scaling, or is it auto-scaling for application instances? Or is auto-scaling defined differently? It does do the scaling part. It does the resiliency part, but it has a very simple model for that. And that's why you need to have others. But it's the beginning of that orchestration. Because it's at the container level, it has all those inherent properties. And it can make sure to keep those containers alive and well and manage the life cycle. And that's the difference. That's the real difference. Whereas the auto-scaling from Amazon as a service, 
it's purely a networking capability tied into bringing up new instances. So this is like auto-scaling on steroids? It is, but one of the differences also is that Kubernetes, and what we're doing here, is all open source. So you can run it anywhere. You don't get, I mean, a lot of people are very concerned about being locked in. It used to be you were locked into Oracle or into Microsoft or whatever, or Java on premise or something like that. Or a proprietary operating system. And now they have concerns about being locked into the services of the public cloud providers. And what we're seeing now with Kubernetes, and we're seeing in almost everything around here, is that by open sourcing it, the advantage is the enterprise can run the same technology inside, without being locked into a vendor, the same as they do in the public cloud. Lew, so we spent a bunch of time talking about multi-cloud. Some of the more interesting pieces are what's happening at the edge in IoT. We've heard Cisco talking about it for many years, networking of course being important. What's your take? What are you working on with regard to that these days? There are a couple of new trends that we've been watching. IoT is actually now really getting realized, I think, because it is pushing a lot of the computing out to the edge, whether it be in cell phone towers or base stations, retail stores, those kinds of edges. At the same time, we're seeing in multi-cloud that we want the big services. If I want to use a machine learning service, I want to use it up in the cloud, and I need to now connect it back to those devices. So multi-cloud is really about addressing how you develop applications that run across multiple environments: in the cloud, on the edge, in an IoT device. There's also, I think you've probably been hearing, serverless and function as a service. These are, again, a lighter weight way to have kind of an event-driven model. 
So that if you have an IoT device and it just causes an event, you want to be able to spawn essentially a service in the cloud that only runs to process that one event, and then it goes away. So you're not paying to run instances of virtual machines or whatever, sitting there waiting for some event. You get a trigger, and you only pay, so it has this micro-billing capability as a part of it, so that you use only the resources you need. We've finally realized the promise that we always had in cloud computing, which is that you pay for only what you need, for what you use. And so this is another way to do that. Lew, it's great to have you on theCUBE again. Good to see you. Great to get the update. I'd like to ask you one more final question to end the segment here. You always have your ear to the ground, reading the tea leaves. You have a unique skill to understand the tech at the root level. What's coming next? I mean, if we go back, we'd have these nice conversations, we'll be riffing on what's kind of coming out in the next two, three years. It's clear that there are some visionaries out there, so I've got to ask you, what's going to be hot? What do you see emerging? I mean, as we saw with Kubernetes, as discussed, we couldn't have predicted this. I mean, I couldn't have. I knew it was going to be hot. I knew it was going to be big, but not this big, changing the industry. What do you see out there? I mean, what would be the conversation where you say, you know, we're going to watch this. This is going to be a value creation opportunity. Enabling technology is going to make a lot of things flow nicely. What kind of tech should we watch? Well, it may be a trite answer, because I think a lot of people are seeing the same thing, but we're actually laying the groundwork here when we talk about multi-cloud: things that are distributed across multiple environments, accessing different services. 
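The pay-per-event economics just described can be sketched in a few lines. This is a made-up illustration of the billing model, not any cloud provider's actual pricing or API; the price constant and function names are assumptions for the example.

```python
def run_function(events, handler, price_per_invocation=0.0000002):
    """Illustrative function-as-a-service model: the handler 'exists'
    only while an event is being processed, and cost accrues per
    invocation rather than per idle instance-hour."""
    results = [handler(e) for e in events]     # one short-lived run per event
    bill = price_per_invocation * len(events)  # micro-billing: pay per trigger
    return results, bill
```

Contrast with the VM model: a virtual machine waiting for those same events would bill for every idle hour, whether or not an event ever arrived.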
I'm still a big believer that it's going to be in the strength of those services, whether they be speech translation services, whether they be recommendation engines, whether they be big data services. Access to those services is what's going to be important. And, you know, three or four years from now we're going to be talking about the intelligence. Without a lot of heavy lifting to integrate. Yes, that's exactly the point. We want it so that somebody can almost visually wire up these things and take advantage of tremendously powerful machine learning algorithms. They don't want to have to hire machine learning experts to do it. They want to use that as a service. Slinging APIs, slinging services, wiring things up. Sounds like an operating system to me. It's always an operating system at the end of the day. Lew Tucker, vice president and CTO of cloud computing at Cisco Systems, industry legend, on the board of the CNCF, the fastest growing organization, where projects equal products equal profits. And of course, the OpenStack board. Lew, thanks for coming on theCUBE. I'm John Furrier with Stu Miniman. Back here live in Austin for more live coverage of CloudNativeCon and KubeCon after this short break. Thank you.