Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. Hey, welcome back everyone. Live here in Austin, Texas, it's theCUBE's exclusive coverage of CloudNativeCon and KubeCon, the Kubernetes conference. I'm John Furrier, the co-founder of SiliconANGLE, with my co-host Stu Miniman, our analyst. Our next guest is Matt Klein, software engineer at Lyft. Ride-hailing service, car sharing, social network, great company, everyone knows, everyone loves Lyft. Thanks for coming on. Thanks very much for having me. All right, so you're a customer of all this technology. You guys built, and I think this is like the shining use case of our generation: entrepreneurs and techies build their own stuff because they can't get the product in the general market. You guys had large scale, the demand for the service. You had to go out and build your own with open source and all those tools. You had a problem, you had to solve it, you built it, used some open source, and then gave it back to open source and became part of the community, and everybody wins. You donated it back. This is the future. This is what it's going to be like. Great community work. What problem were you solving? Obviously Lyft, everyone knows it's hard. They see their car, a lot of real time going on. A lot of stuff happening. Yeah. The magic's happening behind the scenes. You had to build that. Talk about the problem you solved. Well, I think when people look at Lyft, like you were saying, they look at the app and the car, and I think many people think that it's a relatively simple thing. Like, how hard could it be to bring up your app and say I want a ride and get that car from here to there? But it turns out that it's really complicated. There's a lot of real time systems involved in actually finding what are all the cars that are near you and what's the fastest route and all of that stuff.
So I think what people don't realize is that Lyft is a very large real time system that at current scale operates at millions of requests per second and has a lot of different use cases around databases and caching and all those technologies. So Lyft was built on open source, as you say. And Lyft grew from what I think most companies do, which is a very simple monolithic stack. It started as a PHP application, a big user of MongoDB, and then some load balancer, and then, you know. That breaks. Well, no, but people do that because that's what's very quick to do. And I think what happened, like most companies that become very successful, is Lyft grew a lot, and like the few companies that can become very successful, it started to outgrow some of that basic software or the basic pieces that it was actually using. So as Lyft started to grow a lot, things just stopped working. So then we had to start fixing and building different things. Matt, scale's one of those things that gets talked about a lot. I mean, Lyft really does operate at a different scale. Maybe you could talk a little bit about what kind of things were breaking, and then what led to Envoy and why that happened. I mean, I think there's two different types of scale, and I think this is something that people don't talk about enough. There's scale in terms of things that people talk about, like data throughput or requests per second or stuff like that, but there's also people scale. So as organizations grow, we go from 10 developers to 50 developers to 100, where Lyft is now many hundreds of developers and we're continuing to grow, and what I think people don't talk about enough is the human scale. So we have a lot of people that are trying to edit code, and at a certain number of people, you can't all be editing on that same code base. So that's, I think, the biggest reason people start moving towards this microservice or service oriented architecture.
So you start splitting that apart to get people scale. People scale usually comes with requests per second scale and data scale and that kind of stuff, but these problems come hand in hand, where as you grow the number of people, you start going into microservices, and then suddenly you have actual scale problems. You know, the database is not working, or the network is not actually reliable. So from the Envoy perspective, Envoy is an open source proxy. We built it at Lyft, it's now part of CNCF, it's having tremendous uptake across the industry, which is fantastic. And the reason that we built Envoy is what we're seeing now in the industry is people moving towards polyglot architectures, so they're moving towards architectures with many different applications and many different languages. And it used to be that you could use Java and you could have one particular library that would do all of your networking and service discovery and load balancing, and now you might have six different languages. So how, as an organization, do you actually deal with that? And what we decided to do is build an out of process proxy, which allows people to build a lot of functionality into one place, around load balancing and service discovery and rate limiting and buffering and all of those kinds of things, and also, most importantly, observability: things like tracing and stats and logging that allowed us to actually understand what was going on in the network, so that when problems were happening, we could actually debug what was going on. And what we saw at Lyft about three years ago is we had started our microservices journey, but it had almost stopped, because what people found is they had started to build services because supposedly it was faster than the monolith, but then we would start having problems with tail latency and other things, and they didn't know how to debug it, so they didn't trust those services. And then at that point they say, not surprisingly, we're
just going to go back and we're going to build it back into the monolith. So we were almost in that situation where things were kind of in that split. So Matt, I have to think that's the natural path that led you to service mesh, and Istio specifically, with Lyft, Google, and IBM all working on that. Talk a little bit more about Istio; service mesh was really the buzz coming in. There's also some competing offerings out there, Conduit, a new one announced this week. Maybe give us the landscape, kind of where we are. Yeah, absolutely, yeah. So I think service mesh is incredible; look around this conference. I think there's 15 or more talks on service mesh between all of the Buoyant talks on Linkerd and Conduit and Istio and Envoy. It's super fantastic. And I think the reason that service mesh is so compelling to people is that we have these problems where people want to build in five or six languages. They have some common problems around load balancing and other types of things. And this is a great solution for offloading some of those problems into a common place. So the confusion that I see right now around the industry is that service mesh is really split into two pieces. It's split into the data plane, so the proxy, and the control plane. So the proxy is the thing that actually moves the bytes, moves the requests. And the control plane is the thing that actually tells all of the proxies what to do, tells them the topology, tells them all of the configurations, all of the settings. So the landscape right now is essentially that Envoy is a proxy. It's a data plane. Envoy has been built into a bunch of control planes. So Istio is a control plane; its reference proxy is Envoy. Other companies have shown that they can integrate with Istio; Linkerd has shown that, and NGINX has shown that. Buoyant just came out with a new combined control plane plus data plane service mesh called Conduit that was brand new a couple days ago.
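The data plane versus control plane split described here can be sketched in a few lines of Python. This is purely an illustrative toy, not Envoy's or Istio's actual API: in a real mesh, configuration reaches the proxies over something like Envoy's xDS protocol rather than direct method calls, and the class and method names below are invented for the sketch.

```python
class Proxy:
    """Data plane: forwards requests using whatever config it was last pushed."""

    def __init__(self):
        self.routes = {}  # route prefix -> upstream cluster name

    def apply_config(self, routes):
        # In a real mesh this update would arrive over an API like xDS.
        self.routes = dict(routes)

    def route(self, path):
        # Longest-prefix match: the per-request decision a proxy makes.
        best = max((p for p in self.routes if path.startswith(p)),
                   key=len, default=None)
        return self.routes.get(best)


class ControlPlane:
    """Control plane: knows the topology and pushes it to every proxy."""

    def __init__(self):
        self.proxies = []
        self.routes = {}

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.apply_config(self.routes)

    def update_route(self, prefix, cluster):
        self.routes[prefix] = cluster
        for p in self.proxies:  # broadcast the new topology to all proxies
            p.apply_config(self.routes)


cp = ControlPlane()
sidecar = Proxy()
cp.register(sidecar)
cp.update_route("/rides", "rides-service")
cp.update_route("/", "legacy-monolith")
print(sidecar.route("/rides/123"))  # -> rides-service
print(sidecar.route("/profile"))    # -> legacy-monolith
```

The point of the split is visible even in the toy: the proxy only does fast, local per-request work, while topology changes happen in one place and fan out to every proxy.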
And I think we're going to see other companies kind of get in there because this is a very popular paradigm. So having the competition is good. I think it's going to push everyone to be better. How do companies make sense of this? I mean, if I'm just a boring enterprise with complexity and legacy, I have a lot of stuff, maybe not that kind of scale in terms of transactions per second because they're not Lyft, but they still have a lot of stuff. They've got servers, they've got a data center, they've got some stuff in the cloud. They're trying to put this cloud native package together because the developer movement is clearly pushing the legacy old guard into cloud. So how does your stuff translate to the mainstream? How would you categorize it? What I counsel people is, and I think this is actually a problem that we have within the industry, that I think sometimes we push people towards complexity that they don't necessarily need yet. And I'm not saying that all of these cloud native technologies aren't great, right? I mean, people here are doing fantastic things. You know how to drive the car, so to speak. You know how to use the tech. Right, and I advise companies and organizations to use the technology and the complexity that they need. So I think that service mesh and microservices and tracing and a lot of the stuff that's being talked about at this conference are very important if you have the scale to have a service-oriented microservice architecture. And some enterprises, they're segmented enough where they may not actually need a full microservice real-time architecture. So I think the thing to actually decide is, number one, do you need a microservice architecture? And it's okay if you don't, that's just fine, right? Take the complexity that you need. If you do need a microservice architecture, then I think you're going to have a set of common problems around things like networking and databases and those types of things.
And then yes, you are probably going to need to bring in more complicated technologies to actually deal with that. But the key takeaway is that as you bring on more complexity, the complexity is a snowballing effect. More complexity yields more complexity. So Matt, it might be a little bit out of bounds for what we're talking about, but when I think about autonomous vehicles, that's going to put even more strain on kind of the distributed nature of the system, things that happen at the edge. Are we laying the groundwork at a conference like this? How's Lyft looking at this? For sure, and I mean, we're starting to obviously look into autonomous a lot. Obviously Uber's doing that a fair amount. And if you actually start looking at the sheer amount of data that is generated by these cars when they're actually moving around, I mean, it's terabytes and terabytes of data, you start thinking through the complexity of ingesting that data from the cars into a cloud and actually analyzing it and doing things with it, either offline or in real time. It's pretty incredible. So yes, I mean, I think that these are just more massive scale real time systems that require more data, more hard drives, more networks, and as you manage more things with more people, it becomes more complicated for sure. What are you doing inside Lyft, your job? I know you're involved in open source, but what are you coding specifically these days? What's your current assignment? Yeah, so I'm a software engineer at Lyft. I lead our networking team. So our networking team owns, obviously, all the stuff that we do with Envoy. We own our edge systems, so basically how internet traffic comes into Lyft, all of our service discovery systems, rate limiting, auth between services. We're increasingly owning all of our gRPC communications, so how people define their APIs, moving from a more polling based API to a more push based API.
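One of the concerns a networking team like this centralizes, rate limiting, comes down to a small core idea. Below is a minimal token-bucket sketch in Python; the `TokenBucket` class and its parameters are invented for illustration (Envoy's actual global rate limiting calls out to an external gRPC service), and real implementations also have to handle clocks, concurrency, and distributed state.

```python
class TokenBucket:
    """Allow bursts up to `burst` requests, refilling `rate_per_sec` tokens/second."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec     # tokens added per second
        self.capacity = burst        # maximum tokens the bucket can hold
        self.tokens = float(burst)   # start full, so an initial burst is allowed
        self.last = 0.0              # timestamp of the last refill

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0       # spend one token for this request
            return True
        return False                 # over the limit: reject


bucket = TokenBucket(rate_per_sec=2, burst=2)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])
# -> [True, True, False, True]: the burst covers the first two requests,
#    the third is rejected, and a second later tokens have refilled.
```

Keeping this logic in one out-of-process place, rather than reimplemented in each of six languages, is exactly the argument made above for the proxy approach.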
So our team essentially owns the end to end pipe from all of our back end services to the client. So that's APIs, analytics, stats, logging. Yeah, right, right to the app, on the phone. So that's my job. I also help a lot with general infrastructure architecture. So we're increasingly moving towards Kubernetes. That's a big thing that we're doing at Lyft. Like many companies of Lyft's kind of age range, we started on VMs in AWS and we used SaltStack, and it's the standard story for companies that are probably six or eight years old. Classic DevOps, Gen 1 DevOps. And now we're kind of trying to move into, as you say, the Gen 2 world, which is pretty fantastic. So this is becoming probably the most applicable conference for us, because we're obviously kind of doing a lot with service mesh and we're leading the way with Envoy. But as we integrate with technologies like Istio and increasingly use Kubernetes and all of the different related technologies, we are trying to kind of get rid of all of our bespoke stuff that many companies like Lyft had, and we're trying to get on that kind of general train. I mean, this is going to be written in the history books. You look at this time and then generations. I agree. This is going to define open source for a long, long time. I agree. Because when I say Gen 1, it kind of sounds pejorative, but it's not. It's really, you had to build your own. You couldn't just buy an Oracle database, because it probably wouldn't work for what you needed. So you build your own. Facebook did it, you guys are doing it. Why? Because you're badass, you had to. Well, otherwise, you know, no customers. Right, and I absolutely agree about that. I think we are in a very unique time right now, and I actually think that if you look out 10 years and you look at some of the services that are coming online, like Amazon just did Fargate, that whole container scheduling system, and Azure has one and I think Google has one.
But the idea there is that in 10 years time, people are really going to be writing business logic. They're going to insert that business logic. They may do PowerPoint slides. That would be nice. I mean, for me, PowerPoints are so easy. I'm not going to say that's coding, but that's the way it should be. I absolutely agree. And we'll keep moving towards that, but the way that's going to happen is more and more plumbing, if you will, will get built into these clouds so that people don't have to worry about all this stuff. But we're in this intermediate time where people are building these massive scale systems, and the pieces that they need are not necessarily there. I've been sitting in theCUBE now for multiple events all through this last year, and it's kind of crystallizing. We were talking with Kelsey Hightower about this yesterday: craft is coming back to programming. So you've got software engineering and you've got craftsmanship. And so there's real software engineering being done; it's engineering. Application development is going to go back to the old school of real craft. I mean, agile, all it did was create a treadmill of de-risking rapid builds at scale by listening to data and constantly iterating. But it kind of took the craft out of it. I agree. But it turned it into engineering. Now you have developers working on, say, business logic, or just building a healthcare app. That's just awesome software, right? Yeah, I mean, do you agree with this craft idea? I absolutely agree. And actually, what we say about Envoy, so kind of the catchword buzz phrase of Envoy, is to make the network transparent to applications. And I think most of what's happening in infrastructure right now is to get back to a time where application developers can focus on business logic and not have to worry about how some of this plumbing actually works. And what you see around the industry right now is it is just too painful for people to operate some of these large systems.
And I think we're heading in the right direction. All of the trends are there, but it's going to take a lot more time to actually make that happen. I mean, I remember when I was graduating college in the '80s, that sounds old, I'm dating myself, but the jobs were for software engineering. Right. I mean, that was what they called it. And now we're back to this, but DevOps brought it into the cloud, the systems kind of engineering, really at a large scale, because you've got to think about these things. Yeah, and I think what's also kind of interesting is that companies have moved towards this DevOps culture. We're expecting developers to operate their systems, to be on call for them. And I think that's fantastic. But what we're not doing as an industry is we're not actually teaching and helping people how to do this. So we have this expectation that people know how to be on call and know how to make dashboards and know how to do all this work, but they don't learn it in school, and then they come into organizations where we may not help them learn these skills. And every company has different cultures. Exactly. That complicates things. So I think we're also, as an industry, figuring out how to train people and how to help them actually do this in a way that makes sense. Well, fascinating conversation, Matt. Congratulations on all your success. Thank you. Obviously a big fan of Lyft; one of the board members gave a keynote. She's from Palo Alto, from Floodgate. Great investors, great fans of the company. Congratulations, great success story. And again, open source: this is the new playbook. Community, scale, contribution, innovation. theCUBE's doing its share here live in Austin, Texas, for KubeCon, the Kubernetes conference, and CloudNativeCon. I'm John Furrier, with Stu Miniman. We'll be back with more after this short break.