Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and welcome to TFiR Newsroom. Today we have with us Varun Talwar, co-founder of Tetrate. Varun, welcome to the show.

Varun Talwar: Thanks. I'm happy to be here.

Swapnil: Today we are going to talk about Istio, which has just graduated at the CNCF. We will, of course, talk about what it means for a project like Istio to graduate, but before that, I would like to talk a bit about your own history with the project.

Varun: Today I'm the co-founder of Tetrate, which I started in March of 2018 with my co-founder JJ. Prior to that I had a long stint at Google, the last five years of which were in Google Cloud. I was initially the product manager for gRPC, another CNCF project I was responsible for. gRPC is a modern RPC fabric, and it's now a very well-adopted API project in the same foundation. The idea of Istio really came about from there. While we were talking to companies about how they were adopting microservices, how different development teams were building their services in the languages and stacks of their choice, we saw how hard it was becoming for them to troubleshoot their services, and how the networking between them was becoming an issue in terms of reliability. The idea of Istio came out of those conversations, and I was the founding product manager for Istio at Google. We conceptualized the idea from the get-go. I was also responsible for bringing in Envoy, another CNCF project that people may be aware of, as the data plane, or the proxy, within the Istio project, and for getting the rest of the partners around it, namely Lyft, through Envoy, but also IBM, as the founding partners in the Istio project, which was eventually launched in May of 2017 at GlueCon.
Swapnil: When we look at service mesh, there are a lot of projects, and we know a bit of the history around the move to the CNCF. The space is not that crowded, but there are a number of service mesh projects, and service mesh itself has evolved ever since it came into existence. So first of all, could you explain to users the role of service mesh in today's cloud native, Kubernetes-native world?

Varun: Yeah. The reason it's getting popular, I think, is that the problems it solves are very relevant today as you adopt more and more of a distributed, services-based architecture. The problems are very simple in a sense: once you adopt a distributed, services-based architecture, the networking becomes harder. What becomes harder is who is responsible for failures in that networking. If you and I are developing two microservices, initially we were in one code base; now we are in two separate code bases living in two different places. Who is responsible when the network fails in between? Who is going to make sure it's always reliable between us, now that a network sits in between? How do we make sure no one intercepts requests, and that everything is encrypted going back and forth, so that we don't have man-in-the-middle kinds of attacks? And then, when something goes wrong, say an end user sees latency and we are troubleshooting: is it my service? Is it your service? Is it the network in between? Or is it the underlying compute? That kind of thing becomes harder.
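The encryption point Varun raises is what Istio's mutual TLS addresses: the mesh encrypts and authenticates service-to-service traffic without application changes. A minimal sketch of the mesh-wide policy (resource names follow Istio's documented defaults):

```yaml
# Require mutual TLS for all workloads by applying a PeerAuthentication
# policy in the mesh's root namespace (istio-system by default).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject plaintext; the sidecars handle the TLS handshake
```

Narrower PeerAuthentication policies in individual namespaces or on individual workloads can override this mesh-wide default during a gradual rollout.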
So those problems are very real, and when you take what I was just explaining with two services to hundreds or thousands of services across multiple teams, the problem just becomes elevated. It is quite hard for every service owner to encode all this cross-cutting logic into their own microservice. For each of us to embed reliability logic, monitoring logic, TLS logic, security logic, and retry logic into each of our microservices is expensive to write and expensive to maintain. So a dedicated piece of infrastructure that can abstract this out, so that developers don't have to write it and organizations have a common way to control it via configuration, is very promising. That's the conceptual idea behind why everyone likes it, and the more people move toward multi-cloud and microservices architectures, which is where the world is going, the more relevant this becomes. Istio started off with a proof point, proving out what it can do in one Kubernetes cluster, and the space has now evolved into enabling that across an entire fleet of infrastructure. That's frankly why I got motivated to start Tetrate as well.

Swapnil: When you look at projects like Istio, which were already being used in production, what does graduation mean for the project and for the community? By community I mean the folks who are involved with maintenance and also the whole ecosystem where folks are consuming it.

Varun: It's a very important signal for end customers in terms of giving them comfort that this is mature. If you are an executive in a big bank or a big telco trying to make a decision on adopting a technology, you want to make sure that it's mature.
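The cross-cutting retry and timeout logic Varun describes as "controlled via configuration" looks roughly like this in Istio; a sketch with illustrative service names:

```yaml
# Retries and timeouts declared once in mesh configuration, instead of
# being re-implemented in every microservice's own code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                      # illustrative in-mesh service name
  http:
  - route:
    - destination:
        host: reviews
    timeout: 10s                 # overall per-request budget
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
```

The sidecar proxies enforce this for every caller of the service, so a platform team can tune reliability behavior without touching application code.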
It's proven, it is not a single-vendor-backed project, and it has a community of people around it: if you need support, or if you need to hire people around it, which you will if you're betting on it, then that ecosystem exists. So I think graduation is a signal that gives end users the comfort to adopt it: it's mature, the APIs are mature, and they can rely on it. The reason it graduated fast, as you mentioned, is that it already had a lot of adoption when it went into incubation, so it accelerated through the curve. It also has a wide variety of contributors around it; I don't know the exact number, but hundreds of companies contribute to it. And we at Tetrate, of course, have a business around it; we have been among the largest contributors when you look at Envoy and Istio combined over the last year or two, so as Tetrate we have heavily helped shape and influence both Envoy and Istio.

Swapnil: When you look at some of these open source projects, Kubernetes is a good example, they start by solving one specific problem. You have been involved with the project since its initial phase, but as adoption grows and folks run into different kinds of workloads, the scope of the project also grows. Can you talk about how the role and scope of service mesh, or Istio in this case, is expanding?

Varun: As I said before, Istio was a single-cluster solution when it launched. Within a cluster you can get a couple of aspects of security. You touched on security, which is primarily AuthN and AuthZ, as they are called, or authentication and authorization. Between any two microservices I can do authentication.
I can do authorization without having to write code for it. But when you look at organizations, especially large ones, obviously nobody has just one cluster; everybody has multiple, and the larger they are, the more likely they have many of them across different public or private cloud environments. And none of the services they build live in isolation; they all talk to each other. Some will be in the service mesh, some will not. Those are the real environments we try to solve for. The idea in the project was to give a notion of identity to a service, which was a new concept at the time, and is still something people find new to absorb: everybody is familiar with end-user identity, but service identity is a new concept for many people. The reason this is a meaningful difference for security is that service identity, as implemented by SPIFFE, which is what Istio uses underneath, is sort of the new IP address. In a world of Kubernetes and containers where things scale up and down, and a world of cloud where autoscalers scale things down to zero or up to whatever is needed, IP addresses are no longer a relevant noun for access control rules, for deciding who is allowed to talk to what. You really need a new layer and a new nomenclature there, and that's what the underlying identity fabric is about. So it's a meaningful shift in security. And then the idea of encrypting all the communication between all my environments is a very expensive exercise when, as I said before, developers try to do it via libraries. Doing it in an automagic fashion is an extremely valuable step forward for the security of
organizations. Then we take it to the next level in terms of access control: which service can access which other service, in which environment, in which region, in which cloud? All of these things are dynamic in a world where compute is dynamic, and modeling all of that is not easy. So it's a very meaningful step forward for the security of all the traffic, and that's what the project and the space care about. That's also why we are writing a lot of standards in this space. With NIST, we wrote what microservice security really means in SP 800-204, 800-204A, and 800-204B last year. We also just wrote with NIST, as Tetrate, the new zero trust security standard, SP 800-207A, which is the reference standard for zero trust environments. So we're trying to educate both industry and community about this meaningful shift in security.

Swapnil: Can you talk about how Tetrate is helping lower the barrier to entry, so organizations can easily embrace and adopt these service mesh tools without having to compromise? We also know that teams are getting smaller, and not everybody has expertise in everything. So talk about how you are lowering the barrier to entry and making it easier for folks to deal with this complexity.

Varun: I think there are a few ways we are trying to help. One is that the complexity is real, because it's dealing with networking, but it doesn't have to be exposed to all the people in the organization. A few people on the platform team can deal with that complexity, while the rest of the application teams and operations teams don't have to learn it; they can just be consumers. Technically, we enable that by having higher-level abstractions, APIs, and interfaces that are more familiar to those users.
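The identity-instead-of-IP idea Varun describes shows up directly in Istio's access-control rules: policies reference SPIFFE identities, derived from Kubernetes service accounts, rather than addresses. A sketch with illustrative namespace and service-account names:

```yaml
# Istio mints each workload an identity such as
#   spiffe://cluster.local/ns/frontend/sa/frontend
# Access control is written against that identity, not an IP address,
# so it survives autoscaling, rescheduling, and scale-to-zero.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-callers
  namespace: payments            # protects workloads in this namespace
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
```

Because the identity is carried in the mTLS certificate, the same rule holds no matter which node, pod, or IP the workload lands on.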
For example, application teams are used to API specs and OpenAPI specs. If they can just stay in their familiar land and define their intent of what they want to do with their APIs, and the rest is handled by the complexity of Istio under the hood, then they don't need to see it. That's one example of how you hide the complexity from the majority of consumers. The second is the same thing for operations teams: if they just need to troubleshoot issues, whether it's the service, the network, or the compute, as I mentioned, then they don't need to learn all the nitty-gritty of the underlying Istio. The third thing where we help is that we generally recommend companies go one use case at a time, and one set of applications at a time, which makes it a much more grokkable, scoped exercise, and then they gain confidence with the technology. In many cases we recommend: just start with an ingress gateway and don't disrupt all your other microservices. That's an easier, less intrusive start. So there are ways and means in which you can approach the problem such that it's a lot more consumable.

Swapnil: One project that we talk about in association with the Istio service mesh is, of course, Envoy, the proxy there. Talk a bit about what it is, and how you folks are involved with it and using it.

Varun: One way to think of it is as the core engine inside, where all the traffic flows through. If you look at Istio, it's broadly two parts: Envoy, the data plane, which is where all the bits and bytes flow through, and istiod, the control plane, which programs all of the Envoys in the data plane. That's the broad architecture. Envoy is itself a very popular CNCF project.
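The "start with an ingress gateway" path Varun recommends only touches traffic at the edge; a minimal sketch (hostnames and backend names are illustrative):

```yaml
# Expose a single service through Istio's ingress gateway without
# injecting sidecars into, or otherwise disrupting, other workloads.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts: ["shop.example.com"]
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: shop
spec:
  hosts: ["shop.example.com"]
  gateways: ["public-gateway"]
  http:
  - route:
    - destination:
        host: shop               # illustrative backend service
        port:
          number: 8080
```

Once the team is comfortable operating the gateway, sidecars and mesh policies can be rolled out namespace by namespace.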
It's been there since before Istio. We decided to use it in Istio because of its modern code base, its open nature, and its support for all the modern protocols and APIs. We at Tetrate have been a heavy contributor to Envoy; we have been among the top three contributors to Envoy for many years, since we started the company. We are using it as the data plane in all of our product offerings. We also have courses and certifications around it in our Tetrate Academy, where people can learn what this technology is about; they are fully self-serve virtual courses, and thousands of people have taken them. Now we are also extending Envoy to become, on its own, a built-in API gateway and load balancer for Kubernetes, because what we found is that when people were adopting Kubernetes, they needed something at the front door, and everyone in the community was building different versions of ingress controllers on top of Envoy. End customers were asking, should I choose X or Y or Z? There were too many of them. So about two years back we started an effort called Envoy Gateway, which is basically: let's just build this into Envoy so people don't have this confusion, and one thing upstream is always better maintained by a community of people. As a company we are now also advancing Envoy Gateway, and really that's advancing the direction of Envoy.

Swapnil: What kind of future do you see for Istio? What are the things that the community and you folks are working on?

Varun: I think Istio will continue to see growth, especially in Kubernetes environments. Among the things being worked on, there are efforts to look at sidecar-less models for places where sidecars add latency, resource cost, or management headache. But I think the future is going to look like a mixed mode in some places.
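The Envoy Gateway effort Varun mentions implements the standard Kubernetes Gateway API on top of Envoy, so routing is expressed in portable resources rather than a vendor-specific ingress controller. A sketch with illustrative names:

```yaml
# Standard Kubernetes Gateway API resources, reconciled by Envoy Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg           # GatewayClass installed by Envoy Gateway
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-route
spec:
  parentRefs:
  - name: eg                     # attach this route to the Gateway above
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /shop
    backendRefs:
    - name: shop                 # illustrative backend Service
      port: 8080
```

Because the route is plain Gateway API, the same manifest should work with any conformant implementation, which is exactly the "base standard" benefit discussed below in the conversation.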
So sidecars are here to stay, in my view, and we will see a mixed mode: somewhere there are sidecars, somewhere there are not, and I think that's a good directional advancement for the project. Obviously you get some trade-offs in security and so on when you choose one or the other, but I think companies and end users should have the choice. The other area for advancement is standardizing on the Gateway API. As you know, Kubernetes has been working on the next ingress spec, which is the Gateway API, and both Envoy and Istio are working toward conforming to it. That's a good thing, because if open projects can share a base standard, then vendor products can build on top of that base. The remaining areas are performance and scalability, as it gets used in larger and larger environments: fine-tuning for even better scale, and performance at scale. I think that's just the natural evolution as technologies go broader.

Swapnil: Varun, thank you so much for taking the time out today and, of course, talking about Istio, service mesh, and related projects. I would love to chat with you again.

Varun: Thank you, Swapnil. Happy to be here.