Hello, everybody. Welcome to another episode of Kubernetes by Example Insider, where we try to interview people who are actually doing the work in the community so that we can get a sense of what they're trying to accomplish. And by extension, if we talk to the actual people doing the work, hopefully we'll get a better sense of what's going to happen next, rather than relying on press releases and things like that. So today we'd like to welcome Idit Levine, who is the CEO and founder of Solo. But first, let me introduce Josh Wood real quick. As I often say with Red Hat people, titles and groups change so often that I like folks to introduce themselves. So Josh, if you'd introduce yourself, then we'll talk to Idit. Right on. Yeah. So hello, Landon and Idit. I'm Josh Wood. I'm a Principal Developer Advocate for OpenShift at Red Hat. And while it is true that many of our titles change rapidly, mine never has the whole time. So it's either a mark of stability or a certain stagnation in my career path. Either way, I'm happy. And I'm one of the guys who makes those press releases and things like that, so Idit can give us the real story instead of my stylized vision of the future. Exactly. So Idit, do you want to tell us a little bit about your background and why you're here? Yeah, sure. As you mentioned, I'm the founder and CEO of Solo. And Solo is a company that, honestly, is not that different from Red Hat: we're making open source projects simpler to consume, right? Similar in nature to what OpenShift and Red Hat are doing. But we are focusing more on networking. So the projects we're involved in are projects like Istio, which is a service mesh, or projects like Cilium, which is a CNI. That's the work we're doing.
And again, the idea is just to make it way more accessible for people, make sure that they will be able to consume it, work more on the user experience and how to really fit it to each organization, its structure and people, and honestly just make it easier to consume. So that's not that different from what you guys do, I guess. Yeah, yeah, totally. I mean, that's, in my opinion, really what a vendor brings to the table a lot of the time, right? How can I make it a little bit easier to consume for the typical enterprise? Or, you know, sometimes the big thing that Red Hat has always offered is indemnification, which can be harrowing for a lot of corporations otherwise. So it's really kind of important. We do like to start, though, with a little bit of background: what got you into open source to begin with? Yeah. So I've been doing open source for a long, long time. When I started, it was actually in the area of, a long time ago, Cloud Foundry and Mesos. That was exactly when Docker had just come out. I was at the first DockerCon, and I think that's what made me excited about open source. I did a lot of work with Docker — I know Solomon Hykes very, very well. And then basically, when the war, let's say, started between Kubernetes and Cloud Foundry and Mesos, I was really, really active in all those communities. We did a lot of stuff related to unikernels and other things, trying to push the boundaries. So yes, I basically started back then, and honestly, I love it, because I'm usually a person who's always looking at what's next, what's next, right?
And the good thing about that market is that you're not sitting in a closed room working by yourself and moving incrementally; we're working on it together, everywhere, right? And so I love it. As I say, I'm the person always asking, what's next, what's next? And part of Solo is having the ability to decide on, to create, the what's next, which is really, really exciting. Nice. So, asking a question of somebody who has been involved in all these communities: what feature of Cloud Foundry do you think is most missing from Kubernetes? Or do you think it's on its way there? Is there anything you really miss? I say that all the time about Subversion: I really wish Git did the submodule thing the way Subversion did — Subversion did a much nicer job. It doesn't mean I want to go back to Subversion by any stretch of the imagination, but I do miss that one piece. Yeah, I think what Cloud Foundry did really, really well is the user experience. I think this is still something that is missing a little bit from Kubernetes. Not sure why, honestly. I know Knative was an attempt to create something like that, but it's still not as great as Cloud Foundry was. But honestly, that's the only thing I would take from there. I think the whole back end of it, once you actually got into Cloud Foundry, was way, way more complicated in my opinion. I just thought it was a huge problem.
And I think also, in my opinion, as someone who has worked in open source quite a lot, one of the problems there was that the way the community operated was a little bit different from Kubernetes or any other open source project that I know. In order to commit something, you had to do this thing called a dojo, right? Which is, honestly, a huge barrier to being part of a community. They're not just going to take your pull request unless you went through this dojo. So I think what's beautiful in the Kubernetes ecosystem, or any other honestly, is the openness: whoever wants to help, please do, right? We want you to join. In Cloud Foundry it was a little bit different. It was more like, we will choose these people and we will have to do it our way, which I think honestly created a bit of an issue. That's such a hard balance there. It's a really interesting balance to me, and I'm interested, Idit, whether you think that difference in the governance and style of the communities around these projects might have something to do with the difference in user experience that is the end result of their outputs, right? In Kubernetes, there's a whole lot of openness, and a whole lot of folks can be chefs in a very large kitchen, and that has a lot of great outcomes. But, if you can't tell, I'm sort of hinting at an opinion I have here, and I wonder if you agree: in a way, the more closed nature of the Cloud Foundry ecosystem maybe led the end result for the user to be a little more focused, a little easier to digest. Do you see that effect? Of course, because I can tell you that I wanted to contribute and it was very hard for me, because I never passed the dojo, right? And I never went, just because that wasn't my job, right?
EMC wasn't sending me off to do the dojo. So that's very, very limiting, because I couldn't contribute code, and it's a shame that I couldn't influence the project. So I think that's a huge barrier. Also, if you think about it, let's be honest, right? A lot of the contribution going into an open source project usually comes from people whose focus at work is exactly that. Red Hat is paying the money, but those folks are dedicated to Kubernetes and they do whatever is needed there. That's where most of the contribution comes from. But as I said, a company like Solo also does this, right? We're basically contributing wherever we can. So the question is, thinking back to the experience with Cloud Foundry: if it were Solo right now, would I actually send my people for a week or two weeks — I think it was way more than a week; I wish it was a week, it was something like a month — of training somewhere on location? It had to be in San Francisco, it had to be with everybody; it's overwhelming. I'm just not sure that's something I would do as a startup, honestly. So it's really limiting: which companies can afford to focus on that? Only the big organizations. So you're losing a lot of good engineers in the end — engineers working at companies like Solo, right? The quick, innovative ones. So I think you're missing out. Yeah, definitely. So I know Solo is now more generally focused on networking at large, right? But as I recall, you started in the service mesh space.
And I'm kind of curious, what was it about the service mesh idea that was attractive to you, that made you say, hey, I should go and make this better? Yeah. So honestly, when I started the company and looked at the market — it very much depends on the timing, right? If I had started the company, I don't know, five years earlier, I probably would have done orchestration or something like that. So at the start of the company, I basically tried to figure out what the next problem people would have, right? Okay, so we already knew people were using Docker, or containers in general, and that everything was moving from monoliths to microservices. At that point it wasn't even clear in the market yet that Kubernetes would be the thing that won it all. So the question was, what would people's problem be? And to me, it was very simple. If you're taking one big binary and cutting it into pieces, somehow you need to reassemble them, right? Eventually it should look like one application. So I understood that the problem people would have is, first of all, connecting those services. Second, making sure that when you do it, it's done in a secure way, so no one can get in the middle, because now everything is over the wire. And the last one is observability, right? Honestly, there are so many replicas that when a request comes in, it's seriously like a murder mystery to figure out what's wrong. So when I looked at all of this, I said, okay, that's obviously the problem people will need to solve. When I looked around, the concept of a service mesh already existed. It came from the Buoyant folks — Linkerd, basically. But it wasn't well implemented, I'll be honest. That was Buoyant's first implementation of Linkerd.
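The three concerns Idit lists — connecting services, securing the traffic, and observability — are exactly the things application code ends up reimplementing when there is no mesh. A minimal sketch of that per-service boilerplate (all names here are illustrative, not any real Solo or Istio API) — the point of a sidecar proxy is to pull logic like this out of every service:

```python
import time

def call_with_policies(fn, retries=3, timeout_s=5.0, trace=None):
    """Apply retry, timeout, and tracing policy around a service call.

    Without a mesh, every microservice re-implements logic like this;
    a sidecar proxy moves it out of application code and makes the
    resulting timings visible to operators. Illustrative sketch only.
    """
    trace = trace if trace is not None else []
    last_err = None
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            result = fn()
            # Record a successful span: (status, attempt, latency).
            trace.append(("ok", attempt, time.monotonic() - start))
            return result
        except Exception as err:
            trace.append(("error", attempt, time.monotonic() - start))
            last_err = err
            if time.monotonic() - start > timeout_s:
                break  # give up once a single attempt exceeds the budget
    raise RuntimeError(f"all {retries} attempts failed") from last_err
```

Multiply this by every service-to-service edge in a system and the "murder mystery" framing becomes clear: without uniform policy and trace collection, each team does this differently or not at all.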
Honestly, in service mesh there is this concept of a sidecar, which hopefully will go away soon. But there it is. And in Buoyant's first Linkerd, we called it a "side bus" because it was so huge. So I looked at this and I said, well, that's not great. It's solving the right problem, right? It's focusing on the right features. They did good product design. But the implementation wasn't great. So that was the first option. The second option: Istio had just been announced, basically. And when I looked at it, again, there was a lot of stuff that I liked. It made a lot more sense. But there were also some decisions I questioned. For instance, the Mixer server thing, where every time a request comes in you need to do a round trip to a gRPC server. That didn't make any sense to me on the request path — on the request path, latency is extremely important. So I stepped back and said, well, that's interesting. That's probably going to be the better solution, but it will take a long time until we get there. So this is why we started first on the gateway, then moved to the mesh, then extended to the CNI, basically. But the vision didn't change, which is — the tagline, or whatever, is application networking, right? Everything your application needs in order to work, in terms of networking. Yeah, I mean, I think you raise a really good point, which is that what a lot of people don't realize is that as soon as you start getting into any kind of service-oriented architecture — whether we call it SOA, or microservices, or lots of other ones, CORBA, right — the challenge becomes: okay, now I've got all the little pieces. How do I put them back together again?
And so I think for the audience, a service mesh is a big part of that glue and how you bring it together. And the other point I really want to highlight, too, is that observability factor. I worked on a system that used COM over HTTP many, many years ago, and we built our own observability as well, because tracing across services is impossible unless you use those kinds of tools. So I think observability is key. Yeah. And Solo is growing like crazy right now. So we're getting a lot of people joining us from organizations like Spotify and a lot of others — AWS and others, right? And they all basically say the same thing in their first onboarding meeting: the reason they joined is because they tried to build this at those other companies, and if they'd had a service mesh, it would have made their lives way, way simpler. So honestly, it's a very good validation for us that we're doing something right. Yeah, yeah, totally. No, I strongly agree. I mean, before I left Red Hat, my primary focus was on service mesh stuff, because I'm also a big believer in services. But the trade-off for all those nice little services is that keeping track of them is very, very difficult. And if you go back to the SOAP/SOA days, they tried to do the same thing, except it was all very top-down. With microservices, it's all very bottom-up, so you can do it piecemeal, but you're still trying to solve the same problem at the end of the day. Yeah. So Josh, did you want to add to that, or should we move to the next thing?
Well, I think that issue — the service mesh as an answer to this proliferation of services, and a way of addressing that problem — leads me into my next question. I have this basic understanding of the concept behind service meshes and of what you're generally doing at solo.io. How do you connect that to the first bit we were talking about, user experience and developer experience? What are the real improvements in developer experience from a service mesh? Because, if I could phrase it as a joke — as a developer, a guy who talks to developers a lot — the way you could improve service mesh UX for me would be to make it disappear, right? So I want to hear: how does that happen? Yeah. No, it's exactly what you described. I think the reason there is a service mesh is, as you said, to say: you, the developer, focus on the business logic, and you let us — the SREs, the organization — come in with all those policies and make sure that it's secure, that it's observed, that it's everything you need. That's the purpose of a service mesh, right? Back in the day, I said it virtualizes all that away from the user. The whole idea is to take it away. But a few things. First of all, if you look at the way the APIs of those projects are designed, I don't think the persona — who is actually using it, who should know about it — is clear. It's really messy, right? Most of the time, in your organization, people still need to know about the mesh, maybe configure it, and who is in charge of what is very arbitrary. One of the things we built into our product is the understanding that the people writing the application are sometimes not the people configuring it.
And in each organization, by the way, it's different. We have a lot of customers whose users are very advanced; they're interested and they do it all themselves. But we have startups where, honestly, you don't have a choice: you write the code, you run it, you do everything, right? And there are people for whom it's totally abstracted — they don't even know there's a cluster behind this thing. So the question is, what kind of organization is it? The way we built the product, there is the concept of a workspace — and I know this concept is actually coming to the Kubernetes ecosystem right now, which is great. Look, again, why are we doing all of this? Why is there OpenShift, and why is there Kubernetes, and why are there VMs, all together? Eventually we're trying to do one thing, which is to carve off a piece of your infrastructure and delegate it to the application teams. That's all you want to do, right? And what you also need to do is say what they can and cannot do in that infrastructure. If this is a very, very advanced team, maybe you give them everything: the security and all the capability, right? Go for it. But there will be teams you don't trust as much, and maybe all you want to tell them is: you're only in charge of the retries and the timeout. That's all. So we built this into the product as the concept of a workspace. You're choosing clusters — it's multi-cluster — so which clusters you want and which namespaces you want in each cluster. We group that together, and we make sure that all of it is going to work, going to be secure, and so on. So that's pretty, pretty strong.
And that way, honestly, it fits every organization, because you can decide who can do what and at what level. You can decide whether they push the configuration to the local clusters — which usually I don't think you should, because you should be doing GitOps — or to the management cluster. But basically, all of this is built with a user experience such that the developer will know only what they need to know, right? You come with your own CRD, and the CRD is way simpler, way more friendly. So it's another layer we built on top of the service mesh, which makes it more accessible, but also handles configuration, multi-tenancy, and multi-cluster, which I think Istio today is honestly not very great at. It's the first time I've heard CRD and friendly in the same sentence, I will say. I mean, it's not that bad. I'm quite sure I call them friendly in the Operators book as often as I can get away with. But anyway, I don't want to take us too far off into the little details of this, but I am interested in something. You mentioned the workspaces effort in Kubernetes as sort of an augmentation of the namespace, which is this classical term in the industry for defining a virtualized space dedicated to a user, or a process, or a view of a file system on Plan 9. What is the difference between the namespace and the workspace — or, a better question, more specific to you, Idit: what does a workspace mean in your product? You just mentioned the disconnect, and Istio maybe not being designed for this kind of environment, and you're building on top of that. What does a workspace mean for a developer using it, in terms of configurability of my view of the world, how much I need to know about, and what else is virtualized into that workspace? Yeah.
So basically, as an admin, you see everything, right? Then you can create a workspace. A workspace, as I said, is a grouping of namespaces, potentially across different clusters; it doesn't have to be in the same cluster. Now, once you group it, you need to tell me, first of all: who are the users who can use it? Who are you delegating to? What can and can't they do? Where is the default configuration namespace — where should they push their configuration — and so on? And that's it. What it means is that there's an admin who can see everything, including a beautiful graph of everything, plus all the policies and everything attached to it. We have, for instance, GraphQL in the mesh — we built it into Istio, into Envoy — so you can see the policies, the schema, whatever: all the management you need. But as a workspace admin or workspace user, when you log in, you're only going to see your workspace. It's even pluggable, so you can decide whether to show them that there is a cluster or a service mesh at all. Potentially, they shouldn't even know about it, right? And then the other thing is the idea of a catalog. So, right now, maybe I trust my team, so I tell them: you're responsible for everything. Or I can say you're responsible only for retries and timeouts. But I can go even further. I can say: I don't trust you at all, you guys are crazy. All I'll give you is three options: you can do timeout 5, timeout 10, or timeout 15. That's all I trust you to do. And then as a user, it's basically a catalog — they can literally come and choose a policy. So that's number one. The next one, which I think is very interesting, is the feature of import and export. So let me give you an example. We have a big, huge customer.
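The catalog idea Idit describes — an admin publishing a fixed set of allowed policy values that workspace users pick from — can be sketched in a few lines. All names here are hypothetical, invented for illustration; this is not Solo's actual API:

```python
class PolicyCatalog:
    """Admin-curated set of allowed policy values for a workspace.

    Hypothetical sketch of the 'catalog' idea from the interview:
    users may only select from values the admin published,
    e.g. timeouts of 5, 10, or 15 seconds.
    """

    def __init__(self, allowed_timeouts_s):
        self.allowed_timeouts_s = frozenset(allowed_timeouts_s)

    def select_timeout(self, requested_s):
        # Reject anything the admin did not explicitly allow.
        if requested_s not in self.allowed_timeouts_s:
            raise PermissionError(
                f"timeout {requested_s}s not in catalog "
                f"{sorted(self.allowed_timeouts_s)}"
            )
        # In a real system this would become mesh configuration;
        # here we just return a descriptive record.
        return {"kind": "TimeoutPolicy", "seconds": requested_s}
```

The design point is that validation happens at the platform layer, so an untrusted team can self-serve within bounds instead of filing tickets with the mesh admins.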
It's a huge customer, right? Sixty data centers, the biggest deployment of Istio running in the world today. Thousands of instances of Istio. And that organization is built from a lot of teams, a lot of groups, lots of organizations they bought, right? Imagine the biggest organization you can — you can probably guess it. Anyway, they have one billing system. In all that organization, they have one billing system. So that's one workspace. But they want the other teams to actually consume it. So we have the concept of import and export: you can take a workspace and export a service, which is the billing service. And that's where we embed a developer portal. When you export it, the other teams can see it. They can click the tile, see the docs, request an API key or whatever else they're using, and basically onboard themselves inside the organization, right? And leverage that. So it honestly makes the experience really, really easy. And that's what a developer portal means, for instance — somehow you want to consume all those great services. So we did a lot of that kind of work, and I think it makes this really exciting for people. And you mentioned the developer, which is very important — the user is the developer. But honestly, we are not selling to the developer, right? We are selling to the SREs, the IT organization, the platform owner, the OpenShift owner, right? And when we sell to them, honestly, their user experience needs to be better too, because it's hard to manage. It's not an easy thing to do. For instance, today, if you want to consume Istio, you need to stay at N minus one, which basically means you need to upgrade your system every six months or so.
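The import/export flow Idit walks through — a billing team exports one service from its workspace, and other teams discover and import it via a portal — might be modeled like this. The class and method names are invented for illustration; Solo's real product expresses this through CRDs, not Python objects:

```python
class Workspace:
    """Toy model of workspace import/export as described in the
    interview: one team exports a service (e.g. billing) from its
    workspace, and other workspaces may import only what was exported.
    """

    def __init__(self, name):
        self.name = name
        self.services = {}   # services owned by this workspace
        self.exports = set() # service names made visible to others
        self.imported = {}   # services pulled in from other workspaces

    def add_service(self, svc_name, docs_url):
        self.services[svc_name] = {"docs": docs_url, "owner": self.name}

    def export(self, svc_name):
        # Only services the workspace actually owns can be exported.
        if svc_name not in self.services:
            raise KeyError(f"{svc_name} not in workspace {self.name}")
        self.exports.add(svc_name)

    def import_from(self, other, svc_name):
        # Non-exported services stay invisible to other workspaces.
        if svc_name not in other.exports:
            raise PermissionError(
                f"{svc_name} is not exported by {other.name}"
            )
        self.imported[svc_name] = other.services[svc_name]
```

The developer-portal piece then just renders each imported entry (docs link, API key request) as a tile, so consumers onboard themselves without talking to the owning team.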
Honestly, that's not how it works. I don't see our users doing it, right? We have a lot of OpenShift users, and I can tell you they are not upgrading every six months. So the question is, how can we help them? For our product, it's N minus four, plus we bring all the patches and CVEs all the way back to N minus four. Even that little thing, right? Which sounds minor. Or maybe they want FIPS compliance. Maybe they want ARM because they want to save money. The lifecycle of Istio alone — install, upgrade — just that by itself, I think, is extremely powerful. And as I said, we're doing exactly the same thing right now with Cilium, because in a nutshell we're application networking, right? It's also something we care about. And we can do some interesting defense in depth, because we basically own both layer 4 and layer 7. Yeah, we should talk about RHEL adoption at some point. That can be a slow activity. So, I know you wanted to talk a little bit about what's going on with Envoy and what Solo's been doing with Envoy, and I was thinking here we'd talk a little more about forward-looking stuff. So if you could tell us a little bit about what you've been doing with the Envoy proxy, that would be cool. Yeah. So we've been working on Envoy for, I don't know, five years, since the company started. Honestly, that was our main focus. Because, as I said, when I looked at the market back then, I saw the service mesh, but Istio wasn't ready. It was clear to me that it would take a long, long time until they got it right. So in the meantime, I said, okay, obviously I believe these things are going to be everywhere one day. So what can I do in the meantime? I'm a startup, right?
I needed to figure out how to eventually create a product; I couldn't just sit there for five years. So I tried to figure out what I could build that I could sell today — and today was five years ago — but that would make us very attractive when the service mesh is everywhere. And I bet on the proxy. The proxy — you know, Matt Klein got it right on the first go. Not surprising, because he had built similar things a few times before. When I saw that, I said, okay, that's really, really powerful. This thing was mature; it had been running at Lyft in production for a few years already. And I really liked that it was different from anything else. For instance, compared to NGINX or HAProxy, it was API-driven, and you could customize it because there is what's called the filter chain, so you can put in your own logic and so on. That was honestly really, really important to us. So we took the proxy and started to look at what we could do with it. To me — someone asked me one time, what is the best product? To me, the best product is the product that people use. So what was important to me was: yeah, it will take time for this thing to fall into place, but if I can run it in production, get familiar with it, make it better, make sure it works at scale, that will give me a huge advantage. So I started with the API gateway market. I targeted the API gateway market and said, okay, that's a market that honestly hasn't really changed to this day. The only thing that changed is the messaging, the marketing: it went from "API gateway" to "API gateway for microservices." But that's all that changed, honestly. So we basically built the best API gateway.
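The filter chain Idit mentions is the extension point that made Envoy attractive: an ordered list of filters, each of which can transform, enrich, or reject a request before the next filter (or the upstream service) sees it. A toy sketch of the pattern — purely illustrative, since Envoy's real filters are C++ (or Wasm) plugins configured through its API, not Python functions:

```python
def ext_auth(request):
    # Reject requests without a valid token, like an external-auth filter.
    # The token check here is a stand-in, not a real auth protocol.
    if request.get("token") != "ok":
        raise PermissionError("denied")
    return request

def transform(request):
    # Add a header, like a transformation filter would.
    headers = dict(request.get("headers", {}))
    headers["x-routed-by"] = "sketch-proxy"
    return {**request, "headers": headers}

def run_chain(filters, request):
    """Run a request through an ordered list of filters.

    Mirrors the filter-chain idea: each stage sees the output of the
    previous one, and any stage can short-circuit by raising.
    """
    for f in filters:
        request = f(request)
    return request
```

Custom logic (rate limiting, gRPC transcoding, even GraphQL resolution, as discussed below in the interview) slots into the same chain without forking the proxy itself.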
The API gateway I would want to run in production is CRD-based. Think about it, right? We were changing the world with DevOps, we were doing containers and Kubernetes, all these exciting things — and then what I'm supposed to run is this huge monolithic, active-active Cassandra cluster? It just did not feel right to me. So we built Gloo, right? And by the way, when you described it earlier, you mentioned that what we need to do is glue the application back together. That's exactly why we call it Gloo. My English is really great — every time I tried to describe what we were going to do, I said, we're gluing, we're gluing. It was the best word I found, and we just changed the spelling to "oo." That's the reason it's called what it's called. So we started with that, which was really good. The advantage it gave us is, number one, we've probably been running Envoy in production longer than anybody — well, not the Lyft guys, but you know what I mean. We know Envoy; we've seen it under stress, under huge stress. People are most likely using our product everywhere right now at the gateway, never mind the mesh. So that gave us a lot of insight into what can go wrong, how you upgrade, how you manage it, so that it's honestly the best experience. And the second thing that was really, really good for us is that we work with a lot of customers. Solo has a lot of customers — we're building a company right now mainly because we have a lot of customers. And we learn a lot from them about what they are looking for: why would they move to something like Envoy, what features are they missing versus NGINX or anything else? And all this time, for the last five years, we've basically been enhancing and extending Envoy.
It started with simple stuff: the transformation filter was very popular for us, or gRPC, rate limiting, external auth — simple stuff like that. But because we became so familiar with it, a lot of the stuff we're doing recently is enhancement that is honestly a little bit crazy. For instance, we built GraphQL into Envoy. Envoy has the ability to have filters, as I said, and each filter is basically C++ async code, which means you can leverage a lot of those libraries. Now, GraphQL by itself is very complex. C++ async code is really, really complex. Merge them together and make sure it scales — honestly, a nightmare. But we have an amazing team, so we worked on it for a year and we got it done. And that's huge, because our customers tell us that a lot of application teams are basically trying to reinvent the API gateway right now, by putting in something like an Apollo server, or writing their own GraphQL server. And now you have two hops every time, because the request goes through the proxy and then to this thing they built, usually in Node.js. Honestly, not the greatest thing I'd want to see in my infrastructure. So we basically united it and taught Envoy how to speak GraphQL. When a GraphQL request comes to Envoy, Envoy knows how to resolve it, with everything that entails, including the fact that we can take advantage of what Envoy gives us out of the box, like security and observability. So it's really, really powerful. Yeah, you touched on it a little bit, and I noticed Solo is investing in GraphQL. But why should I care about GraphQL? What's interesting about it? It feels a little to me like writing SQL in JavaScript. What is it?
What's it for, in your opinion? So I think the biggest advantage of GraphQL is the velocity you can get from your team using it. I'll give you a simple example, right? We needed to do our SOC 2 Type 2 compliance, and we needed to get some data for our audit. So we went to GitHub and basically tried to collect all this data. If we couldn't use GraphQL, we would have needed to do a bunch of REST queries to everywhere, and that would be really, really hard, because you need to merge the results and then write a lot of logic and so on. Instead, we just asked one question in GraphQL — boom, we got all the data. It saved us a lot of time. So that's us, right? That's just one example of a use case. But think about people whose whole job is basically writing a UI application. If you think about it, the amount of work they do merging data, collecting it, and the performance cost of getting all this data and then merging it — honestly, it's really, really hard. Or worse, they either do it on the client side or they need to go to the server people and say, hey, can you add these things? Which, again, is more complexity — and now go figure out, maybe the backend engineer is busy and you cannot do that. So all this process is honestly very annoying and just slows down your team. I think the advantage of GraphQL, and the tooling the community is building around it, is that it's damn simple. Everybody can do it, they can do it really quickly, they can write an application in no time — versus if you're doing REST, it's a little bit more complex and takes more time. So yeah, no, I'm a big fan, but honestly, it's not only me.
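The audit example above is the classic GraphQL pitch: one nested query instead of several REST round trips plus client-side merging. Here is a minimal sketch of that idea, with in-memory dicts standing in for the backend services and a plain dict standing in for a real GraphQL document (a real setup would use a GraphQL server library and the network):

```python
# In-memory stand-ins for three backend services; in real life these would be
# separate REST endpoints (GET /users/:id, /users/:id/repos, /repos/:id/issues).
USERS  = {1: {"id": 1, "name": "idit"}}
REPOS  = {1: [{"id": 10, "name": "gloo"}]}                    # keyed by user id
ISSUES = {10: [{"id": 100, "title": "add GraphQL filter"}]}   # keyed by repo id

def resolve_user(user_id, selection):
    """Answer a nested GraphQL-style query in one call.

    `selection` maps each requested field to its own sub-selection,
    the way a GraphQL document nests selection sets.
    """
    user = USERS[user_id]
    out = {}
    for fld, sub in selection.items():
        if fld == "repos":
            repos = []
            for repo in REPOS[user_id]:
                picked = {k: repo[k] for k in sub if k in repo}
                if "issues" in sub:
                    picked["issues"] = ISSUES[repo["id"]]  # server-side join
                repos.append(picked)
            out["repos"] = repos
        else:
            out[fld] = user[fld]
    return out

# One "query" replaces three REST calls and the client-side merge logic.
query = {"name": {}, "repos": {"name": {}, "issues": {}}}
result = resolve_user(1, query)
print(result)
```

The client asks for exactly the shape it wants; the server does the joining once, next to the data, which is the velocity win Idit is describing.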
Honestly, the reason we did it is because we heard it from the customers. And if you look, just for instance, at the trend lines on Google — go Google it and check how many people are searching for REST versus GraphQL — you will see that people are very, very interested in GraphQL. It's really big for front-end engineers; that's like the biggest cool thing happening there. Yeah, like with many things, it's funny, because as I recall, GraphQL has actually been around for quite a long time, and I used to use it with graph databases. So sometimes when something starts being used slightly differently, I have to kind of rewrap my brain around it, right? Because I used to use it this way, and now people are doing something different with it. So I've been playing around with it a little bit. Yeah, definitely — the ability to connect information together that doesn't normally go together in your data mart or whatever is a huge advantage. And the fact that so many tools, like Solo's, are almost natively processing it — that's another huge advantage. You know, I think all the databases are starting to be all the things. A database these days is not just relational, or just a document store, or just key-value; they're often doing them all, and based on the query you're asking, they translate it into the best method to get at it. So I think it's not only the database, right? It's all your services. Think about it. We created those little, little microservices everywhere. And now think about the UI person who needs to go and connect to 10 microservices in order to build a very simple UI.
That's a lot of work. If we can enable them to do all of this in one query, that's really, really big performance-wise. It also makes sure they don't need to worry about, you know, the security. And that's where I was going to go — the security and the performance and all that stuff. If you have a front-end UI developer, they now don't have to go figure out how to securely access each of those services, or how to rate limit them — because some of them are more expensive than others, et cetera, right? And a lot of the time, honestly, teams are either trying to rebuild all that themselves — which is ridiculous, because you already have the proxy next to it that knows how to do it very well — or they basically say, look, don't worry, the security will be handled by the application. It shouldn't be, right? I mean, the whole point of a service mesh is to make sure that people don't need to, you know, trust each engineer to remember to add those libraries, something like that, right? So to me, this is really big — the fact that we can now, even at the level of the resolver, enforce whatever you want, basically. I think it's a big one. Yeah, I mean, I think it really opens up your ability to hire as well, right? Because as soon as you don't have every junior engineer worrying about security, or that kind of thing — it's risky to have junior engineers worrying about security, you know? That's where you get... you get better at it with experience, right?
So if you can have the senior engineers looking at the overarching thing, and then let the junior engineers put the pieces together, I think it's much more scalable for your organization as well as your software, which is huge. Honestly, one thing I'm just going to add, as you said: even if someone is senior, people make mistakes. It's the difference between "are we trusting people to remember to do this?" and making sure it's enforced by the mesh. And I think, yeah, it's better to make sure that the people who know what it is take responsibility for it. Yeah, it was funny — Idit and I had the same thought when you said senior people get better at this. I was like, do we? Well, I guess, yeah — truly, at the end of the day, what we want to do is feed into the programmer's default, which is lazy. As long as we can solve the problem once — and solve it correctly the first time — and then never touch it again, we will generally be a much happier set of campers, right? So the other one I wanted to ask you about, just because I also see it very much on the horizon and not a lot in play yet, is WebAssembly. I was talking to a friend of mine about it the other day, and he's like, oh yeah, it's going to take over the entire earth, it will be the hottest thing since sliced bread, if it's not already. What are your thoughts on WebAssembly? Is that something that the audience of Kubernetes deployers and operators should be thinking about? Is it important and on the horizon? So it's a great question. I mean, when you're thinking about WebAssembly, it's a technology, and it's being leveraged at a lot of layers, right?
One of the layers where it started is the browser, and there it makes a lot of sense, right? The reason they started it — I think it was the Mozilla team — is that they wanted to allow people to create something extremely, extremely fast. So you don't have to write only JavaScript; maybe you want to write C++ or anything else, and it basically enables you to extend your application or your browser — but you need to make sure you cannot crash the browser, because that's too important. So that's where WebAssembly started. It's great technology, because the way they built it for that use case gives it a lot of advantages. First of all, you can write it in a lot of languages and it translates to low-level code. Second, it's dynamically loaded into the browser, which means it can be dynamically loaded somewhere else too. And the last one is that it's sandboxed — it's basically contained. So all of this was great. And we were looking at it with Google, because a lot of what we built in Envoy was built by extending Envoy through this filter chain ability, which means you need to write C++ async code and then recompile Envoy. And honestly, that's not the most fun thing to do. So the question was — and this is why Istio did that round trip at the beginning with Mixer — how can we let people who are not Solo engineers extend their Envoy and add logic? And I think WebAssembly was really nice, because it's a similar set of requirements: you need to make sure it's not going to take down Envoy, so it has to be contained; it needs to load dynamically into Envoy, because you don't want people to recompile Envoy; and ideally, we would want them to do it in any language they want.
But the performance should be close to native. So that's exactly how WebAssembly could be the thing that extends Envoy. And that's what we built together with Google, to basically enable that. Solo worked a lot on the experience and how to make it simple to consume. But a few things happened in the market, and this is very, very interesting in my opinion. So let me recap: it started in the browser; we brought it to Envoy; and there are also a lot of other people trying to replace Docker, or containers in general, with it. So again, three different places, right? In the browser, it makes a lot of sense. It's fantastic in Envoy — we'll talk about that in a second, because there are some issues. For containers, honestly, I don't think the benefit will be big enough. Think about why Docker caught on versus, I don't know, other technologies, like the unikernels we tried to do before. The reason is that people will change a whole market only if the benefit is huge. Like, yes, I'll need to pay a price — for instance, create tooling for debugging or something like that — but I know that when I do, I'll get the best experience ever and it will be fantastic. That's why Docker caught on, right? It was 10 times better than a VM. When we came along with unikernels, for instance, people said, well, we already have Docker. Yeah, it's a little bit faster — but who cares? Should I now reinvent everything? It doesn't make any sense. But even over LXC, right? It wasn't even just performance, it was ease of use. Docker was just the perfect storm, right? Exactly, exactly. So to me, it's exactly the same question with Wasm. Does it make sense to run Wasm instead of containers? Yeah, maybe sometimes there's a case, but honestly, it's not that easy. You know what I mean?
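The requirements Idit lists for extending Envoy — contained, dynamically loaded, writable in any language, near-native speed — are what Wasm promises. The dynamic-loading half of that idea can be illustrated in Python, which, to be clear, gives you none of Wasm's sandboxing or performance: a running process accepts new filter code without a rebuild or a restart. The filter source and function name here are invented for the sketch.

```python
# Sketch of the "dynamically loaded extension" idea (not real Wasm): the
# proxy process accepts new filter code at runtime instead of requiring a
# recompile, the way adding a C++ Envoy filter would.

FILTER_SOURCE = """
def on_request(headers):
    headers = dict(headers)                  # don't mutate the caller's dict
    headers["x-plugin"] = "loaded-at-runtime"
    return headers
"""

def load_filter(source):
    """Compile filter code received at runtime into a callable."""
    namespace = {}
    exec(compile(source, "<plugin>", "exec"), namespace)
    return namespace["on_request"]

plugin = load_filter(FILTER_SOURCE)          # no rebuild, no restart
print(plugin({"host": "example.com"}))
```

What Wasm adds on top of this picture — and what plain dynamic loading cannot — is the containment part: a misbehaving module is sandboxed and cannot take the host process down.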
You're not getting dramatically better performance — so why? That's the question I ask myself. Personally, I don't believe in it; I know people are working on this and excited about it, I just don't think the benefit is big enough that the whole community will switch right now and run Wasm instead of Docker. Then you could write your container runtime in, like, Haskell or something. So, you know — maybe. That was a joke. Yeah, it's not that exciting. So now, in Envoy, I will say here's the problem. I'm big on technology, that's what I care about, but honestly I have to recognize what's going on in the market. As I said, Wasm got a lot of excitement, specifically for stuff like the web browser as well as Envoy. But for it to be really good in Envoy, we need tooling. And unfortunately, there were some changes that happened. For instance, the Mozilla team, I think, was laid off, and a lot of them went to other places. So a lot of the contributors are suddenly less focused on this; there was a lot of politics involved and so on. When these things happen, honestly, it slows down the community and the technology itself — and when the technology slows down, honestly, it's hard to adopt. So is that going to be the next big thing? I mean, it's a good technology. It depends on whether the community will get on board and put their faith in it. But right now, I don't see a lot of work being done specifically on the Envoy part. And that worries me, because it feels to me that even though it's a nice idea, not a lot of people are using it, which is a problem. If people are not using it, you don't know where the problems are.
So yeah — honestly, I was just going to say: if we do have developers in the audience who want to contribute to open source, would this be an interesting place to do so? Is building out how WebAssembly works a place you could kind of make a career out of? So yeah — but again, one person probably isn't going to be enough. We need the community to decide we're making it successful. As I said — I don't know if you remember — when we announced WebAssembly support for Envoy, we worked on the user experience, we created documentation, tooling and so on. The thing is, there was a lot of promise but less implementation. It's getting better, but honestly, just too slowly. That's the problem I have. You know, we talked about the ability to write it in any language, but honestly, there are not a lot of people working on this. So that's my wariness. As someone looking at this, I say, well, I don't know that it's going as fast as I hoped it would. And eventually, if it's not, it's just going to be — oh, you know, no one will use it. That's what will happen. In Envoy specifically, that is; in the browser, I think it's there — different stages, different use cases. So that's my opinion, a personal opinion. I think in general, what people are trying to build in every organization is basically plugins — and the thing that's more sexy right now is eBPF. It's not that different; it's basically plugging into the operating system. But it's good, because if you think about it, a lot of the stuff we build always starts at a high level, because it's easier to implement, and then we optimize it as low in the stack as we can. So I think that's what's creating a lot of the optimization for service meshes and other observability tooling and so on in the kernel.
So I'm actually pretty excited about that. And — a little sidebar — we actually talked to Liz Rice about eBPF in episode seven, if you want to go back and watch it, for anybody who wants to hear more about that. I think WebAssembly is also super interesting at the Envoy level, because, I mean, who doesn't want dynamically loading libraries that are replaceable on the fly? That seems like a very strong feature. So I really like them in Envoy; I think they're really interesting. But I hear you — the community has to be there for it, right? The uses of it need to be in somebody's most important components; we need a bunch of people who are mainlining that toolchain as part of their deployment. Otherwise there's not enough focus on it. But like I said, I think it's very cool. No, I hear you. I was very excited about it too. Yeah. All right. So let's see, moving on — did you have more questions about that, Josh? Or do you want to move on to something else? Okay, cool. All right. So I think we should ask: how has your experience been as a community member while trying to run this startup and launch it off the ground — and, it sounds like, quite successfully — and doing that in the Kubernetes community? Is the community embracing concepts like people doing startups, or is there a certain amount of, you know, fear of capitalism? Have you found a nice happy medium there? How has that experience been, being a community member and also trying to run this successful startup? Yeah, I think the community itself is awesome in Kubernetes.
And I think that's what makes it so successful. As always, the people, as community members, are having a blast, doing great, very accepting and awesome. And then there are the corporations, which can be less nice. But I think this is fine. In a nutshell, I think the Kubernetes ecosystem is great — as I said, they are so welcoming and so inclusive in everything we do. And to me personally, it's not only Kubernetes: Kubernetes is one of the environments, and another is Istio, right? That community is insanely good. And the Envoy community is awesome. And Cilium, right? Cilium right now is becoming a CNCF project. Honestly, for any good project to be successful, you need to have more than one provider selling it; otherwise it's basically one company's product, not the community's. So — I think you guys know this — Solo is selling Cilium, and we're enhancing it quite a lot, and we're excited about it, because I think it's going to give us another layer where we can optimize the mesh and optimize how the application works and so on. So to me, the communities are great. As I said, it's interesting because each of them is different, right? Istio from the beginning was very inclusive — very interested in getting new people in, and just helping and being together. Envoy, the same thing. And Kubernetes, that's what I saw as well. Cilium is a new community, so it will be interesting, because right now it's still mainly one company. Hopefully they will know how to separate between "I'm the owner and I need to make money" and "I'm an open source CNCF project, which means I want people to come and help me make it better." It will be interesting to see which direction it goes. Interesting. Yeah. Yeah.
Well, I think we should probably wrap it up there. Thanks so much for sharing your thoughts with us. Like I said, the whole point of the show is to get some insight from people who are in the trenches working on this stuff about what you think is going to happen next, and I really do think you've been able to share that with us. So I really appreciate it — thank you so much. Yeah, thanks so much for having me. Indeed, I want to extend my thanks to you for joining us as well. I really enjoyed the breakdown of your forward analysis of the space you ended up building a company in. I thought that was actually quite revelatory. Really interesting. So, thank you. Oh, man. Thank you so much. Thanks. All right. So, we wanted to invite Tam, who is our community manager for Kubernetes by Example. We went back and forth on pronouncing her last name — I'll say Nguyen, if I recall correctly. And we just wanted to show off one of the learning paths on Kubernetes by Example that is coming out. Is it out, or should it be out by now? It's already out. Yeah, five new tracks at KubeCon EU a couple of weeks ago. So, hi, everyone. Thanks for having me, Langdon and Josh. Good to see you. As we used to say on OpenShift TV, on all our different shows, on this show we kind of live in the future, so we're never quite sure if the real world has caught up or not yet. So, yeah. So, I have clicked the share button, but I'm not sure if it is sharing to the actual stream. I don't see me sharing right now. I don't either, which is making me concerned. Okay. Well, you troubleshoot. Well, as far as I can tell, it is sharing. I don't know why. Yeah, I don't see it on my side. Josh, do you see it at all? I don't see it on Twitch. Yeah, I do not see it on my side. It's not showing up on YouTube either. Okay.
This is why we don't normally share the screen — because we don't know how. Yeah. It totally seems to indicate that I am sharing. I wonder if, on the back end, somebody needs to turn on a fourth person that is the screen share? Maybe that's the challenge. Oh, look at that. Either somebody did something or my double-clicking worked. All right. So, you were going to show us where it was, right? So, I went to the app, you know, because it's cool. Yeah. So, first, for audience members that may not know: Kube by Example is a free learning community site, designed so that you can learn by example. So, if you go to kubebyexample.com, I can walk you through some of our new stuff. Right now, we have 15 learning paths, and we introduced five at KubeCon a couple of weeks ago. So that's all of our stuff right now, which is pretty extensive. All of our learning paths are developed by contributors, and there's a mix of content: there are hands-on guided exercises so that you can learn by doing, and there are video tutorials so you can learn by watching. So, since this show was sort of on service mesh, let's click on the Istio Fundamentals track. Yeah. That's a new track we announced a couple of weeks ago — shout-out to Andres Hernandez-Bermudas from Mexico City for creating this new content for us. There are a couple of guided exercises and a couple of lectures where you can go in and learn everything you want to learn about Istio. And this is the written part — where's the... you said there are videos too? This one doesn't have video tracks yet. It depends on the contributors. For instance, our Rook content right now is all videos. So it just varies between different learning paths. I gotcha. Yeah, the Istio one has guided exercises. Right, right. At least for me, that's my preference anyway. You know, it's funny.
I do a lot of video, but I actually like to read stuff. Although asciinema is very, very cool if you haven't checked it out — it lets you actually cut and paste from the videos, which makes things handy. Nice. We'll have to check that out, because one of the things we definitely want to do with our videos is transcribe them; the feedback we got from some people is, "I just want to copy some of the stuff." Yeah, yeah. I wrote a horrible, hacky version of it in Python many years ago, because it drove me nuts that you could never select anything out of the videos, right? And then asciinema came along and did it properly. Nice. But yeah, so here's Istio. Where is the best place for folks to go to perform these exercises? Like, I've got a step-by-step guide in front of me — what resources would you point people to, for where I would get a cluster to run these things? Yeah. So, Langdon, if you want to go back up to the nav and go into Resources, there is a "Get started: Try Kubernetes" landing page that we have, and you can build and deploy in a real environment. You get to choose: either you can learn with minikube, or you can check out our Red Hat Developer Sandbox. Yeah. And all the lessons are — yep, there it is right there, the Sandbox. So you can just sign up for that and then start building. You can also do the local version of the Sandbox too, right? Though I think it changed names. But yeah, you can also download that. But minikube is really quite nice and quite easy, and pretty easy to plug things into. I was just experimenting with it over the weekend because I hadn't used it in a while. Yeah. So, that's pretty cool. Yeah, absolutely. And there are more resources too. Like, I was looking through the intro-to-Kubernetes content, right?
And there are a lot of videos here too, which is good for when you're trying to explain what something is, rather than walk through how to do something. But I really liked that it had every single little piece, right? So you can go and be like, okay, hey, I saw a StatefulSet somewhere — let me go find what that is. Yeah, absolutely. And then a new feature that we launched: after each learning path there's a "Beyond KBE" section where you can find links to repos and prerequisites. But something we recently launched — and you are just on it — is our KBE community forum. It's in beta right now, but if you click into it, it's a live forum. You can start a discussion, you can talk to our contributors, you can let us know what content you want to see — or even give suggestions on future KBE Insider guests. So check out our forum. It's something we just launched, and I think it'll be a good feature for the community to ask real questions live. Yeah, definitely. The more feedback we can get on topics people want to hear about, or on people they want us to try to interview, the easier it is for us to know what direction you'd like us to go. Cool. And then we're also running a contest right now. The first three folks that start a discussion on our new KBE community forum will get an awesome KBE swag bundle, which includes a really soft hoodie like the one I'm wearing right now. So yeah, first three people. I'll look out for that. Yeah, so that's all the new stuff. We're always looking for feedback. We release every KubeCon — KubeCon EU was our last release, and our next release will be at KubeCon North America in October. We're planning new content and new features for the site, but we'd love to hear what the community would like to see. Yeah.
And basically, that's the key to watching live, right? Now you have a chance of entering the contest and being first. And getting swag is always the most important thing. Cool. So, should we wrap up the show there and call that an episode? Absolutely. All right. Do we know who's on next time?