Hello, everybody. Welcome to another episode of Kubernetes by Example Insider, where we interview the people who are actually doing the work in the community, so we can get a sense of what they're trying to accomplish. And by extension, if we talk to the people doing the work, hopefully we'll get a better sense of what's going to happen next than we would by relying on press releases and things like that. So today we'd like to welcome Idit Levine, who is the CEO and founder of Solo.io. But first, let me introduce Josh Wood real quick and say, hey, thanks for coming. As I often say with Red Hat people, titles and groups and all that change so often that I'd rather you introduce yourself. So Josh, if you'd introduce yourself, then we'll talk to Idit.

Right on. Yeah. Hello, Landon and Idit. I'm Josh Wood. I'm a Principal Developer Advocate for OpenShift at Red Hat. And while it is true that many of our titles change rapidly, mine never has in the whole time I've been here. So it's either a mark of stability or a certain stagnation in my career path. Either way, I'm happy. And I'm one of the guys who makes those press releases and things like that, so Idit can give us the real story instead of my stylized vision of the future.

Exactly. So Idit, do you want to tell us a little bit about your background and why you're here?

Yeah, sure. As you mentioned, I'm the founder and CEO of Solo.io. And Solo is a company that, honestly, is not that different from Red Hat: we're making open source projects simpler to consume, right? That's what OpenShift and Red Hat are doing as well. But we are focused more on networking. So the projects we're involved in are projects like Istio, which is a service mesh, or Cilium, which is a CNI. That's the kind of work we're doing.
And again, the idea is just to make it much more accessible for people, to make sure they'll be able to consume it, to work more on the user experience, and to really fit it to each organization, the organization's structure and people. Honestly, just make it easier to consume. So that's not that different from what you guys do, I guess.

Yeah, yeah, totally. In my opinion, that's really what a vendor brings to the table a lot of the time, right? How can I make it a little bit easier to consume for the typical enterprise? And sometimes the big thing Red Hat has offered is indemnification, which can be harrowing for a lot of corporations. So it's really kind of important. We do like to start, though, with a little bit of background: what got you into open source to begin with?

Yes, I've been doing open source for a long, long time. I think I actually started, a long time ago, in the area of Cloud Foundry and Mesos. That was exactly when Docker first came out; I was at the first DockerCon. I think that's what made me excited about open source. So I was doing a lot of work with Docker, and I knew Solomon Hykes very, very well. And then when the war, let's say, started between Kubernetes, Cloud Foundry, and Mesos, I was really, really active in all those communities. We did a lot of stuff related to unikernels and other attempts to push the boundaries. So I've basically been there since then. And honestly, I love it, because I'm usually a person who is always looking at what comes next, right?
And the good thing about that market is that you're not sitting in a closed room, working by yourself and moving incrementally; we're working on it together, everywhere, right? So I love it. As you say, I'm the person asking, what's next, what's next?

Yeah, and I know that part of Solo is having the ability to decide on, to create, the what's next, which is really, really exciting. Nice. So let me ask a question of somebody who was involved in all those communities: what feature of Cloud Foundry do you most miss from Kubernetes? Or do you think it's on its way there? I say this all the time about Subversion: I really wish Git did the submodule thing the way Subversion did, because Subversion did a much nicer job. I miss that. It doesn't mean I want to go back to Subversion by any stretch of the imagination, but I do miss that one piece.

Yeah, I think what Cloud Foundry did really, really well is the user experience. I think that's still something that is missing a little bit from Kubernetes. Not sure why, honestly. I know Knative was an attempt to create something like that, but it's still not as great as Cloud Foundry was. Honestly, that's the only thing I would take from there. All the back end of it, once you actually got into Cloud Foundry, was way, way overcomplicated, in my opinion. I just thought it was a huge problem. And I think also, in my opinion, as someone who has worked in open source quite a lot:
I think that one of the problems there was that the way the community operated was a little different from Kubernetes or any other open source project I know. In order to commit something, you had to do this thing called a dojo, right? Which is honestly a huge barrier to being part of a community. They're just not going to take your pull request unless you went through that dojo. I think what's beautiful in the Kubernetes ecosystem, or honestly any other, is the openness: whoever wants to help, please do, right? We want you to join. And in Cloud Foundry it was a little bit different. It was more like, we will choose these people, and they will have to do it our way, which I think honestly created some issues.

That's such a hard balance there.

It is. It's a really interesting balance to me. And I'm interested, Idit, if you think that difference in the governance and style of the communities around these projects might have something to do with the difference in user experience that is the end result of their outputs, right? In Kubernetes, there's a whole lot of openness, and a whole lot of folks can be chefs in a very large kitchen, and that has a lot of great outcomes. But, if you can't tell, I'm hinting at an opinion I have here, and I wonder if you agree: in a way, did the more closed nature of the Cloud Foundry ecosystem maybe lead the end result to be a little more focused for the user, a little easier to digest? Do you see that effect?

Of course, because I can tell you that I wanted to contribute and it was very hard for me, because I never passed the dojo, right? And I never went, just because that wasn't my job. I wasn't sent by EMC to do the dojo.
So that's very, very limiting, and it's a shame that I couldn't influence things. I think that's a huge barrier. And if you think about it, let's be honest: a lot of the contribution going into an open source project usually comes from people whose focus at work is that project, right? Red Hat is paying their salary, but those folks are dedicated to Kubernetes and they do whatever is needed there. And as a company like Solo, we're doing the same thing: we're contributing wherever we can. So the question is, thinking about the Cloud Foundry experience, if I were Solo right now, would I actually send my people for a week or two weeks, and I think it was more than a week, I wish it were only a week, something like a month of training, at the physical location of the dojo? It had to be in San Francisco, it had to be with everybody, it had to be overwhelming. I'm just not sure that's something I would do as a startup, honestly. So it's really limited: which companies can afford that focus? The big organizations that honestly can do that. So you're losing a lot of good engineers at the end, engineers working in companies like Solo that are quick and innovative. I think you lose a lot there.

Yeah, definitely. So I know Solo is now more generally focused on networking at large, right? But as I recall, you started in the service mesh space. And I'm curious: what was it about the service mesh idea that was attractive to you, that made you say, hey, I should go and make this better?

Yeah.
So honestly, when I started the company, I looked at the market, because it very much depends on the time, right? If I had started the company, I don't know, five years earlier, I probably would have done orchestration or something like that. When I got the money and started the company, I basically tried to figure out what the next problem people would have. Okay, so we already knew people were using Docker, or containers in general. We already knew everything was moving from monoliths to microservices. At that point it wasn't even clear in the market yet that Kubernetes would be the thing that would win it all. So the question was, what would people's problem be? And to me it was very simple. If you take something that is one big binary and cut it into pieces, somehow you need to reassemble them, right? Eventually it should look like one application. So I understood that the problems people would have were, first of all, connecting those services. Second, making sure that when you do it, it's done in a secure way, that no one can get in the middle, because now everything goes over the wire. And the last one is observability. Honestly, there are so many replicas, and when a request comes in, it's seriously like a murder mystery to figure out what's wrong. So when I looked at all of this, I said, okay, that's obviously the problem people will need to solve. When I looked around, the concept of a service mesh already existed. It came from the Buoyant folks, Linkerd basically, but it wasn't well implemented, I'll be honest. That was the first implementation, the original Linkerd. In a service mesh there is this concept of a sidecar, which hopefully will go away soon, but there it is.
And with Buoyant's first Linkerd, we called it a sidebus, because it was so huge. So I looked at this and said, well, that's not great. It's solving the right problem, it's focusing on the right features, they did good product design, but the implementation wasn't great. So that was the first option. The second option, Istio, had just been announced. And when I looked at it, again, there was a lot of stuff I liked; it made a lot more sense. But there were also some decisions I questioned. For instance, this Mixer thing, where every time a request comes in, you need to do a round trip to a gRPC server. That didn't make any sense to me on the request path; on the request path, latency is extremely important. So I said, well, that's interesting, that's probably heading toward the better solution, but it will take a long time until we get there. And this is why we basically started first on the gateway, then moved to the mesh, then extended to the CNI. But the vision didn't change, which is, the hashtag, or whatever you want to call it, is application networking: everything your application needs in order to work, in terms of networking.

Yeah, I think you raise a really good point, which is that what a lot of people don't realize is that as soon as you start getting into any kind of service-oriented architecture, whether we call it SOA or microservices, or even older ones, CORBA, right, the challenge becomes: okay, now I've got all the little pieces, how do I put them back together again?
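Idit's objection to Mixer is about what sits on the request path. A toy sketch can make the cost concrete: if the proxy must call a central policy service for every request, the number of extra round trips grows with traffic; if policy is pushed to the proxy once, it doesn't. This is an illustrative model, not Istio or Envoy code; all class and function names here are invented.

```python
# Illustrative sketch (not Istio code): why a per-request RPC to a central
# policy service (like Istio's old Mixer) hurts the request path, compared
# to checking a policy that was already pushed to the proxy.

class RemotePolicyServer:
    """Stands in for a central gRPC policy service."""
    def __init__(self):
        self.calls = 0

    def check(self, request):
        self.calls += 1          # each call models one extra network round trip
        return True

class ProxyWithMixerStyleCheck:
    """Every request pays a round trip to the remote server."""
    def __init__(self, server):
        self.server = server

    def handle(self, request):
        return self.server.check(request)

class ProxyWithLocalPolicy:
    """Policy is synced to the proxy once; requests are checked in-process."""
    def __init__(self, server):
        self.allowed = server.check("policy-sync")  # one call at config time

    def handle(self, request):
        return self.allowed

server_a = RemotePolicyServer()
mixer_proxy = ProxyWithMixerStyleCheck(server_a)
for i in range(100):
    mixer_proxy.handle(f"req-{i}")
print(server_a.calls)   # 100 round trips for 100 requests

server_b = RemotePolicyServer()
local_proxy = ProxyWithLocalPolicy(server_b)
for i in range(100):
    local_proxy.handle(f"req-{i}")
print(server_b.calls)   # 1 round trip, regardless of request volume
```

The latency argument follows directly: the first design adds a network hop (and its tail latency) to every user-facing request, while the second pays that cost only when configuration changes.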
And so I think, for the audience, the service mesh is a big part of that glue, of how you bring it all together. And the other point I really want to highlight is the observability factor. I worked on a system many, many years ago that used COM over HTTP, and we built our own observability as well, because tracing across the services is impossible unless you use those kinds of tools. So I think that observability is it.

And Solo is growing like crazy right now, so we're getting a lot of people joining us from organizations like Spotify and a lot of others, AWS and others. And they're all telling us the same thing in their first onboarding meeting: the reason they joined is because they tried to build this at those other companies, and if they had had a service mesh, it would have made their lives way, way simpler. So honestly, it's very good validation for us. Okay, we're doing something right.

Yeah, yeah, totally. I strongly agree. Before I left Red Hat, my primary focus was on service mesh stuff, because I'm also a big believer in services. But the trade-off for all those nice little services is that keeping track of them is very, very difficult. And if you go back to the SOAP and SOA days, they tried to do the same thing, except it was all very top-down. With microservices, it's all very bottom-up, so you can do it piecemeal, but you're still trying to solve the same problem at the end of the day. Yeah. So, Josh, did you want to add to that, or should we move to the next thing?
Well, I think that issue, the service mesh as an answer to this proliferation of services and a way of addressing that problem, leads me into my next question. I have this basic understanding of the concept behind service meshes and, generally, of what you're doing at solo.io. How do you connect that to the first bit we were talking about, user experience and developer experience? What are the real improvements in developer experience from a service mesh? Because, if I could phrase it as a joke, as a developer, and a guy who talks to developers a lot, the way you could improve service mesh UX for me would be to make it disappear, right? So I want to hear: how does that happen?

Yeah. No, exactly, exactly what you described. I think the reason there is a service mesh, as you said, is basically to say: you, the developer, focus on the business logic, and you let us, the IT team, the SREs, the organization, come in with all those policies and make sure it's secure, that it's observed, that it's everything you need. That was the purpose of the service mesh. Back in the day I said it's about virtualizing that away from the user; the whole idea is to take it away. But a few things. First of all, if you look at the way the APIs of those projects are designed, I don't think they come from the persona idea of who is actually using it, who should know about what. It's something that is really messy: most of the time, people in the organization still need to know about the mesh, maybe they'll configure it, and who is in charge of what is pretty arbitrary. One of the things we built into the product is the understanding that the people writing the application are sometimes not the people configuring it.
And in each organization, by the way, it's different. A lot of our customers have users who are very advanced, who are genuinely interested and do it all themselves. But we have startups where, honestly, you don't have a choice: you don't care, you write the code, you run it, you do whatever, right? And there are people for whom it's totally abstracted; they don't even know there is a cluster behind the thing. So the question is, what is this organization? The way we built the product, there is the concept of a workspace, and I know this concept is actually coming to the Kubernetes ecosystem right now, which is great. Look, again, why are we doing all of this? Why is there OpenShift, and why is there Kubernetes, and why are there VMs? Why is there all of this altogether? Eventually, we're trying to do one thing, which is to take care of a piece of your infrastructure and delegate it to the application teams. That's all you want to do, right? And what you also need to do is say what they can and cannot do in that infrastructure. If this is a very, very advanced team, maybe you tell them: you can do everything, capabilities and security and whatever, go for it. But there will be a team you don't want to trust that much, and maybe all you want to tell them is: you're only in charge of the retries and the timeouts. That's all I want you to do. So we built that into the product. It has the concept of a workspace: you choose clusters, so it's multi-cluster, which clusters you want and which namespaces you want in each cluster. We group them together, and we make sure all of it is going to work, going to be secure, and so on.
So that's pretty strong, and that way it honestly fits every organization, because you can decide who can do what and at what level. You can decide whether they push the configuration to the local clusters, which I usually don't think you should, because you should be doing GitOps, or to the management cluster. But all of this is built with a user experience such that the developer will only know what they need to know. You can come with your own CRDs, and the CRDs are way simpler, way more friendly. So it's another layer we build on top of the service mesh that makes it more accessible, but also handles configuration, multi-tenancy, and multi-cluster, which I think Istio today is honestly not very great at.

That's the first time I've heard CRD and friendly in the same sentence, I will say.

Come on, it's not that bad.

I'm quite sure I called them friendly in the Operators book as often as I could. But anyway, I don't want to take us too far off into the little details of this, but I am interested: you mentioned the workspaces effort in Kubernetes as sort of an augmentation of the namespace, which is this classical term in the industry for defining a virtualized space dedicated to a user or a process, or a view of a file system on Plan 9. What is the difference between the namespace and the workspace? Or, a better question, more specific to you, Idit: what does a workspace mean in your product? You just mentioned the disconnect, Istio maybe not being designed for this kind of environment, and you're building on top of that. What does a workspace mean for a developer using it? Is it configurability of my view of the world and how much I need to know about it? What else is virtualized into that workspace?
Yeah, so as an admin, you see everything, right? Then you can create a workspace, and a workspace, as I said, is a grouping of namespaces, potentially in different clusters; it doesn't have to be in the same cluster. Now, once you've grouped it, you need to tell me, first of all, which users it's delegated to and what they can and cannot do. Where is the default configuration namespace, that is, where should they push their configuration, and so on. And that's it. What it means is that there's an admin who can see everything, including a beautiful graph of everything, plus all the policies, all the stuff attached to it. We have, for instance, GraphQL in the mesh; we built it into Istio, into Envoy, so you can see the policies, the schema, whatever, all the management you need. But as a workspace admin or workspace user, when you log in, you're only going to see your workspace. It's even pluggable, so you can decide whether you want to show them that there is a cluster or a service mesh at all; potentially they shouldn't even know about it, right? And then the other thing is the idea of a catalog. Right now, maybe I trust my team, so I tell them, you're responsible for everything. Or I can say, you're responsible only for retries, writes, and timeouts. But I can go even further. I can say, I don't trust you at all, you guys are crazy; all I'll give you is three options: you can do timeout 5, timeout 10, or timeout 15. That's all I trust you to do. And then, as a user, it's basically a catalog: seriously, they come and choose from those policies. So that's number one. The biggest one, which I think is very interesting, is the feature of import and export. Let me give you an example. We have a big, huge customer.
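The "catalog" idea above can be sketched in a few lines: the platform admin publishes a fixed menu of policy values, and a workspace user's choice is only accepted if it is on that menu. This is a hypothetical illustration of the concept Idit describes, not Solo's actual API; the function names, config shape, and values are invented.

```python
# Hypothetical sketch of a policy catalog: the admin decides which timeout
# values a workspace's users are trusted to pick, and anything else is
# rejected. Names and structure are invented for illustration.

ALLOWED_TIMEOUTS = {5, 10, 15}   # seconds: the only values the admin trusts

def apply_timeout(workspace_config, requested_timeout):
    """Accept a developer-chosen timeout only if it is in the catalog."""
    if requested_timeout not in ALLOWED_TIMEOUTS:
        raise ValueError(
            f"timeout {requested_timeout}s is not in the catalog "
            f"{sorted(ALLOWED_TIMEOUTS)}"
        )
    workspace_config["timeout"] = requested_timeout
    return workspace_config

# A workspace groups namespaces that may live in different clusters.
config = {
    "workspace": "billing",
    "namespaces": ["cluster-a/billing-prod", "cluster-b/billing-canary"],
}

print(apply_timeout(config, 10)["timeout"])   # 10 is on the menu: accepted

try:
    apply_timeout(config, 30)                 # 30 is not on the menu
except ValueError:
    print("rejected")
```

The point of the design is that trust becomes a dial rather than a switch: a team can be given full policy freedom, a restricted set of knobs (retries, timeouts), or only a pre-approved menu.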
It's a huge customer, right? 60 data centers, the biggest deployment of Istio running in the world today, thousands of instances of Istio. And that organization is built from a lot of teams, a lot of groups, huge organizations that they acquired. So imagine the biggest organization you can; you can probably guess who it is. Anyway, they have one billing system. In all of that organization, they have one billing system. So that's one workspace. But they want the other teams to be able to consume it. So we have the concept of import and export: you can take a workspace and export a service, the billing service. And that's where we embedded a developer portal. When you export it, the other teams can see it. They can click on the tile, see the docs, request an API key or whatever else they're using, and basically onboard and leverage it inside the organization. So honestly, it makes the experience really, really easy. And again, we thought hard about what a developer portal means as a concept: somehow you want to consume all those great services. So we did a lot of that kind of work, and I think it makes things really exciting for people. And you mentioned developers, which is very important. The developer is the user, but honestly, we are not selling to the developer. We are selling to the SRE, the IT team, the platform owner, or the OpenShift owner, right? And when we sell to them, their user experience needs to be better too, because honestly, this is hard to manage. It's not an easy thing to do, right?
For instance, today, if you want to consume Istio, you need to stay at n minus one, which basically means you need to upgrade your system every six months or so. Honestly, I don't see our users doing that. We have a lot of OpenShift users, and I can tell you they are not upgrading every six months. So the question is, how can we help them? With our product, it's n minus four, plus we're backporting all the patches and CVE fixes all the way to n minus four. Even that little thing, which seems minor, or maybe they want FIPS compliance, maybe they want ARM builds because they want to save money: the lifecycle of Istio, install and upgrade, that alone I think is extremely powerful. And as I said, we're doing exactly the same thing right now with Cilium, because in a nutshell, we're about application networking, right? This is also something we care about, and we can do some interesting defense in depth, because we own both layer four and layer seven.

Yeah, we should talk about RHEL adoption at some point. That can be a slow activity. So, I know you wanted to talk a little bit about what's going on with Envoy and what Solo's been doing with Envoy, and I was thinking we'd talk a little more about forward-looking stuff. So if you could tell us a little about what you've been doing with the Envoy proxy, that would be cool.

Yeah, so we've been working on Envoy for, I don't know, five years, since the company started; obviously that was our main focus. Because, as I said, when I looked at the market back then, I saw the service mesh, but Istio wasn't ready. It was clear to me that it would take a long, long time until they got it right. So in the meantime, I said, okay, obviously I believe this thing is going to be everywhere one day. So what can I do in the meantime? I'm a startup, right?
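The n-minus-one versus n-minus-four point is easy to miss in passing, so here is a small sketch of what it means in practice, assuming (as Idit does) roughly one Istio minor release every six months. The version numbers and cadence here are illustrative, not a statement of Istio's actual release schedule.

```python
# Sketch of support windows: "n minus k" means the latest minor plus the
# k minors behind it still receive patches. Version numbers are illustrative.

def supported_minors(latest_minor, window):
    """Minor versions still in support when 'window' back-versions get patches."""
    return [f"1.{latest_minor - i}" for i in range(window + 1)]

def months_before_forced_upgrade(window, months_per_release=6):
    """How long one version stays supported before you must upgrade."""
    return (window + 1) * months_per_release

# Upstream-style n-1: you fall out of support after roughly a year.
print(supported_minors(17, 1))             # ['1.17', '1.16']
print(months_before_forced_upgrade(1))     # 12

# Vendor-style n-4: patches and CVE fixes flow back four minors.
print(supported_minors(17, 4))             # ['1.17', '1.16', '1.15', '1.14', '1.13']
print(months_before_forced_upgrade(4))     # 30
```

For an enterprise that upgrades infrastructure yearly at best, the difference between a 12-month and a 30-month support window is the difference between constantly chasing releases and a manageable lifecycle.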
I mean, I needed to figure out how to eventually create a product. I couldn't just sit around for five years. So I tried to figure out what I could build that I could sell today, and today was five years ago, but that would make us very attractive once service mesh was everywhere. And I bet on the proxy. Envoy got it right from the first go, which is not surprising, because its creators had built that kind of thing a few times before. So when I saw it, I said, okay, that's really, really powerful. This is the thing that had honestly matured; it had already been running at Lyft, in production, for a few years. And I really liked the fact that it was different from anything else. For instance, compared to NGINX or HAProxy, it was API-driven. And you are able to customize it, because there is what's called the filter chain, so you can put in your own logic and so on. That was honestly really, really important to us. So we took the proxy and started to look at what we could do with it. And honestly, someone asked me one time, what is the best product? To me, the best product is the product that people are using. So what was important to me is that, yes, it would take time for this thing to fall into place, but if I could run it in production, get familiar with it, make it better, make sure it works at scale, that would give me a huge benefit. So what I did is I started with the API gateway market. I targeted the API gateway market and said, okay, that's a market that honestly, I think, hasn't changed to this day. The only thing that changed there is the messaging, the marketing: instead of API gateway, it became API gateway for microservices. But that's all that changed, honestly.
So we basically built the best API gateway, or at least the API gateway that I would want to run in production: it's CRD-based. Think about it, right? We were changing the world with DevOps, we were doing containers and Kubernetes, all these exciting things, and then what I'd have to run is this huge monolithic, active-active Cassandra cluster? It just did not feel right to me. So we built Gloo. And by the way, when you described it earlier, you said that what we need to do is glue the application back together. That's exactly why we call it Gloo. My English is not really great, so every time I tried to describe what we were going to do, I said, you know, we're gluing, we're gluing. It was the best word I found, so we just changed the spelling, and that's the reason we call it what we call it. So we started with this, which was really good. The advantage it gave us is, number one, we've been running Envoy in production probably longer than anybody, I mean, not counting Lyft and those guys, but you know what I mean. We know Envoy; we've seen it under stress, under huge stress. Our product is most likely running everywhere right now on the gateway, even setting aside the mesh. So I felt that gave us a lot of tooling to see what can go wrong: how do you upgrade it, how do you manage it, how do you give the best experience? And the second thing that was really, really good for us is that we work with a lot of customers, and we're a billion-dollar company right now mainly because we have a lot of customers. So we learned a lot from them about what they are looking for: why would they move to something like Envoy, what features are they missing versus NGINX or anything else? And all this time, for the last five years, we've been enhancing and extending Envoy.
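The filter chain Idit keeps coming back to is the key extensibility idea: a request passes through an ordered list of filters, each of which can transform it or short-circuit the chain. Here is a toy model of that pattern in Python; it is not Envoy's actual C++ API, and all filter names and fields are invented for illustration.

```python
# Toy model of a proxy filter chain: each filter either returns a (possibly
# transformed) request, or None to stop the chain and reject the request.

def auth_filter(request):
    """Reject requests without the right credential (external-auth style)."""
    if request.get("api_key") != "secret":
        return None                      # short-circuit: reject
    return request

def transformation_filter(request):
    """Mutate the request on its way through (transformation-filter style)."""
    request["headers"] = {**request.get("headers", {}), "x-transformed": "true"}
    return request

def make_rate_limit_filter(limit=3):
    """Allow only 'limit' requests through (rate-limiting style)."""
    counter = {"n": 0}
    def rate_limit_filter(request):
        counter["n"] += 1
        return request if counter["n"] <= limit else None
    return rate_limit_filter

def run_chain(request, filters):
    """Apply each filter in order; any filter may short-circuit with None."""
    for f in filters:
        request = f(request)
        if request is None:
            return None
    return request

chain = [auth_filter, transformation_filter, make_rate_limit_filter(limit=3)]

ok = run_chain({"api_key": "secret", "path": "/billing"}, chain)
print(ok["headers"]["x-transformed"])     # the transformation filter ran

bad = run_chain({"api_key": "wrong", "path": "/billing"}, chain)
print(bad)                                # None: rejected by the auth filter
```

In Envoy, the same shape exists in C++ (and via extension mechanisms such as WebAssembly and Lua filters), which is what lets a vendor slot custom logic like a GraphQL resolver directly into the proxy.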
So it started with simple stuff: the transformation filter was very popular for us, or gRPC support, simple stuff like that, rate limiting, external auth. But because we became so familiar with it, a lot of the stuff we're doing recently is enhancement that is honestly a little bit crazy. For instance, we built GraphQL into Envoy. Envoy has the ability to have filters, as I said, and each filter is basically C++ async code, which means you can leverage a lot of those libraries. Now, GraphQL by itself is very complex, and C++ async code is really, really complex. Merging them together and making sure it scales was, honestly, a nightmare. But we have an amazing team, so we worked on it for over a year and we got it done. And that's huge, because think about what our customers are telling us: a lot of their application teams are basically trying to reinvent the API gateway right now by putting up something like an Apollo server, or writing their own GraphQL server. And now you have two hops on every request, because it has to go through the proxy and then to this thing they built, usually in Node.js. Honestly, not the greatest thing I'd want to see in my infrastructure. So we basically united them together and taught Envoy how to speak GraphQL. When a GraphQL request comes to Envoy, it knows how to resolve it, everything in it, including the fact that we can take advantage of a lot of what Envoy gives us out of the box, like security and observability.

Right, right. So it's really, really powerful. Actually, you touched on it a little bit, but I noticed Solo is investing in GraphQL. Why should I care about GraphQL? What's interesting about it? It feels a little to me like writing SQL in JavaScript.
You know, what's it for, in your opinion? Yeah, so I think the biggest advantage of GraphQL is the velocity that you can get from your team by using it. I'll give you a simple example, right? We needed to do SOC 2 compliance, and we needed to get some data for our auditors. So we went to GitHub and tried to collect all this data. If we couldn't use GraphQL, we would have needed to do a bunch of REST queries to everywhere, and that would be really, really hard, because you need to merge the results and then write a lot of logic and so on. Instead, we just asked one question in GraphQL, boom, we got all the data. It saved us a lot of time. So that's us, right? That's just one example of a use case. But think about people whose whole job is basically writing a UI application. If you think about it, there's the amount of work they do collecting and merging data, and the performance cost of getting all this data and then merging it themselves. Honestly, it's really, really hard, or worse, they either have to do it on the client side, or they need to go to the server people and say, hey, can you add these things? Again, more complexity, and now, go figure, maybe the backend engineer is busy and you cannot do that. So all this process is honestly very, very annoying and just slows down your team. So I think the advantage of GraphQL, and the tooling the community has built around it, is that it's damn simple. Everybody can do it, and they can do it really quick. They can write an application in no time, versus if you're doing REST, it's a little more complex and takes more time. So yeah, I'm a big fan, but honestly, it's not only me. The reason we did it is because we heard it from the customers.
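The data-merging pain described above can be sketched without any network at all. In this Python sketch, in-memory dicts stand in for three hypothetical REST endpoints, and a tiny resolver plays the role of a GraphQL server answering one nested query; all the endpoint names and fields are invented for illustration, not GitHub's actual APIs.

```python
# Stand-ins for three hypothetical REST endpoints (all data invented).
USERS = {1: {"id": 1, "name": "Ada"}}
REPOS = {1: [{"id": 10, "name": "gateway"}]}             # keyed by user id
ISSUES = {10: [{"id": 100, "title": "rate limit bug"}]}  # keyed by repo id

def rest_style(user_id):
    """REST style: one call per resource, then merge by hand on the client."""
    user = dict(USERS[user_id])                 # GET /users/{id}
    user["repos"] = []
    for repo in REPOS[user_id]:                 # GET /users/{id}/repos
        repo = dict(repo)
        repo["issues"] = ISSUES[repo["id"]]     # GET /repos/{id}/issues (N+1 calls)
        user["repos"].append(repo)
    return user

def graphql_style(user_id, selection):
    """GraphQL style: one request; the server walks the selection and merges."""
    resolvers = {
        "name":  lambda uid: USERS[uid]["name"],
        "repos": lambda uid: [
            {"name": r["name"],
             "issues": [i["title"] for i in ISSUES[r["id"]]]}
            for r in REPOS[uid]
        ],
    }
    return {field: resolvers[field](user_id) for field in selection}

# One "query" replaces several round trips plus client-side merging,
# and the client gets exactly the fields it asked for.
result = graphql_style(1, ["name", "repos"])
```

The point of the sketch is the shape of the work, not the wire protocol: the REST variant multiplies round trips and puts the merge logic in every client, while the GraphQL variant centralizes it behind one request.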
And if you look, for instance, at the trends on Google, you know, go to Google and check how many people are searching for REST versus GraphQL, you will see it, right? People are very, very interested in GraphQL. It's really big for front-end engineers. Really, that's the biggest cool thing happening there. Yeah, like with many things, it's funny, because as I recall, GraphQL has actually been around for quite a long time, and I used to use it with graph databases. So sometimes when something starts being used slightly differently, I have to rewrap my brain around it, because I used to use it this way and now people are doing something different with it. Yeah, so I've been playing around with it a little bit. Definitely the ability to connect information together that doesn't normally go together in your data mart or whatever is a huge advantage. And the fact that so many tools, like Solo's, are almost natively processing it, that's another huge advantage. I think all the databases are starting to be all the things. Like, a database these days is not just relational, or just a document store, or just key-value. They're often doing them all, and based on the query you're asking, they're starting to translate it into the best method to get at it. So I think it's very good. But it's not only the database, right? It's all your services. Think about it. We created all those little microservices everywhere. And now think about the UI person who needs to go and connect to 10 microservices in order to build a very simple UI. That's a lot of work. If we can enable him to do all of this in one query, that's really, really big. Performance-wise, and also making sure he doesn't need to worry about security. And that's where Solo is going.
I was going to say, the security and the performance and all that stuff, which means that front-end UI developer now doesn't have to go and figure out how to securely access each of those, or rate limit, you know, because some of them are more expensive than others, et cetera, right? Oh, and a lot of these teams, honestly, they're either trying to rebuild it themselves, right? Which is ridiculous, because you have the proxy right next to it that already knows how to do it very well. Or, the second thing, they go to the security people and say, look, don't worry, security will be handled by the application. It shouldn't be, right? I mean, the whole point of the service mesh is to make sure people don't have to trust every engineer to remember to add those libraries, something like that, right? Right. So to me, this is really big, the fact that we can right now, even at the level of the resolver, do whatever you want, OPA or OIDC or anything. I think it's a big one. Yeah, I mean, it really opens up your ability to hire as well, right? Because as soon as you don't have every junior engineer worrying about security, right? It's risky to have junior engineers worrying about security. That's something you get better at with experience, right? So if you can have the senior engineers looking at the overarching thing, and then let the junior engineers put the pieces together, I think it's much more scalable for your organization, as well as your software. Yeah. Which is huge.
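The kind of rate limiting a gateway centralizes, so that individual teams don't reimplement it, often comes down to a token bucket per route or per client. This is a minimal, self-contained sketch of that idea in Python; the class and its parameters are illustrative, not Envoy's actual rate-limit API.

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Return True if one request may pass right now, consuming a token."""
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A gateway would keep one bucket per route (or per client), so the services
# behind it never have to implement throttling themselves. Limits invented.
buckets = {
    "/expensive": TokenBucket(rate=1.0, capacity=2.0),
    "/cheap": TokenBucket(rate=100.0, capacity=200.0),
}

def handle(route):
    return 200 if buckets[route].allow() else 429  # 429 Too Many Requests
```

The `now` parameter exists only to make the refill math easy to test deterministically; in a real proxy the clock and the limit state would live in the data plane, shared across workers.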
But honestly, one thing I'm just going to add, as you said: even if he's senior, people make mistakes. It's the difference between trusting people to remember to do this and making sure it will be enforced by the mesh. And I think it's better to make sure that the people who know what it is take responsibility for it. Yeah, it was funny. Idit and I had the same thought when you said senior people get better at this. I was like, do we? Well, I guess, yeah, truly at the end of the day, right, what we want to do is feed into the programmer's default, which is lazy. So, as long as we can solve the problem once, and solve it correctly the first time, and then never touch it again, we will generally be a much happier set of campers, right? So the other one I wanted to ask you about, just because I also see it very much on the horizon and not a lot in play yet, is WebAssembly. You know, I was talking to a friend of mine about it the other day, and he's like, oh yeah, it's going to take over the entire earth. It will be the hottest thing since sliced bread, if it's not already. What are your thoughts on WebAssembly? Is that something that an audience of Kubernetes deployers and operators should be thinking about? Is it important and on the horizon? So it's a great question. I mean, when you're thinking about WebAssembly, it's a technology, and it's leveraged at a lot of layers, right? One of the layers where it started is the browser, and there it makes a lot of sense, right? The reason they started it, I think it was the Mozilla team, is that they wanted to allow people to create something that is extremely, extremely fast, right?
So you don't have to write only JavaScript. Maybe you want to write C++ or anything else, and it basically enables you to extend your application or your browser, but we need to make sure you cannot crash the browser, right? Because that's too important. So that's where WebAssembly started. This is great technology, because the way they built it for that purpose gives it a lot of advantages. First of all, you can write it in a lot of languages, and it's translated to low-level code. Second of all, it's dynamically loaded into the browser, which means you can dynamically load it somewhere else too. And the last one is that it's sandboxed, basically contained. So all of this was very great. And, for instance, when we looked at it with Google: a lot of the stuff we built in Envoy extends Envoy through this filter chain ability, which means you need to write C++ async code and then recompile Envoy. Honestly, that's not the most fun thing to do. It's not a big problem for us, but the question was, and this is why Istio did that round trip at the beginning with Mixer, how can we let people who are not Solo engineers extend Envoy, right? And add logic. And I think WebAssembly was really nice because it needs exactly the same properties. You need to make sure it's not going to take down Envoy. You need to make sure it's contained. It needs to be dynamically loaded into Envoy, because you don't want people to recompile Envoy. And ideally we would want them to do it in any language they want, but with performance close to native. So that's exactly it: WebAssembly could be this thing that extends Envoy, and that's what we built together with Google to basically enable that.
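The filter-chain model being extended here can be pictured as a pipeline of small, isolated handlers that are registered at runtime rather than compiled in. The Python sketch below is only an analogy for Envoy's filter chain; the names are invented, and the "containment" (a catch-all except) is far weaker than real Wasm sandboxing, which isolates memory and CPU as well.

```python
from typing import Callable, List

# A "filter" takes a request dict and returns it (possibly modified),
# or raises to reject it -- loosely analogous to an Envoy HTTP filter.
Filter = Callable[[dict], dict]

class FilterChain:
    def __init__(self):
        self.filters: List[Filter] = []

    def register(self, f: Filter) -> None:
        # Dynamically add a filter: no "recompile" of the proxy needed.
        self.filters.append(f)

    def handle(self, request: dict) -> dict:
        for f in self.filters:
            try:
                request = f(request)
            except Exception as exc:
                # A failing plugin must not take down the proxy: contain it.
                return {"status": 500, "error": f"filter failed: {exc}"}
        return {"status": 200, **request}

def add_trace_header(req: dict) -> dict:
    headers = dict(req.get("headers", {}))
    headers["x-traced"] = "1"
    return {**req, "headers": headers}

def allow_api_only(req: dict) -> dict:
    if not req.get("path", "").startswith("/api"):
        raise ValueError("forbidden path")
    return req

chain = FilterChain()
chain.register(add_trace_header)
chain.register(allow_api_only)

ok = chain.handle({"path": "/api/users"})   # passes both filters
bad = chain.handle({"path": "/admin"})      # rejected; the "proxy" survives
```

The three properties called out above map directly onto the sketch: filters are added at runtime (dynamic loading), a crashing filter is contained (sandboxing), and nothing here cares what language the filter was originally written in, which is the piece Wasm actually provides.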
So we did this, and Solo worked a lot on the user experience and how to make it simple to consume. But a few things happened in the market, and this is very, very interesting in my opinion. So, just to recap: it started in the browser, we brought it to Envoy, but there are also a lot of people trying to replace Docker, or containers, or containerd with it. So, again, three different uses, right? In the browser it makes a lot of sense, and it's fantastic in Envoy, well, we'll talk about that in a second, because there are some issues. As a container replacement, honestly, I don't think the benefit will be big enough. Think about why Docker caught on versus, I don't know, other technologies like the unikernels we tried before. The reason is that people will change the whole market only if the benefit is huge. Like, you say, well, I'll need to pay a price, for instance, to create tooling for debugging or something like that, but I know that when I do, I'll get the best experience ever and it will be fantastic. That's why Docker caught on, right? It was 10 times better than VMs. When we came along with unikernels, for instance, people said, well, we already have Docker. Yeah, it's a little bit faster, but who cares? Should I reinvent everything right now? It doesn't make any sense. But even over LXC, right? It wasn't even just performance. It was ease of use. Docker was just the perfect storm, right? Exactly, exactly. So to me, it's exactly the same question with Wasm. Does it make sense to run Wasm instead of containers? Yeah, maybe. Sometimes there's a case, but honestly, it's not that easy. You know what I mean? You're not getting dramatically better performance. So why? That's the question I would ask myself.
So honestly, personally, I know people are working on this and are excited about it. I just don't think the benefit is big enough that the whole community will change right now and run Wasm instead of Docker. But then you could write your container runtime in, like, Haskell or something. So, you know, maybe, maybe. That was a joke. Yeah, so it's still, to me, not that exciting. Now, in Envoy, I will say that here's the problem. You know, I'm big on technology. That's what I care about, but honestly, I have to recognize what's going on in the market. As I said, Wasm got a lot of excitement, specifically for things like the web browser as well as Envoy. But for it to be very good in Envoy, we need tooling. And unfortunately, some changes happened, right? For instance, the Mozilla team, I think, was laid off, and a lot of them went to other places. So suddenly a lot of the contributors had less focus on this. There was a lot of politics involved and so on. When these things happen, honestly, it slows down the community and the technology itself. And when the technology slows down, honestly, it's hard to adopt. So is that going to be the next thing that happens? I mean, it's a good technology. It depends on whether the community will come together and put in the effort. But right now, I don't see a lot of work being done there, specifically on the Envoy side. And that worries me, because it feels to me that even though it's a nice idea, there just aren't a lot of people using it, which is a problem. If people aren't using it, you don't know where the problems are. So to me, honestly, that's it. I was just going to say, so if we do have developers in the audience, right, who want to contribute to open source, would this be an interesting place to do so?
Is this a place where you could kind of make a career out of building out how WebAssembly works? So yeah, but again, one person probably isn't going to be enough. We need a community to come together and decide to make it successful. Again, as I said, I don't know if you remember, but when we announced WebAssembly support for Envoy, we worked on the user experience; we created Docker-like tooling and so on. The thing is, there was a lot of promise but less implementation. So it's getting better, but honestly, just too slowly. That's the problem I have. You know, we talked about the ability to write in any language, but honestly, there aren't a lot of people working on this. So to me, that's my worry. As someone who's looking at this, I say, well, I don't know that it's going as fast as I hoped it would. And eventually, if it's not, no one will use it. That's what will happen. In Envoy specifically, I mean. I think the browser is at a different stage, with different use cases. So that's my opinion, a personal opinion. I think in general, what people are trying to build in every organization is basically plugins. So the thing that's more sexy right now is eBPF. It's not that different. It's basically plugging into the operating system. And it's good, because if you think about it, a lot of the stuff we build always starts at a high level, because it's easier to implement, and then we optimize it as low in the stack as we can. So I think that's what creates a lot of the optimization for service meshes and other observability tooling and so on in the kernel. We're pretty excited about that, and we are very happy to be working on it. A little sidebar: we actually talked to Liz Rice about eBPF in episode seven, if you want to go back and watch it, for anybody who wants to hear more about that.
I think WebAssembly is also super interesting at the Envoy level, because, I mean, who doesn't want dynamically loaded libraries that are replaceable on the fly? That seems like a very strong feature. So I really like them in Envoy. I think they're really interesting. But I hear you, the community has to be there for it. And the uses of it need to be among somebody's most important components. We need a bunch of people who are mainlining that toolchain as part of their deployment. Otherwise, there's not enough focus on it. But like I said, I think it's very cool. No, I hear you. I was very excited about that. All right, so let's see, moving on. I don't know, did you have more questions about that, Josh, or do you want to move on to something else? Okay, cool. All right, so I think we should ask: how has your experience been as a community member, trying to run this startup and get it off the ground, and it sounds like quite successfully, and doing that in the Kubernetes community? Is the community embracing things like people doing startups? Or is there a certain amount of fear of capitalism? Have you found a nice happy medium there? How has that experience been, being a community member and also trying to run this successful startup? Yeah, I think the community itself is awesome in Kubernetes, and I think that's what makes it so successful. I think that, as always, there are the people, the community members, who are having a blast, doing great, and being very accepting and awesome. And there are the corporations, which can be less nice. But I think this is fine. In a nutshell, I think the Kubernetes ecosystem is great. As I said, they are so welcoming and so inclusive in everything we do. And to me, personally, it's not only Kubernetes; Kubernetes is just one of the environments.
I mean, the other one is Istio, right? That community is insanely good, and there's the awesome Envoy community, right? And Cilium, right? I mean, Cilium right now is becoming a CNCF project. Honestly, for every good project to be successful, you need more than one provider selling it; otherwise it's basically their product, not the community's. So I think you guys know that, but Solo is selling Cilium, and we're enhancing it quite a lot, and we're excited about that, because I think it's going to give us another layer where we can optimize the mesh and optimize the applications running on it, and so on. So to me, the communities are great. As I said, it's interesting because each of them is different, right? Istio from the beginning was very inclusive, very interested in bringing in new people and just helping and being together. And I think I saw the same thing in Kubernetes. Cilium is a new community, so it's going to be interesting, because right now it's mainly one vendor. So hopefully they will know how to separate between "I'm the vendor and I need to make money" and "I'm an open source CNCF project," which means I want people to come and help me make it better. It will be interesting to see which direction it goes. Interesting. Yeah. Well, I think we should probably wrap it up there. But thanks so much for sharing your thoughts with us. Like you said, the whole point of the show is to get some insight from people who are in the trenches, working on this stuff, about what they think is going to happen next. And I really do think you've been able to share that with us. I really appreciate it. So thank you so much. Yeah, thanks so much for having me. It was a blast.
Indeed, I want to extend my thanks to you for joining us as well. I really enjoyed the breakdown of your forward analysis of the space you ended up building a company in. I thought that was actually quite revelatory, really interesting. So thank you. Oh, man. Thank you so much. Thanks. All right. So we wanted to invite Tam, who is our community manager for Kubernetes by Example. And we went back and forth on pronouncing her last name, and I'll say Nguyen, if I recall correctly. So we just wanted to show off one of the learning paths on Kubernetes by Example that is coming out. Is it out, or should it be out by now? It's already out. Yeah, five new tracks at KubeCon EU a couple of weeks ago. So hi, everyone. Thanks for having me, Langdon and Josh. Good to see you. As we used to say on OpenShift TV on all our different shows: on the show, we kind of live in the future, so we're never quite sure if the real world has caught up or not yet. So yeah. So I have clicked the share button, but I'm not sure if it is sharing to the actual stream. I don't see you sharing right now. I don't either, which is making me concerned. Okay. Well, you troubleshoot. Well, as far as I can tell, it is sharing. I don't know why it's not. Yeah, I don't see it on my side. Josh, do you see it at all? I don't see it on Twitch. Yeah, I do not see it on my side. It's not showing up on YouTube either. Okay. This is why we don't normally share the screen, because we don't know how. Yeah. It totally seems to indicate that I'm sharing. I wonder if, in the back end, somebody needs to turn on another person, like a fourth person that is the screen share? Maybe that's the challenge. Oh, look at that, magic. Either somebody did something or my double-clicking worked. All right. So you were going to show us where it was, right? So I've pulled it up, because it's cool. Yeah.
So first, for audience members who may not know, Kube by Example is a free learning community site, designed so that you can learn by example. So if you go to kubebyexample.com, Langdon, I can walk you through some of our new stuff. Right now we have 15 learning paths, and we introduced five at KubeCon a couple of weeks ago. So that's all of our stuff right now, which is pretty extensive. All of our learning paths are developed by contributors, and there's a mix of content. There are hands-on guided exercises so that you can learn by doing, and there are video tutorials so that you can learn by watching. So, since this show was on service mesh, let's click on the Istio Fundamentals track. Yeah. That's a new track we announced a couple of weeks ago. Shout-out to Andres Hernandez-Bermudas from Mexico City for creating this new content for us. There are a couple of guided exercises and a couple of lectures where you can go in and learn everything you want to learn about Istio. And it's really cool. This is the written part. Where are the videos you mentioned? This one doesn't have video tracks yet. It depends on the contributors. For instance, our Rook content right now is all videos. So yeah, it just varies between different learning paths. Gotcha. But yeah, the Istio one has guided exercises. Right, right. At least for me, that's my preference anyway. It's funny, I do a lot of video, but I actually like to read stuff. Although asciinema is very, very cool if you haven't checked it out; it lets you actually cut and paste from the videos, which makes things handy. Yeah, nice. We'll have to check that out, because one of the things we definitely want to do with our videos is transcribe them. That's the feedback we got from some people: I just want to copy and paste some of this stuff. Yeah, yeah.
I wrote a horrible, hacky version of it in Python many years ago, because it drove me nuts that you could never select anything out of the videos, right? And then asciinema came along and did it properly. Nice. But yeah, so here's Istio. Where is the best place for folks to go to perform these exercises? So, you know, I've got a step-by-step guide in front of me; what resources would you point people to for getting a cluster to run these things on? Yeah, so, Langdon, if you want to go back up to the nav and into Resources, there's a Get Started, Try Kubernetes landing page that we have, and you can build and deploy in a real environment. So you get to choose: either you can learn with minikube, or you can check out our Red Hat Developer Sandbox. Right on. Yeah, and all the lessons are, yep, there it is right there, the Sandbox, so you can just sign up for that and start building. So, you can also do the local version of the Sandbox too, right? But I think it changed names. So, yeah, you can also download that, but minikube is really quite nice and quite easy, and pretty easy to plug things into. I was just experimenting with it over the weekend because I hadn't used it in a while. Yeah, so that's pretty cool. Yeah, absolutely. And there are more resources too. So, like, I was looking through the intro to Kubernetes, right? And there are a lot of videos here too, which is good for when you're trying to explain what something is, rather than trying to walk through how to do something. But I really like that it has every single little piece, right? So you can go and be like, okay, hey, I saw a StatefulSet somewhere, right? Let me go find what that is. Yeah, absolutely. And then there's a new feature that we launched.
So, at the end of each learning path there's a Beyond KBE section where you can find links to things like repos and prerequisites. But something that we recently launched, and you are just on it, is our KBE community forum. It's in beta right now. But yeah, if you click into it, it's a live forum. You can start a discussion, you can talk to our contributors, you can let us know what content you want to see, or even make suggestions for future KBE Insider guests. So, yeah, check out our forum. It's something we just launched, and I think it'll be a good feature for the community to ask real questions live. Yeah, definitely. The more feedback we can get on topics people want to hear about, or on people they want us to try to interview, the easier it is for us to know what direction you'd like us to go. Cool. And then we're also running a contest right now. The first three folks who start a discussion on our new KBE community forum will get an awesome KBE swag bundle, which includes a hoodie like the one I'm wearing right now. So, yeah, first three people, so look out for that. Yeah, so that's all the new stuff. We're always looking for feedback. We release every KubeCon, so KubeCon EU was our last release, and our next release will be at KubeCon North America in October. We're planning new content and new features for the site, but we'd love to hear what the community would like to see. Yeah, and basically that's a perk of watching live, right? You have a chance of entering the contest and being first, and getting swag is always the most important thing. Cool. So, should we wrap up the show there and call that an episode? Absolutely. All right. Do we know who's on next time?