Now, I want to give you a formal introduction, because it would be terribly rude of me not to. Everybody, please put your hands together and give a very warm welcome to Idit, CEO of Solo.io. Take it away, Idit.

Alright, so as Nick said, I'm Idit Levine, founder and CEO of Solo.io. Solo.io is a company focused on service mesh and on projects like Cilium and Istio, and we've been doing Envoy for a long, long time. I'm not going to focus a lot on the company itself; I'm going to focus on what I think will be the future of Istio. In general, it's really simple where we play as a company. We're all here because Kubernetes has become very meaningful for us. It's what at Solo.io we like to call cloud native 1.0. What that means is: we adopted microservices, we adopted Kubernetes, and we were pretty excited about that. But honestly, that's also when the problems started. Personally, I don't think that when we adopt this amazing technology, which gives us so much benefit and velocity, it makes a lot of sense to keep, for instance, the same legacy API gateway running active-active Cassandra behind the scenes. I think that when you make this move, it makes sense to fit into the Kubernetes ecosystem and do everything declaratively and eventually consistent, so GitOps can play a big role here. Then there is everything that leads to observability: now you have hundreds and thousands of microservices everywhere, plus replicas of them, and you don't really know where a request is flowing. So how do you understand what's going on in your infrastructure? And because everything is now on the network, how do you make sure you can trust it? Security is a big player here. And I think this is why most of us are here.
Multicluster: again, a lot of our customers are running in multicluster, multicloud environments, with failover and so on. And there is some interesting technology we can leverage there, like serverless and GraphQL, which is very interesting. So what Solo.io is focusing on is what we call cloud native 2.0, which is basically all those pieces on top that make you successful in cloud native 1.0. So what do we do? We have a platform, Gloo, and there are three pillars to it. The first one is Gloo Gateway, really nicely aligned with the Gateway API. What we did is take the Istio ingress and enhance it with plugins and Envoy filters to make it a full-blown API gateway. So it's Istio-native, the only one that exists in the market: we take upstream Istio and build the API gateway on top of it. That's the first one; again, very nicely aligned with the Gateway API. The second one is Gloo Mesh, which is basically our Istio distro. Besides the fact that, again, we take upstream Istio, we add a lot of extensibility, whether that's cert rotation, for instance, or everything around multicluster: how do you span clusters, how do you fail over, how do you do locality-based load balancing, where you start with the closest instance and, if it's not there, fail over to the next one, and so on. So there are a lot of enhancements we made on top of regular upstream Istio, besides, of course, making sure the CVEs are patched back to n minus four and that you're successful in production. Last but not least is Gloo Network, which is how we believe we can take the lower level and enhance the service mesh. This is the stuff related to the CNI.
Specifically, we can drive any CNI that you have, but if you ask us for one, we will give you Cilium. Exactly like we do with Istio: upstream Cilium with enhancements, making sure it's production-ready, and so on. Besides that, we leverage a lot of eBPF to enhance it: everything related to layer 4 observability, but also layer 7 observability, and acceleration of your network, how we can redirect traffic, which we'll come back to when we talk about ambient. So in a nutshell, those are the three pillars; again, nicely lined up for a market that wants to get mesh, networking, and the CNI right. We also have a feature coming on top of it called GraphQL: we basically teach Envoy to speak the GraphQL language and become the GraphQL server, and it comes on top of those features. So this is great, and this is what we're doing. But we're also big believers that instead of guessing what the future is, you should try to create it. You're all in the service mesh space, so you know that the model most people use today is the sidecar, which means that a request goes to the first sidecar, then to the second sidecar, and then to the destination. This is great, but there are some problems with this model. The first set of problems is operational: the sidecar makes things harder to operate. For instance, say you want to upgrade your microservice. It comes up, but what if the sidecar doesn't come up immediately? That happens, and you can't 100% control it. The problem is that if your application is something like a SQL database, there's a good chance it will crash.
So the request will go out and you will lose it. The second example, and I'm just giving some examples we see from our customers: say you want to run a job. Great, the job is done. But guess what, your sidecar is still there, and it's not going to be cleaned up. So theoretically you might be running a lot of sidecars in your infrastructure that don't even have an application attached to them. It's just a bit of waste. Then there's performance: going from one microservice straight to another will always be faster than going through the sidecars. It is what it is. And last but not least, honestly, there are a lot of proxies out there, and that's pretty expensive. Okay. So a year and a half ago, after working with a lot of customers, we were excited to try to attack that problem. We created an internal project and started working on how to create the future of Istio. About eight months later we discovered that Google was doing the same thing internally, so we decided to partner, and we worked on it under NDA, us and Google. A month ago we announced ambient. So what is ambient, and how does it make our lives easier? It's very simple, and again, there will be a very deep-dive talk here, so I'll go fast. Basically, we make sure to separate layer 4, which is honestly a very simple operation, from layer 7. We put a layer 4 proxy on the node. It's very simple; right now it's Envoy, and we're working, inside the Istio community and outside it, on a Rust implementation, because we think we can get better performance. So the request goes there and gets encrypted. That's very, very important: mTLS is one of the reasons a lot of people adopt a service mesh in the first place. Once it's encrypted, it goes to the other side.
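As a concrete aside (not shown in the talk itself): in the open-source ambient implementation, opting workloads into this node-level layer 4 handling is done per namespace. A minimal sketch, assuming a recent Istio release with ambient support; the exact profile name and label have varied across versions, so treat these commands as illustrative:

```shell
# Install Istio with the ambient profile, which ships the
# shared per-node layer 4 proxy instead of per-pod sidecars.
istioctl install --set profile=ambient -y

# Opt an entire namespace into the mesh. No sidecar injection,
# no pod restarts; the node-level proxy picks up the traffic.
kubectl label namespace demo istio.io/dataplane-mode=ambient

# Opting back out is just as transparent: remove the label.
kubectl label namespace demo istio.io/dataplane-mode-
```

These commands require a running cluster; the point is only that enrollment and removal are a label, not a redeploy.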
And from there it goes to the destination. That's not that different, besides the fact that we got rid of a lot of the sidecars. But if what you want is layer 7, that's where the complex stuff happens. Layer 4 is honestly a very simple operation, encryption and not much more; there aren't going to be any problems there. But at layer 7 we're doing things that are a little more complicated. You do not want to share that; it's not good for multi-tenancy. And this is why we took it out, into what we're calling a waypoint. When the request goes out, it still needs to be encrypted, but then it goes to the waypoint, which does whatever layer 7 processing you want; then it goes to the layer 4 proxy and on to the service. That's the very high-level picture, and again, there is a talk on it, so I'm not going to go too deep. The idea here is basically to reduce the cost drastically, which is important to some people, by going down to one shared proxy. We at Solo.io have written a lot of blog posts, and one of them was about what this does to your wallet: we see a three-times reduction, and we think we can get it to ten times. So we're really excited, and we'll keep making it better. I'm not going to go into the performance numbers, mainly because it's a slightly complex slide, but in a nutshell we believe it will be faster, because we trade the sidecars for two layer 4 proxies, which are relatively fast, plus at most one layer 7 proxy, which is usually where the time goes. And the most important thing is the operational side; to me, honestly, this is the most important. A lot of the customers adopting service mesh today sometimes only want one piece of it, but they still need to buy into this whole sidecar model.
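To make the wallet math above concrete, here is a toy back-of-the-envelope comparison of per-pod sidecars versus ambient's shared per-node proxies plus a handful of waypoints. Every number is hypothetical, chosen only to show the shape of the saving, not to reproduce the 3x figure from the talk:

```python
# Toy model: total proxy memory under the sidecar model vs. the
# ambient model. All sizes and counts below are hypothetical.

def sidecar_total_mb(pods: int, per_sidecar_mb: int) -> int:
    """Sidecar model: one proxy per pod."""
    return pods * per_sidecar_mb

def ambient_total_mb(nodes: int, per_node_proxy_mb: int,
                     waypoints: int, per_waypoint_mb: int) -> int:
    """Ambient model: one shared L4 proxy per node, plus L7
    waypoint proxies only for services that need layer 7 policy."""
    return nodes * per_node_proxy_mb + waypoints * per_waypoint_mb

# Hypothetical cluster: 200 pods on 20 nodes, 10 services needing L7.
sidecars = sidecar_total_mb(pods=200, per_sidecar_mb=60)
ambient = ambient_total_mb(nodes=20, per_node_proxy_mb=50,
                           waypoints=10, per_waypoint_mb=60)
print(sidecars, ambient, round(sidecars / ambient, 1))
# prints: 12000 1600 7.5
```

The structural point is what matters: sidecar cost scales with pod count, while ambient cost scales with node count plus the usually much smaller number of services that actually need layer 7.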
And that's a little too expensive and too complex. With ambient it's very simple: you can apply a mesh, you can put in whatever policy you want, and you can remove the mesh; the application doesn't even know. This is why the name "ambient": it's really, really transparent to the application, and I think this is great. So what did we do? Solo.io and Google wrote a blog post on this and on why we believe it's actually better: we are reducing the cost, we are simplifying the operations, and we are improving the performance, basically everything you need to make this easier to adopt. And again, there is a talk from Google and Solo.io, from Lin and Justin, that I really recommend you go to. We at Solo.io have already put it in our product, so if you are interested in consuming it, that's something we would love to help you with. And this is basically what Solo.io is doing. So I'll summarize by saying what's special about Solo.io, because I think there's something very unique here that's overlooked. In this ecosystem, you will see a lot of projects; there are a lot of open source projects out there, and eventually, as someone consuming them, it's very important to bet on the right one, the one that will win. We all know there was a big fight over orchestration and Kubernetes won, which meant that everybody who had adopted Mesosphere or Docker Swarm back then needed to switch their whole infrastructure, which is really complex. What Solo.io is really good at is recognizing the good ones, the ones that will win. If it's Kubernetes, we were there at the beginning; Envoy, we've been working on for over six years; Istio for the last five years; Cilium, we're really big believers that it's a very, very interesting technology and we'd love to leverage it. And last but not least, we didn't find any good API gateway.
So we built it ourselves. Then, when we recognize those projects, we go and become leaders there. We have a lot of people here at Solo.io who are basically running all those communities, helping, and doing a lot of the work. We contribute to Istio, to Envoy, and honestly to anything we need to make our stack successful. We have two of the five TOC members that Istio has: two from Google, two from Solo.io, and one from IBM. And we are going to continue pushing the boundary with ambient, but not only with ambient. If you look at Solo.io, you will see that we are all about pushing the boundaries and innovation. Look at the work we did with Wasm: most of what people will talk about today around extending the service mesh is something we talked about back in 2018. We built it, we got it wrong, we fixed it, so right now we feel very comfortable about the solution we have. Ambient, we've talked about. eBPF, we are big believers in, and we built Bumblebee, which brings the Docker experience to running eBPF modules. Another point, which is extremely important, is that we have a lot of customers, and I think this is very unique in our ecosystem. We see these environments, we run them, we solve these problems, so we can help everybody else. And we're all about education. Yesterday we had an event with a lot of book signings, books that Lin and Christian, who are in the company, wrote, and so on. So there is a lot about education, and I'm also going to say that it's all for free. You're very welcome to take whatever class you want, and there is a lot there about ambient, about everything that Solo.io is doing. You can even take a certificate if you're interested. Again, it's all for free, so just come and learn. And the last thing is kind of a personal note that I will finish with.
When I started Solo.io, we were a few engineers and me. They were writing code and I was the evangelist. It didn't take me long to understand that I needed to put my ego aside and bring everybody else to the front of the stage, and that's exactly what I'm doing. Solo.io is not me; Solo.io is not Christian, not Lin, not Nick, not any one person. Solo.io is everyone working at Solo.io, and every one of the 200 people working there whom you don't see is an A-plus player, and a lot of them are here. So please, learn from them, talk to them, and let us know how we can help. Thank you so much.