Welcome. Those of you who've been in talks I've given before at KubeCon and other venues know that I like to give a very interactive talk. I interact with the audience a great deal, partly because it makes it more fun for everyone involved — more fun for me, more fun for you — but also because there's a lot to be learned from what's actually going on with your audience. So I'll be asking you a bunch of questions throughout, sometimes for specific answers, sometimes just for a sense of the room. But like any good scientist, I like to calibrate my measurements before we begin. So, quick question: how many of you actually expected to be in the state of cloud native networking talk today? Okay, and how many of you were lost? Okay, good, that's always a good start. Lost travelers are welcome, certainly, but usually not expected.

So when you look at the state of cloud native networking: cloud native as a movement has largely been about dropping the representations of the physical world that we all dealt with in cloud 1.0. We no longer build virtual interfaces and virtual routers and virtual load balancers, virtual everything. We're no longer reconstructing the physical artifacts and just slapping a V in front of them. Instead, we're stopping to ask ourselves what actually serves developers, in terms of enabling them to actually function. And this is incredibly liberating in many ways, but it does have a few side effects that can cause confusion.

So, for how many of you is this slide clear? Okay, all right, so some of you are just pulling my leg, I see that. No, I don't expect this to be clear for anyone. This is a giant collection of logos of projects in the cloud native networking space. As an amorphous jumble, you should be confused by this. But confusion is not actually something you're seeking in a cloud native environment. One of the central maxims of cloud native is minimal toil. And minimal toil is not just about the amount of work you have to do to get what you want; it also includes the cognitive toil involved in understanding what to do. If I give you an interface with two buttons, A and B, but you're going to have to spend six weeks deciding which one to press, that is not a minimal-toil interface. So what I'm going to do for you here today is take this amorphous blob of things and break it down into a structure where you can understand, relatively simply, the options and choices available to you as you try to solve your problems.

So let's start from a very simple workload. Quick question: how many of you have workloads that don't communicate with anything? Really? Okay — sorry? Okay, what does the sandbox workload do? Doesn't talk to anything? Oh, interesting, I would actually like to talk to you about that later. Somebody else had a workload that talks to nothing? What? Sorry, what? I can't hear you. But does it talk to anything? Okay, what do you use it for? How do you know it starts up and runs if it doesn't talk to anything? Okay. No, no, this is really good. Occasionally people surprise me with things I don't expect; I've got to find out about that sandbox later. But the central reality of the world, for most purposes, is that workloads that don't talk to anything are profoundly uninteresting. So you start with workloads, and then you step back to Kubernetes clusters. This is probably the thing you're most familiar with.
And this is where we meet the first project in our collection: CNI. CNI is the SPI that plugins must implement in order to provide networking to a Kubernetes cluster. Now, it turns out there are some interesting things about CNI that are not true of its close brethren, like CSI, the Container Storage Interface, or CRI, the Container Runtime Interface. I can have multiple CSI providers in the same cluster. I can have multiple CRI providers in the same cluster, and attach these things based upon which workloads I'm attaching to which storage. But I can only have one CNI plugin in my cluster. It tends to have exclusive ownership. That's interesting and odd, and we'll see a bit of why that is as we go along.

So, quick question — actually, a first question before that: how many of you even know or care what CNI plugin you're running? Okay, interesting. What CNI plugins are you running, then, and why did you choose them? Host port? Host port is a feature of Kubernetes networking that a CNI plugin provides, but it is not itself a CNI plugin — I don't think so; maybe I'm mistaken. Someone else? Okay, cool. Awesome. And so why? Not that that's a bad choice — it's a good choice — I'm just curious about the reasons. Yeah, yeah: to set that up in a way that doesn't involve privileged containers running in your pods. Other folks? Yes? Calico? Calico, and why? There's a lot to be said for playing nice with your network. Awesome.

So that's where we start. And when you look at this, what you come to realize is that most of what you're getting from CNI is intra-cluster, and it comes in layers. This is going to be familiar to most of you, so I'll go relatively quickly, but thinking about networking this way can be very powerful. The first thing you get from CNI in your intra-cluster networking is that every pod can reach every other pod via L3, by IP, without NAT. And if any of you have ever suffered through trying to do this through NAT, you understand what a profoundly wonderful thing this is. Or if any of you have had the misfortune of having to muck around with subnets or all the other things that involve L2 structure in cloud environments — it usually ends up being a lot of work for not a lot of payoff for most purposes. That's the base layer of what you get in intra-cluster networking.

The next layer up is isolation. Isolation in Kubernetes is typically done with network policies. What they allow you to do is specify, via selection rather than enumeration — this is a common theme — certain pods, selected by labels, that are going to be isolated. No one is allowed to talk to them unless they're selected by the allow list of the particular network policy you're dealing with. Now, there are some other things about ingress and egress IPs I won't go into here because they're a bit more involved, but that's the next basic layer. Isolation is effectively a security feature within the networking you're dealing with.

And then the final one that's actually part of Kubernetes proper is services. This deals with service discovery and routing in the most basic sense: your service has a selector that allows it to select endpoints that can provide that service.
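To make those isolation and service layers concrete, here's a minimal sketch of both. The specifics — the prod namespace, the app=db and app=api labels, port 5432 — are hypothetical, chosen just for illustration; the resource shapes are standard Kubernetes.

```yaml
# Isolation by selection: pods labeled app=db become isolated, and only
# pods labeled app=api may reach them, on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 5432
---
# Service discovery by selection: a stable name and virtual IP in front of
# whatever pods currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: prod
spec:
  selector:
    app: db
  ports:
    - port: 5432
      targetPort: 5432
```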
And so when you ask for that thing by name, you actually get routed to a virtual IP — usually by NAT, but it doesn't have to be that way — so that you can address things as services. Which brings us to the next project we'll mention, an unsung hero of Kubernetes. For many folks, particularly when they're in their own cluster, CoreDNS is what actually provides the DNS inside that cluster, so that your pods have DNS names you can refer to, and your services have DNS names you can refer to. Quick show of hands: how many folks actually enjoy addressing things by IP address directly? Yeah, almost no one does; it's miserable.

And that brings us to the next piece. We've been talking about what happens inside the cluster. There are a few things in Kubernetes that involve what happens at the edge of the cluster. At the edge of the cluster, you typically have load balancers and ingress controllers, and a few of the projects I had on that slide in the CNCF actually provide these. k8gb and BFE provide load balancers you might select. These deal with traffic coming in that's treated primarily at layer four, the transport layer: they take incoming TCP or UDP streams and load balance them to some service inside your cluster. And if you lift this up to layer seven, to being able to route things on HTTP, you're typically dealing with ingress controllers. A couple of examples would be Contour or Emissary-ingress.

Now, what you'll note is that this entire Kubernetes-centric model is highly focused on a single cluster. And that will come up again later, because while it's wonderful to do things in a cluster — how many folks live in a single cluster? Everything they do lives in a single cluster? Yeah, I thought so. How many folks live in a single cloud provider with their clusters? Okay, a few. How many people live entirely on prem with their clusters? And you may have more than one cluster entirely on prem — are they in the same data center? No. We intrinsically live in a world that's increasingly hybrid and multi-cloud, because quite frankly, you've got many of the same problems across data centers that you would have in a multi-cloud environment in terms of setting up your network. If you've got good networking people, those problems are lessened, but even so, there are challenges.

So here we're going to move beyond Kubernetes and how Kubernetes conceptualizes networking, and move up into the world of service meshes. Now, I've pictured service mesh here as an additional layer. Service mesh has all kinds of interesting characteristics. Mostly it reasons at layer seven: HTTP and HTTPS. It will allow you to do more sophisticated things with service discovery and routing. You can have selectors that are more than just labels, that are more involved in terms of what's happening. You can segregate traffic further down your URLs — it's not just living at L4 with TCP/IP ports. It gives you interesting features like circuit breakers, where you decide this thing is overwhelmed and we aren't going to talk to it for a while. It will often allow you to offload things that you don't necessarily want to make the problem of your application authors, like mutual TLS and identity, or, for example, injecting tracing spans. There's a whole host of features that people bring up, but mostly at layer seven for service mesh.
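As a sketch of that L7 edge layer: an Ingress resource routes on host and path, and whichever controller you've installed (Contour, Emissary-ingress, and so on) does the actual proxying. The hostname, paths, and backend service names here are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  ingressClassName: contour        # whichever ingress controller is installed
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /cart            # L7 routing: the URL path, not the IP, picks the backend
            pathType: Prefix
            backend:
              service:
                name: cart
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront
                port:
                  number: 80
```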
And service mesh also structurally looks quite different, because — getting back to the earlier question about sidecars — service mesh involves injecting an L7 sidecar, otherwise known as a proxy, into each of the pods that are participating. They are then controlled from a central controller that simply pushes down rules to tell them how to route the traffic. So if I'm an application in a pod that's using a service mesh, I don't actually ever really talk to the outside world. When I attempt to talk to the outside world, that traffic gets hijacked and fed through the proxy running in the sidecar. And no one from the outside world ever really talks to me: if they try to reach me, that gets terminated on the proxy, and the proxy then talks to me, having done whatever munging of the HTTP messages it's going to do, and having made whatever decisions it's going to make about whether I should even receive the message or where it should go. And so this is quite different, and really interesting, because it's moving a whole set of things all the way out to the edge of the pod.

What do you get, say, in tcpdump? It's going to depend very much on where you're looking, and a little on how the capturing of traffic was set up. The classic way of capturing the traffic is with iptables. So if I try to speak out, iptables will hijack the traffic and attach it to a port on the proxy — which, depending on where tcpdump is attaching in the chain, you might not necessarily see. And likewise, when traffic comes in, the pod has its one true pod IP, so you will see it terminate on the external pod IP, but it will have been glommed up by iptables onto a port on the proxy.

It's very different. Think of it as taking legacy networking and lifting it all the way up to L7. A very good way to think about it is that you're now routing and switching on layer seven, because, for example, if I DNS-resolve the domain name I'm talking to, there's no reason that the IP address I get back intrinsically has any relationship to where that HTTP message is going — it's being routed on the HTTP information, not on the IP information. No, but effectively, because you're terminating on your local proxy, it may then go to whatever IP address it's going to.

You're a great straight man — you're getting to some of the stuff we're going to talk about later, some of the interesting things that do get done to provide layer three. Part of why this is very simple with service meshes is that in a single cluster you already have a guarantee, which Kubernetes gives you, that you're riding on a single flat layer three. That makes it much more tractable as a problem. Which brings me to what happens at the edge of your cluster: typically, in a service mesh, you will then have an L7 gateway. And those L7 gateways may peer with other L7 gateways in other places — think of them as routers on HTTP headers. But no matter what you're doing with those gateway pairings, you eventually run into the problem of how to handle the underlying L3 connectivity. Because I don't care how smart your proxies are: if you can't reach things at the IP layer, possibly through multiple hops through proxies, you're not getting there. HTTP does not live in a vacuum.

So, really quick — you've been great about questions. Do other folks have questions so far? Because I love questions. They're fun. Yes?
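To illustrate "routed on the HTTP information, not on the IP information": here's a minimal sketch using Istio's VirtualService, one concrete mesh's API for this (the reviews service names are hypothetical, in the style of Istio's sample applications). Clients resolve and call a single name; the sidecar proxies route each request on its URL.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                # the name callers address; the IP it resolves to is irrelevant
  http:
    - match:
        - uri:
            prefix: /v2      # requests under /v2 go to one backend...
      route:
        - destination:
            host: reviews-v2
    - route:                 # ...and everything else goes to another
        - destination:
            host: reviews-v1
```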
But Calico is great for advertising routes to your local network; if your local network is a private IP space and you're trying to reach something outside that private IP space, then it only gets you as far as Calico's advertisements into your local network reach. So you can absolutely do great things with making the flat IP space bigger than your cluster — maybe you have three clusters living in the same flat IP space — but eventually, if you want to reach something beyond it, you have to have some story about how you're going to get there. Makes sense? Yes, question back there. You're a great straight man; you're about two slides ahead of me. No, we'll talk about Envoy here in a second. Awesome. Perfect.

So this brings us to a whole bunch of other things I showed you on that big eyesore slide at the beginning, because it turns out there are a lot of service meshes. In fact, there are quite a few more than this looks like, because there are a very large number of service meshes built on top of Istio. You start with the granddaddy of them all, Linkerd, the original service mesh, and they're still going strong today. Their claim to fame is that they keep everything extremely light and extremely simple. Then you've got Istio — how many of you have heard of Istio? It's definitely the best marketed of these service meshes, so it's usually the one that folks think about and hear about. You have others like Kuma, which comes out of the Kong folks; they have a lot of experience with API gateways and how to string things together between different zones. There's Open Service Mesh, which is another approach to service mesh, and Aeraki Mesh as well. And there are even more than this — these are just the ones most closely aligned with the CNCF. It's a very busy space. So if you're hunting for a service mesh, you have a lot of options available to you — and, as I said, even more than you see here, because there are somewhere between half a dozen and a dozen different implementations of Istio that I can think of off the top of my head.

But then we've got things that are service-mesh-adjacent in the ecosystem. You've got things like SMI, the Service Mesh Interface, which was an attempt to come up with a generic interface you could use across multiple service meshes, so that you could address a common API. I've seen lots of places where people come to projects that are built on top of a service mesh and would like to use a different service mesh under the covers. If the project is consuming a service mesh via SMI, that's a conceivable thing; if it's consuming a bespoke API, that's a big lift.

Then you get things like Meshery. It turns out that when you have service meshes, you've got the data plane, which is the sidecars, the L7 proxies, and you've got the control plane, which is the thing that takes your intent and translates it into what gets pushed down to those data planes — you can think of it as pushing the routes down, if you will. But the world gets even more complicated on top of this, and so you will often wind up with what people call a management plane on top of service meshes. Meshery is a multi-mesh management plane: it will give you a management plane across many different service meshes, so you can get a consistent management-plane experience. And it has an entire library of service mesh patterns that you may wish to deploy, and templating that eases the deployment of them.
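As a sketch of what that generic interface looks like in practice, here's SMI's TrafficSplit resource — the same YAML is meant to work against any mesh that implements the spec. The checkout service names and the 90/10 split are hypothetical.

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout          # the root service clients address
  backends:                  # weighted split across concrete backends
    - service: checkout-v1
      weight: 90
    - service: checkout-v2
      weight: 10
```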
So you can think of it as up-leveling some of the things you might want to do with service meshes. Related to that, you've also got things like SMP, the Service Mesh Performance specification. Meshery, as part of the work it does, has a whole battery of performance tests it can run that are compliant with SMP. So if you're wondering, for your particular use pattern, whether service mesh A or service mesh B is going to give you the performance you need, perform better, or consume fewer resources, Meshery can help you understand that problem. And then there's Nighthawk, a distributed service mesh testing tool. You can think of it as a distributed packet blaster of sorts, if you will, or as a much more sophisticated approach to iperf.

Do folks have questions on these so far? Because we're about to leave the — yes, yes, yes. The interesting parts of the networking happen inside the proxies. When the proxy goes to actually open a TCP connection to the proxy on the other side, that still falls down to layer three, to IP, and has to get there over IP, right? There's no magic where HTTP avoids being on top of IP. But that's a very good question; you're following perfectly.

So, you had brought up Envoy. Here's where we get to Envoy and friends. Envoy is a tricky one to place in this landscape because it is really ubiquitous. Envoy is the all-singing, all-dancing HTTP proxy of doom. It just kicks ass. In fact, it kicks so much ass that it is the proxy used in the sidecar for Istio, Kuma, Aeraki Mesh, and Open Service Mesh, and on the ingress side it's used for L7 ingress by Emissary-ingress and Contour. It also gets used in a whole bunch of places you would use L7 proxies that have nothing to do with cloud native networking. It does all kinds of cool things. It has a very nice plugin model. It has the ability to be dynamically configured, so you can push configuration down to it dynamically instead of having to restart it and have it read files. You can even run WASM programs inside of it, so you can write small WASM programs to do cool things to your HTTP. It's just a hell of a lot of fun. But because it's used in so many places, you kind of have to introduce the places first before it makes sense in the landscape.

Do folks have questions about this? About Envoy? Did I answer the question you were going to ask about Envoy? How is Envoy connected to CNI? Well, there are two ways of looking at this. You have some service meshes that will have a CNI plugin that injects an Envoy sidecar, but in that sense it's just setting the Envoy up. Envoy doesn't really understand that it's running in Kubernetes per se. It understands that somebody connected to it who's authorized to do so and told it that it should listen for incoming HTTP messages on a couple of ports, and that when they come in, it should treat them in particular ways. And that's more or less all it knows about the world — which is actually just good modularity. If Envoy had to know deep things about CNI plugins, it would not be nearly as handy.

Cool. So then we get to some of the questions about cluster-to-cluster and L3. We had some folks mention that, if you control a larger blob of networking, certain CNI plugins like Calico will let you advertise into it. But for many people — quick show of hands, how many of you are using public IP addresses in your data centers for every single pod?
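To give a flavor of what configuring Envoy looks like, here's a minimal static bootstrap — one listener routing everything to one cluster. In real mesh deployments, this same shape is pushed down dynamically over Envoy's xDS APIs rather than written by hand; the backend name and addresses are hypothetical.

```yaml
static_resources:
  listeners:
    - address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress
                route_config:
                  virtual_hosts:
                    - name: all
                      domains: ["*"]          # match any Host header
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: backend }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: backend
      type: STRICT_DNS                        # resolve the backend by DNS
      load_assignment:
        cluster_name: backend
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: backend.default.svc.cluster.local, port_value: 80 }
```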
One day I'm going to meet that person, but I continue to look; I have yet to meet them. IP addresses in the v4 world are really scarce. People have gotten into lots of habits around using private IP spaces — so much so that you sometimes even see them in the IPv6 world. And so it's relatively unusual to see public IP addresses on pods, for any number of reasons. So even if I am advertising my pods to my local data center network, they probably don't go past that. If I want to get layer three communication going between clusters that are running in different places, I need some way to do that.

One approach is a project called Submariner — or Sub-Mariner, I always get that confused, and with new Marvel movies coming up I keep wanting to say Sub-Mariner. It does what you might call cluster-to-cluster networking. It basically strings a tunnel from a router in each cluster to each other cluster. If it has to, it will provide NAT for you. And it gives you the ability to export services from a cluster to the group, and to import them into a cluster from the group. So I could have a service in cluster one and consume it in cluster two. It's trying to find a way to federate, in some sense, between clusters — but because there are scalability issues with federation in the generic, it's trying to take a more intelligent approach. At the end of the day, though, it's reasoning at the level of clusters and services, not at the level of giving you end-to-end connectivity, and not at the level of individual workloads.

That's exactly how I think of it as a legacy networking guy? No, you're thinking about it exactly the right way. One of the fun things about these conversations, by the way, is that you have cloud native people who often don't even want to know that IP addresses exist, and then you have people who are deeper on the networking side trying to reconcile it with what they know and understand. And networking people talk about east-west traffic all the time — so think of it as east-west traffic between clusters.

And that gives us the last one on our list today: Network Service Mesh. Network Service Mesh thinks about this quite a bit differently. So Network Service Mesh basically looks at — yes, you had a question? Right, effectively imagine having an egress router that's running a tunnel at L3. Right, but the egress point just gets them into the conglomeration of clusters, and all the clusters are in a single conglomeration. Yep, good question. Which brings us to Network Service Mesh. Yes, you had a question? It does handle NATing. I am not as familiar with the ins and outs of it, but my suspicion — and this is, again, based upon how I know networks work and the available space of possible solutions — is that it's primarily in the business of exporting and importing services. For a service that has been exported from one cluster to another, if I were doing it, I would export a VIP into the second cluster, and then it doesn't really matter that I'm NATing. But if I wanted to directly connect from a pod in one cluster to a pod in another, I don't know what answer or solution they have for direct pod communication that doesn't go through services.

Any other questions before we move on? Yes — Cilium. So Cilium keeps moving across categories. Primarily, Cilium is a CNI plugin. The Isovalent folks have recently also started playing in service mesh.
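A sketch of that export/import model: Submariner's service discovery implements the Kubernetes Multi-Cluster Services API, so exporting looks roughly like this (the nginx service name is hypothetical).

```yaml
# In cluster one: mark an existing Service for export to the cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: default
```

From cluster two, a pod would then reach it by a name like nginx.default.svc.clusterset.local, with Submariner's inter-cluster tunnels carrying the traffic underneath.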
I don't know if they're branding their service mesh efforts under the Cilium moniker as well, but Cilium tends to be a CNI plugin. And they have done a huge number of interesting and experimental things with eBPF over time. Structurally, though, it's in a similar vein — it's just a question of what the underlying tool is. Calico, for example, does some very cool L3 things, but it's fundamentally just using veth interface pairs on the box; Cilium would shove different kinds of things into eBPF programs. So it's more of a data plane difference than a structural difference.

Yeah, I have not looked as closely at that, but it doesn't mesh with my understanding of the limitations of what you can manage in eBPF. The few explanations I've heard involve various programs manipulating things in user space. And once you've got a program in user space that's manipulating things, you have a sidecar — it's just that parts of your sidecar are being shoved off to the kernel for processing. So, I mean, I know the marketing; I haven't been able to square it with my understanding of reality. That doesn't mean it's not true, it just means I haven't fully processed that piece yet. Makes sense? Yeah — once you have a user space program, it's a sidecar to my eye, but they may have good reasons they don't want to call it that. Yes — yeah, that's unsurprising. I mean, it's an attempt to spread in all directions. I have a personal taste for modularity, but that doesn't make me right or wrong. So, anything else?

All right, this brings us to the last one on our list: Network Service Mesh. Network Service Mesh thinks quite a bit differently about all this, because you'll notice that almost everything we've talked about so far is welded pretty tightly to your cluster. Remember how I said at the beginning that in your cluster you get exactly one CNI plugin? That means if I want the BGP goodness of Calico, I can choose Calico, but I then can't also choose some other CNI that gives me some other piece that I want. Effectively, within your cluster, if you're a pod, for everything we've talked about so far you get exactly one flavor of networking: the one that comes with your cluster. You can twist some knobs on it for isolation, like network policies and services, and maybe you can run something like a service mesh on top, but everybody's getting the same thing. Going to the Henry Ford joke: you can have whatever color car you want, as long as it's black.

Network Service Mesh looks at this and says: look, we were promised loose coupling when we did cloud native. How many of you have heard loose coupling mentioned as a fundamental principle of cloud native? So why are we tightly coupling our networking to our runtime? That seems awfully weird. So what Network Service Mesh basically does is look at the situation and say: leave CNI alone. Whoever decided which CNI to run had a reason; we don't need to know what that reason is, and you don't want to break how clusters work intra-cluster. But allow individual pods, individual workloads, to ask for, by name, zero or more additional network services they want. And think of a network service as some combination of connectivity, security, and observability that you want, starting from layer three and moving up. So, for example, here we have an example where we've got a network service at the top that we call a database replication domain.
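As a sketch of what "asking for a network service by name" looks like: in Network Service Mesh, a workload can carry an annotation naming the service it wants, and NSM wires an extra interface into that service while the CNI-provided cluster networking stays untouched. The annotation below follows NSM's documented kernel:// form; the db-replication-domain name and the image are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-replica
  annotations:
    # Request one additional interface (nsm-1) connected to the named
    # network service; eth0 from the cluster's CNI plugin is unaffected.
    networkservicemesh.io: kernel://db-replication-domain/nsm-1
spec:
  containers:
    - name: db-replica
      image: example/db-replica:latest   # hypothetical image
```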
Picture a case where I have a giant Oracle database on prem and I'd like to keep read replicas in my clusters. They're not doing anything fancy over HTTP — you're probably using one of Oracle's protocols, like TNS. It needs basic L3 connectivity, maybe a little bit of DNS. I may want to stick some security in there that's bespoke to what I'm doing, but I want a very specific network service for those participants. Network Service Mesh will let individual pods in different clusters, running in different cloud providers and on prem, connect to this logical network service, as well as allowing bare metal servers and VMs to do so. And so you get fine-grained segmentation of the kind of treatment you need for your workload. You get the networking you need for the thing you're doing, not the networking you inherit from your cluster infrastructure's choices.

And as it shows here below, you see one that talks about an Istio network service. Picture that as another flat private VPN — we call them vL3s — with a single Istio instance running on top of it. So you no longer have a crazy quilt of L7 gateways, with all the debuggability problems, latency, et cetera, that it causes. From the point of view of the Istio control plane, it's a flat IP space. It just so happens that rather than being a flat IP space in a single cluster, it's a flat IP space that can span clusters wherever they may be — and also the VMs and bare metal servers that are participating in it.

This can even do more exotic things. For example, is there anyone in the room who does anything telco-shaped, like network function virtualization? Okay. Well, apologies to the rest of you — I'm going to get geeky in a direction that may not be familiar. So, how many of you know what an SR-IOV VF, a virtual function, is? Okay, that's actually pretty good. So I maintain that literally no one — and I mean literally no one, I'll prove it to you — wants an SR-IOV VF. How many of you don't believe me? Excellent, I love doubters. Here's why. I maintain that what you want is a network service from your physical network. If I could give you a technology I call Dunkeon that gave you twice as much throughput, you would drop SR-IOV VFs in a heartbeat. Is that true? You wouldn't — why not? Sure, so if the applications could support Dunkeon: interface compatibility, just faster. Yeah, absolutely — nobody cares. What you want is a fast path to something your physical network does for you.

Network Service Mesh thinks of your physical network as a provider of a network service, and it will allow you to dynamically get an SR-IOV VF that's plugged into not whatever generic thing was statically configured when the node was brought up, but the particular network service you want from your network. It also handles the scheduling problems around landing you in a place where you can get it, on a NIC that has the capabilities you want — 100 gig, 10 gig, et cetera. And it handles it dynamically: you don't have to statically provision these ahead of time, you can get them dynamically as needs change across the system.

Apologies to the non-networking geeks in the room. I think we've probably concluded that portion of this, unless there are questions. Do we have questions? Is what? Most of you? Almost certainly not. But the telco people in the room — do you need that?
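For contrast, here's roughly what the static approach looks like today with the SR-IOV network device plugin and Multus: the pod requests a VF from a pre-configured resource pool and attaches a pre-defined network. The names below (sriov-net, intel.com/intel_sriov_netdevice) are typical examples from that ecosystem, not anything NSM-specific — the point being made is that this wiring is fixed when the node comes up, rather than requested dynamically by the workload.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: packet-processor
  annotations:
    # Multus attaches the pre-defined SR-IOV network as a secondary interface.
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
    - name: nf
      image: example/nf:latest                   # hypothetical image
      resources:
        requests:
          intel.com/intel_sriov_netdevice: "1"   # one VF from a statically configured pool
        limits:
          intel.com/intel_sriov_netdevice: "1"
```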
And there are a certain set of low-latency cases that I could see in certain enterprise-ish spaces which would also need it, because when you're sweating microseconds, every bit counts. So most people, no, you don't need it. But you also don't get it in Network Service Mesh unless it's the thing you ask for. I just wandered off a little bit there because there were some networking folks I suspected were in the room, and I wanted to make sure they understood what was possible.

Other questions? I believe not at all, except potentially with some of the things like the egress stuff. And part of it is that network policy — if you put aside the IP-based ingress and egress IPs — is primarily about telling you which pods are isolated from each other. And then you get services that often go past that. And if I've got a sidecar I can reach, the security gets lifted up to L7 for service mesh, so I would use L7 to determine who could or could not do what, security-wise, I believe. Yes — very well put, thank you.

Other questions? Yes. Oh, no, you're absolutely right. So let me back up a few pieces. One of the things I didn't go into, because of time limitations, is that there are things like SPIFFE that will give cryptographically verifiable identities to your workloads. Now, Network Service Mesh primarily operates at layer three, so it's actually arranging a virtual wire for you, not an mTLS tunnel. It just so happens that if you have sidecars as part of your network service — and we have some examples that do that — then you would have an Envoy. And we're still using SPIFFE IDs end to end through the entire system, for both the L3 and the L7. So you actually get visibility for the same ID all the way through the system. Does that make sense? And I apologize — there's just so much going on in this space that bringing identity in would have gotten crowded.

So, I'm not as familiar with the games people may be playing with CNIs and DPUs. From Network Service Mesh's point of view, your DPU is just providing a network service, and so your workload doesn't have to be any wiser — it gets handled in the plumbing and away you go. And it turns out, if you look at a lot of DPUs — do folks know what a DPU is? A data processing unit. Many of them, when you drill into them, are little tiny servers in a NIC, and many of them will run something that looks a lot like — I mean, there are some that are — okay, I'm being told to stop. I'm perfectly delighted to talk with folks after this. You've been a wonderful audience. Thank you for all the questions and interaction. Thank you.