Welcome to this edition of Agile India. So happy to have you here. Before we get into Neeraj's session, here's just a little bit about him. He is the head of engineering at Solo.io, a long-term contributor and maintainer of the Istio project, and previously co-founder and chief architect at Aspen Mesh. That's Neeraj for you. Without further ado, over to you, Neeraj.

Thanks, Usha. Let me just start with sharing my screen. So hello and welcome, everyone. Thanks for joining this session on security and observability for your apps, and how you can combine the user and kernel powers to get all the functionality you want. And thanks, Usha, for introducing me and hosting this session. Just continuing on the intros: I'm head of engineering at Solo. Like Usha said, I've been a long-time maintainer of and contributor to the Istio project and several other open source communities. I'm currently on the Istio Technical Oversight Committee and a former Steering Committee member, and in those roles I have been helping shape the direction of the project. I've also written a book on securing application deployments with Istio, so if you're interested in Istio, please check it out. And if you have any questions or want to reach out, you can always follow me on Twitter; my handle is nrjpoddar.

So with that, let's get started. The agenda today is to talk about how these different technologies, whether it's eBPF, Istio, service mesh, or Cilium, are related, and how we can harness their power together. I'm going to set the stage with an overview of where cloud native security and observability are today: the business challenges people face with microservices and how service mesh is solving them, with a particular focus on Istio. Then I'll move on to the functionality that eBPF provides, and we'll see that there's a lot of overlap between what eBPF and Cilium can do and what a service mesh like Istio can do. Now, the aim here is to understand the benefits and limitations of each technology. Nothing is perfect, but at the same time we want to make sure we understand when to choose what, and how to combine those powers; that's what I'll focus on next. I'll give you some examples of where these two technologies can be layered together, and then I'll have one or two slides on what it looks like across the entire landscape if you look ahead: where the industry is heading and what we are doing and thinking in the community. So if that sounds good to everyone, I will kick it off.

All right. So in the last five or six years, or maybe a decade, we have been seeing a shift in almost all large industries, starting with the innovators, where people were breaking the monolith into microservices. This was largely driven by business needs: they wanted more scalability, they wanted to meet their user demands, they wanted to be more reliable and more elastic, and they wanted to reduce cost by scaling only the components that need to scale, which is what microservices gives you. And then they wanted better developer efficiency. This being an Agile conference: the whole aim of microservices, in a way, is to deliver more value to customers quickly, because you have smaller teams with well-scoped boundaries on what they're working on, and they know how to interface with the other application teams. So all of this sounds really good, right?
That's why you see so many technologies in the cloud native space that enable businesses and organizations to move from a monolith to a microservices architecture. While all of that is great, what we see is that when you have microservices at scale, and I'm not just talking about five or ten, we deal with companies who have 10,000 or 20,000, but even at smaller numbers, around a hundred microservices, a lot of challenges pop up, especially around operationalizing that scale.

Microservices by nature are written in different languages. You want each developer team to be autonomous, and they will make their own choices of languages and runtime frameworks, so it's very difficult to get consistent security, consistent observability, and reliability. In the monolithic world it was all one code base, so you could just create libraries, share them, and be done. Now there are different languages, so you can't really share libraries. So how do you make sure you're surfacing the same metrics? How do you make sure you have the same TLS libraries? These are the challenges that developer and operations teams face, because putting all of this logic in your application couples the developers and the operators, and that slows down the eventual goal of delivering more value to your customers quickly. We want developers and operators to work independently: developers write code and release it, and operators enforce organizational policies independent of the developers' lifecycle.

These challenges are why we came up with service mesh. So, just setting the stage for what a service mesh is: a service mesh is a transparent infrastructure layer that can manage communication between microservices, because a lot of the complexity comes from this communication between microservices. How do you secure it? How do you get visibility, so that when something fails, you can debug it? How do you route traffic, say for a canary, without changing your applications? That's what a service mesh gives you. And the aim, like I was saying, is that developers can focus on business logic, and operators can work independently to make a more resilient and secure environment.
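To make that canary point concrete, here is a minimal sketch of what such a routing rule looks like in Istio, the mesh I'll focus on in a moment. The VirtualService resource and its fields are Istio's standard traffic-management API; the service name and subsets are made-up examples, and the subsets would be defined in a companion DestinationRule:

```yaml
# Illustrative canary: shift 10% of traffic for the petstore service
# to version v2, with no change to the calling applications.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: petstore
spec:
  hosts:
  - petstore                 # hypothetical in-mesh service
  http:
  - route:
    - destination:
        host: petstore
        subset: v1           # subsets come from a DestinationRule
      weight: 90
    - destination:
        host: petstore
        subset: v2
      weight: 10
```

Adjusting the weights rolls the canary forward or back; the calling applications never know it happened.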
Istio is one of the most popular service mesh projects. It is open source, and it is very soon going to be part of the CNCF. I have been working on Istio for the past five years; I've helped build the community, lead it, and shape the direction of the project. For me, Istio was innovative in many ways, particularly in the architecture we came up with to solve this coupling of developers and operators. This architecture is what we call the sidecar proxy architecture.

On the right you'll see a diagram of a typical Kubernetes environment, with applications running as pods: two applications, app A and app B. What we do is add a proxy right next to each application; this is the Envoy proxy you see. Envoy, if you don't know it, is a very, very powerful and performant proxy written in C++. It was originally developed by the Lyft engineering team and is now also part of the CNCF. What we do here is inject this proxy inside the pod as an additional container.

The nice thing about Kubernetes is that everything that is part of a pod shares the same networking namespace, so this proxy can see the packets going in and out of the application. We automatically intercept the traffic coming out of the application, send it through the proxy, and then out; similarly, any traffic coming into the pod for the application first goes through the proxy and then to the application. Because the Envoy proxy is in the middle of all application traffic, it can secure the traffic, it can give you visibility and metrics, and it can do traffic shifting, for example routing. This layer is what we call the data plane of the Istio service mesh: the data plane is composed of all the Envoy proxies talking to each other, and it's where the request traffic actually flows.

Now, to configure these Envoy proxies, we have a component called istiod, which is the control plane. Istiod is basically a controller whose job is to watch the Kubernetes API server. It discovers the services in your cluster, it discovers the user-provided configuration, and it converts all of that into configuration for Envoy. So it's like a translation layer, and it provides some abstraction so that people configuring Istio don't have to worry about the low-level configuration of Envoy; they can work with a higher-level, Kubernetes-driven configuration, and they get a service mesh.

Another important thing Istio does is manage identity, so that when these proxies talk to each other, all the traffic is encrypted. The way it works is that istiod automatically creates certificates. These are X.509 certificates with strong identities built into them, and before they expire, istiod automatically rotates them. So a lot of this complex functionality that you need to make your environment resilient, the proxy and the service mesh do for you, and istiod configures it automatically and makes sure certificates are available. Additionally, the proxies expose a lot of metrics as traffic flows through them, and these metrics can be exported automatically to something like Prometheus. So this is a complete system, a platform that gives you security, reliability, and observability. And like I was saying, it is widely used, heavily used in production, with a very large community, and if you haven't played around with Istio, I heavily recommend that you do, and give us feedback on how to improve it.
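To give a feel for that higher-level configuration: here is a minimal sketch of turning on strict mutual TLS for the whole mesh. PeerAuthentication is Istio's standard API for this; placing a policy named "default" in the root namespace (istio-system, by convention) makes it mesh-wide:

```yaml
# Require mTLS everywhere: sidecars will only accept mutually
# authenticated traffic, using the X.509 certificates that istiod
# issues and rotates automatically.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

That one resource is the entire user-facing surface; certificate issuance, rotation, and encryption all happen underneath, with no application changes.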
Moving on, this is what the networking in Istio looks like. I briefly described it, but it's good to understand how the traffic flows and how Istio and the service mesh layer are able to provide this functionality. The proxies handle all the traffic coming in and out of the application container, and the redirection is done transparently. What I mean by that is that we add iptables rules inside the pod's networking namespace. These iptables rules are written so that all traffic going out from the application goes into the proxy, and all traffic coming from outside goes to the proxy first, and the proxy then sends it to the application container if needed. This is what we call iptables redirection. There are two ways these iptables rules can be inserted: one is the init container mode, where we add an init container to the pod that sets up the iptables rules before the application container or the sidecar comes up; and we also support a CNI mode, in case you are not willing to give the init container the privileges needed to insert iptables rules.

So now I'm going to switch to the eBPF side, so that we understand what eBPF and Cilium are; hopefully we now understand what a service mesh is, what the Istio architecture looks like, and how Istio provides these benefits. Coming to another recent technology that is very exciting and evolving in this space: eBPF. eBPF stands for extended Berkeley Packet Filter, and it's an evolution of classic BPF. If you have played around with tcpdump, as a library or as a tool, you know that once it parses the arguments you've given it, what it invokes on the back end are BPF hooks: networking hooks that are called whenever a networking event happens in the kernel. eBPF extends that functionality and provides a way to run custom programs for a lot more than just networking. These are programs written in user space that can be loaded into the kernel, where they run in a sandboxed environment. What that means is that the kernel verifies the program is safe: it won't crash the kernel, it is guaranteed to run to completion, you can't have infinite loops, and it is safe in terms of memory access. This is very different from a loadable kernel module. If you've played around with Linux kernels, you'll know there's a way of extending kernel functionality through LKMs. While those are great, they are also very risky; they can blow up your entire kernel if you don't know what you're doing. With eBPF you can't do that, and that's why this technology is so exciting. And these are event-based programs: they can be attached to hooks, and those hooks can be networking hooks, kprobes, tracepoints, and so on. So there's a lot you can do with eBPF, which I'll cover next. I see some questions being asked; what I'll do is answer them once we're done with the presentation, so that I can get through the entire content. But please keep asking, and I'll definitely answer everything.

So why am I excited about eBPF, especially having worked on service mesh and Istio for so long? Like I was saying, the three kinds of functionality we get from Istio and service mesh we can also get from eBPF. For example, networking and reliability: since eBPF builds on classic BPF, which was designed for efficient packet filtering, it's a no-brainer that you can do a lot of advanced things with it. You can drop packets, you can send traffic somewhere else, and you get programmatic access to packets in the kernel networking stack, which is much more efficient than doing it in user space. Secondly, when it comes to security, because eBPF gives you hooks at so many low-level events, you can codify your policies and monitor sensitive operations beyond just networking. You can watch file access, for example, or some other key event you want to monitor, and you can block it or allow it.
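As an illustration of that last point, here is a minimal sketch of such a monitoring program: an eBPF program attached to a kprobe that counts file-open syscalls per process and exposes the counts to user space through a map. This is an illustrative libbpf-style sketch, not a production tool; the syscall symbol shown is x86-64 specific, and the names are made up:

```c
// Minimal sketch: count openat() syscalls per PID via a kprobe.
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>

// Read by a user-space agent to turn counts into metrics or alerts.
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);   // PID
    __type(value, __u64); // number of openat() calls observed
} open_counts SEC(".maps");

SEC("kprobe/__x64_sys_openat")   // x86-64 syscall entry symbol
int count_openat(struct pt_regs *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 init = 1, *cnt;

    cnt = bpf_map_lookup_elem(&open_counts, &pid);
    if (cnt)
        __sync_fetch_and_add(cnt, 1);   // existing entry: increment
    else
        bpf_map_update_elem(&open_counts, &pid, &init, BPF_ANY);

    return 0;  // kprobes only observe; blocking would use LSM hooks
}

char _license[] SEC("license") = "GPL";
```

A user-space loader would attach this and periodically read the map to surface the data, for example as Prometheus metrics.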
Additionally, since eBPF programs work so well with networking, they're a very good fit for classic network security problems, like firewalling. And remember, all of this runs in the kernel, so in a way a user application cannot avoid it, which is different from doing things in a sidecar proxy architecture: if you don't have the sidecar, you don't get the functionality, whereas with eBPF, because it's in the kernel, every application gets it, or every pod if we're talking about Kubernetes. And for observability, you can attach eBPF programs to almost any function and extract data and metrics. I'll talk next about the eBPF architecture, and how eBPF programs can send these metrics and traces back to user space, from where you can eventually send them to your monitoring tool, like Prometheus. The main thing I want to express here is that eBPF is highly, highly performant, and because it's performant and efficient, you can process raw events as they happen in the kernel, which is not usually possible from a user-space library. With eBPF, you can do that.

So, quickly, this is the eBPF architecture. A user creates an eBPF program, which can be in C or some other high-level language they support. Using the Clang/LLVM compiler, you produce bytecode; bytecode is a portable code format. That gets loaded into the kernel; on the slide, the left side is user space and the right side is the kernel. Once it's loaded, the kernel verifies the bytecode: that there are no infinite loops, that it runs to completion, that it is memory safe. Then the just-in-time (JIT) compiler turns it into machine code, and that machine code now runs in the kernel.

Now, the program can expose outputs to user space via what we call eBPF maps. These eBPF maps are a very, very powerful way of exchanging data between user space and kernel space, and they're very efficient. For example, think of a program that only allows certain traffic based on IPs: that runs in the kernel, and through the maps you can dynamically add which IPs are allowed and which are not, so you don't have to recompile anything. And also through these maps, you can get information back about which requests were blocked and which were allowed, and then create alerts and metrics on it. So you can see why this is so powerful and the kinds of things you can do with eBPF.
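Here is a minimal sketch of what that IP-allowlist idea can look like as an XDP program. The helper calls and the map plumbing are the standard libbpf-style mechanics; the program and map names are made up for illustration:

```c
// Minimal sketch: drop IPv4 packets whose source address is not in
// a map that a user-space agent populates at runtime, so the policy
// changes without recompiling the program.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);   // allowed IPv4 source address
    __type(value, __u8);  // presence flag; value unused
} allowed_ips SEC(".maps");

SEC("xdp")
int allow_listed_ips(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                      // too short to parse
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                      // only filter IPv4 here

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    // Drop unless user space has added this source IP to the map;
    // the same map can be read back to build allowed/blocked metrics.
    if (!bpf_map_lookup_elem(&allowed_ips, &ip->saddr))
        return XDP_DROP;
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```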
Now, Cilium is a cloud networking overlay; it started off as a CNI, a Kubernetes Container Network Interface plugin, built on this eBPF technology. Cilium, as an open source project that is part of the CNCF, provides a lot of functionality: load balancing, policy, a very scalable CNI, and it gives you metrics and policy troubleshooting.

So as you can see, between the Istio service mesh and the Cilium and eBPF layer, there's a lot of overlap. Both are doing similar kinds of things, but there are a lot of trade-offs, and that's what I want to discuss next: what the benefits of a service mesh like Istio are and what its limitations are, and then the same for eBPF and Cilium.

For Istio, the key thing to understand is that we are inserting a very, very powerful proxy, even though it's in user space. This proxy is very feature rich; it's now among the second or third most used proxies in the world, after maybe Nginx. It is a layer-7 proxy, with native support for a lot of protocols beyond HTTP, HTTP/2, and even QUIC: things like DynamoDB and MySQL. Because it understands those protocols, it can give you metrics relevant to that layer, relevant to that protocol, and it can also apply security relevant to it. That's what fine-grained security means here: because it can parse layer 7, you can write rules like "this service may access /foo but not another path", or "it is allowed to do a GET, for example, but not a PUT". That's the kind of functionality you can get with Istio. Additionally, like I was saying, Istio natively supports encryption, automatically rotating the identities and certificates. So out of the box, once you add the sidecar, you get layer-7 support if you want it, and you get end-to-end mTLS, very strong encryption. And the telemetry it supports, because it's parsing layer 7, is very, very rich.

In Istio, the way we insert the sidecar proxy is by automatic injection: we add a mutating webhook in Kubernetes that changes the pod and adds the sidecar automatically. The good thing about this is that the sidecar lives and dies with the pod. The pod goes away, the sidecar goes away; if the pod restarts, the sidecar comes up with it. So it's a very easy way of managing it, with low operational complexity for users.
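Here is a sketch of one of those fine-grained, layer-7 rules as an Istio AuthorizationPolicy. The resource and fields are Istio's standard security API; the workload label, namespace, and service account name are made-up examples:

```yaml
# Illustrative: let the "foo" service account GET /foo on this
# workload; with an ALLOW policy in place, anything not matched
# by a rule (e.g. a PUT, or another path) is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-foo-get
  namespace: default
spec:
  selector:
    matchLabels:
      app: petstore           # hypothetical target workload
  action: ALLOW
  rules:
  - from:
    - source:
        # the identity comes from the mTLS certificate istiod issued
        principals: ["cluster.local/ns/default/sa/foo"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/foo"]
```

Note that the principal here is the caller's service-account-based certificate identity, which is exactly the strong identity story I described earlier.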
While there are so many benefits, there are still some limitations that come with a service mesh. The key thing is that all the features rely on sidecars: you have to inject the sidecar to get policy, encryption, and telemetry, and you have to make sure the application cannot bypass the sidecar. All of this is in user space, and in the same pod as the application container, so it's not as strongly enforced as something running in the kernel. Secondly, the iptables redirection I mentioned, which gets traffic from the application container to the sidecar proxy, currently only works for TCP traffic. We could make it work for UDP, but there are some limitations we're still dealing with, which is why it's TCP-only today. Then, the way we inject sidecars through the webhook requires the pod to restart. That is sometimes an operational overhead and a no-go for various companies we talk with: they have applications running that they don't want to restart or change at all, and they still say, "we want service-mesh-like functionality; how can I get security, visibility, and reliability?" And lastly, because sidecars are added to every pod, they can become a resource consumption problem, especially if you haven't tuned the configuration correctly. At the same time, since you're adding proxies on both sides, there is latency that comes with it. So there are impacts on performance, and you have to decide whether that cost is worth the features you're getting out of the proxy. You are offloading things like mTLS, and mTLS adds overhead; but if mTLS is your requirement, your application would otherwise have to do it itself. Overall, we have tuned the proxy and the Istio architecture well, so the overhead is minimal, but it's not zero.

Coming next to the eBPF benefits and limitations: on the eBPF side, with Cilium, we are enforcing security and observability in the kernel. Like I was saying, at the kernel layer you cannot bypass it. It's not like sidecars, where if you don't add the sidecar, you don't get the protection. This is huge, especially for applications that can't add a sidecar, or for huge environments where it will take time to add everything to the mesh. You also get access to very low-level kernel events that even a proxy can't access; a proxy is limited to networking, while with eBPF you can hook almost anything in the kernel, and it's very, very performant.

But at the same time, there are a few limitations. At the kernel level, you only get layer-4 parsing: you can deal with TCP, UDP, IPs, and ports, but you don't get a sense of the higher-level protocols; the kernel stops at layer 4 right now. That means you cannot get request-based metrics for HTTP, only connection-oriented metrics. And because it's not doing layer-7 parsing, you cannot do fine-grained policy: you can say this IP and port can talk to that IP and port, but you cannot say "I want to allow a GET request to /admin but not a PUT." Another big difference between eBPF and Cilium compared to Istio is identity: in Istio, we create strong cryptographic identities via the certificates we issue for each sidecar, and those identities are based on the service account. In Cilium we don't have that provision, so IPs are used as the workload identity. There are limitations that come with using IPs, because IPs get recycled, and there can be a lag between an IP being recycled and the system observing it. So that's one of the limitations you have to live with if you're working in the kernel.

So a question that always comes up, when I'm talking with customers or in the community, is: these technologies provide similar things, so when should I use what, and can I leverage them together? For me, the key to where this entire space is heading is that we should be able to leverage the user and kernel space together to get the advanced functionality we want. As you saw, no one technology is perfect here; they all have benefits and limitations. But if you're able to combine their power, you get the best of both worlds, and that's what I want to focus on next.

Let me see, there is one question, maybe I can answer that. So, Ame, you're asking: if all traffic routes through Istio, how is performance managed, and is it only for Kubernetes or any cloud provider? That's a really good question. Like I was saying, the latency overhead is minimal; we have worked really hard to make sure traffic going through the proxy doesn't add a lot of overhead, but there is some. As for Kubernetes: the control plane for Istio has to run on Kubernetes, but the data plane proxy can run in a Kubernetes pod or even on virtual machines, where you can add the sidecar proxy, so we also support VMs. For cloud providers, it is supported on any cloud provider that supports Kubernetes, because Kubernetes is primarily the layer of abstraction. Does that answer your question? All right.
So let me continue, and please keep asking questions so we can make it more interactive. Moving on to how we can leverage them together: there are three easy use cases I'll show in which we can combine sidecar proxies with eBPF and Cilium.

Here's the first use case. Remember I was saying that to get traffic from the application to the sidecar, we need to use iptables. What happens with iptables is that when the application sends traffic destined for some other application, we intercept it: it goes through the socket layer and down the network stack, and then it has to go back up through the sidecar proxy's network stack again. So just because of this iptables interception, the packet traverses the kernel networking stack twice.

What we can do instead is accelerate this datapath with eBPF. eBPF allows you to attach programs to sockets, and what you can do, basically, is have an eBPF program attached to the socket that sends the packets coming out of the application directly to the proxy, so they don't have to go through the whole TCP/IP stack twice. If you see, this is so powerful: you get the same functionality as with the iptables redirect, but with a lot more performance. The TCP/IP stack is very optimized, but traversing it twice is simply not needed. Additionally, because you're not using iptables here, this eBPF way of intercepting works for both TCP and UDP. So this is just one way we have combined Istio and eBPF to get more and better functionality out of what we want.
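A rough sketch of the kernel side of that socket-level short-circuit is below. This is the general sockmap/sk_msg technique, not Istio's or Cilium's actual source; the population of the map by a companion sockops program is omitted, and the key scheme is made up for illustration:

```c
// Sketch: redirect message payloads directly between local sockets
// (app <-> sidecar) instead of sending them down the TCP/IP stack.
// A companion sockops program (omitted) would insert established
// local sockets into sock_map as connections are created.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 65536);
    __type(key, __u64);   // illustrative key derived from the ports
    __type(value, __u64); // socket reference, managed by the kernel
} sock_map SEC(".maps");

SEC("sk_msg")
int bypass_stack(struct sk_msg_md *msg)
{
    // Illustrative key: look up the peer socket of this connection.
    __u64 key = ((__u64)msg->remote_port << 32) | msg->local_port;

    // Deliver the payload straight into the peer socket's queue,
    // skipping the second traversal of the TCP/IP stack.
    return bpf_msg_redirect_hash(msg, &sock_map, &key, BPF_F_INGRESS);
}

char _license[] SEC("license") = "GPL";
```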
The second use case, and this is my favorite, is what I always call defense in depth. Istio provides layer-7 authorization policies; it provides fine-grained control at layer 7, with strong cryptographic identities based on service accounts, which is great. But if you don't have a sidecar, you don't get that functionality. Or suppose you're under attack and the attacker has reached the application container and bypassed the sidecar: it doesn't matter anymore, none of the policies you have will be enforced. eBPF and Cilium, as the CNI layer, provide layer-4 policies on IPs and ports: you can say this IP and port can talk to that IP and port, or that it cannot. The good thing is that even if the attacker has escaped into your application pod, as long as they haven't compromised the kernel, they cannot escape this; the policies will still be enforced on all traffic going through.

So what we suggest, if you want defense in depth, is to combine the two layers. You can use Kubernetes network policies, which give you a higher-level policy object where you can say, for example, "I want my pod called petstore to receive traffic only from pods that have the service account named foo." If you're using Cilium, Cilium picks up this network policy and automatically enforces it via eBPF: when the CNI creates a new virtual interface for a pod, eBPF programs can be attached to that interface, and Cilium uses them to filter packets based on what's allowed at that layer. So basically, with a policy like this and Cilium in the eBPF layer, service account foo is allowed to talk to petstore, but bar won't be allowed, and this is managed at the IP and port level: the Cilium control plane, the Cilium agent, monitors pods as they come up, tracks their IP addresses, and blocks traffic based on the policy.

But you can still continue to use your Istio layer-7 policies on top. Even when this traffic is allowed at layer 4, you might want to say that foo can only make a GET call to /admin on petstore, not a PUT or a POST on /admin. That you cannot do with Cilium, but you can do with Istio. So if you combine those two layers, then even if the sidecar is bypassed or doesn't exist, you have baseline layer-4 policies, and on top of that you do advanced micro-segmentation using layer-7 policies. This is a very powerful concept, and for folks doing security who want defense in depth, I always recommend trying to layer these together to get more functionality, or better security.
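Sketching that layer-4 side in configuration: Cilium's CiliumNetworkPolicy can select peers by service account through a special label it derives from the pod's service account. The resource kind and the label are Cilium's documented API; the names and port are made-up examples. Combined with an Istio AuthorizationPolicy like the one shown earlier, you get both layers of the defense:

```yaml
# Illustrative: only pods running as service account "foo" may reach
# the petstore pods, enforced in-kernel by eBPF at layer 3/4.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: petstore-allow-foo
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: petstore                 # hypothetical target pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        # Cilium exposes the pod's service account as a label
        io.cilium.k8s.policy.serviceaccount: foo
    toPorts:
    - ports:
      - port: "8080"                # hypothetical application port
        protocol: TCP
```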
And the last thing I want to talk about is observability. You get some observability from Cilium and eBPF, and you get some observability from Istio and the service mesh, but if you combine them, you actually get the best of both worlds. So here's an example. I have two nodes, and in one pod, petstore, I have the sidecar proxy injected, but the applications bar and foo don't have sidecars: either I'm still migrating them, or they're legacy applications I don't want to touch, which is a very normal scenario for everyone. When you have a sidecar, the sidecar surfaces lots of telemetry, especially layer-7 telemetry: how many requests there are, what the status codes are, which Kubernetes pods and services are involved, who is talking to whom, the request sizes, the durations. So you get a lot of high-level, service-level metrics, and you can have Prometheus scrape them; right out of the box, you get a lot of metrics, which is beautiful. For the applications that don't have a sidecar, though, you're stuck: you don't get the service-level metrics, only application-level metrics. And if you rely on application-level metrics, foo and bar, for example, can be in different languages, so you have to make sure they surface consistent telemetry yourself, to get the same visibility, create dashboards, and understand what's happening.

But here's the best part: if you have eBPF running, you can have eBPF generate metrics on any of these networking events. For example, when a TCP connection is opened from foo to petstore, the eBPF agents can surface metrics saying these two are talking, and this is the kind and size of the data that has gone out. Or if the traffic was blocked, they can surface metrics saying it was blocked. The good thing is you don't need a sidecar for it. So if you combine the Istio and eBPF layers, these eBPF programs get invoked when the relevant hooks fire, they push the metrics out to a user-space program, and Prometheus can scrape them just as it scrapes Istio. So whether you have a sidecar or not, you still get baseline metrics for everything, and where you do have sidecars injected, you get even more metrics and even richer data. This is how you can combine these layers and get a lot of advanced functionality.

Just lastly, I think I have about four minutes left, I want to share what the future looks like for Istio, eBPF, and Cilium, especially as the cloud native landscape and the technologies evolve. For me, both of these are very, very exciting technologies. Like I've been saying throughout the presentation, there are a lot of benefits from each of them, but when you layer them together, you get a lot of advanced and enhanced functionality.

Folks who have been following Istio might know this already, but we recently announced a new architecture in Istio called ambient. In fact, my company Solo and Google partnered together over the last few months to build it, and we open sourced it. Ambient mesh is a new architecture in Istio that allows the proxy to be deployed in a non-sidecar mode. Currently in Istio, you can only inject the proxy as a sidecar, in the same pod as the application container. What we found, like I was saying, is that for some applications it's difficult to add the sidecar, and it adds a bit of overhead in resources and performance that customers don't like. So we came up with a sidecarless architecture that relies on node-level proxies. These node-level proxies run at layer 4; we call them ztunnels. If you just want mutual TLS between services, you can get a lot of performance, low resource utilization, and low operational complexity by using just the layer-4 proxy on the nodes. If you want layer-7 functionality, you add another proxy, called a waypoint proxy, and the traffic goes from your application through the ztunnel to the waypoint proxy and then on to the destination service. It's very difficult to explain the entire architecture in two minutes, but I wanted to say this is happening based on the feedback we've received. And even in this architecture, I still see eBPF playing a very important role: how the traffic gets from the application pod to the per-node proxy can still be accelerated with eBPF, and you still want eBPF involved when you want security in depth. So for me, even with the evolved architecture, these two technologies still play together.

And then lastly, there's a lot of exciting development happening. I'm leading these efforts in Istio, and I'm also leading the efforts at Solo, where we have created higher-level APIs and a platform so that with one API, you can get security and observability in depth, and we will install and manage both Istio and eBPF for you. That's the benefit we can provide; otherwise, you have to deal with multiple different APIs, you have to create a Kubernetes network policy and an Istio authorization policy, which can get messy, and you have to make sure the metrics come out in a similar format. All of that is handled by us.

So, I'm really excited that I got a chance to present today. Please reach out if you have any questions; if you have any feedback, I'm happy to connect here too, and you can reach me on Twitter. Thank you. Are there any questions I can answer?

I think no more questions, Neeraj. Thank you so much, Neeraj, for sharing all your experience and insight.
And I'm sure the attendees got a lot from your session.