All right, let's go ahead and get started. First, I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, "Using Cloud-Native Technologies to Solve Complex Application Security Challenges in Kubernetes Deployments." I'm Shilla Saebi, CNCF ambassador and open source program manager at Comcast, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Shreyans Mehta, the co-founder and CTO of Cequence Security. A few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee, but we do have a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end of the webinar. With that, I'll go ahead and hand it over to Shreyans to kick off today's presentation. Thank you. Hi again, my name is Shreyans Mehta and, as I was introduced, I'm co-founder and CTO at Cequence Security. In today's webinar, I'm going to introduce a new way to do application security, focusing on the runtime aspect, where more and more APIs and applications are being exposed through containers in Kubernetes and microservices environments. Before I kick it off, I just want to briefly talk about our company, Cequence Security. We are a venture-backed startup focused primarily on the application security space. We build an AI-powered security platform, delivered as containers, that protects web, mobile and API-based applications from bot attacks, sometimes called business logic attacks, as well as vulnerability exploits. We are built entirely on top of cloud-native components like Kubernetes, Envoy, Prometheus and the like. And we play well with existing ingress controllers and sidecars like Envoy and NGINX, so you don't necessarily need to replace them to use technologies like ours.
For more information about us, you can visit our site, cequence.ai. So, kicking it off: the world as we see it. There are a lot of publicly facing applications, and you need to expose them publicly for a number of reasons. Primarily for customer interactions; sometimes you might have mobile applications that need to talk to the actual apps on the backend using mobile APIs. You might have supplier APIs, you might have partner APIs. And the moment you expose them, you are exposed to bad or malicious actors as well. These kinds of exposures can lead to what we call business logic abuse attacks. I'm going to go into more detail on the attacks themselves, but they are highly automated in nature. These attacks go after the business logic of the applications themselves, typically abusing the contract of communication rather than the syntax of the communication. So the content appears legitimate, but the attacks are very difficult to detect and block using traditional signature-based technologies. And then there are the traditional attacks, what we call attacks on vulnerabilities. They are highly targeted, and they can be either known attacks or zero-day exploits, depending on how they are executed. So that's the high-level view of the application attack landscape, primarily at runtime. Let's talk a little bit about how these kinds of attacks have been protected against so far. The traditional world is all about monolithic applications. For our discussion, think of a retail application that has, let's say, user management on one side. It has a shopping cart. It has a place for reviews and ratings of the products you're selling. All of that is packaged as a single monolithic application, and you need to protect it. And this is the traditional way.
The traditional way is to protect it using what is called a perimeter defense, where you have this application scaling up and down behind a load balancer, and layers of defenses put in front of it before the application is hit by the end user. The first kind of defense is the defense against vulnerability exploits, and the first thing you put in place there is a web application firewall. This web application firewall is doing a couple of things. It is trying to protect the application from OWASP Top 10-type attacks. It is also needed for compliance in a lot of cases. It is helping you defend against vulnerability scans (think Metasploit, think OpenVAS), which try to expose any vulnerable applications you might be running. You might be running something like a vulnerable PHP application or Node.js application, so the scan is trying to uncover that, and that becomes the first phase of the attack. The second job is that your WAF is also protecting against the actual breaches themselves. You're running your entire user database on the backend, and somebody could exfiltrate that entire user database, PII, transactions, all of it, through attacks like SQL injection, or you might have something like an Elasticsearch server that is exposed. So the job of a web application firewall is to protect against the kinds of attacks that lead to exfiltration or the actual exploits themselves. You need that. The second part we spoke about earlier is the highly automated attacks, also called bot attacks. Think of a case, remembering this is a retail site we're talking about, where bad actors, especially competitors, come in and scrape your entire inventory database.
They might use that pricing information against you and competitively price items on their own site. Sometimes we also see competitors scraping content, which is really your IP; rather than using it maliciously, they might use it to develop their own content based on what you have posted on your site. On a retail site we also see something called inventory lockup. Think about a case where you have, let's say, a thousand items of a certain kind, and malicious actors come in and simply add those thousand items to shopping carts; that inventory is no longer available for you to sell online until it is freed up from the carts. Fake likes and fake reviews is another use case, where thousands and thousands of automated bots can come in and bump up the ratings of a certain product, good or bad, or automatically post fake reviews on the site. We also see fake account creation, where you might have a promotion that gives, let's say, X credits or a few dollars when you create a new account, and the bad actors end up creating hundreds of thousands of such accounts and draining the credits you had made available for legitimate users. One other large use case in the bot, or business logic abuse, space is credential stuffing. There are billions of username-password pairs available on the black market, especially because of the breaches that have happened in the past few years. It's either usernames and passwords or email IDs and passwords; these are stashes of credentials that have been breached through, going back a while, the LinkedIn and Yahoo breaches, and most recently Chipotle and State Farm.
So attackers get these credentials, and obviously the breached companies have gone ahead and reset the passwords to protect those individuals. But other sites continue to have the same username-password pairs, simply because a normal human being tends to reuse the same passwords across multiple sites. Bad actors take advantage of that: they take these stolen credentials and try them on our retail site or any XYZ retail site, and any hit they get is money to them. That's another typical bot attack. And the last protection you need is protection against application-layer DDoS. Your monolithic application needs to scale up and down with the traffic coming in. But when 70, 80, sometimes 90% of your traffic is synthetic traffic generated by these bots, it may scale up at the web server level but not necessarily at the application level or the backend database level. And because of that, it leads to unavailability, or at least slowdown, of the application. So in some ways application DDoS attacks are related to bot attacks, but they have less of a security implication and more of an availability implication. From a perimeter defense perspective for your monolithic applications, you need all these kinds of defenses in place no matter where your application is running: in your data center, in a public cloud, wherever that might be. Now, coming to the new world, these monolithic applications are being split up into microservices for various benefits. We all understand the limitations of monolithic applications; it's very hard to roll out updates to them.
There are so many dependencies between the different components that any time you touch one component, it might break some other component, and you have to rely on a single application stack, maybe Java or JavaScript. It's harder for development teams to develop at the faster pace the new world needs, and that's why people are moving toward microservices. You are all probably experts in this space, but just to give you an idea of how these applications break up: the original monolithic app we spoke about can now be split into, say, a user management microservice, which in turn can talk to a data access microservice. You might have a separate shopping cart microservice, a customer reviews and ratings microservice, an inventory management microservice, and so on. All of this is orchestrated through platforms like Kubernetes or Istio. Now you can scale these applications individually. Say you have an uptick in new customers because of a promotion; you might want to scale just the user management service and not necessarily everything else. You might have a Thanksgiving sale going on and want to scale up the shopping cart and customer review services. So you can take a scalpel-like approach, scaling these services up and down individually, rather than scaling the entire application stack as in the monolithic world. Those are the benefits of the microservices-based approach over a monolithic application. Now, all these services were talking to each other back in the monolithic environment too; the components probably still existed, but they were talking to each other using function calls or RPCs.
In the microservices world, by contrast, these applications are loosely coupled. They are exposed through APIs, and these APIs sometimes have a contract: only these services can talk to these other services, I'm going to talk over TLS, and so on. But they're still prone to the kinds of attacks we spoke about earlier. Let's drill down a little more. When we move from the monolith world to the microservices world, these are some of the additional challenges you have to worry about. Number one: because of the newly exposed APIs for these microservices, you have many more entry points into your environment. They might not be exposed directly; the data access service, for instance, might not be exposed directly to the outside world, but every time you invoke the user management or inventory management microservice, it might indirectly invoke the data access service. Or you could compromise one of the microservices running internally and then abuse the other microservices just because they are exposed internally. Sometimes these services can even break the contract: rather than doing an account check or authentication, a service might do something else entirely because somebody compromised some other microservice. That's one aspect. The second aspect is scale-out. In the monolith world, you scaled out the entire application stack in one go, and you sized your runtime application security services for the peak. Let's say at Thanksgiving I need five gigabits of traffic, so my runtime application security stack is sized for that.
But that does not account for specific services scaling up and down, because the needs of your application security stack may depend on the individual components and the security each one requires, rather than a single broad brush of "I just need this much capacity." So that's another security challenge: you have individual microservices scaling up and down, and how do you scale your security services along with them, especially when those security services are in some ways disconnected from the microservices themselves? The next big aspect is security keeping up with DevOps. Earlier, there was a single monolithic application, and the security team worked closely with the application team any time an update was rolled out. The whole idea of moving to microservices is that the development and operations teams can move at lightning speed: they can roll out new microservices that coexist with the older ones. Say they have a new version of the shopping cart application that they want to expose to just 10% of customers first and then dial up to the rest of the community. They might want to phase out the reviews and ratings application and roll out a new one, but only region by region. New applications are being rolled out at a much faster pace, so how does application security become aware of them? The discovery aspect and the protection aspect both become challenges. The third challenge is that, as I said earlier, you previously had to secure just one kind of application stack.
What microservices allow you to do now is run a heterogeneous environment. Your user management service could be running on a Java stack, your data access on a PHP stack; you might want to move the shopping cart to a Node.js stack, and so on. So rather than a single stack for the security team to deal with, you now have a heterogeneous environment where newer and newer stacks keep coming up, and the security team needs to worry about these individual services and the kind of protection each one needs, rather than taking a single approach to the whole problem. That's another big challenge security teams face with this heterogeneous environment, as opposed to the historical monolithic application. One last thing microservices give you is the ability to run in multiple clouds. You could be in the data center; you might have started on-prem with something like Pivotal or OpenShift, and you could be moving to the cloud while some of your microservices are still running on-prem and some are running in the cloud. The perimeter-based defense approach doesn't really work there, because there is no real perimeter in that case. You have services spread out across different environments, and applications could be moving from on-prem to the cloud and so on. You have to deal with that situation. So what is the new approach that lets us keep up with the needs of microservices? What we're proposing is a new way to do this: rather than having application security at the perimeter, application security needs to move closer to the applications themselves, to the microservices that are actually running.
What that means is the app sec at runtime needs to be packaged in as containers and run in the same pods as the applications themselves. So when you are scaling up your application, it just scales up and down with that application. It needs to, the way it can work is it can work with your site card. So when you're in the part, one approach to do that is be the site card itself where it is working when the request comes in, it tries to hit the API. It inspects the application, gives it a thumbs up or thumbs down. If it's a thumbs down, it goes ahead and takes the application out, the request out. And if it's a thumbs up, it lets the application through. But you do need to worry about the existing site card that you might have in place. You might be working with a non-voyer proxy or an engine egg proxy that is, that you rely on for your application delivery. And you don't necessarily need to to rip and replace the existing site cards just because you want to put security in. So these security microservices can actually be injected in conjunction with your existing site cards, either as a site card chain, where the primary site card is hit, it might be your own void, it passes on the request to the upset site card and then sort of upstreams it to the actual application microservice that you're interested in. So it can that way coexist with the existing automation that you have in place orchestration that you have in place. And the existing services that you have in place. So this is what the new runtime application protection needs to look like in the new world, rather than having sort of an application security at the perimeter level, which has no understanding of these microservices themselves. It needs to move closer to the application themselves. That way you are targeting the few limitations of the perimeter defense that we spoke about. 
As you scale these microservices up and down individually, your application security scales up and down along with them. As new applications are introduced, they can automatically be protected: your orchestration platform, Istio for example, can automatically inject the security microservice into the same pod, and you are automatically protected. You can take a scalpel-like approach to protection. You might have different platforms in play, like user management running on Java or data services on PHP; your runtime application security stack is focused on protecting just that application, rather than applying every kind of generic protection to every service. So you can focus on protecting individual microservices rather than taking a generic approach at the ingress or perimeter level. And the last limitation we spoke about in the monolithic world: as a microservice transitions from on-prem to the cloud, or from one cloud to another, the security moves with it. You don't need to worry about whether you protected your application in the new environment; protection moves automatically with the setup. But what you also need along with this is centralized application security visibility and control. You don't want to manage these pods individually; you need centralized management that can discover all the applications coming up. You get centralized visibility: how many apps are running, what's being abused, and any configuration you want to enable or disable for these services. That also needs to be delivered as a microservice running in your environment, rather than going out to the cloud to make all those decisions. So that's the framework we are presenting.
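To make the injection model above concrete, here is a hedged sketch, written as a plain Python dict, of what a pod spec might look like once a security sidecar has been injected next to the application container. The image names and the injection label are invented for illustration; this is not a real product's manifest.

```python
# Hypothetical pod spec: the application container and an injected security
# sidecar share the same pod, so they scale (and move) together. All image
# names and labels are made up for this example.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "shopping-cart",
        # An orchestration layer (e.g. a mutating webhook) could key off a
        # label like this to decide whether to inject the sidecar.
        "labels": {"appsec-inject": "enabled"},
    },
    "spec": {
        "containers": [
            {"name": "shopping-cart", "image": "retail/shopping-cart:2.0"},
            {"name": "appsec-sidecar", "image": "example/appsec-sidecar:1.0"},
        ]
    },
}

container_names = [c["name"] for c in pod_spec["spec"]["containers"]]
print(container_names)  # ['shopping-cart', 'appsec-sidecar']
```

The key property is that both containers live in one pod: scaling the deployment, or moving it to another cluster or cloud, carries the security container along automatically.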
Summing up some must-haves for runtime application protection for microservices. Given that you want to empower your developers to roll out new applications at a very fast pace, it needs to be designed to work with existing applications without asking your developers to make any modifications. It needs to be non-invasive: you don't want to force developers to bundle in agents, SDKs or JavaScript; instead, protection needs to come up automatically as the applications come up as microservices. Rather than being bolted on from the outside as a perimeter defense, it needs to be microservices-based protection for other microservices, so that it can scale up and down and deliver all the benefits we spoke about earlier. It needs to coexist with, and not necessarily replace, your existing ingress controllers and sidecars; it has to play well in that setup. And it needs to be a single pane of glass for all your microservices, so you have one place for management and visibility rather than managing every security service individually. That way, your protection moves with the microservice wherever it goes. So that's the framework of must-haves we've defined for the new way of doing application security. In the end, I just want to talk about how you should think about the new security stack. Obviously, you need to worry about infrastructure security where your Kubernetes environment is running: are the virtual machines being patched or not? Does your Kubernetes stack have any vulnerabilities that can be exploited? You need to keep up with that. The next layer is container security itself.
You could have unpatched versions of containers that you need to be aware of; from a compliance perspective, visibility and logging come into play. Who's talking to whom? Is the communication secure or not, and so on. But in the end, you also need to worry about the runtime of the application itself, through protections like a web application firewall, bot defense and application-layer DDoS defense. And rather than having those at the perimeter layer, it's much better to have them closer to the microservice itself, and to think of that as part of the security stack in the microservices world. This was more of an introductory webinar. We plan to have a follow-up webinar on how to actually do it, how you inject the services, so I look forward to presenting that in the coming weeks or months. With that, I'm going to wrap it up. Any questions on this, please feel free to send them over. Awesome. Thank you, Shreyans, for a great presentation. We do have some time for questions. Before we jump into that, I just want to add that KubeCon + CloudNativeCon North America is CNCF's flagship event, and it will be here before you know it. This year it's being held in San Diego, which should still be sunny and lovely in November. This is the time for the community to come together to further education and to advance cloud native computing. So if you'd like to attend, please go to kubecon.io for more info and lock in your ticket before it sells out. With that, I will go ahead and jump into questions. It looks like we have quite a few; I can read from the top. Shreyans, Wahid Malik is asking: is this a micro-segmentation or nano-segmentation solution, or both? Depending on how you look at it, I would think of it as more of a micro-segmentation: the security is built into the pod itself, and how you define the pod is really up to you.
So you can take it to the level of nano-segmentation if you like; it really depends on how you design the solution. Wonderful. We also have another question from Wahid: do you integrate with something like Apigee, Axway or Akana? We currently don't. The idea really is that rather than integrating at the API gateway level, we are more focused on the microservices themselves as they come up and down. We have a generic way to work with ingress controllers as well. But we spoke about the limitations of integrating at the perimeter level, and that could be the API gateway as well, so the approach we are suggesting is moving protection closer to the microservices themselves. Cool. We also have another question from Pradeep Nambiar: wouldn't CAPTCHA block the fake requests to some extent? Not really. Number one, CAPTCHAs have been broken for a while. And in Kubernetes and microservices environments especially, we're talking mostly about APIs, and APIs are exposed primarily for mobile apps or machine-to-machine communications, where CAPTCHAs don't really work at all. There's a lot of information available on our site about why CAPTCHAs don't work; feel free to look there. So it's not a simple answer, but we've seen them broken all the time. All right. Another one from Wahid: why wouldn't every API call go through an API gateway, so I can apply all my security policies at the gateway choke point? The primary difference here is that because the microservices talk to each other, you could have a compromise at one microservice, and that can be used to move horizontally and abuse the others. So you want protection closer to what is being abused, rather than a perimeter-based approach. Okay. Another question we have from Joe Hackett: how does your AI technology automate resolution once it detects a bot or DDoS breach?
We have a policy-based approach to this. The technology gives you an intent, that is, what kind of attack is actually going on. You could see a scraping activity; you could see something like fake-like activity. And you can define policies ranging from simply blocking, to rate limiting, to even deceiving the attacker by sending fake responses. There's a very long answer to this, and you can go to our site and look at how we actually do it, but in short: we have policy-based actions to rate limit, block, and even send signals upstream for your application to decide what to do. Okay. Another question from Thomas: should I understand "inject" as something done at container build time, pod deployment time, or during runtime? Pod deployment time. When the pods are deployed, you can inject the sidecars as they come up. Great. Question from Brian Irwin: are the individual sidecars in each pod sending telemetry to a Cequence controller within the Kubernetes cluster, or to a SaaS-based service that can span multiple clusters or clouds? We have an option for both; it really depends on our customers. We can keep everything within your environment, so you don't have to worry about PII or logs leaving your environment if your company is opposed to sending any data to a SaaS service. If you are interested in a SaaS service, we support that as well. Very cool. Bhupathi is asking: any open source tools available to run security scanning on Kubernetes clusters? There are a few; I can't think of them off the top of my head. What we presented here is primarily a framework, and you can put services like these together yourself. You don't necessarily have to use Cequence to do that; you can use open source services to do the same. Okay. We have a few more questions from Wahid and Pradeep, so I'll jump into those.
Is this solution a sidecar solution, or is there an agent per container? It's a sidecar solution, a sidecar or a sidecar helper. You don't necessarily need us to be the sidecar itself; it can be chained with your existing sidecar. Okay. And is this a hybrid solution where, if my app moves from private to public cloud, the policy moves with the app? That's right. Okay. Great. Do you have support for DKS, PKS and NKS? Yes. With the kind of approach we have, PKS does work. We've not tried it with NKS and DKS, but the approach is generic enough to be applied anywhere. Okay. Do you quarantine requests or just deny them? Like I said, it's a policy-based approach, and we leave it to our customers how they want to deal with it. We can deny them. We can quarantine them, in the sense of putting them into sort of a tarpit, an area where the request doesn't actually hit the real microservice. We can even send them fake responses. So we have a bunch of options to work with. Okay. Another question from Brian Irwin: is there any difference between running the Cequence sidecar versus injecting into an existing NGINX sidecar? It really depends on you. If you don't have anything, you can use the Cequence sidecar; like I said earlier, if you already have an NGINX or Envoy sidecar, you can inject into that. It depends on how your environment looks, rather than us forcing a particular way of doing things. Okay. And then: isn't there a risk of letting the malicious request into the pods from the perimeter? And this question, I believe, is for slide number 13. It's a layered defense strategy. There are things you want to handle at the perimeter layer, like infrastructure-layer DDoS, but you can't just stop there. The idea really is that you are protecting the actual microservice.
And the protection needs to be closer to the microservice so that it has better context around the request. If you try to do everything at the perimeter, you lose that context: where the request is going, what it's doing, and whether there are internal malicious actors, like other compromised microservices, that can take advantage of it. Okay. And do you support a deceptive mitigation which can randomize responses? We do, yes. And it's not just randomized; it can be done on a per-application basis, because every application is different. You can define your own success and failure criteria, you can define new account creation criteria, you can respond in different languages. All those options are available. Okay. And I think we are at our final question: what is the false positive rate that you've seen from the application DDoS layers? The short answer is a very low false positive rate. The way we do it is with a confidence-based approach and policy-based actions, where you can define your own dials: what is the rate you are happy with? Beyond that, we can send the signals upstream to your other applications or fraud systems, so they can combine our signal with theirs to take action. That way you can keep the false positives very low while still not letting a lot of bad traffic in. Cool. And we have another question from Brian Irwin: can your solution span into traditional monolithic architectures as well? It does. The way we've designed it, our delivery model is containers. In traditional monolithic architectures, that allows us to deploy on off-the-shelf hardware in a traditional perimeter-defense-based approach, and we work well there. Then, as you transition to more microservices-based architectures, we can transition along with you. Great.
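Several of the answers above mention policy-based actions on a detected intent: deny, rate limit, quarantine, or deceive with fake responses. Here is a toy Python dispatcher tying those ideas together. The intents, actions and thresholds are all illustrative assumptions, not the vendor's actual policy engine.

```python
# Toy policy dispatcher for detected attack intents. Each intent maps to an
# action: block, rate_limit, or deceive (fake response). All values here are
# made up; a real engine would be configuration-driven per application.
from collections import defaultdict

POLICIES = {
    "credential_stuffing": "block",
    "scraping": "rate_limit",
    "fake_reviews": "deceive",
}

RATE_LIMIT = 100  # max requests per client per window (invented threshold)
request_counts = defaultdict(int)

def apply_policy(intent: str, client_id: str) -> dict:
    action = POLICIES.get(intent, "allow")
    if action == "block":
        return {"status": 403, "body": "denied"}
    if action == "rate_limit":
        request_counts[client_id] += 1
        if request_counts[client_id] > RATE_LIMIT:
            return {"status": 429, "body": "slow down"}
    if action == "deceive":
        # Plausible-looking fake response so the bot learns nothing useful.
        return {"status": 200, "body": "item out of stock"}
    return {"status": 200, "body": "ok"}

print(apply_policy("credential_stuffing", "10.0.0.5"))
print(apply_policy("fake_reviews", "10.0.0.6"))
```

Because detection (the intent) is separated from the response (the policy), each application team can choose a different dial, which mirrors the confidence-based, per-application tuning described in the false-positive answer above.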
And it looks like we don't have any more open questions. So thank you, Shreyans, for a great presentation. It looks like we've covered most of the questions, all of them actually. Thank you so much for joining us today. The webinar recording and slides will be online later today, and we're looking forward to seeing you at future CNCF webinars. Have a wonderful day. Thank you. Yeah, thank you. Thank you for hosting me, and thanks to the folks who attended. Thank you very much. Great.