Thank you for joining my talk, "From SBOMs to IBOMs: Know What's Happening in Your Clusters." First, let's start with the most important thing: what's in it for you? Why should you stay here and not grab the next cup of coffee? I know I had one too many. We'll start by understanding why an IBOM, which stands for infrastructure bill of materials, is crucial for security, but also for platform efficiency. Then we're going to define the IBOM and understand why we should all care about it. Next, we'll talk about its implications for the attack surface. And later we'll learn how an IBOM can help us all control our clouds better.

So who are we? My name is Ido Neeman, and I'm the co-founder and CEO of a company called Firefly. We do cloud asset management. I spent almost 12 years in the Israeli cyber intelligence force, then led technology for a prominent hedge fund and worked for a startup in the serverless field. With me today should have been Cindy Blake, Firefly's VP of Marketing. But here we have our first infrastructure failure: just like with an IBOM and an SBOM, we created a (we hope) amazing deck for you, but Cindy couldn't be here. So the software, the service, worked well, but the infrastructure failed. And if we had done the BOM, the bill of materials, for the infrastructure, we could have prepped better for this talk. So it will be only me, but I'll try to make up for Cindy's absence.

Let's start with the basics. For the past few years, SBOM has been a very hot buzzword, so I'm sure most of you have heard about it many, many times. I don't want to reinvent the wheel and explain what an SBOM is and why it's important, so I asked the smartest thing I know, ChatGPT, what SBOM is all about, and it gave me a very elaborate answer. Let's take the key part of it. It says something about a software bill of materials: a complete inventory of all software components. It covers all third-party software, open source libraries, and other components. This is where most of us are focusing our current SBOM efforts and endeavors. Then it says the SBOM is an essential component of software supply chain security. Supply chain is fantastic; personally, I'm really excited about this field. And you need to list all the third-party components, right? So what it basically says is: let's understand what our services, our software, are comprised of. Let's list all the open source, all the dependencies, the whole supply chain, and understand what we have in our software, because open source is huge, Stack Overflow is used, GitHub is used, and we don't really know what our services are made of.

Now let's try to define the IBOM, the infrastructure bill of materials. First and foremost, it's a full inventory of our infrastructure: where we have our servers, where we have our storage, where we have our network, everything that's connected, Kubernetes, pods. After we list all the key components, we need to inventory all the dependencies. For example, if I have a server, where is the server's storage? If it's connected to a network, which network is it connected to? IAM, and so on and so forth. Then, since we're here at KubeCon and CloudNativeCon, we should talk about child resources and predecessors, right? If I have a Kubernetes cluster, I need to understand all its child resources. If I have a ReplicaSet, I need to know all the pods connected to it and all their storage allocations.
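To make the child-resource part concrete, here is a minimal sketch, assuming the official kubernetes Python client, a reachable kubeconfig, and a hypothetical ReplicaSet called web-rs in the default namespace; it lists the pods the ReplicaSet owns and the PersistentVolumeClaims they mount:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a cluster
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Hypothetical ReplicaSet, used only for illustration.
rs = apps.read_namespaced_replica_set(name="web-rs", namespace="default")

# The ReplicaSet's label selector tells us which pods are its children.
selector = ",".join(f"{k}={v}" for k, v in rs.spec.selector.match_labels.items())
pods = core.list_namespaced_pod(namespace="default", label_selector=selector)

for pod in pods.items:
    print(pod.metadata.name, pod.status.phase, "on node", pod.spec.node_name)
    # Storage dependencies: the PersistentVolumeClaims this pod mounts.
    for vol in pod.spec.volumes or []:
        if vol.persistent_volume_claim:
            print("  mounts PVC:", vol.persistent_volume_claim.claim_name)
```

The same walk continues upward (the Deployment that owns the ReplicaSet) and downward (PVC, PersistentVolume, cloud disk), and that chain of parents and children is exactly what the IBOM is supposed to capture.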
And the scale status: for example, if I have an auto scaling group in my public cloud. Then we should track all the infrastructure as code, for example Helm or Terraform or Ansible or Pulumi, that connects to this environment, or controls or provisions this environment or infrastructure. And lastly, maybe as important as the native cloud components but many times overlooked, all the third-party tools that have become an integral part of our platforms. We don't think about our Okta the way we think about our ReplicaSet. We don't think about MongoDB Atlas the way we think about an RDS instance on AWS. All of those are super critical, and you can find your favorite vendor somewhere across your infrastructure.

Then, after we have the entire inventory and we understand how our platform is built and which components and dependencies comprise it, we should capture the configuration and the version of every item in the inventory. And the last part is ownership. I know that in the modern era, especially since I work a lot with SREs and platform engineering, "blame" is not something you can say, because we're all in this together and we need to support every production issue together and solve it as fast as possible. But ownership is super critical for understanding how to manage our infrastructure and how to really build the inventory, because we create the IBOM for the case where we have a problem, an incident, or a security issue, and then we need to go to the owner of the specific resource or environment to handle it. So we have to add ownership to the inventory.

So we understood, or hopefully understood, what an IBOM is: a full inventory of my entire platform, which hopefully runs in the cloud. Now let's see some examples of what this inventory looks like. The very basic example is for me to write a small Python or Bash script, something like the sketch below, that shows me, in this case for AWS, all the EC2 servers that are older than 10 days and still running, with their sizes, so I understand how many servers I have. Very, very basic. Then I can go further and list all the servers in a specific region and even see whether I have backups for them. The next version would be to build something very complex that runs across my entire multi-cloud infrastructure, subscriptions, and Kubernetes clusters and lists every single resource in them with its type, name, owner, infrastructure as code, status, location, properties, tags, creation date, and so on and so forth. But that becomes super complex. And the next stage is to have a dedicated inventory or asset management solution for the cloud native era.
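A minimal sketch of that small Python script, assuming boto3, default AWS credentials, and us-east-1 picked only for illustration, might look like this:

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")
cutoff = datetime.now(timezone.utc) - timedelta(days=10)

count = 0
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            if inst["LaunchTime"] < cutoff:  # launched more than 10 days ago
                count += 1
                print(inst["InstanceId"], inst["InstanceType"], inst["LaunchTime"])

print(f"{count} running instances older than 10 days")
```

Extending this toward the "complex" version (more clouds, Kubernetes clusters, owners, tags, IaC status) is exactly the point where a script stops scaling and a dedicated inventory starts to make sense.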
So after we understand what the concept of the IBOM is and how we can start implementing it, we should probably understand its implications for the attack surface. The IBOM is huge for AppSec, right? Everyone in AppSec is thinking about the supply chain and the SBOM, and we are very focused on scanning all the dependencies; and if we're Kubernetes fanatics, we always want to scan the images and make sure everything runs properly. But what if I scan all the dependencies, I scan the image, everything looks okay, the configuration for this specific container is fine, but, going back even to 2015, I'm running this Kubernetes pod inside a cluster which is itself vulnerable? Now I can escape the container and work my way up through the kernel to whatever part I'd like to take advantage of.

So an IBOM allows us to outline the entire attack surface, not just the software. Like I showed you: the slides are here, but part of the infrastructure is missing, and that part is vulnerable. The human element is almost always the most vulnerable; in this case, we make it work. We need to understand the entire chain of our infrastructure and our service. Let's say I have a piece of software that runs on Azure, and I make sure the entire Azure configuration is built, inventoried, and secure. Fine. Then I run Kubernetes on top of Azure, and I take care of the Kubernetes. But now I'm going out to MongoDB Atlas. MongoDB Atlas is a great service, I hope it's always secure, and I personally use it, but let's say someone manages to take advantage of me shifting data in and out of MongoDB Atlas; I need to understand that all this traffic goes in and out of Azure. And lastly, I have Datadog monitoring all of this, and I'm using Cloudflare as a CDN to distribute the data. If I don't understand each and every component across the entire service, I'm missing out. Just like the image articulates: I can have a very strong chain, but if one link is super weak, I'm probably doing something wrong security-wise.

So let's see some examples, some more-than-basic examples, of overlooked IBOM issues. Take Kubernetes: I can scan my images and know my cluster configuration is correct, but if I don't keep track of my cluster version, I might be running an out-of-date and vulnerable version of Kubernetes. The API version of every object in my infrastructure. The operating system version and type of every node I run. A DB version, which can be a bit tricky, because we all know the public cloud security teams are amazing. Let's say I'm running a managed Redis on AWS. Obviously, at least for the people I know, the AWS security team can patch vulnerabilities in Redis faster than I can patch and roll out in my own production environment, so a managed service is good in terms of security. But if I'm not allowing auto upgrade to minor versions, I might think, oh, it's a managed service, it's fine, I'm already patched against the zero-day that was discovered this morning, when in fact I'm doing my security team a disservice.

Then we can talk about all the Terraform or other infrastructure as code modules that we run. Is it something we created? Is it something another team created? Is it a public repo or a public module? Because unfortunately, some bad actors inject vulnerable code into public infrastructure as code modules, and I need to understand that as part of my IBOM. And which part of my infrastructure is consumed by each service? Because in case of an incident, or in case I want to change something, I need to understand, per service, which infrastructure it consumes.
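Going back to the managed Redis example, here is a hedged sketch of how that one configuration detail could be pulled into the IBOM, assuming boto3 and credentials for an account running ElastiCache (pagination is omitted for brevity):

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Flag managed Redis clusters that will NOT pick up minor-version patches on their own.
clusters = elasticache.describe_cache_clusters()["CacheClusters"]
for cluster in clusters:
    if cluster["Engine"] == "redis" and not cluster.get("AutoMinorVersionUpgrade", False):
        print(
            f"{cluster['CacheClusterId']}: Redis {cluster['EngineVersion']} "
            "has auto minor version upgrade disabled"
        )
```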
Next, we can think about security group or VPC changes. I'll share a quick story from one of our customers at Firefly. A relentless SecOps engineer examined a specific environment and found a server that was no longer needed, and the server consumed a security group. So he looked at it and saw: okay, one security group, one server, I don't need the server, let's run terraform destroy. He destroyed it and felt good: he had minimized the attack surface of the environment and eliminated some waste. But 15 minutes later, production started to crumble, and some bad things happened to data retention. Why? Because the SecOps team had set up that security group with one server in Terraform, and then another team in the DevOps department saw the security group and attached two additional servers to it. The SecOps engineer examined the Terraform manifest, saw one server in the security group, ran terraform destroy, Terraform saw no problem, and destroyed it. But the two other servers were now disconnected from the network, and that took production down. This is very bad. If they had had a detailed IBOM, they would have known that deleting the security group would take those two additional servers down.

Another thing we should think about is how we communicate changes to the infrastructure. What's the source of truth? If I'm the SecOps engineer and I see a new ingress rule into my VPC, do I allow it or not? How do I know? I work in a large enterprise with hundreds of developers, and I don't know whether someone added a new piece of software that needs to relay traffic across my VPC. So how do I know? If I have a detailed IBOM, I have a source of truth for my infrastructure, and there are no questions asked. It's not a game of cat and mouse where SecOps closes the ingress, the DevOps team opens it again to allow ingress for the SaaS vendor, and so on and so forth.

Moving on, we can see in our IBOM where and when all resources were deployed. If I see lots of pods, well, obviously not pods, I hope you don't have pods running since 2017, but compute instances running since 2017 that are still up, that might get me thinking and researching why. And as a last example, a list of all the roles assumed by SaaS providers. I can tell you from examples close to me: companies that don't remember whether they already switched from Prisma Cloud to Wiz, or whether they're still using CloudHealth, or now it's FTO. Should I remove this assumed role or not? I don't know. It's a mess.

So now, after understanding that we need to map the IBOM, let's see how it can be used in case of an incident. Let's hope it never happens, but I remember when Meltdown and Spectre hit, the vulnerabilities in the Intel processors, and we were all very scared for software running on instances based on those processors. Let's say, and I'm really hoping it won't happen, that tomorrow morning, God forbid, we find a vulnerability in Graviton by AWS. How do I know which part of my software, which part of my data set, was exposed to the compromised or vulnerable processors? If I don't have an IBOM to go with it, that's almost impossible, or very, very complex, to figure out. If I have it, it's just a few clicks or one command line away to output the list, start investigating, and make a proper disclosure.
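As a hedged sketch of that "one command line", assuming boto3 and that "exposed to Graviton" can be approximated as "running on arm64 instances" in a given account and region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Graviton-backed instances are the arm64-architecture ones; list what runs on them.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "architecture", "Values": ["arm64"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            # "service" is a hypothetical tagging convention for mapping instances to services.
            print(inst["InstanceId"], inst["InstanceType"],
                  tags.get("Name", "<unnamed>"), tags.get("service", "<unknown service>"))
```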
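And going back to the security-group story, the dependency the team was missing can also be asked of the cloud directly. A minimal sketch, assuming boto3 and a hypothetical security group ID, lists everything still attached to a security group before anyone destroys it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
sg_id = "sg-0123456789abcdef0"  # hypothetical security group about to be destroyed

# Every network interface still using this security group, no matter who or what created it.
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "group-id", "Values": [sg_id]}]
)["NetworkInterfaces"]

for eni in enis:
    attachment = eni.get("Attachment", {})
    print(eni["NetworkInterfaceId"],
          attachment.get("InstanceId", "<not an EC2 instance>"),
          eni.get("Description", ""))

if not enis:
    print(f"Nothing is attached to {sg_id}; it should be safe to remove.")
```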
So up until now we've talked about why the IBOM is important from the security perspective of the cloud or infrastructure practitioner. Now let's think about the operational point of view. If I take a service down, or something changed and I want to take a service down now, how do I make sure the entire supporting infrastructure is really eliminated, that I don't leave any waste, and that I don't have any vulnerable instances still running?

And if it's so important, and we've now talked about why the IBOM is so important, what about compliance? Because the SBOM has probably been the compliance darling of the last two years. SOC 2 loves it. The U.S. federal government really loves it. FedRAMP adores it. And now everyone is occupied with being SBOM-centric. But is an IBOM required or demanded? I'm not sure. I'm not sure those compliance frameworks explicitly require an IBOM. But if you know how big banks and very large enterprises like the Fortune 100 work, you know they are already implementing IBOM tracking mechanisms. And we need to understand that they are always one step ahead in terms of compliance. If they are tracking an IBOM, did we miss something?

So I got to thinking about it and started to see some efforts around the IBOM. CycloneDX, which is very hot, explicitly covers the SBOM, the SaaSBOM, a bill of materials for all the SaaS that you use, and the HBOM, the hardware bill of materials. And now there's the new cool kid on the block, SLSA, I think they call it: Supply-chain Levels for Software Artifacts. I don't know why I put a picture of Elmer Fudd here, but I suspect it's more FUD than something real. But if CycloneDX, which is important, wants to list the SBOM and the HBOM, the hardware bill of materials, and I don't have hardware, because my company is cloud native and we use only the cloud, then what's the equivalent? If my cloud is software-defined infrastructure, is it part of my SBOM or not? And the SaaSBOM: is it just a list of SaaS? Or, when it comes to Mongo Atlas, or services like Spot by NetApp or Aqua Security that really touch and control my infrastructure, how deeply should I understand them and list those resources as part of my IBOM? FedRAMP is taking a step forward and really starting to ask you to BOM everything that relates to your service, including the infrastructure part of it.

Once again, I think it's important to note that the IBOM is definitely not just for security. If you track your IBOM correctly, you can make your platform much more efficient and tight. And now another story. One of our customers calls us one day and tells us that, using Firefly, they found an environment set up by a sales engineer that cost them $3,000 per month. They're a relatively large enterprise, so we asked: okay, fantastic, we're happy, $3,000 is a lot of money, but you pay your cloud providers orders of magnitude more per week, so why are you excited about this? They tell us: this engineer left the company, and he did so three years ago. So for three years an entire environment was up and running. The company assumed that when the HR management SaaS marked this employee as no longer an employee, because he left the company, it would take everything down, but it didn't take down his cloud infrastructure. So it's a security weakness, a significant security weakness in my opinion, but also just a lot of cloud waste, which added up to more than $100K in unneeded expenses.
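The fix for that story is as much process as tooling, but the inventory side can be sketched. A minimal example, assuming boto3, an "Owner" tagging convention on resources, and a hypothetical list of current employees exported from the HR system or the identity provider:

```python
import boto3

# Hypothetical: the set of current employees, e.g. exported from the HR SaaS or the IdP.
current_employees = {"alice@example.com", "bob@example.com"}

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            owner = tags.get("Owner")  # assumes an "Owner" tag convention
            if owner is None:
                print(inst["InstanceId"], "has no owner at all")
            elif owner not in current_employees:
                print(inst["InstanceId"], "is owned by", owner, "who is no longer here")
```

The same ownership check applies to every resource type in the IBOM, not just EC2; that is exactly why ownership belongs in the inventory.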
So if I try to wrap this part up: the IBOM really lets DevOps, SREs, and platform engineers enjoy the security benefits, quite apart from any compliance requirements. And now we need to understand why the IBOM was neglected. Why, when the SBOM is so big, is the IBOM something we almost never, or never, hear about? It comes down to the fact that the old world was segmented differently, right? Software teams cared about software, and IT teams cared about hardware; so software teams care about the SBOM and IT cares about the HBOM. But the cloud, as mentioned before, is software-defined infrastructure, so where does it fall? If I manage this infra through my Git and deploy it using my CI/CD, is it software? I'm not sure, because AppSec teams and software teams don't care about the IBOM, they care about the SBOM; and cloud security teams don't really care about the IBOM either, they just chase the next vulnerability, they chase IAM role tightening, and so on and so forth. And this is why it was neglected.

But what if I told you we never actually neglected the IBOM? What if I told you that the old, legacy IT world has a very detailed, an extremely detailed, IBOM? It's called the CMDB, the configuration management database. Some of us can remember it. Amazing companies like BMC and ServiceNow built amazing products around the CMDB. But that CMDB world was designed for a world where you have actual servers sitting in racks in your basement; those CMDBs even list the HVAC and the AC units in your data center. They were built for a situation where you change your servers once every two years and track licenses for your on-prem SQL, not for Kubernetes pods going up and down every 100 milliseconds. So the CMDB world, this great IBOM database, is missing from the cloud world. And for us here at KubeCon, at CloudNativeCon, this is a very important piece that is missing. I'll probably skip the shameless plug for Firefly being the best cloud native CMDB and IBOM generator, but we should think about it.

So what is a cloud native CMDB, a configuration management database? It's a tool to control and see everything in my infrastructure: my public cloud, and obviously it should be multi-cloud and multi-account, but also all my relevant SaaS applications, the Datadogs and Turbonomics and New Relics and Oktas and so on, and obviously Kubernetes, serverless, OpenShift, all the cloud native technologies. Then we need to understand which parts of my cloud are managed with infrastructure as code and which are not. The unmanaged resources are almost worthless for the IBOM, because if they are not codified into, let's say, Helm or Terraform, they are not scalable, they are not immutable, and I can't replicate them; that's a problem. The last part: I need to understand the scale status of my infrastructure and the relationships and dependencies, and keep a cloud native mindset. It's not about listing which physical rack this server lives in; it's about which user deployed it, who owns it, and what its status is.

So after we've established the importance of the IBOM and the cloud native CMDB, we need to understand why drift, or detecting, remediating, and eliminating drift, is key to keeping my IBOM always up to date and to handling deviations from my secure or desired baseline. A drift is any deviation, any gap, between the desired state of the cloud and the actual state of the cloud. It means I architected, or wanted to architect, my cloud to look like A, but in reality it looks like B. And what can cause this move from A to B? It can be a person making a change. It can be a glitch in my CI/CD pipeline. It can be an error. It can be a third party, a SaaS or a physical vendor, making changes to my cloud. And this is often a silent killer for innovation: it creates tech debt, it lets production issues and errors sprawl in, and it generates a lot of hassle for the ops teams. So it's like Pokémon, right? We've got to catch them all. If we architected a secure baseline, a desired baseline, we need to catch every deviation from it and handle it as much as possible.
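At its most basic, drift detection can be just asking the IaC tool whether reality still matches the code. A minimal sketch, assuming Terraform is the tool in use and that the command runs inside an already-initialized working directory:

```python
import subprocess

# `terraform plan -detailed-exitcode` exits with 0 when the actual cloud matches the
# desired state, 2 when there is drift or pending changes, and 1 on error.
result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("No drift: the cloud matches the desired baseline.")
elif result.returncode == 2:
    print("Drift (or pending changes) detected:")
    print(result.stdout)
else:
    print("terraform plan failed:")
    print(result.stderr)
```

Of course, this only catches drift in the resources Terraform already knows about; the unmanaged resources mentioned above are exactly the ones such a check will never see, which is one more argument for getting them codified.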
Without that, the IBOM becomes irrelevant and out of date, and the whole point of the IBOM is to always have a detailed and accurate representation of my infrastructure. So the final part: after we've established the IBOM, and established how drift detection and remediation keep the IBOM efficient and up to date, we can think about how to take advantage of the IBOM. One example is ransomware resiliency. If everything in my infrastructure, let's say my cloud, is properly configured, codified, and documented, it's easy to set it up in a different location, in a different cloud, a different account, a different region. I'll share another story. I think it was four weeks ago: one of our customers tells us that they are being attacked, and there's a ransomware incident around their Okta configuration. And obviously, you know, if you take away our Okta, we are locked out. It's literally the keys to production, because we don't have a kingdom just yet. So if your Okta is taken away from you, you have a problem. But if you made sure that you always have your Okta configuration properly codified, let's say in this case in Terraform, you can go back in time, take your Okta configuration in Terraform from two days ago, just run terraform apply for it, and you're up and running, and you don't care about the ransomware attack. So this is one of the benefits of having an IBOM: you are much more resilient to ransomware and other service failures.

All right, so, conclusions. First, XBOM, mostly the SBOM, has been one of the hottest buzzwords in security over the last two years, but the infrastructure, the IBOM, was neglected. It's like taking our jewelry and putting it in a safe, but the safe is in a trailer, and it's easy to hijack the trailer. So I think we should all track our IBOM and understand how our infrastructure looks at any given point in time. And I think it's a professional must: from a security standpoint, from a FinOps standpoint, from compliance, and from just being a professional. I don't need Kubernetes, and I don't need serverless, if I just want high scale at any cost without any professional considerations; I could just spin up lots of NGINX servers and have every request handled super, super quickly, but it's not a professional move. The same goes for the IBOM: if I want to be efficient and have a tight platform, I need an IBOM. Then remember that the OGs, the original gangsters of IT and on-prem data centers, understood the importance of the IBOM and created the CMDB. But now let's think: which infrastructure is more complex, legacy IT or cloud? Which changes faster? Which has had more technology advancement? Which is touched by more teams at a higher pace? I think we are much more complex. So instead of just thinking about the SBOM, we should all start thinking about the IBOM.

And the last part, how to get started: we should start by inventorying our cloud and understanding which parts of our cloud are properly codified and which are unmanaged; these are the basic building blocks of an IBOM. And with that, I'd like to wish you all that your infrastructure never drifts and that you never get hassled by an on-call on your weekend. If you like this talk, you can rate it. If not, you can also rate it; I'll leave it up to you. And with that, just 15 seconds over time, I'm open to any questions if anyone has one.