Good morning, everyone, almost lunchtime. I just want to say thank you for taking the time to come here. Today we're going to be talking about this platform-as-a-service abstraction known affectionately as Batcave, and how it is used in one of the largest federal health care organizations, one of the largest federal agencies. Specifically, we are focused on delivering health care-oriented applications, and how we're doing our software security, supply chain security, things like that, through this particular approach. So for introduction's sake, I'm Rob Wood. I'm the Chief Information Security Officer.

Hey, everyone, Jed Johnson. I'm a tech lead with Defense Unicorns and a platform engineer on the Batcave project.

Awesome. So CMS, for anyone who is not familiar with it, is the Centers for Medicare and Medicaid Services. Basically, we manage the back end, the payment backbone, for all the payments that flow through the Medicare and Medicaid systems, along with some other supporting programs like CHIP, the Children's Health Insurance Program. We do clinical standards for hospitals across the US, all 6,000 or so of them, as well as a bunch of other things. All in all, our programs cover over 130 million people across the US, which is a pretty decent amount of PHI, PII, just sensitive stuff that we have to make sure we are protecting.

So as you might imagine, doing that at scale inside the federal context comes with challenges. There are challenges from the federal compliance standpoint: you've got FedRAMP, you've got ATOs, you've got lots of bureaucracy. I'm sure people have memed all of the red tape that you're swimming through to do anything in a large enterprise, let alone the largest enterprise in at least the country, potentially the world, that is the US federal government. And we have thousands of developers, all working across different parts of the organization. It's federated, it's disconnected, loosely coupled, different missions, different priorities, all of that. And there's a lot of waterfall development still happening. There are a lot of agile transformations that have really just translated into what's more affectionately known as agilefall: you're delivering in sprints, or you're tracking a waterfall project in Jira, and you kind of wrap it up and call it DevOps and move on. That's not what we're aiming to do. We're aiming to actually enable faster, more efficient, more secure software development.

So our solution, or part of our solution, to this problem is the Batcave, which, because every good government program needs an acronym, is the Continuous Authorization and Verification Engine. We just needed something that mapped to something fun and exciting for branding purposes, so that's what we came up with. CMS has one of the largest cloud footprints in the world, mostly on AWS, some on Azure, so there is a very large cloud footprint that we're already working with. And the Batcave is really an opinionated abstraction on top of the CMS cloud program. That's tech, that's process, that's onboarding, that's people, that's policy, all of that.
But it's an opinionated abstraction trying to alleviate some of the double work and legwork and prep work for the compliance and authorization processes that people have to go through in order to get something done and built inside of the enterprise. And so some of our goals, and we have many: we want to help people move faster, naturally. We want to help people do it more securely, naturally. We want to make sure that things are compliant, because we have the Office of Inspector General to reckon with, and audits, and all of this stuff that we have to deal with. That's all important in the federal context. But really, at the end of the day, if I could draw your eyes to the bottom right of this slide, it is about empowering and enabling continuous delivery, like true continuous delivery, in the federal context. Not "we want to deliver something, so we're going to pause and wait a month and do a security impact assessment and a controls review, like a mini audit." That's not continuous delivery. You need to make a change because Log4j just came out, the Log4Shell issue just got released, and you need to change your application right now. We want to be able to do that without the bureaucracy getting in the way. So, tech supported and process supported.

So I want you to rewind yourself in time, back to March 2020. We're all sitting around. Maybe we're in an office. Maybe you're out at the mall, or out at the bar. And COVID-19 is just starting to float around the world. It's coming to the US. There are like 10 to 20 cases. And then the White House issues a decree: nationwide shutdowns. Everything is going into lockdown mode. Businesses stop bringing people in. All of that happened really, really fast worldwide, especially in the US. So now imagine you're inside of a federal agency, and you have a responsibility and an imperative to start contributing to the solution in some meaningful way. So you go through and you start generating ideas. And in the federal context, at least in CMS, we are heavily supported by contract partners. And to get to a contract partner, you can't just sign up with a credit card. You actually have to prepare a proposal, a procurement package, put it out, people have to submit bids to it, all of it. That is a whole long process in and of itself. Another thing that has been heavily memed: the slow federal procurement process. So call six months a pretty aggressive timeline, because these are extraordinary circumstances. And the situation keeps evolving the whole time.

Now let's say you've awarded your contract. You're starting to do design work and onboarding people and background checks, and the contractor is hiring the folks that are going to staff the project, and this idea is really starting to come together. From there, you're starting to set up the infrastructure you need. You start to make some design decisions around what you're doing. And then fast-forward all the way into the future, and you've got your thing built. And now you have to go through a three-to-six-month ATO process, which is actually somewhat quick in some agencies. And what I want to really try to drive home here, if I can do this without falling off the stage, is look how much the problem space has changed.
From when you were over here, when you started coming up with your ideas for how you were going to fix the problem, the assumptions that you made then and the assumptions that you have now, when your app is live, are radically different. We're talking millions upon millions of people affected, or potentially facing lethal outcomes, because of one strain of a virus in COVID-19. And this is one health care emergency sort of situation that agencies like CMS are contributing to. There's the opioid epidemic. There are these more bespoke diseases that are starting to have outbreaks again. These are the kinds of situations that federal agencies need to be able to respond to nimbly.

So let's say we get all the way to the end of that, and we need to start making changes quickly, on the order of magnitude of hours and days, not weeks to months to quarters. We need to be able to do that in a way that doesn't throw the baby out with the bathwater of all the compliance, security, and resiliency requirements that are necessary when operating in the business of government. We can't forgo that, because real people are affected. It's not like you're going to lose your cat videos or your food logs or whatnot. We're in the business of the government, so we have very real things to reckon with and contend with. And presidents come and Congress comes, and there's hell to pay if you don't do something that you were asked to do from those upper echelons of government.

So this is our flywheel. I'm not going to walk in detail through all of this, but you'll see terms on here like marketing and sales and customer enablement and value. We are really thinking about the ecosystem that is CMS: hundreds of FISMA systems, which are composed of different web services and applications and data tools, all of that. We're thinking about this as a total addressable market, really trying to lean in, instead of bringing the hammer down like most cybersecurity organizations do and saying "thou must adopt this platform," because people are just going to ignore us. That is inevitable. That's typically how it goes in security when security teams go in without empathizing with their stakeholder or customer on the other side, and just make demands. You might get short-term compliance, but long-term people are going to hate you, resent you, ignore you, and so you're not going to really move the needle. So we're really leaning in on this concept. This is part of a broader culture shift inside of CMS, inside of our cybersecurity organization, that is focused on empathy for the person on the other side of the conversation, on developer enablement, on the person who's struggling to just get something done because the app that we provided them sucks, because it takes 20 seconds to load a page, because it's not accessible from some network segment you're in, all of that stuff. Focusing on value, investing in value, starting to spread the word, really trying to build something that people find valuable and letting that create virtuous cycles, that is the cornerstone of our particular flywheel. So with that, we're gonna transition into some of the technical bits of how the Batcave actually works.

All right, very good, hey everyone. Again, my name is Jed Johnson and I'll be talking about some of the engineering implementation of the Batcave platform. So to begin with, the Batcave platform is built exclusively on open source software and partnerships.
In particular, we leverage an open source product called Big Bang, and this is maintained by a United States Air Force organization called Platform One. Big Bang is the core tech that the Batcave is built off of, and it's what has allowed us to go from zero to prod in less than six months. Big Bang itself is a declarative baseline of configurations and applications used to create a secure Kubernetes-based platform. So when we think about what all is necessary to build a platform, things like logging and monitoring, runtime security, pipeline tooling, et cetera, Big Bang provides secure hardened containers and configurations for these core platform components right out of the box. Additionally, these hardened containers and configurations come with OSCAL documents mapping their implemented security controls to NIST 800-53. So by adopting Big Bang, an organization can inherit a number of the security controls, and the goal is to have, in terms of NIST 800-53, an 80% secure platform on day one of development.

At the same time, while it's great to have open source tooling like Big Bang, it's very rare that open source products will do exactly what you want them to do right out of the box. So here at Batcave, we have a contribute-first culture, meaning that if an open source product is lacking a feature or we find a bug, we will engage with that community. Oftentimes we start the conversation by creating a GitHub issue. In many cases, the issue that we create isn't on that particular product's roadmap, and in those cases, we'll actually go in, make the PR, and fix the issue ourselves. Also, when using open source, there's always this option available to create a fork of the product. And while there are cases when forks are warranted, in our case it is an organizational strategic imperative that we contribute back upstream.

So for example, I mentioned that Big Bang provides hardened containers of open source products. Let's say we wanna use Grafana to visualize some metrics data. Grafana the application receives periodic updates from its developers, right? Bug fixes and security patches, things like that. When these updates occur, the Big Bang team brings them in, creates a re-hardened version of Grafana, updates its secure Grafana Helm chart, and I take all of this as a Batcave developer and deploy it onto our platform. As a result, with very minimal effort on our part, and using this open source ecosystem, we get these bug fixes and security patches virtually for free, just by updating a single dependency, and that is Big Bang. If we were to create a fork of either Big Bang or Grafana, this would effectively double the work for our team, because now we would be the ones monitoring that upstream Grafana repo for updates, and we would be the ones maintaining the secure Grafana Helm chart. So contributing to these upstream repos and building relationships in the open source community has been absolutely critical to Batcave's success.
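To make that dependency flow concrete, here is a minimal sketch of what consuming Big Bang could look like from the platform side. The key names and image path are illustrative, not the actual Batcave configuration; the real schema lives in the Big Bang chart's documented values.

```yaml
# values.yaml overlay for the Big Bang umbrella chart -- a minimal sketch;
# key names and the image path are illustrative, not Batcave's real config.
monitoring:
  enabled: true          # Prometheus/Grafana stack built from hardened images
grafana:
  enabled: true
  values:
    # Normally nothing is pinned here at all: bumping the Big Bang chart
    # version pulls in the re-hardened Grafana automatically. An explicit
    # override, if one were ever needed, would look roughly like this:
    image:
      repository: registry1.dso.mil/ironbank/opensource/grafana/grafana  # Iron Bank path (illustrative)
      tag: "10.4.1"      # hypothetical pinned version
```

The point is that a security patch arrives as a one-line chart version bump, not as a re-hardening effort on the consuming team's side.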
All right, so now I'd like to introduce the core tech stack of the Batcave platform, which we call the utility belt. The utility belt is the set of core services that run on every Batcave cluster, and this includes things like our monitoring stack and logging stack, runtime security, et cetera. Each one of the apps that you see here contributes to our platform's security control mapping.

So first we deploy on AWS EKS, and once we have this bare EKS cluster up and running, the first thing we do is install Flux. Flux is our continuous delivery tool for deploying the platform. We use Flux to take all of these CMS Batcave-specific configurations, overlay them onto a Big Bang Helm chart, and then deploy it all as a Flux Helm release, and this single Flux Helm release will deploy the entirety of our platform.

Next, the platform stack is deployed in a particular order, such that we establish a security baseline before installing anything else in the cluster. To do this, the first thing we install is Kyverno as our admission controller, and Kyverno allows us to write policy as code that all of the other resources in the cluster have to adhere to. Then we install a service mesh using Istio. This allows us to regulate all ingress, egress, and lateral network traffic. And finally we install our monitoring and metrics stack, which is Prometheus and Grafana. Once these apps have been installed to establish that security baseline, the rest of the platform, the rest of the apps and services you see up here, are installed in parallel. And again, all of this is happening from a single Flux Helm release.

And lastly, once all of these core services are deployed and healthy, we use Argo CD to deploy an app team's application. Argo CD will aggressively reconcile drift, so if someone attempts to modify an application's configuration once it's been deployed, Argo will automatically revert it back to the state defined in version control. In other words, we use Argo CD to deploy tenant applications using this declarative GitOps model.
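A rough sketch of those three moving pieces follows, with hypothetical names, namespaces, and repo URLs; the real Batcave manifests will differ.

```yaml
# 1. Flux HelmRelease: platform-specific overlays get layered onto the
#    Big Bang chart, and this one release deploys the entire platform stack.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: bigbang
  namespace: bigbang
spec:
  interval: 10m
  chart:
    spec:
      chart: ./chart                  # Big Bang umbrella chart path in Git
      sourceRef:
        kind: GitRepository
        name: bigbang
  valuesFrom:
    - kind: Secret
      name: batcave-overrides         # hypothetical CMS-specific overlay
---
# 2. Kyverno ClusterPolicy: one example of policy as code that everything
#    deployed after the security baseline must adhere to.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-pinned-images
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Images must use a pinned tag, not :latest."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
---
# 3. Argo CD Application: declarative GitOps for a tenant app; selfHeal
#    reverts any out-of-band change back to the state in version control.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tenant-app                    # hypothetical tenant
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/tenant-app-config  # hypothetical
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: tenant-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```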
Okay, so at this point, all of the infrastructure and configuration as code has been written, and it's all sitting up in GitHub. And the big question is, how do we want to deliver this platform to the app teams? Depending on what app team you talk to, people want different things. Some application teams already have platform engineers, and they feel comfortable managing an AWS account and operating a Kubernetes cluster. Other app teams don't have any platform engineers, and they would rather have a fully managed, platform-as-a-service type of experience. And of course, you also get every opinion and use case in between. So after many iterations, we ended up with these three delivery models.

The first and earliest model we have is the single-tenant platform as a service. In this scenario, each app team has their own cluster that we, the Batcave team, will manage and operate. This model is really meant for app teams who maybe have limited cloud or platform expertise, and also maybe a higher budget. I mention the budget piece because, as you can imagine, running your own EKS cluster, including all those utility belt apps you just saw plus your own app, can get super expensive, right? That said, if your app is truly mission critical, maybe it does warrant this single-tenant isolation model.

Still, many app teams don't need that level of isolation. For example, if you're running a relatively low-traffic, internal-facing web app, you probably don't need an entire EKS cluster to yourself. So in order to be a competitive option for this other market segment of app teams, we also have a multi-tenant platform-as-a-service model. This model works especially well for organizations that want to migrate their entire suite of applications to run in the Batcave. And of course, the cost is gonna be considerably less than the single-tenant option. Technically speaking, with this multi-tenant model, each app is separated by namespace, and each app has its own dedicated node. By separating the apps by node, we mitigate the blast radius in case one of the apps malfunctions. So for example, if one of the apps has a memory leak and it eats up all the memory in one of the nodes, and that node becomes unhealthy and goes down, the rest of the apps running in the cluster should be unaffected. I'll show a rough sketch of that node pinning in a minute.

And finally, we meet some app teams who already have platform engineers with experience in AWS and Kubernetes, and they express an interest in Batcave to use our code as a baseline for their own set of security controls and assessments. For those teams, we can just give them an overview of the Batcave platform, point them to our code up in GitHub, and let them dig in.

Now, looking at these three models as one of the engineers who has to implement these different delivery models and then operate all of these very different clusters, I can tell you firsthand: this is too much, right? Having this many delivery models does not scale very far past our current size. What we'd like to go all in on is a fully managed platform-as-a-service model that is multi-tenant. I think sometimes folks get hung up on this idea of multi-tenancy, and for good reasons, but I think using that verbiage, that phrase "multi-tenant," is actually a bit of a marketing faux pas on our end. For example, when you deploy an app on Heroku or even AWS, do you know, or even care, if it's multi-tenant or single-tenant on the back end? For things like databases, yeah, sure, you might care, but for the stateless application itself, probably not. All you care about is that your app is deployed, it's healthy, and it's accessible. And that's the beauty of those platforms. The back end of the platform is completely abstracted away from those development teams, and they can focus solely on delivering value to their user base. Also, from a platform engineering perspective, what we've discovered is that whenever we onboard a new app team, it is so much easier to just spin up a new node for them rather than an entirely new EKS cluster. We can onboard these teams so much faster and with way less overhead.

And lastly, the number one concern that keeps coming up is this idea of balancing developer freedom versus platform opinionation. More developer freedom means more options and more control for the developers, but this comes at the cost of not being able to inherit security controls from the platform, and as a result, it's gonna be more work for the app team. In other words, they pay for this freedom of choice with their time. But with a highly opinionated platform, Rob can sign off on 80% or more of the security controls as soon as an app onboards. This enables us to give those app teams flexibility and control where it matters most, which we believe is at the app tier. And that's the real value of Batcave. We want to enable an app team to inherit as many security controls as possible, to reduce this amount of redundant toil all across CMS.
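Here is that sketch of the node-per-tenant isolation described above: a minimal example of how a tenant workload could be pinned to its own node group. The labels, taints, and image are hypothetical, not Batcave's actual configuration.

```yaml
# Each tenant gets a namespace plus a dedicated node group. A taint on the
# tenant's nodes keeps other workloads off; the toleration plus nodeSelector
# pins this app to its own node, containing the blast radius of a bad neighbor.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-web
  namespace: team-a                       # per-tenant namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: team-a-web
  template:
    metadata:
      labels:
        app: team-a-web
    spec:
      nodeSelector:
        batcave.example/tenant: team-a    # hypothetical node-group label
      tolerations:
        - key: batcave.example/tenant     # hypothetical taint on tenant nodes
          operator: Equal
          value: team-a
          effect: NoSchedule
      containers:
        - name: web
          image: team-a/web:1.2.3         # placeholder image
          resources:
            limits:
              memory: 512Mi               # a leaking pod gets OOM-killed here
              cpu: 500m                   # before it can take down the node
```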
All right, with that, I'll pass it back to Rob to talk about what we've done so far.

All right, so most recently, we just hit a milestone around getting our full and independent ATO. The reason why that's actually important is that the way control inheritance ends up working in the federal government is you pick and choose things from these compliance packages. And if you have this more general, broad package where the Batcave is just a part of it, then the people on the other side of that work, the developers, the information security people who are embedded in those teams, still end up having to go through and pick and choose in the GRC tools: this control applies, that one is only partially applicable, this one doesn't apply, all of that stuff, as opposed to having this more modular approach. So having our own full and independent ATO is one of those examples of a back-end process enabler for our teams.

Now, right now we're heavily focused on that production-level multi-tenancy model. This is also something fairly new for CMS. Most of the time, segmentation has been a very rigid construct: we're gonna segment you into different AWS accounts. You know, your prod account, your staging account, your test account, and your dev account. And that becomes very, very expensive over time, especially if resources are not optimized inside of those accounts, which oftentimes they're not. So shifting where the segmentation happens, to the namespace and node level inside of a cluster, is just way more efficient, way more cost effective. And so we're leaning pretty heavily into that, because we have a lot of groupings of systems that are in the same sort of bucket and category with regard to risk and functionality and portfolio owner, the person who pays the budget for that thing. This is something that we're really, really trying to optimize for right now.

A couple of other things are in the very near future slash being worked on now: integration with other stuff inside of the Office of Information Technology, so other components of the cloud program writ large. Single sign-on, onboarding, being able to generate design diagrams and architecture diagrams and that sort of thing, onboarding your logs and stuff into enterprise monitoring and operations. All of those other things. The ATO gets a bad rap for being a big blocker in terms of getting stuff out the door, but there's all sorts of other stuff that happens either before or after the ATO that is also a huge strain on development teams. It's just not as gnarly, or you can't say it as easily, as "ATO." So we're really trying to make sure that all of those other things, those orthogonal pieces, are integrated through this sort of middleware enterprise API that we're building right now.

And then the last thing, which you'll see at the top here, is our security data lake integration. This is part of a broader cybersecurity objective that we have right now. Show of hands, folks who are familiar with like Splunk deployments and stuff like that? All right, so, my condolences. Normally the way you deploy one of these things is it's anchored in these little pockets, around the owner of the thing, the team that manages the Splunk cluster. Now that's not a limitation of Splunk, it's a limitation of how teams typically deploy these kinds of tools.
And so when you start introducing this federated network segment dynamic, and security versus operations and ownership and all of that, it creates all of these silos, and you basically have to get approval to reach across to do anything with that data. So we're taking the lessons learned from marketing teams and big data teams, where they've collapsed all those organizational silos around how they manage their data into, you know, big data lake tools: Databricks, Snowflake, Redshift, things like that. And they're optimizing not on building detection and response capabilities in cluster, but rather on the streaming ingestion pipes, the ETL pipelines that are bringing data in, so that they have good structured data to work with. And then they're focused on re-skilling and upskilling their teams to be more like data analysts and data engineers.

For example, the last startup that I was in was this company, Simon Data, and it was a marketing company. And you might think, why in the world does marketing have anything to do with federal healthcare? And it doesn't, except for the following. I'm wearing Allbirds right now. I could buy my Allbirds off of an Instagram ad. I could go to their website, I might buy them in a retailer, I might see a Facebook ad instead of an Instagram ad. And in marketing, similar to security, and this is a dynamic that exists in almost every industry, all of those things represented different pockets of data. They lived in different places. They didn't synchronize with one another. Insights gleaned from Instagram did not translate into more targeted advertising on the website or in Google or what have you. And the people on the marketing team were more like copywriters. They're buying ads, they're doing stuff like that. Similar to cybersecurity: we had a lot of policy people and a lot of operations people. We didn't have a lot of data engineers and software engineers.

So really, we're trying to take that same concept. Think of all the data that we as cybersecurity people operate with and will operate with in the near future: SBOMs, compliance scan data, vulnerability scan data, pipeline outputs, configuration scan data on your SaaS tools, asset management data, both from specific tools as well as normalized in some means, maybe training data, logs, all of that stuff. And you apply that to scenarios like Log4j, where you need to touch all those different parts of your data ecosystem. If they live in different places, you can't do that efficiently and effectively. So this is really about upskilling our team to be able to do that more effectively, all built on top of a data lake back end. One of the big things here is making sure that the Batcave natively integrates as a data producer for the data lake, and then eventually as a data consumer, pulling stuff out of the data lake to enrich the dashboards and things like that that we present back to the app teams. So, next slide.

All right, so a couple of the big accomplishments that we're pretty proud of at this stage. 80% of, oh, that says 100%. It's not 100%, it's 80%. Chris Hughes would beg to differ. I'm sure he would. There we go, okay. I like that answer better. It looks nicer.
So, 80% of our 800-53 baseline fully inherited, the baseline we refer to as ARS, the Acceptable Risk Safeguards, because we can't just call things consistent names.

There's also a lot of correlation and connection points happening between Batcave and platform services and engineering efforts and policy. One of the things that falls under my group is that we write policy that informs the agency. So we're starting to write policy that aligns with the roadmap that we have at Batcave and other parts of the security organization: when and where we're gonna start ingesting SBOMs, where you're gonna be sending your logs, and, if you're doing continuous delivery, how that should work. Basically laying the groundwork for adoption, for value creation here.

And then also, this is something that's a little near and dear to me, given that I used to do a lot of red teaming work in my former consulting life: from day one, we brought in what's referred to as a purple team. Think red team, think blue team; it sits right in between those two things. They're acting as an interface point between testing, like adversarial emulation, and security operations, detection and response, and then platform engineering, right in the middle of all that. One of the things that's an emphasis for this team is not only figuring out how things are gonna break, which is important, but taking the way that things break and translating those into regression tests that go back into the upstream deployment pipelines, so that when something breaks, and we know how it breaks, that thing doesn't start breaking again in the future. You may have heard pentesters, or people who do security testing, complain that they come back the next year and find the same things in the client environment, and that's annoying: "they blocked my cross-site scripting payload, and all they did was filter on my one thing, and I just changed one character and it worked again." So we want that sort of dynamic of regression testing, where every time we do it, we're raising the floor on the overall resiliency and security posture of the platform, and that benefit is then inherited by the tenant apps that go on top of it.
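As a sketch of that regression loop, here is what folding a purple-team finding back into a pipeline might look like. The workflow, paths, and scripts below are hypothetical, not Batcave's actual pipeline; it simply illustrates the pattern of replaying past findings on every change.

```yaml
# .github/workflows/security-regression.yaml -- a hypothetical sketch.
# Every finding the purple team confirms gets captured as a replayable case,
# so a once-fixed break fails the build if it ever comes back.
name: security-regression
on: [pull_request]
jobs:
  regression:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Stand up the app locally
        run: docker compose up -d app     # assumes a compose file in the repo
      - name: Replay past findings
        # Each file under findings/ encodes one historical break, e.g. the
        # one-character XSS filter bypass; the job fails if any lands again.
        run: ./tests/security/replay.sh tests/security/findings/*.http
```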
All right, so this was, I'm not gonna spend a ton of time on this slide, but this was one of the lead engineers for one of our early adopter customers. Now this particular customer was working in an on-prem environment, deploying all their stuff in a brick-and-mortar data center, and they were planning a transition into AWS. So right out of the gate, they were looking at an 18-month timeframe to do all of that stuff: to re-architect their app to embrace and work with cloud native solutions, to build their new deployment pipelines, and all of that. We got our claws into them early on in these planning and migration discussions. We knew some people there, and this is part of our early marketing strategy internally. And we took their 18-month roadmap and ended up shrinking that down into like two and a half, three months. That alone is awesome, and something we're really proud of. We hope that it can go even faster than that for future migrations, for future customers, because we learned a lot. We were a little clunky going through this initial engagement, because we're basically working within this broader construct of other OIT, Office of Information Technology, resources, and figuring out how best to work and collaborate with them in the service of the customer is something we learned a lot about through this initial engagement and others like it, and we're just folding that back into the program. So every single run, we're gonna get faster and faster and more efficient.

All right, so a big thing: throughout this whole talk we've been emphasizing people and partnerships and teamwork and collaboration. Part of this is doing it inside of the CMS context, and part of it is working across other agencies. So across HHS: HHS as a collective cabinet agency is one of the biggest in the government. Think FDA, CDC, NIH, and HHS proper. That covers a lot of the federal budget, and basically any time that you're interacting with anything in the healthcare space, in a hospital or what have you, one of these agencies touches something about that, CMS included. So sharing out code and lessons learned, getting design partners in early so we can start thinking about the needs that other operating divisions slash agencies are gonna have, so we can fold that in, that's really, really important.

And then this other bit: really leaning into a culture of openness. This is something that's important for our security team writ large. It's been uncomfortable for a lot of people, just being more transparent, being open, sharing. My deputy and I, for example, do monthly Ask Me Anythings. All of you, if you wanted, could join next month; we don't restrict them to anybody, and nothing is off the table. We've talked about wine and whiskey in some of them, we've talked about career advice, we've talked about security policy, we've talked about tech stuff. People will try to stump the chump, and we do all of these live. It makes people really uncomfortable, but part of this program is about making security more accessible, and being able to empathize through real-world connection with other people. That's a real important cultural value for us.

So with that, we just wanted to say thank you for taking the time this morning to spend with us. These are our LinkedIn links, if you have any interest in following along with any of the rants that we do or things that we post on there. And I think we have, yeah, we've got a little bit of time for questions. Thank you so much, everyone.

[Audience question]

Like a tenant application that would have operated in that space? Right now, right now, I don't think so. They're more like traditional APIs or web applications, stuff like that. I imagine that, yes, we will get to that place, because those sorts of applications do exist in CMS.
One of the things that we've been trying to be very cognizant of is taking on the right customers, both in terms of status and influence inside the agency early on, but also in terms of tech complexity, because with every customer that adopts, that's burden reduced not only for them, but for us and for other parts of our IT office. We don't wanna get bogged down in a super complicated deployment early on where we're not gonna learn a lot of lessons that are replicable to other customers. We're trying to be very intentional now, so that we can lay the groundwork for scaling. And I feel like we'd have to kind of pull out what are gonna be the bits that are very applicable: is a pipeline really gonna benefit that kind of app, where does it run, and all of that stuff. So, not yet, but I imagine soon.

[Audience question]

Yeah, so there's layers of that across the AWS accounts and stuff, as you might imagine. In some cases, things that are internet facing do have WAFs in front of them, and I think we could stand to do more of that even internally, that runtime security introspection stuff. It's a growth area for us internally, basically layering those in more places. And it's not even about, in my mind, blocking malicious traffic. I think one of the big benefits is being able to create these virtual patches. So you find something, and before you can fix it in code, you can get a virtual patch out to prevent a thing from happening, or block a cluster of IPs that's doing something, or a pattern of traffic, and basically buy yourself some time to fix the issue in code. So that's something that, generally speaking, not just for Batcave deployments, is quite relevant for our security engineering teams. Yeah, good question.

All right. Well, thank you so much, everyone. We're gonna be hanging around for the rest of the day, so come and chat, and come get stickers if you want them. And yeah, have a good rest of the conference.