Okay, we'll get started a couple of minutes early because the first part's intro anyway. So, those of you who don't know, I'm Brett Mogilevsky and this is Diego Lapidus. We work for 18F, which is this really kind of amazingly weird unicorn organization. It's a digital consultancy inside the US federal government, for the federal government. In other words, we're feds, but we do agile, user-centered design and we do interesting platform stuff with Cloud Foundry. And we work with other federal agencies to help them solve their problems. And we're working on something called cloud.gov. So, this is a talk about that and what we're having to do to make cloud.gov viable for the federal marketplace. Okay. So, 18F is filled with people like us who are, like I said, the user-centered, agile people, and a lot of them came from private industry. They come into the government and the first thing we do is make a bunch of software, like, okay, let's start iterating, boom, boom, boom, boom. We're starting to make things. And so, you develop something in maybe one to three months and you're ready to ship it, and we're starting to stack things up and like, let's get it out there. So, we start to do that, but then that's not what happens. This is what happens. And so, 18F is kind of new in the government and we're like, what's going on here? You know, why are we having such a difficult time and why is everyone else having such a difficult time? It turns out everybody's having the same problem. You go over to all the agencies in the federal space and they all have these problems, and it turns out it's all about compliance. This is like the biggest impediment. So, if we make software faster, we go faster and faster and faster and we iterate faster. But if we can't deploy faster, then what's the point? All we've done is just feed a bottleneck. It's like having a freeway run into a country lane. So, the process in the government doesn't look too much different until you reach the bottom.
So, there's this notion that you have to procure and configure your servers. You're gonna have to set up your application. You're gonna run some security scans. You probably have to do some documentation on it. Here's where it goes left in the government. To document it for the government, you have to write docs for your whole stack. And I mean your whole stack, and the standards are written around physical data centers. They have things like fire suppression systems next to your hardware: like, do you have that? How is that taken care of? To document your whole stack from the hardware up, you end up writing somewhere between 200 and 1,000 pages of documentation. And just to write that documentation, you have to know about 4,006 pages of regulations. And this is in addition to the law. So, FISMA is the law, and there's all this regulatory framework and this culture around risk acceptance that are built around it. You have to understand all that. So, that's a lot of information to be familiar with just to ship a digital service. The way agencies have handled this typically is they'll develop these experts. They have these compliance experts who sit in the agencies. And they're the ones, they're sort of these Talmudic scholars of compliance architecture. And they understand every single OMB memo that's ever been written, or every FIPS document that's gone out, or the NIST standards that exist. And they're able to say, well, the intent of this thing is this: is this satisfying it? Yeah, we think so, or no, it's definitely not, or whatever. And so, you end up with this bottleneck going around these people to try and get your software out. And so, you might ask, how long is it gonna take? We're trying to get something out in the public space. We wanna serve the American public better, and they're complaining about digital infrastructure for the government being terrible. How much longer is this gonna take?
Well, the government doesn't have great stats because it's spread across all the agencies. But on the bell curve, the middle is somewhere around six to 14 months to get what's called authority to operate, ATO. That's authority to operate your service in the public. So, six to 14 months to get an ATO. You can imagine the effect that has on your team, because now they've tried to deploy something but they've spent all their time on the compliance and deployment side of the equation, even though they spent a very short amount of time actually developing their software. So, you can imagine the effect this has on them. On their morale, on their momentum, on their ability to learn. It's a huge deterrent to people in government actually being able to keep up with the leading edge of software, because they can't iterate quickly. So, this is a huge problem. The other thing that's happening in government is that, as you all know, speed equals security. And there's a message from Pivotal here at the conference which is about, you know, repave, you know, I can't remember what it was, but there were the three words. Basically it was like, get back to a known good state immediately. Repave the world. And the speed with which you can deploy a change or a fix is huge. We all now know there's no such thing as a secure system. There are only systems that haven't been broken yet. There are only systems whose vulnerabilities we're not yet aware of. So, speed becomes a huge factor in making these systems secure. So, if we're gonna do things with the federal government, we don't want every single agency and every single team to have to figure out how to get things going really quickly. So, speed becomes a huge thing. So, for 18F, when we came in and we said, okay, we're having this problem, we better figure it out for ourselves and then make sure it works for everyone else.
So, we looked at this problem and said, okay, this is kind of where it fits in. We deployed Cloud Foundry and a couple of other options internally. We actually tried the configuration management route for a while first and found that didn't work out, for the exact same reasons that were described in one of those keynotes, which is that, you know, the developers don't really want to work on your Chef, Puppet, Ansible and, God knows, Salt, whatever other manifests you have. They don't really want to work on that. They want to work on their code. So, we ended up going the PaaS route and deploying Cloud Foundry as something called cloud.gov, which is the product. Diego's the technical and business lead and I'm kind of product lead on the side, and cloud.gov is built pretty much soup to nuts on Cloud Foundry, and not just Cloud Foundry the technology but Cloud Foundry the community. We as the federal government have a hard time buying things. So, working with vendors is very difficult around stuff like this, and we also have these really kind of, I wouldn't say unique, but sort of specialized compliance issues that make it harder for vendors to work with the government. So, we're basically taking Cloud Foundry and adapting it and making sure that we do everything that's possible in our deployment to make sure it's actually gonna be compliant with federal architecture. So, we're gonna go into a little bit of the changes that we've had to make and things we've had to tackle. None of them are huge technical lifts, but they may be things for people to be aware of. I'll say everything that we have done is in the open. We have it all in GitHub. It's all there for people to follow and use and follow our path. So, we're gonna talk first about federal compliance. I'm gonna try and stick to one slide because it could be really nightmarish. You could say 4,006 pages and I'm gonna try and do it in one slide. But basically it works like this.
FISMA is the Federal Information Security Management Act of 2002, and it says all kinds of things about what you're supposed to do with your systems, and it makes reference to these NIST standards. So, basically you say: how sensitive is my app? You say low, moderate or high. That's the FISMA level. That level implies you have to obey these NIST controls, and these have very exhaustive specifications for different kinds of access control or information integrity and things like that. Those are the NIST 800-53 controls. So, you're supposed to determine for your app: what is my level, which controls do I need to obey. Then you have to take that into consideration as you build your service and document it, which is typically a very waterfall process. And then you have to verify both the docs and the system. If you make an assertion in the docs that you've configured this hardware in this way, they're actually gonna go verify that you've configured the hardware in that way. They have to actually say, yes, not only have they documented this but the documentation is correct, it's true. And then, at the same time, they have to also verify the system. So, that's gonna be looking for holes, looking at your security posture, looking for vulnerabilities and how you roll things out. So, they're gonna be looking at all that as well. Sorry, this type is really small here. And then, ultimately, there's an authorizing official. This is usually the Chief Information Security Officer for an agency, who's going to accept the risk. And accepting the risk says: okay, all these other things considered, I accept the risk if the risk in the system is relatively low. I'm gonna give it authority to operate, ATO, so it can be out in the public. Now, the number of controls we're talking about is, as it says there, 255 or so. That's for FISMA moderate.
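To make that level-to-controls step concrete, here's a toy Python sketch. The control IDs are a few real NIST 800-53 identifiers, but the baselines shown are drastically abbreviated for illustration (the talk cites roughly 255 controls just for FISMA moderate), and the cumulative-selection logic is a simplification of how the real baselines are published.

```python
# Illustrative only: real NIST 800-53 baselines are far larger.
# Each level's list here holds a few example control IDs that first
# apply at that baseline; higher levels inherit the lower ones.
BASELINES = {
    "low":      ["AC-2", "AU-2", "CM-2", "IA-2"],
    "moderate": ["AC-2(1)", "AU-6(1)", "CM-2(1)", "SI-4(2)"],
    "high":     ["AC-2(11)", "AU-6(5)", "CM-2(2)", "SI-4(12)"],
}

def controls_for(level: str) -> list:
    """Return the cumulative control set for a FISMA impact level."""
    order = ["low", "moderate", "high"]
    if level not in order:
        raise ValueError("unknown FISMA level: " + level)
    selected = []
    for lvl in order[: order.index(level) + 1]:
        selected.extend(BASELINES[lvl])
    return selected

# A moderate system inherits the low baseline plus moderate additions.
print(len(controls_for("moderate")))  # 8 in this toy example
```

The point of the sketch is just the shape of the process: pick a level, and the level mechanically determines which controls you must document and satisfy.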
If you go to FISMA high, which is for the high-security applications, then it goes through the roof. But, basically, there's a huge number of concerns here. So, we're trying to handle them at the platform level, and the federal compliance architecture has something called FedRAMP, which is basically aimed at platforms. It's aimed at SaaSes and PaaSes and IaaSes that are gonna have a lot of tenants. And the idea is that they're gonna validate everything they possibly can about it. And then, that is a signal to the rest of the government that they can build on your stuff with impunity. And the way they do that is there's actually this triple approval from the Department of Homeland Security, the Department of Defense, and the General Services Administration. Those three CISOs are gonna look at this full package, including this exhaustive, months-long assessment, decide to accept the risk and say, yes, you have authority to operate as a platform. It's a provisional authorization for platforms, so now agencies can work with you, build their applications on top, and save all kinds of time and effort. That's how compliance works in the government. Okay. So, what we're doing is we're addressing this by layers. You say there are at least four layers that you look at. There's the human or organizational layer: who's actually running things, who's staffing it, how are those people vetted? How is their access controlled as they enter and leave the organization, and so on? There's the IaaS layer, which as we know is just a cloud, and you're all at the Cloud Foundry conference, you know that Cloud Foundry is turning the IaaS layer into sort of a commoditized layer. But you can leverage a lot of the stuff that was already done if you pick one, which we'll talk about in a minute.
And then the PaaS is where we're kinda focusing all our effort and energy, which is to knock out as many of those controls as possible and leave as little as possible left to do for the application layer. At the application layer, we're trying to get the number of controls down to something that any team could handle, any two-pizza team could handle. And then, to that end, we're providing facilities to help that team do that more easily. So, we're gonna give you some of the tips, things that we've had to go through. We're in the FedRAMP process right now. We're in what's called the FedRAMP Ready state, which basically means that they've done the coarse assessment of our tech and our processes and things like that and said it looks pretty good. Now comes the exhaustive, you know, months-long process. So, fingers crossed, by the end of the year we'll be fully FedRAMP compliant. Do you wanna start from here? Sure. So, you know, we decided to build a platform, but the reality is that we need to build it somewhere, right? You can choose to use a data center. You know, we have plenty of those in basements all across DC and Virginia. But we wanted to, you know, move to the cloud, move to a provider that can give us good architecture. And we decided to use a public cloud provider. And let me try to... okay, it's really tiny. Our preview slide is about this big, it's postage-stamp size. Sorry. Yeah, so basically we decided to use a public cloud provider that had their infrastructure approved, that had FedRAMP compliance already done, that we could just leverage, right? So, a bunch of the controls that you would normally have to handle yourself, you just inherit them by using an approved cloud provider. The cool thing is, you know, we use a pretty popular one. We use Amazon. So, like, you know, it's the same thing that everyone else uses. We can be mobile. We know we can move. Yeah.
So, one of the cool things about Cloud Foundry is that it doesn't lock us in to anything. And if anything happens, we can just move along. And that was one of the key features that we liked about Cloud Foundry when we selected it. But the thing is, when you talk about cloud, I think that there are three rules of cloud, and they are automation, automation and automation. You know. Automation, robots all the way down. Yep. It's gotta be robots all the way down. So, we are automating pretty much everything with BOSH and Concourse. We, you know, track the public cf-release from GitHub. We have custom manifests in GitHub too. If you wanna go check it out, you can do that. One of the cool things about having everything in GitHub is that we have one authoritative source for everything that we have, right? Like, we know that if anyone comes in and makes a change to one of our repos, what that change will do to us, and we can track why that happened. Moreover, we have, you know, the pull request process of someone having to look over any change that goes into any repo. We also, as I said, use BOSH for everything. We have a bunch of BOSH releases, and we use Concourse to deploy all the things. One of our main pipelines looks like this. It's growing to a point where you can't read things anymore, but it's pretty cool, you know. If you guys haven't checked out Concourse yet, you definitely should. Yes, it has been extremely useful for us to use Concourse. But one of the cool things is that with Concourse, we can do BOSH releases, we're doing Terraform now, we're doing CF apps, we do custom stuff with security scans. You can do pretty much whatever you want. So, all of this allows us to, you know, have a lot of benefits when you're doing government work. When you're talking about compliance, when you're talking about security, automation is really crucial, right? All of the things, right?
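As a rough sketch of the shape of one of those jobs: a Concourse pipeline fragment that watches a BOSH release repo on GitHub and deploys it when a change is merged might look roughly like this. All the resource, repo, and file names here are invented for illustration; the real cloud.gov pipelines are in their GitHub org.

```yaml
# Hypothetical pipeline fragment; names and paths are made up.
resources:
  - name: release-repo                 # a BOSH release tracked in git
    type: git
    source:
      uri: https://github.com/example-org/example-release.git
      branch: master

jobs:
  - name: deploy-release
    plan:
      - get: release-repo
        trigger: true                  # run on every merged change
      - task: create-release           # builds the BOSH release tarball
        file: release-repo/ci/create-release.yml
      # a following step would upload and deploy the release with bosh,
      # elided here since the exact mechanism varies by setup
```

The payoff described in the talk is that the git history plus the Concourse build log together form the audit trail: every deployed change traces back to a reviewed pull request.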
Like, you know, we can repeat any deployment whenever we want. If there's any disaster recovery issue, if Amazon happened to go down, we can redeploy everything in a different region. It also allows us to have a very good audit trail. Usually in any other data center environment or any other agency, what they would look at is, like, what are the logs for the Chef runs? What are the logs for the SSH accesses? But here, what we show is, you know, the git history for our GitHub repos, the BOSH audit logs, the Concourse build logs. Those are all things that can really help us when we're talking about auditing. Often they're looking for things like a change review board. What is your change review board process for deploying code to production? How do you make sure that what lands in production is what was supposed to? And we just say, well, there's hashes, and you can look at the hashes all the way across, and it's automated, so no humans have to do it. It's just: here's the pull request and here's the person who approved it. So there's our change review board. And that kind of blows their minds in the federal compliance architecture, because people don't do it that way. But they all look at the NIST standards and say, yeah, that satisfies the controls. So that's how the modern techniques that are in this community really apply. Yeah, and one of the things that was really important when we were talking about compliance is, like, we are going against the core beliefs of people, but we need to show how we meet the controls, right? Like, how we satisfy all the compliance text that the controls were written for, right? The control is not talking about using Chef. The control is: do you have your configuration somewhere that it can be audited, right? So that's what we're doing. So, when you're talking about security, Cloud Foundry has a lot of cool security features, right?
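The "hashes all the way across" idea, tying what landed in production byte-for-byte back to what was approved in the pull request, can be sketched in a few lines. This is a minimal illustration of the principle, not cloud.gov's actual tooling; the artifact bytes here are placeholders.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content hash used to tie a deployed artifact back to its source."""
    return hashlib.sha256(data).hexdigest()

# What the reviewers approved (e.g. the tarball built from the merged PR)...
approved_artifact = b"bosh release tarball bytes"
approved_digest = sha256_of(approved_artifact)

# ...and what actually landed in production.
deployed_artifact = b"bosh release tarball bytes"

# The automated "change review board" check: no human inspection, just hashes.
assert sha256_of(deployed_artifact) == approved_digest
print("deployed artifact matches the approved pull request")
```

Because every step in the chain is hashed and machine-checked, the evidence an auditor needs is the pull request record plus the hash comparison, not a meeting.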
The stemcells, you know, are hardened already. The user management piece, UAA, is fantastic. Even within the Cloud Controller, all the permissions are really well organized, and that really helps us, right? There's also Loggregator, and the audit trails that it provides are extremely useful for a compliance world. And, you know, we took Cloud Foundry's security from the get-go as a starting point, but we needed to move from there into deeper things. So we started with host security, and the BOSH automation pieces allowed us to take an iterative approach to how we deal with security at the host layer, right? We create a BOSH release, we test it, then we apply it to all the servers, and we know that everything is consistent across the board. These are some of the things that we're doing on host security itself. You know, we track the upstream stemcells; they are usually updated pretty often. If there's an issue, a vulnerability in Ubuntu, the last time it was like a day that we had a vulnerability open. We do stemcell hardening. I mean, we have a BOSH release called cg-harden that applies pretty basic hardening scripts to the stemcell. We do vulnerability scanning on all the servers using Nessus. If you're not familiar with Nessus, it's just a security scanning tool that you can use to scan all the hosts and the network. We are starting to do integrity checks on all the files using Tripwire, which has been pretty cool, you know, so far so good. Because we're using Amazon, we are forwarding all the logs from the platform to CloudWatch Logs now, and hopefully setting up alerts soon. And, you know, we're using BOSH to deploy all of this. And again, it allows us to be consistent across the board. Yeah, initially the compliance side wanted to scan our entire set of machines. We're like, we don't know how many there are at any given time, and that kind of made them explode a little bit.
And they said, well, we need network access to those things. We need SSH keys on them. And we're like, like hell you do. So we ended up deploying the Nessus agent, which reports out, and by making it part of a BOSH release it ends up, you know, on every single machine. And so it really is kind of naturally done, and they don't have to change their configuration; we're using the agent to report in centrally. So that's worked out pretty well. Yeah, and each one of those is a single BOSH release. So if you're interested in any of those, you can pick and choose. Yeah, we're publishing everything we do. Yeah. Regarding user access, you know, again, we use standard UAA, but one of the things UAA provides is it allows you to delegate user access to another platform. And that is pretty cool, because it means that it's not our problem, it's someone else's problem, we don't care, which is, you know, I had a thingy there that says, you know, just log in with your credentials. And, you know, if someone gets fired, you know, don't come yelling at me because they had access; you can just disable them in the single sign-on platform. Yeah, this saved us a lot of effort. I mean, if we'd tried to do MFA directly in UAA or replace UAA, it would have been a lot more work. But UAA, it's not altogether well documented, but it has all kinds of really interesting features. And one of them is, you know, if you go look, I think the docs are better now, but it used to be just a README on how to set it up with Okta. And it turns out that's a roadmap for standing it up with any SAML provider. So whether it's Azure Active Directory or SecureAuth or whatever. And we're now looking at, like, you know, we all have these government identity cards that, well, that's my bike locker. We all use government identity cards that have, like, biometrics and things like that. So by doing this, we're outsourcing that back to the agencies that are very invested in that architecture themselves.
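For a sense of what that SAML delegation looks like in practice, here's a configuration fragment in roughly the shape of UAA's login config for SAML identity providers. The provider name, URLs, and link text are invented; treat the exact keys as an approximation of UAA's format rather than a copy-paste recipe, and check the UAA docs for your version.

```yaml
# Hypothetical UAA login config fragment; provider name and URLs invented.
login:
  saml:
    providers:
      my-agency-idp:
        idpMetadata: https://idp.agency.example/saml/metadata.xml
        nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
        showSamlLoginLink: true
        linkText: "Log in with your agency credentials"
```

Once this is in place, authentication (and MFA, smart cards, deprovisioning on firing) is the identity provider's responsibility, which is exactly the compliance win the talk describes.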
So if you're gonna sell something to the federal government, that's a really good thing to look at: having UAA delegate, and then it's their problem. It cuts out a lot of concerns for us on the compliance side. We don't have to deal with it. And just for future reference, we don't allow people to sign up with bike locker cards. Yeah, your bike lock doesn't work. So another thing that we were interested in is how we can allow operators and administrators to access the platform with some security, right? And before, we had a big shared set of SSH credentials to jump boxes, but now what we're working on is something that we call ephemeral jump boxes. And that's using Concourse to, you know, build a container and hijack that container. And, you know, we have the credentials already in there that we got from Concourse. And then you can just use BOSH and whatever. And after, you know, I think it is like 30 minutes, the whole thing disappears. That way, you know, we limit access to Concourse itself, and then, you know, you only have access for a very limited amount of time. Okay, so once we've got all that, we still have all those pages of documentation. So we're, you know, working in the FedRAMP space to make it so that people care much more about what the system is, as opposed to what the documentation is. And the hope is that if we do this right, we have such economy of scale that people are gonna do only the apps, and they're not gonna have to do platforms like this. So fingers crossed, this is the last time anybody has to do it this exhaustively if they're using our platform. But the idea is that we took compliance; normally it's 200 to 1,000 pages and it's boilerplate, a lot of it's copy-pasted from old systems and templates. And it's somebody's, you know, weekend, well, more than a weekend, it's a lot of weekends, going through looking for diffs and trying to figure out what changed from system to system.
We said, okay, we're not gonna write any flat documentation. We want it to be like code. We want to treat it like code. So we decomposed the way the documentation is structured, and we do it all as YAML data in GitHub. And then what we can do is we can compose it. So the same way we're composing our layers, like the 18F organization, and then AWS as our IaaS layer, and then Cloud Foundry as our PaaS, and then the app on top, we're basically decomposing our documentation in exactly the same way. And then we're actually automating the publication of that via Concourse. So what you see here is, like, there's the standard. So in this case, 800-53 is those controls. And then the certifications, the different levels you might have: FedRAMP Low, FedRAMP Moderate; LATO is a GSA one. But you could also, and actually somebody contributed this because it's open source, treat it like a PCI mapping. So it's mapping all these controls. And then you have your components: you kind of divide it out and say, okay, we're gonna document each of these things individually. And so what that looks like when you actually push it through Concourse is we deploy it in this GitBook fashion. It's directly readable on the web at compliance.cloud.gov. And we're now able to render it directly into the FedRAMP Word template, which we're working on actually, somebody's working on it at the office today, which they require because there's a bunch of stuff they did with the structure of that Word document that's important to them. And so now people can take this documentation and they can kind of recombine it on their own, or switch parts out, or switch in a different IaaS provider. And so if there are other people who want to follow our lead and become cloud service providers to the government, here's a paved path you can follow where we've kind of laid it out for you.
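To make "documentation as YAML data" concrete, here's roughly what one component file looks like in the OpenControl / Compliance Masonry style that 18F open-sourced. The component name and narrative text are invented for illustration, and the exact fields vary by schema version, but the shape (a component declaring which controls of which standard it satisfies, with a narrative) is the real idea.

```yaml
# Hypothetical component file, roughly in Compliance Masonry's format.
name: Example Logging Component
key: example-logging
schema_version: "3.0.0"
satisfies:
  - control_key: AU-2          # Audit Events, from NIST 800-53
    standard_key: NIST-800-53
    implementation_status: complete
    narrative:
      - text: >
          Platform logs are aggregated centrally and retained,
          providing the audit record this control requires.
```

Because each layer (organization, IaaS, PaaS, app) is a separate set of these files, a new system composes its documentation by pulling in the layers it inherits and writing only its own components.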
You can replace only the parts that are different for you, or amend them with things that are different, that are your value-add as a vendor. The other thing that we're doing is, like, we really want teams to have high confidence when they enter the compliance process that they're gonna succeed. And we want the auditors to have high confidence that when we say this is what's in our docs and they have to go and say, yes, is that real now, they can actually see that very, very easily. So we're actually using BDD, behavior-driven development. It's a kind of test framework where you basically can write very human-readable tests that say: given this condition, when I do this, then this is true. And it's auditor-readable and so on, but on the back end there's code that'll do things. So we have things that say, like: given I go look in the Amazon console for this thing that we say is true, then I'll find that this thing is set to on. And we're actually delivering not only the docs, but stretches of BDD code that will verify that what's in the docs is true, so that we can basically be doing this continuously, all day every day. And if anything ever falls out of true, we'll know. And also our hope is that we can actually get the FedRAMP auditing process down to the point where they will expect this coming in. And therefore the auditing process goes much, much faster rather than being months. It's just looking for differences and looking at the implementation of the tests, not having to check everything manually, which is what they do now. So that's kind of a novel thing we're doing. We're also making it so that as part of that deployment pipeline, when we deploy the new version of the docs, we're also running all of our tests to verify the docs are true. And so in the published version of the docs it'll actually say: last checked as of date X. So that's really powerful.
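The given/when/then idea can be sketched in plain Python. A real version would query the live AWS API; here the lookup is stubbed so only the shape is shown, and the function and field names are invented for illustration.

```python
# A toy given/when/then compliance check. The "console" lookup is a
# stub standing in for a real AWS API call; names are hypothetical.

def fetch_cloudtrail_config(region: str) -> dict:
    """Stub: pretend to fetch the live audit-logging configuration."""
    return {"region": region, "logging_enabled": True}

def test_audit_logging_is_on():
    # Given: the docs assert audit logging is enabled in us-east-1.
    documented_region = "us-east-1"
    # When: we look up the live configuration.
    config = fetch_cloudtrail_config(documented_region)
    # Then: the system matches what the docs claim.
    assert config["logging_enabled"] is True

test_audit_logging_is_on()
print("docs assertion verified against the (stubbed) system")
```

Running checks like this in the same pipeline that publishes the docs is what makes the "last checked as of date X" stamp trustworthy: the assertion fails the build the moment the system drifts from what the docs claim.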
Because we want it not only while we're getting compliant; we want it after, when we want to stay compliant. The other thing we're doing is working on making sure that the teams that are deploying, for things that are within their boundary, that are not at the Cloud Foundry level but at the app level, we're giving them tools to help them understand when they've got problems, and give them high confidence as they enter the compliance process, and after the compliance process, to stay continuous. So we have automated code quality and pen testing scans. Again, this is built on the back of Concourse, and we're slowly integrating a bunch of different tools into it. And then we have a viewer which will give them sort of, here's your current status, and then alerts that'll tell them, hey, since yesterday it looks like a few vulnerabilities were introduced. And those vulnerabilities might be because they pushed a new version of code, or might be because the scans are picking up new stuff, because a new vulnerability went into what they're looking for. And this is what that looks like. This is currently sort of an internal 18F tool, but we'll be turning it into, again, everything's open source, it's out there if you look for it, but we haven't made it part of cloud.gov yet. But it will be. So when you're a tenant of cloud.gov, this is sort of a service that you get. And the goal here is continuous compliance. So, I mean, the federal teams have to jump through all kinds of hoops and they get bogged down, as we said at the beginning. So we want it to be a lightweight process where they can get feedback immediately, as soon as they start developing, and sort of never fall out of true. Oh, yeah, so we're gonna start taking some questions. If you wanna start submitting questions, you can use that. This is new, we're trying this on the fly, it's an experiment. Thanks, Google.
But the idea is that we want compliance to be something that's lightweight, sort of incremental changes as things are found, and not something you have to do in this heavyweight, one-time process that takes months. So we're not done. In terms of what comes next, we have a bunch of stuff we still have to handle. Like I said, we're partway through the FedRAMP process, but even after... oh, now it's open to anyone if you wanna use that link. It wasn't a minute ago. We have a bunch of stuff we still need to do. Some of those are for compliance, but there are also things we wanna do for security that we're looking at down the road. So there are things where we have to be really explicit about the boundary: which things have gotten this scrutiny and which things haven't. If you bring your own buildpack, we can't vouch for it. And that changes what you're getting, what your compliance is around. But if you're using a supported buildpack, it's compliant up to this level. Same thing when we move to the Diego backend: if you're using kind of blessed images to build your app, we can make some assurances about that and include Compliance Masonry documentation with it. But if you're using your own bespoke one, then maybe you're on your own. So we have to kind of work on that messaging to users. The same thing with the services they're gonna provision. If they're gonna provision services through our marketplace, and some of them might not be provided by us but by other agencies, it has to be very clear: does this service have an ATO? What is your responsibility if you're gonna try and get authority to operate? What is your responsibility in using that service? The other thing is we're not yet doing active container security. So once the containers are deployed, we're not really looking at the contents very heavily. We're using Tripwire on the host side, but we're not doing anything in the containers themselves.
So we're investigating things like Clair and AppArmor that will make it easier for us to understand what's going on there. And again, that's a service back to people who are relying on us for their code. And then finally, we described that really cool jump box thing we're doing with Concourse, hijacking the containers to do the ephemeral jump boxes through BOSH, which is lots of fun. But unfortunately, we're not auditing that as much as we need to. So we're working on that. And that's kind of it. I'm gonna go to questions now, but if anybody's interested, these are places you can talk to us or find out about us. That chat.18f.gov, that's actually a public Slack channel. You can get in, and there's us and a bunch of other people in the federal compliance and governance space who deal with these kinds of things, hanging out; it's a place to ask questions and talk shop. So I'm going to go to questions. We've got about three minutes. So let's see. Do you do binary security scans in your standard suite of tests, static analysis? We are doing both static and dynamic, but it's a very nascent pipeline. We're sort of creating a bunch of different tools and comparing them, mixing the results to find the best of breed and figure out, okay, how do we not alert multiple times on the same thing across different tools, and things like that. So, yes, we're doing both static and dynamic. And then we use tools like Code Climate and, you know, Gemnasium and other public tools to scan our code because, you know, again, most of the things that we do are in GitHub; we can just use all of that to do the scans. Right. Are we running in, I guess that means the Gov region of AWS or general? Ironically, all of our automation is saving our bacon now, because it's turning out, for a variety of reasons, we actually need to move from Amazon East/West to Amazon GovCloud.
And in doing that, we're basically doing the last mile of our automation, which was provisioning the AWS environment itself. And, true to the whole multi-cloud promise, it's making it relatively easy for us to say, okay, we're gonna redeploy on GovCloud and just keep rolling. For anyone else in the compliance process, that would have been a deal killer. It's like, oh crap, we gotta redeploy, start over. And the federal people would have said, you're kicked out. But they've seen that we can do all this automation, so they're like, okay, can you do it in a week and a half? And we're like, well, let's see. So now we're off and doing it; the team's working on that right now. Why and how did you decide to use Terraform to set up your Cloud Foundry clusters, and how are your experiences with it? We started automating with CloudFormation on AWS and it was just very verbose. It would help with the initial setup of the account and setting up some VPCs, but when it then came time to do things like set up a database we needed, that was a separate step and it was disjointed. We found that using Terraform to unify it into one step made it a lot clearer and simpler, and it also gave us a lot more confidence that we could take that across multiple clouds. We were a little skeptical; we didn't wanna bring Terraform in at the last minute, because we didn't wanna bring another tool into the mix, but it's turned out that the team's been playing with it and they actually really enjoy it. So that's helped. The thing about Terraform, too, is that it is a bit challenging to keep state with Terraform, but because we're using Concourse to deploy the Terraform manifests, it's only the one Concourse that's doing all the Terraform runs, so we can store state in S3. So, this is a big question: how are you going to help the federal government make the cultural shift?
Many agencies have tried this and gotten crushed. How is this different? So this is tough. This is really tough. We have a lot of agencies that are stuck in the past and think they need data centers, and think they need to scan every machine themselves, and so on. We are working with the agencies that are most flexible, most forward-thinking first, and working with them and showing success. Some of them are choosing to work with us on their own because H&F is working with them and deploying things for them. So as we've worked with these agencies and deployed things like the College Scorecard, or Every Kid in a Park, or NotAlone, all these different sites we've done for different agencies, they have seen it and asked, how the hell are you doing this, and how are you getting these out so quickly? We said, well, we're using Cloud Foundry internally and it's accelerating all these things. So they're interested in how we're doing it. So we're finding the agencies that are most flexible and working with them, but at the same time we have consultancies going on. H&F is not just about this; H&F does all kinds of things, including organizational-transformation consulting. So we're doing that with some of these agencies to help them understand how to be more agile, how to be more product-oriented, and change the way they approach these things. Having a PaaS available becomes a tool for them: okay, imagine ops was much, much easier. Now, how easily could you shift your culture? And some of them are very interested in that. Have you tackled secret injection in apps? For example, do credentials get set as CF app env vars by Concourse pipelines? We were just working on this yesterday; I don't know what the answer was, the team was working on it. I think we use user-provided services for some stuff and env vars for some of it. Each application, each solution, maybe requires some differences.
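For folks who haven't used user-provided services, the pattern looks roughly like this sketch. Everything specific here is made up for illustration (the service name `my-secrets` and the `api_key` credential are not anything from cloud.gov); the mechanism itself is standard Cloud Foundry, which hands each app its bound services as JSON in the `VCAP_SERVICES` environment variable.

```python
import json
import os

def get_ups_credentials(service_name):
    """Return the credentials dict for a bound user-provided service,
    read from the VCAP_SERVICES JSON that Cloud Foundry injects into apps."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for service in vcap.get("user-provided", []):
        if service.get("name") == service_name:
            return service["credentials"]
    return None

# Simulated environment for illustration only. In a real app, Cloud Foundry
# sets VCAP_SERVICES itself after something like:
#   cf create-user-provided-service my-secrets -p '{"api_key": "s3kr3t"}'
#   cf bind-service my-app my-secrets
os.environ["VCAP_SERVICES"] = json.dumps({
    "user-provided": [
        {"name": "my-secrets", "credentials": {"api_key": "s3kr3t"}}
    ]
})

print(get_ups_credentials("my-secrets")["api_key"])  # prints: s3kr3t
```

The draw of this over plain env vars is that the secret travels with the service binding rather than with the app's own configuration, so a restage or blue-green swap picks it up from the binding instead of needing the secret re-set on each new app instance.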
We strongly suggest people use user-provided services for secrets, and I think that's generally a good practice. Environment variables for secrets have some problems, especially if you do blue-green deploys and that kind of stuff. So I think it's better to use user-provided services if you can. Okay. How much time are we saving through reduced compliance documentation? It depends on the level of the app. Take an open data application: it's government data that should be available to the public. It's not sensitive. If the site is broken, it's no big deal; it's not a continuity issue. If the data's out there, it's not embarrassing; it just needs to be out there. Basically, if you broke into the server, you'd see the data that's driving the API, but you can get it all through the API in any case. Those things are very simple. We can do them in just a couple of days. We did it in like 90 minutes. We did it in 90 minutes, yeah. But again, that's because our director of infrastructure, who is the authorizing official for this, has seen all this in the past. He knows exactly what we're doing, and so he can do the risk acceptance, because he knows the delta's very small. When we go to other agencies, where it's not stuff we're building ourselves but stuff other agencies are building, it's much more about them accepting the risk and them understanding it. And that's why FedRAMP is such an important thing; when we get there, that'll help them. But the goal is that even for really heavy applications, we want the time the team spends writing documentation to be negligible. We want it to be something a two-pizza team can take on without it being a big deal, whereas right now you get three people from an ISO office to write this documentation for you. And the last one we'll take before we quit, sorry, we're over, is: on the concept of a blessed buildpack, does that create a bottleneck? How do you keep up with changes?
So first of all, we're starting with the community buildpacks, because the Cloud Foundry Foundation is doing a really, really good job of updating buildpacks and updating stemcells. We did see recently there was a CVE where we were like, wow, we're waiting for that new buildpack to come through, it's taking a while, should we roll our own, what should we do? And it did come through, just as we were about to complain and say, okay, what's the actual expectation here? What should we do? We don't really want to run our own and fork it from the community. But the Cloud Foundry Foundation is breaking the buildpacks out separately, so buildpacks are gonna be shipped independent of the platform releases, which means they'll come through faster, because before, they were held up on the platform release. So that's already being addressed. This is part of the magic: we're in an ecosystem where we know everybody else has the same problems. They all want those buildpacks to come through really quickly. The blessed buildpacks, again, we're just taking the ones that are part of the platform; we say we will support those, and those are the ones we'll have documentation for. If you bring your own buildpack, we're basically saying that's your own bottleneck you're introducing. As a tenant, you're bringing your own complexity that you're gonna have to deal with; you're bypassing some of the benefit of the platform. That's it, thank you very, very much. Thank you, I really appreciate it, and great questions.