Hi everyone. My name is Rowan Jacobs. I'm an engineer at Pivotal. I work on the CF infrastructure team, and I'm going to be talking about government-certified IaaS regions, how to get started using Cloud Foundry on them, and how 18F is doing that in production today. Hi there. I'm Bret Mogilefsky. I'm the product lead for cloud.gov at 18F, and we have some experience with this. We were one of the early adopters of Cloud Foundry in a government-specific region, so we'll talk in more detail about that. All right. We are required by Boston law to inform you the nearest exit is behind you in case of emergency. We'd hope you know that by now, but if not, now you know. We didn't rehearse that. It went pretty well. Yeah. All right. Okay. So I'm going to take over first, and the first thing to do is — I know a lot of you in the room are govvies, but maybe some of you aren't and are interested in approaching this space. So we're going to take a quick nutshell tour through what compliance looks like at a very high level for the cloud, and there are three main bodies that you need to be aware of. One is the National Institute of Standards and Technology, NIST; FedRAMP, which we'll talk a little bit more about the definition of; and DISA, the Defense Information Systems Agency. I guess I'm going to be over here to advance slides, huh? Yeah. Okay. Oh, there we go. So the first thing is that most civilian government compliance, and actually most compliance in general, is based on this thing called the Federal Information Security Modernization Act, or FISMA. What that does is formalize the steps necessary for a system to operate and be part of an agency's mission, for them to rely on it, whether that's public-facing, internal-facing, etc. And FISMA actually requires agencies to use the NIST Risk Management Framework.
And so this is a standardized body of knowledge and practice around the full life cycle of managing your system. It specifies a set of controls, which are essentially requirements, in various families relating to access control, auditing, and all sorts of different things, including the physical environment. And you have to select the set of those controls that apply to your system based on what's called the risk level of your system. In other words, if this system were compromised, how risky would that be to the agency's mission? And that may differ between agencies. Maybe for somebody in the DoD, if the human resources system goes down, it's not as critical as it would be for OPM, where that's central to their mission. So it varies; it leaves a lot of context to the agencies to figure this out. And you have to consider this combination: how confidential is this information, and how bad would it be if the confidentiality were compromised? What is the requirement for the integrity of this information? What if the integrity were sullied? And the availability? What if the information were offline? You rate in those three categories, you take the high watermark, and that gives you your FISMA rating: low, moderate, or high. So that's the critical first thing to understand — how you categorize systems. There's a little bit further that you go on the DISA side for the DoD, which I'll talk about in a bit. But this is the baseline of how it works. To give you an idea, if you have a moderate system, there are about 325 controls. It goes up for high; I think it's around 425 or something like that. But the majority of systems in government are probably moderate. 325 requirements. And again, they cover anything from how far your fire suppression is located from the hardware, to your firmware updates, things like that. Let's see. Forward.
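The high-watermark rule just described is mechanical enough to sketch in a few lines. This is a toy illustration only — it's not part of any NIST tooling, and the function name and numeric encoding are invented for the example:

```shell
#!/bin/sh
# Toy sketch of the FIPS 199 high-watermark rule described above.
# Encode low=1, moderate=2, high=3; the system's overall category
# is simply the maximum of the three impact ratings.
watermark() {
  c=$1; i=$2; a=$3          # confidentiality, integrity, availability
  max=$c
  [ "$i" -gt "$max" ] && max=$i
  [ "$a" -gt "$max" ] && max=$a
  case $max in
    1) echo low ;;
    2) echo moderate ;;
    3) echo high ;;
  esac
}

# A system with low confidentiality and integrity impact but moderate
# availability impact is categorized moderate overall:
watermark 1 1 2   # prints "moderate"
```

The point of the rule is that a single elevated rating in any one of the three categories pulls the whole system up to that level.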
So this is an overview. FIPS 199 is the process I was talking about where you categorize your system. Then you have to select your controls. NIST 800-53 is where all those controls are; if you look up NIST 800-53, you'll see all the different controls you might be required to meet for government, with all the compliance statements there. You're going to implement them. You're going to assess them — and somebody else has to assess them too. Then the authorizing official at the agency is going to authorize your system. And after that, you go into what's called ConMon, continuous monitoring of your controls. You have to do that continuously, and then you have to redo the whole thing every year or so. So that's federal compliance in a nutshell, at least for the civilian side. That's the Risk Management Framework, and everything else stems from there. FISMA basically says, hey, you've got to use this in your agency. So then the question is what happens with the cloud. That worked fine when agencies had full control over everything in their boundary: they had their own data centers, they ran all the machines, they racked everything. Now when you start going to providers, especially cloud service providers, how do you handle that? The recognition was that every single agency was doing this separately and redundantly. So the Federal Risk and Authorization Management Program, FedRAMP, basically documents which agencies have accepted the risk for different cloud service providers. It also formalizes a central set of best practices for validating cloud service providers. That means agencies don't have to do that work again; they can take that work and do what's called leveraging it — they can leverage that work and build on top of it. The Joint Authorization Board is the three CIOs of the Department of Defense, the Department of Homeland Security, and the General Services Administration.
So it's not just one agency making the determination, it's three. That means there are very, very high standards among them, a high level of scrutiny, a lot of communication. And once that authorization is done, agencies can accept it. They accept the risk of using that CSP based on the FedRAMP authorization — partly as social proof, but mostly by understanding what it represents in terms of the level of scrutiny applied. And that means that if you use a CSP that has a FedRAMP authorization, when you as an agency build your information system on top of it, you have a lot fewer controls to deal with. If I'm building something in the cloud, I don't have a data center. I don't know where the fire suppression system is, but Amazon does, or Microsoft does, or Google does, or any other cloud you want to mention. So you're leveraging the work they've done there, and you're only worrying about the things that are necessary for your application. But as I said, there are about 325 controls for a moderate system, and a typical IaaS handles around 100 of them. So in order to go from the 100 that you get from just using a FedRAMP IaaS provider to something closer to what you actually have to do for your full system — you saw the point in Peter's talk — a PaaS is kind of critical to doing this: doing it once and getting a huge economy of scale, where you can have lots of applications on that platform that are all leveraging the stack of controls that have been implemented for them. So Cloud Foundry is a great pick for that. It's why cloud.gov uses it. Okay, so that was the civilian side. The DISA side is for the Department of Defense; all the DoD agencies coalesce through DISA as the agency that manages all the standards. DISA has what's called the Security Requirements Guide, or SRG, and it specifies levels.
Publicly, one through six, although I'm told there are more that are not publicly documented. I don't know — actually, I haven't been told that; I'm assuming it, because the documentation says these are the only publicly documented levels, which implies that there are more. The good news is that with the DISA SRG, they recently recognized that it was really hard for vendors to go through both a civilian agency and the DoD with totally different standards. DISA actually reformulated their entire standard to be based on top of FedRAMP. So now it's kind of FedRAMP-equals or FedRAMP-plus-plus. In their scheme, level two is equal to FedRAMP Moderate. Cloud.gov, for example, has FedRAMP Moderate, therefore we're DISA level two. FedRAMP High with a little bit extra is DISA level four, and then DISA level six is secret. And the documentation is really clear — when you get the slides and look at the material, the documentation is really clear about how you classify these levels and what it is you do. It builds on that same Risk Management Framework as a base, but then it gives criteria for how you determine what level something is. And I think that's it for the nickel tour of compliance land. Yeah, that's it. You now know all you need to know to tackle the federal market for compliance. It's a lot harder than that. If you want to start using a public cloud IaaS which is FedRAMP compliant or DISA compliant, there are right now only two you can choose from: AWS GovCloud and Azure Government. Sorry, GCP fans. Actually, I should say the Amazon public cloud is also recently FedRAMPed, but at a lower level. Yes, FedRAMP Moderate versus FedRAMP High. Yeah, FedRAMP High is AWS GovCloud only; it's FedRAMP Moderate for other AWS regions. So AWS GovCloud is in many respects just like any other AWS region. It has its own endpoints for all the services. The difference is it is only available to U.S. persons.
It is available to all U.S. persons, and you have a separate set of credentials that you use for your AWS GovCloud account versus your regular AWS account. So today, if you are a U.S. national or working at a U.S.-based company, you could sign up for a GovCloud account and start using it and messing around with it. What's different about the capabilities of GovCloud? Well, there are a couple things that are missing, and many more things that were missing until very recently. Route 53, which is Amazon's own DNS service, is not available in AWS GovCloud. So you need to do your DNS management outside of AWS GovCloud, either in a vanilla AWS region or through some other provider. Oh, we also do not have a Windows 2016 stemcell for AWS available yet. There are Windows 2012 R2 stemcells. I believe the Windows 2016 stemcell is coming, but don't quote me on that. Yeah, what that means in practice is that if you're trying to put Cloud Foundry on AWS GovCloud and you're planning to support Windows — you want to use the full .NET Framework — you're going to have a lot of trouble, because you've got to build your own stemcells in there. That's much more complex. Yeah, your support will be limited for that experience. There are many features which AWS GovCloud added recently this year — actually last month — including a third availability zone, which makes deploying Cloud Foundry much easier, as well as SSL termination on application load balancers, and they added network load balancers as well. So all of that is now available to you in the AWS GovCloud region, which brings it much closer to feature parity with what the vanilla AWS regions already have, especially for Cloud Foundry users. AWS GovCloud has several compliance features, including FedRAMP High and DoD up to level five, although at DISA level five there are many services that are not available.
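To make the separate-credentials, separate-endpoints point concrete: the AWS CLI reaches GovCloud through its own region name and its own keys. A sketch, assuming a profile name of our choosing:

```shell
# Sketch: GovCloud has its own credentials and its own region
# endpoint (us-gov-west-1). Keeping them in a separate CLI profile
# prevents them from mixing with commercial AWS credentials.
# The profile name "govcloud" is just an example.
aws configure --profile govcloud   # enter your GovCloud access keys

aws --profile govcloud --region us-gov-west-1 \
    ec2 describe-availability-zones

# Route 53 is not offered in GovCloud, so DNS calls like this one
# have to go to a commercial region or another provider instead:
#   aws --profile govcloud route53 list-hosted-zones
```

Nothing about the API shape changes — the same `aws` commands work — it's only the endpoint and the identity that are walled off.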
Fortunately, the services that are available at DISA level five include most of the services you need to run Cloud Foundry. There are PCI, HIPAA, and CJIS compliance features available as well for people who have sensitive data on AWS GovCloud. Azure Government is one that is only available to the U.S. government and government contractors. There are four Azure Government regions and two Azure Government DoD regions. And Azure Government is pretty similar to Azure in terms of features. The two main differences are, first, that you don't have all the same VM types available in Azure Government regions. There's a bunch of letters that probably don't mean anything to those of you who are not Azure users, but what this basically means is you have an older version of the basic VM type, the D series, and of the specialty VM types you only have a few: A, which is the entry level for dev and test; F, which is compute optimized; NC, which is GPU compute optimized; and H, which is your high-performance-computing, high-memory VM type. Fortunately, a Cloud Foundry deployment will only use VM types that are in this range, so you don't have to worry; it doesn't use any of the more exotic VM types that are only available in the civilian-targeted Azure regions. The other issue with Azure Government and feature parity with regular Azure regions is that there are a couple Azure Government regions which do not have managed disks, so you need to use storage accounts for any kind of persistent storage there — but you can avoid this just by choosing not to use those particular regions, because managed disks are available in all the other Azure Government regions. Compliance is similar: FedRAMP High, DISA up to level five, PCI and CJIS compliance. And the PCI, CJIS, and HIPAA compliance levels — though not the DISA and FedRAMP ones — are all available in vanilla Azure as well.
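Where AWS handles this with a separate account and region, the Azure CLI handles Azure Government as a separate cloud definition with its own endpoints. A sketch — the region name used here is one example of a government region:

```shell
# Sketch: point the Azure CLI at the Azure Government endpoints
# before logging in. "AzureUSGovernment" is the built-in cloud
# definition for the government regions.
az cloud set --name AzureUSGovernment
az login

# See which VM sizes a government region actually offers
# (region name is an example):
az vm list-sizes --location usgovvirginia --output table

# Switch back to commercial Azure when you're done:
az cloud set --name AzureCloud
```

The `list-sizes` output is where the narrower VM-type selection described above shows up in practice.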
So if you don't need the full FedRAMP experience, you can also run in a regular Azure location. So let's say that you're sold on running Cloud Foundry on AWS GovCloud. I will show you how to do that in basically less than an hour of your time. The team I'm on maintains a tool called bosh-bootloader, or bbl for short, and what this tool does is combine a set of opinionated Terraform templates with a set of opinions about BOSH deployment manifests and how to run a create-env command and get up your first BOSH director. So if you want to use this tool, you can just export your IaaS credentials as environment variables. We don't store these credentials on disk; we just read them from your environment variables, which is a nice security feature. You can run bbl up, and that will start the process of using Terraform to pave your infrastructure. It gives you a VPC. Then it will use bosh create-env to create a jumpbox, which is the only part of this VPC that will have direct ingress from the internet, and then it will create a BOSH director, which can only be accessed from within that jumpbox we just deployed. So your BOSH director will be inside the secure VPC, only available for you to use if you have the key that bbl generates for the jumpbox. One of the problems with running this on AWS GovCloud is that GovCloud, as I mentioned, does not have Route 53. So if you want to then put up a Cloud Foundry or a Concourse environment on this, you need to manually configure some DNS. Here's an example of what bbl would generate for you in a vanilla AWS environment using Route 53. It's not that much, and you can configure this DNS yourself: the star domain just points to the HTTPS router, the SSH domain points to the SSH proxy, the TCP domain points to the TCP router, and the bosh dot domain points directly to your jumpbox. So that should give you everything you need.
You have a BOSH director, you have a jumpbox, and you have your manual DNS config, and that gives you everything you need to download and use cf-deployment. For those of you unfamiliar with using cf-deployment to deploy Cloud Foundry, it's a much better experience than the legacy cf-release, and it gets regular security bumps in the form of stemcell bumps and other upgrades. So I highly recommend that you use it if you're going to deploy an open source Cloud Foundry. You can just git clone it, and it comes with lots of ops files for you to use. I'll second this. We switched to it recently, having had to work without it when cf-deployment didn't exist. It was a lot harder; we had to maintain a lot more of our manifests. It's made our lives immeasurably easier for redeploying cloud.gov. Yeah. So cf-deployment works very nicely with a bbl-created BOSH director. You only need to use one ops file, which is operations/aws.yml. But if you don't like waiting an hour for releases like Galera to compile, you can also use use-compiled-releases.yml, which is my favorite ops file in the cf-deployment repo. You will also need to provide a system domain, which will be your manually configured DNS domain from before. And this takes maybe an hour or less to complete, and then you have a fully featured Cloud Foundry foundation on AWS GovCloud. You can start pushing apps to it. So let's talk about how 18F is doing this in production. Okay. So when we started, a lot of the things that are crossed out on the slide were still problems. So with a lot of what we've done in the past, we were ahead of the curve. Cloud Foundry had not been deployed in GovCloud much. We bumped our heads against a lot of stuff. We sent a lot of pull requests upstream. The good news for you is that we don't have to go into exhaustive detail on that, because it's all taken care of for you. But in general, what you need to know about cloud.gov: it is a U.S. government tailored deployment of Cloud Foundry.
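Pulling the bbl and cf-deployment steps just described into one flow, it looks roughly like this. The `BBL_*` environment variable names are the ones bosh-bootloader documents for AWS; treat the rest as a sketch rather than a copy-paste recipe:

```shell
# 1. Credentials come from the environment only; bbl never writes
#    them to disk.
export BBL_IAAS=aws
export BBL_AWS_ACCESS_KEY_ID=...        # your GovCloud access key
export BBL_AWS_SECRET_ACCESS_KEY=...
export BBL_AWS_REGION=us-gov-west-1

# 2. Pave the VPC with Terraform, create the jumpbox with
#    bosh create-env, then create the director behind it.
bbl up

# 3. Target the new director through the jumpbox using bbl's outputs.
eval "$(bbl print-env)"

# 4. Configure DNS by hand (no Route 53 in GovCloud), then deploy
#    Cloud Foundry. SYSTEM_DOMAIN is whatever name you pointed at
#    the load balancer.
git clone https://github.com/cloudfoundry/cf-deployment.git
bosh -d cf deploy cf-deployment/cf-deployment.yml \
  -o cf-deployment/operations/aws.yml \
  -o cf-deployment/operations/use-compiled-releases.yml \
  -v system_domain="$SYSTEM_DOMAIN"
```

The `use-compiled-releases.yml` ops file is the optional time-saver mentioned above; everything else is the minimal path.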
We have a provisional ATO from the FedRAMP JAB — and you now know what that means: we have a provisional authority to operate from the FedRAMP Joint Authorization Board. And that means that agencies can use it with less effort than using an IaaS directly. We are 100% open source with one exception, which we're about to eliminate. I won't name it. And we predated the existence of bosh-bootloader, so we didn't have that to work with; we had to basically work it out on our own. The barrier to entry for running Cloud Foundry used to be — if you're coming to this recently, three or four years ago it was pretty heinous. It was pretty hard to get up and running. So we run everything with Concourse. Our bootstrap basically bootstraps Concourse, and Concourse runs everything else. And we control everything out of infrastructure as code. It uses Terraform to set everything up at the infrastructure layer. And then we have different Concourse environments for staging and production and our development and so on. Everything is Concourse driven. Concourse, if you're not using it yet, you should probably check it out. The learning curve is steep, kind of like BOSH, but Concourse is really pretty awesome at this. I should also mention that for those of you who want to get your Concourse set up on AWS GovCloud, bbl also has opinionated templates for running Concourse. And the concourse/concourse-deployment repo on GitHub is fairly new and fairly great for deploying Concourse. And I'll tell you, part of us getting through FedRAMP — I mean, we're making it sound very easy to get this up technically, but what's the rest of it? It is pretty difficult to get through FedRAMP. There's a huge amount of documentation that has to be done. A lot of FedRAMP requirements are not simply technical requirements; they're process or team requirements.
I will say using Concourse was a huge leg up for us: having really, really aggressive CI/CD as part of our contingency plan, having everything in the environment as code. These were sort of unconventional approaches; they were new to the FedRAMP auditors. But as has been mentioned in other discussions, you get the auditors used to this and show them how you're meeting the requirement — which is usually written with something like a change governance board in mind. You say, no, we have peer-reviewed pull requests and they're all hashed, and Concourse only ever looks at hashes. And look, I can see the hash in production is the same as the hash in version control. Once you show them how a change gets into your system and gets deployed, you can actually get a lot of legwork done quickly that way. It satisfies a lot of existing requirements and process requirements. So definitely focus very, very heavily on CI/CD and replication from source. Start with the contingency plan question: can I recreate the world, including all my processes for my team, by pushing a button? You can look at cg-provision to see how we do it. It's not as easy as pushing a button yet, because we are still factoring out all the steps. It used to be like 50 steps; now it's under 20, and it's down to 15 this week or whatever. A lot of what remains right now is just secrets management, which we're cleaning up. So if you want to follow what we're doing, that's a good repository to check out. That's somewhere we're removing as much friction as possible: we want secrets rotation to basically happen constantly, and we're headed in that direction right now. And there's a bunch of other repositories we want to refer you to; you'll see all kinds of interesting stuff. So, things we had to do to work around differences from the commercial side. Again, most of this stuff is crossed out now. When we started, there were no stemcells provided for GovCloud.
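The "hash in production matches the hash in version control" argument can be demonstrated to an auditor with a couple of commands. This is a sketch of the idea only, not cloud.gov's actual tooling — the deployment name, repo path, and file layout are invented, and a real comparison would first strip interpolated values:

```shell
# Sketch: show that what the director is running is exactly what
# was peer reviewed and merged. Names/paths are examples.

# The manifest the BOSH director is actually running:
bosh -d cf manifest > /tmp/deployed.yml

# The manifest at the reviewed, pinned commit in version control:
git -C ~/workspace/deployments show HEAD:cf/manifest.yml \
  > /tmp/reviewed.yml

# Identical hashes mean identical content — that's the whole
# evidence trail for the change-control conversation.
shasum -a 256 /tmp/deployed.yml /tmp/reviewed.yml
```

Because Concourse pulls inputs by commit hash, the chain from pull request to running system is verifiable at every link, which is what makes this argument land with auditors.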
Most of the people at Pivotal didn't have access. We had to build our own. And so that took us down the rabbit hole of, where do stemcells come from? Which is an interesting question — you should totally check it out, because it's really interesting. But you don't have to worry about it now. So we had to build our own stemcells. The ALB capability didn't exist yet, so we were restricted to only using the Classic ELBs, which were more expensive and harder to work with. The NAT gateway capability wasn't in GovCloud yet, which meant we had to run our own software NAT gateways, which we really didn't like: it means you're responsible for another HA component that really is not adding a lot of value — the kind of thing you want to get from your IaaS provider. Again, recently that was deployed, and we were happy to say that's one more thing we don't have to do that was special. And we were limited to only two AZs. Now we're at three AZs, and that helps with the HA story as well. Yeah, the default cf-deployment manifest does assume you have at least three AZs at your disposal. Right. There are some API differences and limitations between the GovCloud region and other regions. You would think that whatever is there behaves the same, but some things do not. Unfortunately, this means that when you work with tools like Terraform, where all the other Amazon regions work one way but GovCloud works a little bit differently, it's likely you might run into a bug. We have hit our heads on that a few times. The links here go to the different issues we found. Most of them have been fixed — Terraform has been very responsive at fixing them, and we've sent them patches wherever we could. And again, it's gotten easier and easier. But just be forewarned that you might run into weird things like, hey, Route 53 isn't around, so this Terraform command doesn't work the way you might expect it to. Let's see what's next.
Okay, so as far as FedRAMP goes — I already hit this Concourse point down at the bottom here — we use the BOSH runtime config to basically handle host security. FedRAMP, and FISMA, are still very, very focused on host security. So on your Diego cells and every other VM in your deployment, they want you to have certain host security going on. So we do some hardening on top of what's already done upstream in the project, which is pretty good; we do a little bit more to meet the specific requirements the government sets. We deploy the Nessus agent — oops, there's the one non-open-source piece — plus Snort and ClamAV. You might use OSSEC or other open-source options instead. And we have configured all of our log forwarding to go to CloudWatch to make sure we have a read-only archive, even though everything else is going through Logsearch for Cloud Foundry. Everything is still configured to mirror into CloudWatch, which we have for auditing and for incident response and things like that. We have an immutable archive. And we are deploying the Prometheus node exporter. We are using the Cloud Foundry community Prometheus project — we contribute pretty aggressively to that as well — which gives you all kinds of pre-built alerting and monitoring for Cloud Foundry, ready to go in Prometheus. And again, these are critical elements in our whole compliance story. It doesn't get you all the way there, because then you have to actually document it in a million different ways and have it tested in a million different ways and then argue about it in a million different ways and so on, until eventually you get through. But these are critical things we did, again, with vanilla open-source Cloud Foundry. We did not fork. Every place we possibly could, we contributed upstream PRs. So we've kind of broken the path for you. As far as services go: we're a small team and we don't want to be experts in every service we run.
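A runtime config is how you layer those host-security agents onto every VM without forking any release: BOSH applies the addon jobs to each instance in every deployment. A minimal sketch — the release and job names below are placeholders standing in for whatever hardening releases you actually use:

```shell
# Sketch: add host-security jobs to every VM in every deployment
# via a named BOSH runtime config. Release/job names here are
# placeholders (e.g. for a ClamAV or node-exporter release).
cat > hardening-runtime-config.yml <<'EOF'
releases:
- name: clamav
  version: latest
- name: node-exporter
  version: latest
addons:
- name: hardening
  jobs:
  - name: clamav
    release: clamav
  - name: node_exporter
    release: node-exporter
EOF

bosh update-runtime-config --name hardening \
  hardening-runtime-config.yml
# Jobs take effect on each VM at its next deploy.
```

Because the addon rides alongside the deployment manifests rather than inside them, the upstream cf-deployment manifests stay untouched — which is exactly the no-fork property described above.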
So if there's an AWS service that we can possibly broker, we will. We avoid AWS-specific services because we want to be able to use multiple IaaS providers — we want to be able to use Azure and Google — and we don't want our customers to be inadvertently shackled to one IaaS provider. So we tend to focus on things that are common standards. S3 is a de facto standard because everybody's got a blob store interface, so we fudge on that one. For databases, we're sticking to Postgres, MySQL, Oracle, and SQL Server. And for other stuff that is not in Amazon — or that, if you use it in Amazon, has Amazon-specific pieces — we're running our own. So: Redis, Elasticsearch, and Mongo. We got Kubernetes going pretty early, way before Kubo was announced. It's a really nice way to run stuff. We had used the original Docker broker, which was how people used to use Docker to provide services; it gave you a single container, so the scalability and HA were not great. So at one point we said, okay, let's go figure out Kubernetes to run these instead. And it was a net positive over what we were doing, and we're pretty happy with it. So we run Redis, Elasticsearch, and Mongo there. If we want to run others, we can; if somebody has a Helm chart they want to bring to us, we can do that. But if something is not Amazon-specific yet is available through Amazon, and it is in AWS's covered scope, which I'll talk about in a second, then we use that if we possibly can. We don't want the operational hassle. If you made a Wardley map, this is the commoditized part; we should not be in the business of doing that. We want to broker it and not run it, if we can. I said AWS's covered scope — this is one difference between AWS and Azure Government, and I think GCP's approach here. For AWS, the FedRAMP boundary does not cover everything in AWS GovCloud. So first I have to ask, is the service I want in AWS GovCloud? And then after that, you ask, okay, is it in their FedRAMP boundary?
And now the good news is that Amazon is working really closely with FedRAMP to audit everything in there, and they're moving services in — that's why some of those items had lines through them. Every couple or three months, another few services land in the boundary. But you have to be careful if you're doing something for FedRAMP: just saying "well, I'm using GovCloud" is not enough. You have to additionally look at the subset of GovCloud that is in the FedRAMP authorization boundary. That boundary is expanding as more things get into it, so eventually they'll be the same. Azure took a different approach. They have fewer things in their government region, but everything in there is also in their FedRAMP boundary. So if it arrives in there at all, it's definitely FedRAMP. So that's something to keep in mind as you approach these providers. GCP, I think, is only going to have one region. I don't think they're going to split it up. So I'm not actually sure how this is going to work out, but we're kind of interested in how they're going to handle that. Yeah, GCP as a government IaaS is very much an emerging space. It's an emerging space. It'll be exciting to see what they do with it, but for the moment, I don't know of anyone running Cloud Foundry on it. In the government, you mean? In the government, yeah. Because FedRAMP is like a zipper — you've got to zip it up from FedRAMP all the way up, and it's FedRAMP turtles all the way down, right? You can't build a FedRAMP service on non-FedRAMP components — or if you do, you have to have ridiculous arguments about what's inside and outside your boundary, which we've done. We had to worry about Route 53. That was actually a really big obstacle, because we are using it. So yeah, it's a thing. I'm not sure how... I've heard — and this is not because anybody's told me directly, it's hearsay, take it as that —
I've heard that Google's not going to have a separate government region; they're going to try to accredit the whole public region. I don't know if that's going to be possible. I can't imagine it, but who knows? Good luck to Google. Good luck. It's pretty tough. It would be pretty amazing if they did that, for a lot of reasons. A final word is that the U.S. government is not the only government in the world that is going cloud-native. As for specific IaaS regions, there are specific AWS regions for China, as well as specific Azure regions for Germany, although I do believe Azure Germany is actually available to all EU customers. And for government customers outside the U.S., vanilla AWS regions offer a high level of compliance with Australian, German, Singaporean, and Spanish government standards — probably other government standards too, but those are the ones they advertise. And as for other organizations in other governments that are providing cloud-native services, including services using Cloud Foundry: the Government Digital Service in the U.K., the Digital Transformation Agency in Australia, and the National Information Society Agency in South Korea are all providing cloud-native services based on Cloud Foundry to government organizations. And we should mention that in the Cloud Foundry Slack, there is a #gov channel where we actually hang out and occasionally swap PRs. We contribute to each other's projects. We ask each other how you're approaching this or that. They have totally different compliance regimes, but they tend to be similar in a lot of cases. I believe Canada is not in there yet, but I believe Canada's compliance regime also refers to the NIST Risk Management Framework and FedRAMP. So if you are interested in Cloud Foundry as it relates to different governments, that's definitely a channel to go check out. Absolutely. All right, any questions? Right, we have one question. Hang on, hang on.
We're going to bring the mic to you. Yeah, we're recording, so let me bring the mic to you. You want to handle it, Peter? Okay, thank you. For your developers, once you get this set up, can they just kind of cf push and then not have to worry about anything else, or is there stuff that the app developers have to worry about as well in terms of all the regulations? Yeah, so we can't handle 100% of it. The actual FISMA rating, where you say it's low, moderate, or high, depends both on what the system is supposed to do and what kind of information is inside it, as well as the technical implementation. So we can't do that for them. The way I put it is that if there's a mountain of compliance that you have to get over to get to production, we turn it into an iceberg and shove as much as we can down below the waterline. So there's a smaller bit that pokes up that is still their responsibility. Ultimately, all we can do is reduce the number of controls they have to deal with, or make it easier for them to comply with the controls that are still on their side. But it depends on what their app does. It might also depend on their authorizing official at their agency — their CISO or their CIO. That person might have a very specific need based on that agency's mission that is not covered by what we do, but that they want every team to worry about. So we can't give out ATOs. We have what's called a provisional ATO, which means the agency has made provisions for your journey. Here, we've provisioned this platform, and we've authorized everything below this line, but you're still going to have to get an authorization for the part above it. Does that make sense? Any other questions? Yeah, provisional doesn't mean it's temporary. It's provisioned, like a quartermaster provisioning for the journey. So — I was curious about how you're handling credentials for AWS-specific services within the platform. We are just now rolling out CredHub, because again, we predate CredHub by quite a lot.
Oh, are you talking about the actual infrastructure level, or are you talking about apps? For example, you said you're using S3 — so if you have an application running on Cloud Foundry using S3, how are you passing credentials from the highest level to the container? So if we have a broker that's going to provision an S3 bucket, it's still going to give you the four kinds of credential pieces you need, and if you look at them, yes, they will have aws in the domain name. But in terms of how your code treats them, it doesn't care. Same thing for Postgres: we're going to just give you a postgres://user:password@host URL, right? Are you using static credentials, or have you looked at that? No, no, no. The broker sets up a new credential for every bind. One of our brokers doesn't set one up for every bind — it has a shared credential for each instance — but all of our others, and we're going to fix that one, all of the others create a new credential for every single bind. So if I have a credential that's compromised in an app, all I do is unbind and rebind, and it automatically provisions a new user in the database or whatever. So instances correspond to the actual database deployment, and binds correspond to individual user credentials. So each of your apps has a different credential to the same service. Does that make sense? Any other questions? All right, thank you. High five.
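The unbind/rebind rotation described in that last answer looks like this from the developer's side. The app and service instance names are examples:

```shell
# Rotate a possibly compromised credential. Because each bind
# creates a fresh database user, dropping and recreating the
# binding rotates the credential. Names are examples.
cf unbind-service my-app my-postgres-db
cf bind-service my-app my-postgres-db
cf restage my-app   # pick up the new VCAP_SERVICES credentials
```

The old user is deprovisioned on unbind and a new one minted on bind, so no credential ever has to be handled by a human during rotation.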