So, good afternoon. I'm Major Juliana Rodriguez, and this is Captain Christopher Apsey. If you didn't see us this morning, one of the things that somebody was kind enough to tweet was that we did a live demo, so we had a lot of courage; we were just discussing that if the slides continued not to work, we'd just go without slides and talk to you. So hopefully, for those of you who are more visually oriented, the fact that the slides are now working is a benefit. We're going to talk through a little bit of what we said this morning, but go into a little more depth on some of the things we approached and how we were able to do this, and maybe, if we get some time, a little bit about how we've structured it. It's open, so you can take a look at our solution online.

So, Army Cyber School — what is it? I mentioned that we started a couple of years ago; I think we weren't even official until about October of 2015, so we're not even quite two years old officially. We do have a fairly significant student load, and with that we're training to standards that we don't create, because in the military we're given the standards, and as a training organization we have to make sure we meet them. In this case we're meeting the US Cyber Command joint standards — the same standards the Air Force, the Navy, the Coast Guard, and the Marines all have — and we train to those. We additionally answer to a second set of standards from the Army: what do we need to train from an Army skill perspective, and what does the Army think needs to be trained inside of Cyber as a branch?
Our outcomes, though — what I explained really briefly this morning was that instead of approaching it as, okay, I have a checklist, I'm just going to go down the checklist and have something for each item — yes, we want to make sure we've covered all of those standards, but we don't want to do it from a perspective of checking boxes. We want to do it from the perspective of what the students will actually be able to do with those skills and standards after we've taught them. That's why we started looking at a more agile development process, so that we enable instructors to create the content. If an instructor has a great idea — say, let's put malware on one virtual machine and have another virtual machine identify what the problems with that first machine are — that might take a while, but if we want the kind of problem solving our outcomes call for, we have to enable our instructors to do that.

The legacy courseware approach, as I explained this morning, was that you would write those requirements down: I want this type of VM to have this type of malware on it, I want this other VM configured in this way, and I want a network that looks like this so they can see each other. We would write that down and give it to, hopefully, an existing contract or some organization that would provide it back to us. The problem was that even in the very best scenario, it was still taking about a week to see something back — and that was with a surge level of contract support, about ten times what we would normally expect to see, and only if we were still a priority. That was not going to work for the model we wanted to create for getting after those student outcomes.
Instead, if we approach everything — courseware, infrastructure, everything — as code, then when an instructor gets an idea, like in my example, they go into GitHub, write a Heat template for that scenario I described, and submit it. They push it forward, and within 12 to 18 hours they can run that content for their students. It's a massive efficiency gain in time. It's also a gain in cost, because, as I was saying, at surge level we had about three dedicated engineers trying to answer our requirements, and you still have the communication — and miscommunications. This avoids all of that, because the instructor who had the idea writes it themselves, submits it forward, and sees the benefits in their classes immediately.

Okay, so I'm Captain Chris Apsey, from this morning — hi. So, as Major Rodriguez mentioned: the GitHub flow, and everything as code. We've gone a bit beyond just doing infrastructure as code and courseware as code; we have applied the GitHub flow to everything that we do. We used to have a change management board — we don't do that anymore. We don't have people sitting around saying, hey, I'm going to make this change, do you approve it, yes or no? All of that is done via the GitHub flow, in version control. Everything is done in Git. It makes it super simple, really easy to track changes over time, really easy to hold people accountable for things they do or do not do, which is a huge boon for us. What used to take months or years, as we said previously, takes minutes and hours. It's a massive change from the way business is normally done. Who here is familiar with the GitHub flow? Who here writes software for a living? Yeah, right? So you know what I'm talking about. It's an awesome way to do work, especially collaboratively across different organizations and time zones. It's a huge change.
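To make the Heat-template example from a moment ago concrete, a two-VM exercise like the one described might look something like the sketch below. This is not the school's actual courseware — the network, image, flavor, and script contents are all hypothetical placeholders.

```yaml
heat_template_version: 2016-04-08
description: >
  Sketch of a two-VM exercise: a "victim" instance bootstrapped with
  exercise artifacts, and an "analyst" instance on the same network.
  Image, flavor, and network names are illustrative placeholders.

parameters:
  exercise_net:
    type: string
    default: exercise-net        # hypothetical pre-existing network

resources:
  victim:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04        # placeholder image name
      flavor: m1.small           # placeholder flavor
      networks:
        - network: { get_param: exercise_net }
      user_data: |
        #!/bin/bash
        # cloud-init script staging benign "malware" artifacts to find
        touch /tmp/suspicious-file

  analyst:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04
      flavor: m1.small
      networks:
        - network: { get_param: exercise_net }
```

In the workflow described, an instructor would commit a template like this, open a merge request, and — once approved — launch it for a class with something like `openstack stack create -t exercise.yaml exercise1`.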
For those of you who don't know the GitHub flow, basically it works like this. There's one master branch, and from that branch all code is deployable — it's all considered production ready. When someone wants to make a change, make a suggestion, fix a bug, or add a feature, they make their own branch, called whatever they like. They do their commit or commits, then open a pull request — if you're on GitHub; we use GitLab internally, so we do a merge request. The rest of the team that's part of that effort then comments on what they're doing: hey, I think you should do this differently; hey, this objective for this lesson should be changed; the wording should change a little based on whatever guidance and standards they feel they're implementing. Once the team as a whole comes to a consensus, the course manager says, yep, this merge request is good to go, I approve — now merge to master. The students could see that the very next day, potentially. That's a big change from how we've done things in the past.

The thing that makes it all work is a system of systems called Broadband Handrail. It is our infrastructure-as-a-service platform; it's our GitLab, it's our CI pipeline, it's everything that makes it all work together. We use OpenStack for infrastructure as a service, obviously. We use GitLab for configuration control. We use SaltStack for automation and configuration management, and we apply DevOps concepts to everything that we do. The neat thing about the system is that it lets people who have already left the schoolhouse and gone on to their gaining units, to whatever mission they have, reach back and say, hey, what is the latest and greatest course content the schoolhouse is putting out, so they can stay current with whatever they have to stay current with on a mission. How it used to be is, once you left the schoolhouse, that was it.
You would have professional development maybe three to five years later — you'd go back to school, get the latest and greatest stuff, and then leave again; three to five more years, you'd go back and get it updated again. This lets the force constantly see what's going on and constantly stay abreast of new developments, which is a great and important change, and something we're very proud of.

Like everything else, you've got to start somewhere. It's really difficult to buy things in the federal government — very, very difficult. We found some servers, 40 cores total, and I was like, all right, we're just going to stick them in the classroom and slap them on a desk. We didn't have any cooling or racks or anything like that, so we slapped them on a desk, and it was okay. We had to connect them all to laptops, so we ran cables through the drop ceiling; that's just how we had to do it. We had a Wi-Fi hotspot as our internet access point to support this environment, with a Squid server sitting in between to cache Debian packages and whatnot where we needed it, to make it somewhat bearable. And that's where we ran for a good six months or so. It worked, for the most part — and this was only a year ago; this was last year. So that showed our leadership that, hey, this concept is workable and doable, and we should probably invest more time and money. So we went through all the long and laborious government processes we have to do to buy things, and we've moved forward from there.

Before we got to that, we learned a couple of things. We found some challenges — three big challenges from our perspective: people, processes, and technology. Who here is from an organization that has maybe 10,000 or more employees or personnel? Okay. So from my perspective, there are three kinds of people in a large organization.
There's the obstructionist, there's the incompetent person, and then there's the helpful person, right? Depending on where you're at, you're going to run into these different kinds of people. The obstructionist is concerned about furthering their own existence. They're not necessarily concerned with the mission of the organization; they're concerned with being the bulwark against whatever evil thing you're trying to do. Those people are prevalent throughout large organizations — change is bad, right? Those are the kinds of people we're talking about. Incompetent people: sometimes people just get into a job where they maybe don't belong, but they're really difficult to get rid of for one reason or another, and they just stay there forever. There are people like that all over the place, across all kinds of organizations. And the third kind of person is the helpful person — people who genuinely want to do the right thing and have the best interests of the organization in mind. We've encountered all three, and working with those three different dynamics has been really challenging, because it depends on who you talk to. If we go to the IA team, for example, and say, hey, we want to do this thing, why don't we make it so we can make changes on the fly — do you think the IA team was all about that? No, they were not. They did not fundamentally understand what we were trying to accomplish, and working through that was really challenging.

Processes. I mentioned earlier that buying things is difficult. To buy one server — even just calling something a server — takes about one year. One year to buy one server. It's insane. But again, that's the way things are, so we have to work through it.
So what we did was say, all right, rather than doing this once every three months, every time we need to expand, let's just buy a whole bunch of stuff right now and run with it for the next five years — and that's what we did. We bought a large environment — about four petabytes of storage, a couple thousand cores, 36 terabytes of RAM — and we ran with that. That's what we have right now.

Technology. The government isn't generally known for being on the leading edge of technology. MPLS is like the new cool-guy thing inside the federal government, and the first RFC for it came out in 2001, right? So that's the kind of thing we're talking about. So to tell people, hey, we're moving into private cloud inside the DoD with this thing called OpenStack — free and open source. Free open source still scares people to an extent in some places; that's a scary thing, but that's just the way it is inside the government. So when we said, we're going to do this free, open source private cloud and we're just going to go after it, that scared a lot of people. But again, we got it accomplished.

One thing I wanted to point out: the reason we chose this graphic is that, according to the internal folks inside the DoD, the progress we made in a year is breakneck speed. That slug thinks it's going really, really fast, and they're like, maybe we've got to ease back a little, maybe we need to re-look at this approval we think we might have given you incorrectly — whereas if you're watching the slug, you realize it's not actually making much progress very fast. So the encouragement to you all would be: if you believe in a vision for change, keep finding ways to communicate it, assess the risks, and show the decision makers why those risks are more acceptable for your organization's long-term goals than the risks of not changing. A big thing I've heard over the past year is "change or die."
That will enable you to continue on, even if the pace isn't as fast as you'd want.

So, lessons learned. The first one is something that sometimes happens from an Army planning perspective. You'll have one organization given a mission, and they'll do an awesome job at it; you'll have another organization given a separate mission, and they'll do an awesome job at that. They're both being excellent, but if they don't come together, the strategic goals you have for a mission or a campaign potentially won't be met. Similarly, here, if we have teams doing awesome things in one course, or awesome things developing the infrastructure, and they're not on the same path together, we're not going to get where we need to be as an organization. So it's a constant process of figuring out who else needs to be in the conversation about the vision and where we're going: do they have concerns, can we meet their concerns, how do we make sure we're headed there together? It's much like an open source community, where you have to identify where you're going and get a lot of different stakeholders to share that vision so you can move forward in a unified direction.

The second lesson learned is version control. Captain Apsey mentioned that we want to be able to identify who's contributing — both so we can encourage their efforts or give them additional support, and, from a learning perspective, so that if they've committed or suggested something that shows a gap, we can go back in and do some retraining. Version control allows both that learning process and the ability to roll things back if they didn't work out correctly.
And everything as code lets you see those changes very clearly, as opposed to the way the government often does things, with PowerPoint and Excel documents where you don't know which version was which — once they start to diverge, it's really hard to pull them back together.

Our third lesson learned — to put it in a more military term — is understanding the domain. When we talk about domains in the military, we're talking about land, air, sea, space, and cyberspace; cyberspace is our fifth domain. To identify how to operate in a domain, you have to understand that domain, and we looked at this in two aspects. The cyberspace domain is anything digital logic — anything that touches digital logic or anything that runs digital logic. Doing our job in cyberspace means either that we're able to modify the way things work, from a cyber warfare perspective, or that we're able to make sure others don't modify them against us, from a defensive perspective. That's what I mean by domain.

If we have folks who are developing and they don't understand the domain — say we have somebody with an awesome resume who comes in saying, I'm going to do this, and the things we talk about at a really high level of abstraction sound great, but they don't take the time to understand the system that exists — then they start developing, maybe spending a lot of time on that development, but they don't understand where it fits into what's being built or what already exists. That's not going to work. And similarly, from a leadership perspective: how do you ensure that leaders are able to drive change?
A leader who doesn't understand the domain in which they operate can probably maintain the status quo, get some advice from people, maybe make a little progress. But they're not going to understand what meaningful change needs to be made, because they'll always be reliant on someone else telling them whether that's the way it works or not. If they don't have that understanding of the domain — in this case, what digital logic is and how it works — then they can't drive the change they need to drive.

Okay, so where we are now: we have actual standardized racks in an actual data center with a raised floor and air conditioning. Well — it's a server room, right? The DoD can't make new data centers anymore, because they're trying to consolidate everything, so it's a server room. That's a good point, thank you. So, like I said before, we have about 2,000 cores, four petabytes of raw storage, 36 terabytes of RAM, and a one-gig dedicated internet connection that serves both tenants on Fort Gordon and the remote tenants we mentioned earlier.

Now, I also mentioned we use SaltStack for configuration management. We actually manage the life cycle of the hardware, from first powering it on all the way to decommissioning, via SaltStack. No one ever SSHes into a server to change a configuration file. No one's even allowed to do that — it's not really even possible; you'd have to try really hard to get in. Everything is done via the Salt masters we have running. It's really helpful, and we'll take a quick look at some of that code later today. It was a big change going from having teams of dedicated ops people who just sit there and configure VMs for you when you ask, to having the instructors themselves do all this work on their own. And there were some growing pains, don't get me wrong.
To take something that has been done one way for decades and just not do it that way anymore, one day — that's a huge deal for a lot of people. People have been inside the DoD for many, many years, and we're telling them, hey, the way you've done it up to this point isn't so great, and we're going to do it this other way that's totally different and foreign to you. There's a lot of heartburn there, naturally — it's to be expected. But we have a really great team, and we've made a lot of progress showing people that, hey, this is probably the better way to do it.

Okay, so I mentioned that we're going to show you some of our code. First, if I could type my own password correctly, that'd be good. There we go. Okay, let me zoom in. I mentioned earlier today that all of our stuff is available online. We run GitLab, like I said, so if you want to pull down our entire infrastructure environment, here's the URL: git.cybbh.space. You can go there, look at all of our stuff, take it, reuse it, whatever — it's open source. We put everything in version control. Our secrets, like the default domain admin password for OpenStack — that's in version control. It's RSA 2048 encrypted, but it's there. If you want to factor the number, I guess, by all means, go ahead. We keep our user account data in version control. We keep API keys in version control. Everything you could possibly need to rebuild this environment from scratch is stored in version control. So if the server room got hit by a meteor and we had new hardware, we could re-spin the whole environment back to exactly where we are today in about two hours. We designed it that way because we knew we don't have a lot of time or staff to manage a whole fleet of pets that are constantly suffering configuration drift and all kinds of other problems.
Storing it in version control and having it all managed via Salt makes things a lot easier on us as a really small, really agile team. The way it's basically designed is that we broke it up into a couple of different areas. Apps are what they sound like: we consider OpenStack an app, and inside that app we have all the states that make OpenStack work; we consider Ceph an app, so inside the Ceph folder we have all the states that make Ceph work. We also consider each stage of deployment — there are four stages for us. There's pre-provisioning, which is installing the operating system. There's provisioning, where the OS is installed, the RAID arrays are configured, and any extra NIC drivers are in place — we use Solarflare NICs with OpenOnload, and those are configured in the provision stage. After that, we install the actual apps that make the server do what it's supposed to do, so at that point we have, say, Ceph Common installed, Cinder installed, whatever else that server is supposed to run. And the final stage, when we go to production, is when those things are configured appropriately — so instead of the very default Nova configuration, we've got Nova configured the way we want it, with all the Ceph keys to access the RBD backend, and so on.

I'll show you what I mean. Let's take a quick look at a compute node. This is our initial provisioning state — a couple of basic things that everyone gets. We set a default root SSH public key that's stored in case of emergency, if we actually need it. We install OpenOnload, like I mentioned before. We configure the network card, and that's it. That's all we do.
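A provisioning state in that spirit can be sketched as a small Salt state file (YAML plus Jinja). This is an illustration, not the school's actual state — the pillar key, file paths, and package name are assumptions for the sketch.

```yaml
# provision/init.sls — hypothetical sketch of the "provision" stage
# described above: emergency root key, OpenOnload, network config.

emergency_root_key:
  ssh_auth.present:
    - user: root
    # assumed pillar key holding the break-glass public key
    - name: {{ pillar['emergency_root_pubkey'] }}

openonload:
  pkg.installed: []    # kernel-bypass network stack for Solarflare NICs

/etc/network/interfaces:
  file.managed:
    # assumed template path inside the provision app's files directory
    - source: salt://provision/files/interfaces.jinja
    - template: jinja
```

Applied with something like `salt 'compute*' state.apply provision`, this would bring a freshly imaged node to the "provisioned" stage without anyone SSHing in.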
Moving forward, here you can see, like I mentioned earlier, we install the OpenStack client and all of our different clients, because there are times when we need to run OpenStack commands from the nodes themselves, and we want to minimize the footprint on which our secrets exist. Every OpenStack service has its own master password: Nova only knows about the Nova passwords, Cinder only knows about the Cinder passwords. By installing the client on all these nodes, we can keep that segmented really easily. We also install Ceph Common, obviously, and Neutron compute. We currently use Linux Bridge, because it's quick and simple and we didn't have time to implement something more robust, but in the future we plan on moving to OpenContrail as a Neutron backend, which we hear works quite well, so we're looking forward to that. And finally, here's all the configuration information: we push down the updated Neutron configuration, we assign an interface to the provider network, we apply our Ceph keys that let us access the RBD backends for volumes and ephemeral storage — all the things you need to do before you go full production.

That's, in a nutshell, how our system works. We do this not only for the compute nodes, but for the storage nodes, the Ceph nodes, and the controllers. And then we do the same thing for all the VMs that we run. We don't run the controller nodes on bare metal; we actually create a KVM VM that runs just Keystone, or just Neutron, or just Nova. We treat them like containers — they aren't containers, but we treat them the same way. If we need to redeploy, we don't actually change anything: we kill the old one, make a new one in its place, and update HAProxy as appropriate. That's how we approach the whole thing. And again, if you want to look more in depth, take all our code, by all means.
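The final "production" stage for a compute node — pointing Nova at the Ceph RBD backend — might look like the following Salt state sketch. The pool name, Ceph user, pillar key, and state IDs are illustrative assumptions, not the school's actual configuration.

```yaml
# production/nova_compute.sls — hypothetical sketch of the production
# stage: replace the default Nova config with the RBD-backed one.

nova_rbd_backend:
  ini.options_present:
    - name: /etc/nova/nova.conf
    - sections:
        libvirt:
          images_type: rbd          # ephemeral disks live in Ceph
          images_rbd_pool: vms      # assumed pool name
          rbd_user: cinder          # assumed Ceph client user
          rbd_secret_uuid: {{ pillar['ceph_secret_uuid'] }}

nova_compute_service:
  service.running:
    - name: nova-compute
    - watch:
      - ini: nova_rbd_backend       # restart when the config changes
```

The `watch` requisite restarts `nova-compute` only when the managed options actually change, which keeps repeated state runs idempotent.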
That was really all we planned on presenting, but of course we're open to any questions, comments, or concerns you have for us. Sir?

So, at the very beginning we used Mirantis OpenStack, because we needed something to work really, really quickly — and it did work great, really quickly. But as we got more and more hardware, we determined that applying it — particularly the version we were running at the time — across this rather large hardware footprint wasn't going to work the way we wanted. So now we actually live completely upstream. There's no vendor or anything; we just pull the packages down, install them, and go from there. Does that answer your question? Sure.

Yes — so they're going to do both, right? At the end of the day — you've all heard "every Marine a rifleman," and the same concept applies to the Army. Everyone goes through basic combat training. Everyone — everyone that wears the uniform — is authorized to wage war on foreign soil. That's just the way it is. Regardless of what your actual MOS is, at the end of the day you're still a soldier; you're still able to shoot rifles, et cetera. Now, that said, are those soldiers who are technical experts best employed as riflemen? No, of course not. So while they are able to shoot rifles and do all those cool-guy soldier things, their primary mission is to do cyber stuff. Does that make sense?

Yes. Yes, absolutely. So that actually goes back to here. One of the things we want to do is absolutely allow other folks to suggest things that would be helpful.
This opens up the opportunity for folks who used to work at the school, or folks who never worked at the school but have a lot of expertise in a subject we're teaching, to pull a version of what we currently have as courseware, create suggested content — changes, deletions, additions — and push that back up. That would have to be reviewed and discussed by the current course management team for whatever course that is before it would be merged into master.

Can you switch back to my screen, please? Cool. All right, if you haven't seen GitLab at all — this is a change that was made about a day ago. One of our instructors said, hey, I'm going to add these two lines into this Python script — correction, this C code — in order to do something. The green is what they added, the red is what they deleted. Before that code gets merged to master, the team looks at it and says, hey, this is good to go, they approve it, and that's it. It makes it really, really easy to track changes over time.

Yes — so, sir, who do you work for? Sure. Are you talking about the CIA project — C2S, is that what it's called? GovCloud? Sure. So, because we wanted to provide students a consistent experience at a predictable cost, offering them 24/7/365 unlimited access like we do right now with our internal private cloud wasn't economically feasible in the public cloud — it was tough to predict that cost over time. That said, we do want to be able to burst to GCE, to EC2, to Azure when we have unexpected workloads — like large joint exercises, where on-prem doesn't make sense. We want to be able to move that excess temporary workload seamlessly to EC2, Azure, GCE, and whatnot. So public cloud does have a place in our environment; we're just not quite there yet. Oh, you can't hear? Sure.
So folks, if you have any more questions, come to the microphone, because we're trying to record the questions as well. The last question was: hey, Amazon has this great cloud for the government — why didn't you use it? It works for Amazon. And the answer is: there's a place for Amazon; we're just not quite there yet. Sir?

We have a question regarding network security: were those three types of people a help or a hindrance — the network operations and network security people, in whose network your private cloud has to reside? Can you describe your experience dealing with that community?

Sure. The traditional IA staff inside the DoD is very much focused on security controls — NIST, RMF; everyone's familiar with it, or has at least heard of it. IA people are typically not technical experts, so when you try to explain to them that your controls are purely technical, without a whole lot of policy and process wrapped around them like there usually is, that's a tough thing for them to swallow. What we have found is that bringing them into the loop and helping them learn more about your environment — what makes it different, and why it's just as secure as the traditional environment that has a whole team of CND, computer network defense, people — goes a long way. Showing them that you care about what they're trying to do, rather than just blowing them off, helps out a lot.

Absolutely. SaltStack doesn't seem to have the same mind share in the OpenStack community as, say, Ansible or Puppet — what led you down that path? Sure. So with SaltStack — I'm a Python guy, first of all, so that rules out Chef and Puppet almost immediately. It really came down to Ansible and Salt. I looked at both, and yes, it was a somewhat arbitrary decision by me.
While the mind share was with Ansible, I felt that at the time Salt had a more compelling feature set. Was it more complicated? Yes, absolutely. But it had features beyond what I found available in Ansible at that time.

From a tooling perspective on this infrastructure, do you have any kind of sandbox with penetration testing tools — Metasploit or other tooling — that people can use in that environment? Yeah, absolutely. I think I can pull this up real quick — we've got some old templates where we actually installed things like Metasploit via Heat template. Metasploit has a deployable shell script, and you can inject that as the user data when we make an instance, and it installs automatically. Let me see if I can find it quickly — all right, so anyway, the answer to the question is yes. We deploy all those tools, and not as golden images that get saved forever — we make them automated, right from bootstrap.

Another SaltStack question: are you using the upstream formulas that are part of the Salt project, or basically writing your own? We rolled our own everything, yeah. And it's available.

Well, congrats on your success so far — it looks like you've gotten past the bureaucracy very effectively. I'm curious how other parts of the cyber community have been interested in the work you've done so far, as far as collaborative projects outside of the Cyber Center of Excellence. Sure, so there is a large effort underway to build what's called the PCTE, the Persistent Cyber Training Environment. That's a DoD-wide project that provides all services with the same kind of thing we've done here. It's a multi-year project — I don't think it's even designed to have its first iteration available until FY20, somewhere around there. So yeah, we talk with them all the time about, hey, what are your plans, here's what we're doing.
They can borrow bits from us or not — it's up to them. But absolutely, we make sure everyone knows, hey, we've done some of this work already; if you want to take it, by all means take it. One of the big distinctions is that we are not PCTE: we are focused on individual soldier skills. Training individual skills is the purpose of what we created, our system of systems, whereas a lot of those other collective efforts — which have already gotten funding and have groups of folks focused on them — are focused on immediately providing collective training. What's the difference? Really, the difference is in the bureaucracy involved. There hasn't been a whole lot of focus on dictating what's required for individual training skills. And we've actually talked with a couple of other services, whom I won't name; our ability to provide this and scale it out easily has given us a bit of an edge in meeting the needs of the joint community versus other solutions.

As a participant in the OpenStack summits — is there a meetup that all of the defense organizations attend to exchange ideas, or, more realistically, have you collaborated with other defense or national security organizations? I'll take that one. We haven't coordinated directly with NSA. Our infrastructure is completely unclassified — approved for public release, obviously, because we're releasing it publicly — whereas most other IC organizations are not that way, and collaborating their classified code with us is, well, a thing. So they're aware of what we're doing, we talk with them on some basis, but there's no direct code sharing, if that answers your question.

And you repeatedly mentioned GitHub — are you using public GitHub or GitHub Enterprise?
We're using GitLab right now, which also implements Git — they're separate open source projects — but we do plan on mirroring to GitHub at some point in the future; we're just not quite there yet. Thank you.

Can you talk a little bit about your documentation — how you go from having the idea, to putting it into something that's going to become a manual of some form, to a published thing, and the life cycle management of that? Sure — are you talking about course content or infrastructure? Yes, yes. Okay, so as far as course content goes, we follow Bloom's taxonomy, which defines what the objectives in a class should look like. We take those objectives and assign them to instructors, and the course managers say, hey, I need an exercise that addresses this particular objective. They go make it, merge it back to master, we improve it, and we ingest it like that. As far as architecture goes, most of what we did was based on available industry best practices. I looked at white papers from Red Hat and Supermicro and Mirantis and Juniper and said, here's what all these guys did, here's the scale it worked at, and applied that to how we made our own environment.

Okay, have we reached a culmination point in questions? Any other questions? Okay. So if we switch back to the other computer: git.cybbh.space — cybbh for cyber broadband handrail — that's available publicly and has some open repositories. Then cyberschool.army.mil — you're going to need a DoD certificate to log on to that, but if you do, it has a lot of information about the school and points of contact. And if you have any questions specifically for Captain Apsey or myself, our emails are at the bottom of this slide. We've appreciated your attention, and we hope we've answered your questions. If you have additional questions, just find us or email us, and we'd be happy to answer them.
We're happy to be part of the OpenStack community. Thank you. Thank you.