So I'm Todd Myers, and my partner here is Kevin Fitzhenry. We're going to walk through a little bit of history: how we started to build a relationship with the Mesos community, and especially Mesosphere, and where we are and where we're going over the next year or so.

We felt it was very important, as we looked at the type of data we have to be prepared to process over the next 10 or 20 years, to have something abstracting and taking care of the resource negotiation and scheduling for us. So Apache Mesos was a no-brainer in our mind. During that process, over the past three or four years, we had been dabbling in a framework called Scale that was specifically focused, prior to Mesos and even prior to containerization, on doing batch processing. As that evolved, it was a perfect fit to take our NGA Scale open source project and put it into the Universe of deployments, and I believe it is part of the Universe in Mesosphere DC/OS as we speak.

But beyond that, in order for us to get to the scale of processing you see in the bottom right-hand corner, we needed to be able to insert packages without waiting through three or four months of meetings and requirements, with different people competing over different priorities. So we needed a way to bring these things in and have them be isolated and localized for different types of deployments, whether it's an open domain or a closed domain. We have embarked on gathering up the Universe packages, adding some of our own, and we've created an environment that allows us to control the entire local Universe deployment, not attached to the Mesos masters. We're going to go through that, and Kevin will highlight it in subsequent slides.

So why are we doing this? We needed something to come in and basically create a paradigm shift: a shift in how we bring in framework services and containerization, and how we evolve our programs, our applications, and our system designs, so that we have an environment that lets us insert capabilities quickly without going through the regular process of bidding for contracts and vendors competing against each other. We wanted to abstract that and provide a gateway, and for us the abstraction of DC/OS was perfect for providing a unifying environment for our infrastructure. It also gives us a great opportunity to have what we call the large programs of record use an environment without having to create a separate architecture. We have also created an automated process for the Authority to Operate (ATO), which means each customer that comes into the cluster we host doesn't have to worry about that.

So we're laying the foundation, and the foundation is this: we want to greatly enable our developers. We want developers to build programs that join applications, data sources, and phenomenologies, and we're starting to see that we're going to a scale we've never had before. To speak a little bit about metrics: in the past, we would have very low utilization on our machines, single-digit CPU percentages. Over the past year we have seen, on average, between 50 and 75 percent at peak utilization across the processes we're running for certain phenomenologies. This is really important because NGA is a service provider for the Department of Defense and also the Intelligence Community for geospatial intelligence, GEOINT.
And that data has to be leveraged and available quickly. As we go forward into the next 10 or 15 years, the data sets are getting larger; the sensors, the overhead, the satellites are coming online, and we need to have this type of architecture right where the data is coming in. So this is really setting the stage for both DoD and the Intelligence Community. Now I'm going to hand it over to Kevin, and we're going to talk about one of the key components that's really important for us: DC/OS and isolation.

Thanks, Todd. I hope everyone's enjoying the week so far. Last day; I'm ready to get back and apply everything you've learned back at work. So, Kevin Fitzhenry, engineer supporting NDAS at NGA, a contractor. When I get to the next slide, I'm going to talk through some examples of things we thought would probably be beneficial to more than just us, to put out in the community and start conversations about. But before I get there, it helps to give a little background on the design constraints and design goals we knew we wanted to start with from the get-go, to set ourselves up for success. Because it is a rather rigid environment, with a lot of policy and regulation, being in a DoD arena, and for good reason: there are warfighters depending on the products and output we're producing. You don't want unreliable systems getting out into production, or systems that can't protect the data in the proper way.

The slide next to me here puts all that into a few sentences, but the gist of it is: we knew we wanted a repeatable way to deploy a DC/OS cluster, in an accreditable fashion, in an AWS cloud-like environment, using as much DevOps and CI/CD automation as possible, because we're not a deeply staffed team. We knew we didn't have time to continually repeat manual processes, so we needed a way to set those tasks off to the side and execute them whenever we needed to, without reinventing the wheel.

Having said that, it was a big undertaking to do this and go through the accreditation process, to have internal security do the vetting of password requirements and data encryption at rest and all the things that come with being in our arena. So the strategy was to break things into a three-tier stack. The bottom of that stack, which we refer to as tier one, is the cloud services we consume from the Amazon-like environment. The next tier on top of that is our tier two, the actual DC/OS cluster and the sidecar components that enable it and fit it into our unique enterprise configuration. And the last tier on top of that is the tenant tier, tier three.

Knowing where our user was going to be, we want to do everything in those first two tiers, what we refer to as core, to enable and empower the developer, who would be our user as a tenant: to just show up in the cluster with the little bits of code that are unique to their application. To make up an example real quick: say human resources wants to build a new time-tracking application for timecard systems. They're probably going to have a database cluster, Postgres or whatever it is; a container unique to their code, whatever ETL or processing they're doing against the database; and a web front end. They could just show up with those three little bits and inherit all the work we've done on the core tier-one and tier-two pieces without having to reinvent that wheel.
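To make that tenant footprint concrete, here is a minimal sketch of what one of those three bits, the web front end, might look like as a Marathon app definition. Everything in it is hypothetical: the app ID, the container image, and the named VIP for the Postgres service are illustrations of the shape of a tier-three footprint, not an actual NGA configuration.

```json
{
  "id": "/hr/timecard-web",
  "cpus": 0.5,
  "mem": 512,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.mil/hr/timecard-web:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": {
    "DATABASE_URL": "jdbc:postgresql://postgres.marathon.l4lb.thisdcos.directory:5432/timecards"
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "portIndex": 0 }
  ]
}
```

The database itself would come from the vetted package catalog we'll describe in a moment, and everything underneath, the cluster, the accreditation, the enterprise wiring, is inherited from the core tiers.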
The alternative, them wanting to do their own DC/OS cluster, would have security pulling their hair out, because they'd need to go through and do all the reviews for that unique cluster in addition to our cluster, and again for anyone else who wants a cluster. The internal security review process isn't "I've reviewed DC/OS once over here in this corner, so we don't have to do it again for these other guys"; every time you go through and install an application, you have to do that accreditation process. So instead, tenants get to take advantage, in a bite-size approach to the accreditation process, of everything we've done on the underpinnings in the core.

Having said that, let's go on to the other slide with the examples. I won't have time to go through all of these in a few minutes; we're going to save some time for Q&A afterwards, so maybe one or one and a half of these. If I don't talk about one that's near and dear to your heart on this list, I'll be around the rest of the day, so come hunt me down and we'll talk about it.

To expand on that previous HR example: say HR wants to do a time-tracking system. What if there are 48 different ongoing activities just within HR alone, and they all independently decide they want their own DC/OS cluster? They all show up to the internal security team and say, this is what we're going to do. Security is going to go a little crazy trying to wrap their arms around what all these environments are doing, having to review them all independently and then keep track of them throughout their life cycles as they run in a production environment. So, knowing that, we wanted to take the scalability piece that's lacking on the human security side and automate it as much as possible.

So we thank Ben and the team, the federal guys, Regan and Keith, and the support engineers who went through some of the crazy requests we submitted to make changes in the baseline DC/OS offering, plus the documentation updates, so that we could do what we want to do on this second bullet, the local universe. In that example of 48 different clusters, there would be 48 different local universes in this isolated environment. We have to roll our own local universe because there's no internet connectivity to the vanilla setting a cluster has when you first turn it on, which is universe.mesosphere.com, all the community-contributed packages. And security is not going to have time to review every new package that gets dropped into that public GitHub repo and rolled into the universe.mesosphere.com site.

So we went to security early on, knowing this was going to be a future problem, and said, hey, at the same time we'll work with Mesosphere: we want to extract that local universe off of the cluster and set it on dedicated machines on the side, independent of any cluster. That allows us to wrap it with an auto-scaling group and put an ELB in front of it. Now we can farm out that URL to other people in the DoD, consume it ourselves, hand it out to others, and work with security: hey, we know these are the popular items out in the Mesosphere Universe. I think everyone's familiar with the Universe, but if you're not, it's akin to the CentOS yum repos in Linux, for adding packages to your cluster on the fly. So now we can give them a list: we know we want Jenkins, we know we want Postgres and Cassandra.
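Mechanically, pulling that vetted catalog off-cluster looks roughly like the following with Mesosphere's stock local-universe tooling. This is a sketch: the DC/OS version, the package list, and the repo URL are placeholders, and the ELB and auto-scaling wrapper sit on top of this rather than in it.

```bash
# Build a trimmed local-universe Docker image containing only the
# security-vetted packages (versions and names here are placeholders).
git clone https://github.com/mesosphere/universe.git
cd universe/docker/local-universe
sudo make base
sudo make DCOS_VERSION=1.10 \
     DCOS_PACKAGE_INCLUDE="jenkins:<version>,postgresql:<version>,cassandra:<version>" \
     local-universe

# Serve that image from dedicated hosts behind the ELB rather than loading
# it onto the masters (the default docs flow); then point each cluster at
# the one vetted repo instead of the public catalog:
dcos package repo remove Universe
dcos package repo add local-universe https://universe.example.mil/repo
```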
And they took that list of 20, or however many we had, knowing it's just a starting point that's going to change, and cruised through at their own pace that baseline set of applications that would be available internally, to our teams and others in DoD behind the firewalls, to consume. They came back and gave us a thumbs up on those items, and I'm like, great, fantastic. Then we set up a pipeline that grabs those artifacts out of the public GitHub universe repo and creates that big local-universe image, 10-ish, double-digit gigs; pulls it inside; and does all the security vetting required by the accreditation regulations we have to abide by, malware detection, that kind of thing. It puts those on dedicated machines off-cluster, does vulnerability testing on the fly, and fails those unit tests if we see any weirdness going on. Security liked that idea, because now they have a central place they can go to, wrap their arms around, and know: this is the baseline.

Going back to that example of 48 different clusters in HR: if they all had their own little universes, security would never be able to keep up. Developers would be browsing in the user interface, doing one-button single installs, deploying this and deleting that; it would be constantly changing, with new stuff coming into and going out of the public Universe. Now security has a bite-size approach to doing that accreditation, which they like. So, working with Mesosphere, we figured out how to rip that apart and put it off to the side behind an auto-scaling group, and now everyone can take advantage of it.

That also gives us a trigger process. Say another group outside, or even we ourselves, want to add three new things: I see you have these, but you don't have those; how hard would it be to add them? We trigger that process with security and say, here's going to be the delta. They go off, come back, and give us a thumbs up: yeah, we're cool with those. Our pipeline triggers off the changes we commit to an internal master branch in Git, runs through, pulls in the new artifacts, and canaries them through. Meanwhile, developers browsing in our dedicated DC/OS cluster, the one we're farming out to the enterprise for developers to develop and deploy applications on, click in there, and on the back end it's coming over to our centralized universe: a nice little demarcated point with its own ACLs and security groups that filter down to just what it's supposed to be used for, with ports unique to the universe.

So, coming down to the wire on time, I'll just briefly mention the Jenkinsfile deploy pipeline; you've heard me mention pipelines throughout the talk on these couple of slides. Wanting to be good stewards of our own enterprise, we have a group that rolls out highly available, redundant CI/CD tool capabilities for us to consume, and we don't want to reinvent the wheel. So we looked at their offering, and they chose Jenkins for certain functionality. We could have gone with GoCD, we could have done GitLab CI, but Jenkins was there. If you haven't used Jenkins in the last year for pipeline use cases, the way they now recommend doing things, go check it back out.
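For a flavor of those pipeline use cases, here is a minimal declarative Jenkinsfile in the spirit of the universe-vetting pipeline just described. The stage names and the shell scripts they call are hypothetical stand-ins, not NGA's actual pipeline.

```groovy
// Illustrative declarative Jenkinsfile; the scripts it calls are
// hypothetical stand-ins for the vetting steps described in the talk.
pipeline {
    agent any
    stages {
        stage('Pull approved packages') {
            steps {
                // Fetch the security-approved artifacts from the public universe repo
                sh './scripts/pull-approved-packages.sh'
            }
        }
        stage('Security checks') {
            // Run the vetting side by side and fail fast on any weirdness
            parallel {
                stage('Malware scan') {
                    steps { sh './scripts/malware-scan.sh' }
                }
                stage('Vulnerability scan') {
                    steps { sh './scripts/vuln-scan.sh' }
                }
            }
        }
        stage('Publish local universe') {
            steps {
                // Push the vetted image to the off-cluster universe hosts
                sh './scripts/publish-local-universe.sh'
                archiveArtifacts artifacts: 'build/local-universe.tar.gz'
            }
        }
    }
}
```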
They've released probably one-to-one feature parity with most of the other popular integration and deploy tools, all the major offerings, where you can run stages in parallel, capture artifacts, and do all the normal pipeline things. So now we have an internal master branch that holds a Jenkinsfile, and we go in there and define our little pipelines and our unit tests along the way. That allows developers to show up to us with their wants out of the cluster. The plan is, when they want to consume something out of the cluster, they can define it in Marathon JSON, or a Scale recipe, or even a bash script that's just a bunch of DC/OS CLI commands to provision stuff out of the environment. We see them commit into their branch to consume our resources, and that spawns off a pre-agreed-upon process with our security to go and see: this is the change these people want to make. They go off and do whatever they've got to do, just for that tier three of the new tenant that wants to show up, and they come back with a green light: go ahead, thumbs up. Then we merge-request that and spawn off our pipeline.

Now we can do testing in an isolated dev/test stage, the process everyone is moving toward nowadays to get stuff into production, and fail those builds quickly and learn quickly. Maybe we're rolling out a new version of DC/OS, say 1.11, and we've got Scale version 3 or whatever, and other tenants that are part of the known baseline, and we see how changing the rev of one thing breaks others. Or maybe it all works and goes through: we've done our job well defining those unit tests ahead of time, along with the integration tests and feature tests and security checks, and we have that confidence, so why stop it? Let it roll all the way through our environments and out into production.

On my get-off-the-stage bullet, before I hand it back over to Todd to close us out: what's next? We're interested in things like Spinnaker and its new support for DC/OS, because we want to get off the uniqueness of having to write our own platform-y glue scripts; that's more stuff we have to maintain, and the less we have to uniquely maintain, the better we can leverage common tools. And, a common theme here this week, Terraform. Right now we're doing trickery with user data, putting stuff in S3 buckets, and throwing things around to seamlessly drive this pipeline and move artifacts around, and Terraform is a nicer way to do that.

I feel I mentioned the accreditation process earlier, so I'll just do a quick highlight on it. The only way we could get to the other side of the accreditation process with security, and that's not usually a process measured in hours or days or weeks, sometimes not even months, it can span a year or so, was to work with them, and they're very flexible, so kudos to those internal security guys at NGA. They focused on the infrastructure as code in a master branch of a GitLab repo, where we define how our environment over here is going to look. That gives us a CM approach to making changes and knowing the known state, and it gives security a fixed point in our environment, a common place to go, which they like.
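That infrastructure-as-code approach extends down to the tenants' own recipes. A moment ago I mentioned tenants defining their wants as Marathon JSON, a Scale recipe, or a bash script of DC/OS CLI commands; here is a hedged sketch of what that last flavor might contain. The package names, file paths, and overall flow are illustrative, not a real tenant's recipe.

```bash
#!/usr/bin/env bash
# Hypothetical tier-three tenant recipe: a bash script of DC/OS CLI
# commands committed to the tenant's branch; names and paths are made up.
set -euo pipefail

# Authenticate against the cluster (a pipeline would use a service account).
dcos auth login

# Install vetted packages from the off-cluster local universe.
dcos package install --yes postgresql
dcos package install --yes jenkins

# Deploy the tenant's own containers from their Marathon JSON definitions.
dcos marathon app add marathon/timecard-etl.json
dcos marathon app add marathon/timecard-web.json

# Sanity-check what's running before the pipeline promotes the change.
dcos marathon app list
```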
Previously, you'd have a multi-million-dollar DoD contract with a big contractor; they'd go off within their own four walls, design and build the thing, and then show up to NGA and put it out there. And that's day one of security starting their scrub process, going through, looking at the operating systems and the password requirements, and surfacing many, many days later out the other side with a review of how things work. Here, they get to be in tune as we're developing: as the developers are coding the infrastructure, security sees the changes being made and can say, hey, I like what you're doing here, you're closing these off, that's great. And along with that are the active-governance kinds of things, with Ansible or Puppet or whatever, running in the live environment to trigger and notify people when a change has been made that maybe wasn't approved, or was approved; we can catch that. So I'll stop there, because we only have a minute left, and hand it back over to Todd.

Okay, thanks, Kevin. So now we can have fun. What's next for us? Now that we've done all the hard work up front, automated it, and made it very easy for customers to come in and consume at the tier-three level, without having to be down at the tier-two and tier-one levels, we can focus on objectives that really push our functional geospatial management responsibility for DoD and the IC toward these enduring principles here: geospatial processing from a global perspective. We're embarking on a major push to bring data science tradecraft into our agency, and what we see is that the culmination of these provides a different way of thinking about business intelligence, and about how we approach innovation and challenges, so we can bring things in quickly and insert them against our threats, cybersecurity. And I'll foot-stomp the one that's really driving all of this: being prepared for the encroaching sensor phenomenologies that are coming in and will need to be processed against. And ultimately, truly having multi-data-center functionality. I'm really, really interested and excited to see what's happening in the community with respect to that; we're almost there.

We do have one more thing. NDAS has been an effort to streamline, to make things repeatable, and to provide something for people to use, consume, and create with, and to control their own destiny. So today we're officially announcing that we're in the process of open sourcing NDAS. It'll be on GitHub, under ngageoint, slash NDAS, so you can check that out over the next couple of months, and you'll start seeing bits flow in there. Everything we just walked through is completely wrapped up in that open source project. So thank you.