We're going to go ahead and get started, so if you want to grab a seat, there's still plenty up front. My name is Mark Carlson. I'm the CTO of a company called ECS Team. We provide systems integration services for Cloud Foundry, among many other things. Today, one of my customers is going to be giving a presentation from DigitalGlobe about what they've been doing in looking at platform-as-a-service offerings, including Cloud Foundry, in their space, which is satellite imagery. So I want to introduce Mike Wrasbinski, who's a director of cloud architecture at DigitalGlobe. He'll be followed in a few minutes by Steve Wall, who's one of the architects and developers for ECS Team. So welcome, Mike Wrasbinski.

Hello, all. Everybody hear me OK? OK, so let me give you a quick game plan for the presentation. To give you a little bit of grounding in what we're all about, I'm going to give you a quick description of DigitalGlobe, talk about some of the stuff that we do, and talk about the use case that we're trying to tackle. Then we'll get into, like Mark was saying, a bunch of discussion about the technical and the cultural challenges that we've faced in getting through this stuff. Hopefully that gives you some info that travels well and that you can take back and apply back home.

How many of you use Google Maps? OK, everybody. And how many of you have gotten in there and punched the button that says, show me satellite imagery? I'm expecting everybody's hand to go up. Come on, everybody's hand should go up. So if you've done that, then you've interacted with products from DigitalGlobe. We fly satellites. We collect high-volume, high-resolution, high-accuracy data from our satellites. We pull it down to the ground, and we do a bunch of digital image processing to create all sorts of different kinds of imagery.
And then we blow that out across the world so that folks like you can use it. We currently fly a constellation. On this slide, the satellites shown in color are current and active; the ones in black and white have recently been retired. So we have an archive of about 15 years of imagery that will show you how the world has changed over the last 15 years. If you took what we have in the archive right now and laid it down on top of the Earth, we'd cover the entire Earth's surface eight times over. Since most of the pictures we take are of land, you can imagine that's actually a deeper stack than that. On a monthly basis we gather an area roughly equivalent to about 60% of the Earth's surface, and the map you're seeing up there tries to show you how that's distributed.

In our latest effort, back in September, we launched a new satellite that we call WorldView-3. From 400 miles up, it can do a pixel that's about 30 centimeters on a side. Very, very cool instrument, and it's in production doing commercial imagery and all that kind of stuff. And just like most of you, about 10 minutes after the thing is launched and you're all excited and happy, you get to work on the next one. (We go with ULA, and we can talk about that later.) So WorldView-4 is actually a satellite that we acquired as part of the merger between DigitalGlobe and GeoEye; it used to be called GeoEye-2, for those of you that keep track of this space. We're looking at the intro of WorldView-4 as an opportunity for us to implement a new ground system that has a bunch of the architectural characteristics that we wanted to put into our existing ground systems but couldn't, because we are a 24/7, 365 kind of operation. So we're looking at this as a great opportunity. "So Mike," I hear you scream, "what's a ground system?" I'm glad you asked. This is a ground system.
Basically, the idea is that you've got satellites circling the globe. You have ground terminals that are used to send and receive commands and receive imagery from those satellites, and backhaul that brings that imagery back onto our networks. If you look at the core of our business, it really fits into the six domains that you see up on the board there.

Ordering is: I want to tell you what kind of imagery I want and what area of the globe I want you to go take a picture of. Mission planning is: there's a whole lot of people asking for imagery, and there's a whole lot of clouds over the Earth, so let's figure out when and where we can take the pictures and get you what you need so that you can go avoid evil and do good. Mission control is the part of the system that actually communicates with the satellites and gets the data back to our network. Inventory and content management you can think of as a great big card catalog, one that has all of the metadata associated with the image strips we collect, plus a great big bucket of data that holds the bits themselves. Those sit around until we decide we need to do a production run: we take those raw bits and spin them, do all that digital image processing I was telling you about, and create various kinds of imagery, which we then mix together with all sorts of other data to create data products that you might consume. And delivery is just saying, OK, once we've created some of that stuff, we need to get it to the people that want it. So we might dump it into an S3 bucket, we might use FTP or Cygnet to deliver it, or we still write hard drives and ship disks around the globe.

So that's the business. That's what we do. And the ground system is what we're trying to create.
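The card-catalog idea above, metadata in one place and the raw bits in another, can be sketched roughly like this. Every name and field here is invented for illustration; it is not DigitalGlobe's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StripRecord:
    """One 'card' in the catalog: metadata about a collected image strip."""
    strip_id: str
    acquired: date
    satellite: str
    cloud_cover_pct: float
    blob_key: str  # where the raw bits live in the big object store

# The catalog itself: metadata only; the heavy data stays in the bucket.
catalog: dict[str, StripRecord] = {}

def register(rec: StripRecord) -> None:
    catalog[rec.strip_id] = rec

def candidates_for_production(max_cloud: float) -> list[StripRecord]:
    """Pick strips clear enough to be worth a production run."""
    return [r for r in catalog.values() if r.cloud_cover_pct <= max_cloud]
```

The point of the split is that a production run only pulls raw bits out of the bucket for the strips the catalog says are worth processing.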
And so we got into this thing where we were looking at the way we wanted to build this new ground system and the way we wanted it to work, and platform as a service was an obvious candidate. The whole idea is that we went through and created a set of knockout criteria that said, if we're going to use a PaaS, this is the way it should work. Then we went out and evaluated the various offerings available on the market for providing that function, and Cloud Foundry was a pretty obvious winner in that regard.

So then we went out and grabbed the open-source and the Pivotal versions of Cloud Foundry, pulled them into our labs, and pressure-tested them to make sure that they really conformed, that they really did live up to what we had learned and what we came to believe about how they should function. A couple of things about that. In doing this, we took a bunch of applications and services that we had and ported them into the Cloud Foundry environment. We ran it on OpenStack, ran it on VMware, ran it on AWS. We tried different languages; we've done Java and Ruby and Python. And in all cases, an application that runs on any one combination has pretty much run on all the other combinations without any changes. It's been pretty slick. Once we got through all that, we could go back to management and have the conversation about what our approach is, our staffing models, and how we're going to accomplish this stuff.

OK, so a couple of quick things. In order to be able to make this thing work, we're moving a lot of people's cheese at DigitalGlobe. There are a lot of places where people are having to change: find new technologies that they're going to use, come to grips with new approaches, all that kind of stuff. And a couple of things that we've done.
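Part of why the same app runs unchanged across those combinations is that Cloud Foundry hands every application its port and its bound-service credentials through environment variables (`PORT` and `VCAP_SERVICES`), so nothing is hard-coded per environment. A minimal sketch; the "imagery-db" service name in the test is made up:

```python
import json
import os

def app_config() -> dict:
    """Read platform-supplied settings instead of hard-coding them.

    PORT and VCAP_SERVICES are standard Cloud Foundry environment
    variables; the platform populates them the same way whether the
    underlying IaaS is OpenStack, VMware, or AWS.
    """
    port = int(os.environ.get("PORT", "8080"))
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    return {"port": port, "services": services}
```

Because the app only ever looks at its environment, moving it between installs is a matter of pushing the same code somewhere else.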
First of all, challenge the status quo every place you go. There have been a lot of conversations going on about that, and frankly, the technical team has been much more receptive to it than our management; it's taken a lot more time to convince them that this is a worthwhile gig. We've realigned all our scrum teams so that they are maximally autonomous and don't have a bunch of dependencies team to team. They're not focused on technology; they're focused on function, and they control their own destiny in terms of creating and delivering function to the business. And we've done something else, and you'll hear more about this as we go: we've created a suite, a progression, of eight end-to-end tests. The first one we affectionately called the Hello World test. It was essentially a distributed Hello World that had a bunch of services talking to each other, but doing it on top of the PaaS and our orchestration and a bunch of that kind of stuff.

In terms of what we're running right now, just to talk a little bit about the stack: rapid development with minimal drag. We're using the Scaled Agile Framework. Continuous integration: we're a DevOps shop, and we're using the Cloud Foundry hooks to be able to do that. We're using Jenkins for automated deployment and our pipeline. Something that was a surprise to me, but shouldn't have been, is the whole idea of environmental parity. The fact that the buildpacks that have been created can be deployed to all the different environments and testing environments that we want, and that the same code functions the same way in all situations, has been a big win. Our aim is a burstable environment: the ability to have both on-prem and off-prem and to be able to ooze back and forth between those two environments. We also have to support completely air-gapped environments.
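The distributed Hello World test described above, a handful of services calling each other over the platform, might look something like this in miniature. The two-service shape, the names, and the ports are all illustrative, not the actual test:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    """The back-end service: answers with a bare greeting."""

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello")

    def log_message(self, *args):
        pass  # keep runs quiet

def start_service(port: int = 0) -> HTTPServer:
    """Stand up the back end on a free local port, in a daemon thread."""
    server = HTTPServer(("127.0.0.1", port), GreetingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def front_door(backend_url: str) -> str:
    """The front service composes its reply from the back end's answer."""
    with urllib.request.urlopen(backend_url) as resp:
        return resp.read().decode() + ", World"
```

On the platform, each piece would be its own pushed application; the test is simply that the message survives the hop between services.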
We have customers in various places around the world that we give a little mini ground system to, and they can actually have shutter control. Those systems run completely autonomously. And we're doing microservices and a bit of pragmatic architecture; Steve will talk a little about a three-layer setup that we've got.

I guess I'll do this and get off, but: why Cloud Foundry? The things Cloud Foundry gives us are the ability to run all of this stuff on multiple platforms and migrate back and forth at will, support for all the different languages, and the amount of hygiene we don't have to worry about because Cloud Foundry provides those facilities. That's terrific.

There are some downsides. For us these are not near-term downsides, but they're things we have to come to grips with as a group. One of them is that, especially for a bunch of the commercial offerings, they are priced by the instance, and an instance is a running instance of an application. Every time you replicate it, the bell rings and you get to pay again. And we are actively pursuing this microservices strategy, so the idea is that we're going to see a lot more instances of applications as we carve up the ones we've got into smaller pieces, and that costs us more. That's one of the reasons we are actively pursuing open-source Cloud Foundry as well as some of the commercial offerings, so that we have the option of being able to go wherever we want.

We've had a lot of feedback from various folks in the group: why are we doing this PaaS thingy? It seems like it's bleeding edge. Why are you doing this? And our answer is: because it makes us fast. We don't have to launch tomorrow; we've got about a year. And the fact is that we feel we can work out whatever kinks we've got in the systems, and our development will be that much faster because we're on the path.
For people that have their heads wrapped around the current system, this is actually simplifying our environment. Although they've paid the brain damage to understand what the existing system does, it's still kind of a jump to have them relearn the way it all works. A lot of people have said, why don't you do a toy project? Do something small; don't do something that's urgent or critical to the business. But the way we've mitigated that is through these end-to-end demos that I was telling you about: building function over time and demonstrating it, standard agile-practices kind of stuff, right? The code we create accretes and adds on, and we're actively demonstrating that. And if in fact we run into a problem, we can back this stuff out and go to a standard virtualization environment. And the last one is: it's open, but is it really open? I guess the thing I'd say is that there are a lot of vendors in the Cloud Foundry Foundation that have a reputation for vendor lock-in. Because of that, we are doing our development on the open-source version of the code, so that we know we're not vendor-locked and we have the ability to move wherever we want. That's my gig, guys. Sir?

So Mike talked a little bit just now about the cultural challenges that we encountered, but there are also technical challenges. DigitalGlobe has been around for over a decade, and in that time they've built out a lot of really good applications, but the architecture that was used initially is more of a monolithic application. With a monolithic application, there are challenges in how fast you can roll that thing to production: you make a small change to a monolithic application, and you really have to do a lot of good regression testing to make sure your change didn't break the rest of the application.
So that was one of the things they wanted to get away from, because it moves features and functionality into production slowly. Changing the environment from a whole bunch of monolithic applications into a whole bunch of microservices is a major paradigm shift for an organization, not only from a development perspective but also from an operations perspective, because they're really used to: okay, we have our big application, we're going to roll that into test, we're going to test that application, we're going to roll that into production. All the systems, the processes, the procedures, everything was tooled for that environment. Now you're going to a microservices architecture. It changes everything in an organization, not only from a cultural perspective but from a technical perspective, and there's a lot of fear in that. So that's something we've had to deal with along the way.

One of the things that we did is we defined architectural patterns, because not everything fits nicely into a PaaS. We came up with different flavors of applications, which gave us a vocabulary when we were talking with the community at large as to where an application resides. Ideally, applications would go into the top layer, which we call the pattern-one applications. These are the applications that fit well into a PaaS and that do conform to a microservices-type architecture. But sometimes there's the pragmatic reality of a situation where we're going to launch a satellite and it's going to be up there in a year; you can't do everything. Sometimes you want to bring over some legacy applications and put them into VMs. That would be a pattern-two-type application. And for those pattern-two applications, one of the thoughts is that you would make proxies in the pattern-one, microservice layer.
So you would have this proxy interface that goes down into the VMs, where we don't want to throw that code away yet but we have a path to migrate it into a pattern-one application. And then there are the pattern-three applications, where you need every millisecond of compute power and the latency that a virtual environment causes is unacceptable. There's all this data coming down, there's a lot of bit-bending on the images that happens there, and you need that high-performance compute cluster. Those types of applications just need to reside on bare metal. So not everything fits well into a PaaS, and I think that's something an organization really needs to accept. It's not an all-or-nothing proposition; you have to look at your environment, decide what's pragmatic and what fits well where, and then make that decision.

So, our path to adoption. We set off on this trail, kind of blazing new ground here; that's what you see with the wagons. This is a place this organization has never been before. What we did initially is we developed one service, an eventing service. There are eventing services out there, but what we said is, we want to make a very lightweight eventing service that does just what we need it to do: publish an event, distribute it out to consumers. So, taking a very agile approach (YAGNI, you ain't gonna need it yet), we did a very lightweight service, but this service served a couple of purposes. It got us into developing what a microservice was, and actually one of the cool things, when we were developing the eventing service, is we realized it's not one service; it was three microservices. So we now had three microservices, we had a template for how to develop microservices, we got it into Git, we got it into Jenkins, we got our deployment pipeline started.
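At its core, a publish-and-distribute eventing service of the kind described above can be this small. This is a sketch of the idea, not the actual implementation (which, as noted, ended up as three microservices):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Deliberately minimal eventing: publish an event, fan it out."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a consumer for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Deliver the event to every consumer; return how many got it."""
        for handler in self._subscribers[topic]:
            handler(event)
        return len(self._subscribers[topic])
```

The topic name below is hypothetical; the value of keeping the service this lightweight is that every microservice in the system can depend on it without dragging in a heavyweight broker.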
So we figured some things out. We didn't figure it all out, but we figured enough out that we said, okay, we want to bring on a small team of developers, and that team of developers we called the pioneers. That small team came on, they started developing a few more microservices, we got a business process management layer in to orchestrate the workflow, and then we had the microservices use the eventing service to send events through the system. That was what Mike alluded to as our end-to-end-one, Hello World type of demo. So we got that initial build-out done, and now we're in the process of rolling it out to the company at large.

Here's where we are currently. We created the template microservice, and we created an OAuth security framework leveraging UAA. Some of those microservices in the end-to-end-one demo did have security in them; we put OAuth2 in there to start bringing security into our architecture from the beginning. We leveraged those microservices, and we've done two end-to-end demos now: we built out our end-to-end-one demo, then put more into it in the end-to-end-two demo. We extended the Java framework to also use Python and Ruby. Our Jenkins pipeline, our continuous delivery pipeline, is maturing. It's not fully there, but we've got it to a point now where we're deploying to Cloud Foundry, we're doing a blue-green deployment, and we're executing some automated tests. That right there is actually something interesting to talk about: the bedrock of everything we're doing is automated testing. You can't do what we're doing here with manual testing; you're just not going to get the velocity through your pipeline that you want, so you have to have a really good suite of automated tests. And then we're doing the training workshops.
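The blue-green step in that pipeline can be sketched as a router that keeps two copies of the app and only flips traffic to the new one once automated tests pass against it. Names and shape here are illustrative:

```python
class BlueGreenRouter:
    """Two copies of the app exist at once; traffic points at one of them."""

    def __init__(self, live: str, idle: str) -> None:
        self.live = live   # the copy currently taking traffic
        self.idle = idle   # the freshly deployed copy

    def deploy(self, smoke_test) -> str:
        """Run tests against the idle copy; flip routes only on success."""
        if smoke_test(self.idle):
            self.live, self.idle = self.idle, self.live
        return self.live
```

The payoff is that a failed test run leaves the old copy serving traffic untouched, which is what makes continuous delivery safe enough to run at velocity.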
So we got all our training down, we got the governance down, we got the patterns down, we got the templates down, and now we're ready to unleash this on the organization as a whole.

One of the really neat things that we saw was the move to self-provisioning, and that really helped the organization a lot. In the legacy environment, all the organizations were very tightly coupled, so if you needed to get something done, you had to coordinate among a lot of different organizations. The application team identified that we needed an environment built out; now we have to get operations involved. We need software released into a different environment; now we need to get the release team involved. While a lot of this was automated, the coordination between organizations was time consuming, and in a continuous delivery environment that's just unacceptable. When we moved to self-provisioning in the new environment, there were clear lines of delineation between the organizations, and it gave the application development team the power to do what they needed to do. If we needed a new environment built out, we had access to the compute resources to build it.

So when I did the event service, I said, hey, I need to do some performance testing. Normally that would take a coordination effort across organizations to get a performance environment going. Here I had access to a pool of compute resources, and in an afternoon I set up a performance test environment and started hammering my microservices. Of course, on the first test it just fell on its face, but I was able to see that I needed to make my instances a little bit bigger so that they could handle the scale of events coming in, and I was able to do that in an afternoon. Normally that might take a week or two, or who knows how long; maybe I wouldn't have even gone down that path.
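That afternoon performance test amounts to something like the harness below: point a tight loop at a service entry point and see what rate it sustains. This is a hypothetical sketch; the real test ran against deployed instances over the network.

```python
import time

def hammer(handler, n_events: int) -> float:
    """Fire n_events at a service entry point; return events per second."""
    start = time.perf_counter()
    for i in range(n_events):
        handler({"event_id": i})
    elapsed = time.perf_counter() - start
    return n_events / max(elapsed, 1e-9)  # guard against a zero reading
```

If the measured rate falls over, as it did on the first run, you scale the instance size or count and run it again; self-provisioning is what makes that loop take an afternoon instead of weeks.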
It was just something that came to me. I had the ability to do it, and I went and did it. That was a really cool experience for me, coming from years of scars of having to go to all these different organizations and coordinate these efforts. So that was pretty powerful. So right now we just want to open it up for any questions you may have. Yep.

Just curious how much of your Cloud Foundry deployments have required custom development, for things like buildpacks, versus how much is just out-of-the-box Cloud Foundry? And the other question, slightly separate: how do you approach the whole private and public hybrid cloud approach of being able to mix and match with your scalable architecture?

So there was a question on buildpacks and how much customization we had to do versus just taking the canned version. With the buildpacks we had to do quite a bit. Not a lot of customization, but for all the buildpacks we are using, we created offline buildpacks: you take a fork of the standard buildpack, bring it down, and then make the tweaks, because as Mike alluded to, these are being deployed to air-gapped environments, which means no internet access. So we'd bring them down, there were some security requirements we needed on them, and they're in our repository; we build them and deploy them out into Cloud Foundry. It works really nicely.

Do you know, Mike, the OpenStack distribution? No, I think we're on Juno. That was one of the evaluations we did in terms of which PaaS we were going to be using. There were a number of differentiators, but the prime differentiators (you guys might chime in here) really related to the amount of work required to recover from failures and to keep the system up and providing services, instead of somebody in there with a wrench trying to get it fixed and on its feet again.
So the question is, do Cloud Foundry and OpenStack have to go together? Cloud Foundry requires an IaaS like OpenStack or VMware or AWS. You don't deploy Cloud Foundry on its own; it needs to ride on an IaaS. So you can do Cloud Foundry on OpenStack or VMware or AWS. Which is one of the powers of Cloud Foundry, right: you're IaaS-independent. But when we deploy onto Amazon, we expect to go straight on top of AWS and use the features of that infrastructure as a service.

Question? Does that tie back to my other question? Probably. We're running the same services on multiple platforms. We're on prem right now, but that doesn't preclude going to AWS; we've done an installation there. And as we expand beyond the basic factory, we've got a bunch of things we're doing related to big data analytics associated with our stuff, and those are a combination of on-prem and cloud-based. As long as we've got connectivity, we can move equally well between the two. We stage data to both.

Okay, we should probably wrap it up here. That's what we've got for you guys. Have a good conference. Thank you.