My name is Sean Anderson and I am an application transformation lead at Pivotal, and I'll go into a little more detail about what that means, but we are here to talk about modernization patterns to get your applications to the cloud. If anybody is in the wrong spot, feel free to run away now. I was asked to do the fire exit announcement; have you guys seen this? Basically, note the exits nearest you, and in the event of a fire alarm, be calm and exit. In the event of a water landing, your seats will become a flotation device, which is very unlikely, but we have to say it; please follow the directions of any public safety staff. So who is the Pivotal app transformation team? Basically, we are a group of 45, now 50, engineers who specialize in adapting and creating tools for transforming applications, especially monoliths and mission-critical applications, to the cloud. We've been doing this for more than three years, which in this industry is a very long time. We have now done more than 50 transformation engagements around the world, and with each of these engagements we create a collection of recipes that we put in our cookbooks. They're basically lessons learned and patterns to apply, and how we continue to rinse and repeat this process. We've compiled a lot of these patterns over time; some are the industry-standard patterns you've seen elsewhere, but what I'm going to talk to you about today is all real-world experience. It can also be a very dry subject, so I'm going to enlist an amalgam character called Pete to help walk us through this. Pete basically represents a collection of customers; I figured it might be a little easier to digest some of this content if we have real-world examples and somebody who looks very hipster, like Pete, to walk us through this process. So who is Pete? Well, he is an enterprise architect at Widget Co., which I'm sure you've all worked at.
He maintains an order management system, and in his spare time he likes to read about microservice architecture and secretly dreams of applying some of these techniques to his business. That's something I think most people have in common, but in Pete's world he has some constraints. His order system is a big, hairy monolith. It's something that has been built up over time, in some cases since the 70s. In Pete's case, there are some mainframe components. There are, of course, the ever-popular components tied to the latest hotness from ten years ago, built by people who have since left. And of course you have the system-by-acquisition. So Pete really is concerned with just keeping his job and keeping his system up and running. Now, this system is mission critical. It runs the business. It's also, like I said, monolithic, and it needs to be up: a whole bunch of nines. However many nines you want after the decimal point, that's what Pete's bosses say. And it of course shares a mega database. This is a database that contains the God tables. It's the storehouse of the world, basically, and Pete knows that changing applications means at some point changing the data. So what is Pete's problem? Well, his problem is that the C-suite understands that, hey, we need to do some modernization. We need to be able to stay consistent and keep up with the competition. We have some problems: we can't get software out the door quickly, and the bigger the applications get, the harder they are to maintain. Like any other enterprise, there are QA departments, performance departments, security departments. Everybody has their gates to go through to get to production, and another way of saying gate is something thrown in your way to slow you down. So Pete knows the C-suite really wants to go to the next step, but they don't necessarily understand what that takes.
So the C-suite says, hey, keep the business running, but they also want a holistic approach, and they want to use DDD and agility and allow for innovation, cloud native, DevOps. So what is Pete thinking? Well, he's thinking a lot of things. The first one is, of course, Buzzword Bingo, and he won. But more importantly, he's thinking, these guys are monsters. This is really, really challenging, and they don't necessarily understand how challenging it is. So of course they become sea monsters in his mind, but that's less important right now. The next step for Pete is understanding that we need a direction, a guiding principle, something that is the target. There are processes that we at Pivotal use, like SNAP analysis, event storming, and Boris diagrams. I'm not going to go into those processes today, but the outcome is some sort of guiding principle, a direction from which to approach this problem. And that's really the first pattern here. Pattern zero is: know where you want to go, not necessarily how you need to get there. That north star, that direction, really is the notional architecture. Everybody in this room has experienced the six-month-long architecture assessments where all you do is build block diagrams and UML, and you try to anticipate everything that could possibly happen in your system and plan for it and design an architecture around it. We found that that doesn't typically work really well, because it's hard to get going, people don't enjoy doing it, and most importantly, you don't know what you don't know. The only way to get there is to have some sort of direction that is notional, start moving in that direction, learn and expand as you go, and have the courage to change that direction if it becomes necessary. But for Pete, one of his choices was, well, we could do this all as a greenfield new application.
We could do the big bang approach where, in parallel, we create the whole new ordering system, which from a developer's perspective is easier, but from a pragmatic business perspective it is not, because the business has to keep moving. You need to add new features, you need to keep development going, and very few companies want to invest the capital to actually double your budget to create something to replace something in place. So typically that means we want to take an incremental approach. And by incremental, that simply means we may be taking thin slices of functionality and putting them into production, and slowly, over time, we evolve and strangle off the monolith, or the legacy application, and start forcing workloads onto the new platform. In this case, Pete wanted it running on Cloud Foundry for all the platform benefits that you folks are probably very aware of. So the compass may give us a good idea of where we don't want to go. I kind of think of it as having an arc, just a direction: we've eliminated three quarters of the directions out there, and that's good enough. We know where we don't want to go, so let's go the other direction. For Pete in this conversation, his north star is agile, DDD, and the desire to get as close to cloud native as possible. But this is not always the case. Sometimes you just want good enough. Maybe you get six-factor compliance instead of all 12 or 15 factors, and that's fine. It's more important to just get moving. The next thing he really wants to do is make sure you iterate. Small steps: aim small, miss small. In Pete's case, it was really important to start trying something and have that feedback loop. If things don't work, or as you learn more information, you can start to evolve the architecture. You may evolve priorities, for example, and de-prioritize some part of your application. And then minimize and manage the new tech debt.
Now, that's something that can be really challenging, because what we're really trying to do with these patterns is replace tech debt. This whole monolithic system is really technical debt, and the more we add to it to enable our new features, the more we have to remove later. And finally, we have to keep the business running. So hopefully, as we're going through this, we can start adding these thin slices of functionality and keep the existing business running without fail. Remember all those nines; it's really important to keep those. So how is Pete going to do it? Well, the first step is to try to find seams in his system. The seams, for example, can be API calls, message queues, just logical function calls, RMI, SOAP, REST. Basically, they're places where it either physically or technology-wise makes sense to say, hey, here's a break, here's something that is fairly easy to carve out. But a seam is often located or situated around a capability of some sort. So even if you have applications that are calling each other with native Java calls, for example, you still might have a seam there, because the functionality itself lends itself well to being pulled into a new component. With these seams in this order system, by identifying these thin slices of functionality, these bounded contexts, you're able to pull something out, right? That's the goal: we want to be able to pull out some features and strangle off that monolith. But where do we put that capability? We've successfully identified a chunk of code or a chunk of functionality we're able to pull out, but we don't know where to put it. So we put that capability into our notional domain-driven design target; in this case it may be a microservice, and this service may be very small to begin with. It may basically be a framework that has a happy-path thin slice through your ordering system, for example.
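The seam idea can be sketched in a few lines of Python. This is a minimal, hypothetical example (the names `PricingSeam`, `price_order`, and the order shape are all invented for illustration): once callers depend on an interface at the seam rather than on the monolith code behind it, the implementation can be swapped for the new service without touching the call sites.

```python
# A seam: callers depend on this interface, not on the monolith code
# behind it, so the implementation can be swapped out later.
class PricingSeam:
    def price_order(self, order):
        raise NotImplementedError

# Today: the existing monolith logic, wrapped behind the seam.
class MonolithPricing(PricingSeam):
    def price_order(self, order):
        return sum(item["qty"] * item["unit_price"] for item in order["items"])

# Tomorrow: a client for the extracted microservice satisfies the same
# seam, so call sites never change. (Simulated locally here; in reality
# this would make a remote call to the new service.)
class PricingServiceClient(PricingSeam):
    def price_order(self, order):
        return sum(item["qty"] * item["unit_price"] for item in order["items"])

def checkout(pricing: PricingSeam, order):
    # The call site only knows about the seam.
    return {"order_id": order["id"], "total": pricing.price_order(order)}
```

The point is not the pricing math; it's that `checkout` is identical whichever side of the seam is live.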
But the important thing is that this is now the first step of your new notional architecture. That leaves your ordering system with these dangling connections; the existing order system no longer knows how to talk to your new functionality. And that's where we start to look at pattern two. Pattern two is the anti-corruption layer, and this really is just a pattern that allows you to wire up the order system and fill in the blanks for this missing block. Sometimes anti-corruption layers can be very complex, because you may have interfaces or integrations with other systems that also depend on this functionality. What the ACL does for you is give you the ability to have translation or composition layers, something the old system knows how to talk to using its same technique. For example, if it makes a SOAP call today to some system, then the anti-corruption layer we create will speak that same SOAP language, and it may translate the data, it may do some data composition, it may even make calls under the covers to other systems for you, just for the sole purpose of making it work with your new microservice. It can even do validations, things like that. What that gives you is fewer changes to your dependent systems. If the reason you're modernizing to the cloud is specifically that you can't get changes out the door quickly in your current system, the last thing you want to do is require changes to that current system to enable the new one, because now your changes aren't going in for four or six months and you have to go through all those hoops. So this anti-corruption layer gives you the ability to sneakily inject yourself into the existing system and almost trick it into calling the new system. And what it doesn't know won't hurt it, hopefully.
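The translation half of an anti-corruption layer can be sketched like this. It's a minimal, hypothetical example: the legacy field names (`CUST_NO`, `ORD_AMT`, `ORD_DT`) and the field mapping are invented stand-ins for whatever the old order system actually sends, but the shape is the point: the old caller keeps its format, the new service keeps its model, and the ACL owns the translation and validation in between.

```python
# Hypothetical ACL translation layer: the old order system keeps sending
# its legacy payload shape; the ACL converts it to the new microservice's
# model, and converts the response back, so neither side has to change.
LEGACY_TO_NEW = {"CUST_NO": "customer_id", "ORD_AMT": "amount", "ORD_DT": "order_date"}

def to_new_model(legacy_payload):
    """Translate legacy field names into the new service's model."""
    return {new: legacy_payload[old]
            for old, new in LEGACY_TO_NEW.items() if old in legacy_payload}

def to_legacy_model(new_response):
    """Translate the new service's response back for the old caller."""
    reverse = {v: k for k, v in LEGACY_TO_NEW.items()}
    return {reverse[k]: v for k, v in new_response.items() if k in reverse}

def acl_handle(legacy_payload, new_service):
    # Validation lives here too; the ACL is where this tech debt is contained.
    if "CUST_NO" not in legacy_payload:
        raise ValueError("legacy payload missing CUST_NO")
    return to_legacy_model(new_service(to_new_model(legacy_payload)))
```

In a real engagement the translation would sit behind a SOAP endpoint and call a REST service, but the data mapping would look much like this.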
It also gives you the ability to keep your new services pure and to create very robust tests and API-driven test cases around them. More importantly, that anti-corruption layer can contain most of that new tech debt. Tech debt in this case might be those translations from SOAP to REST, or maybe it's taking a large canonical data object and stripping it down into small pieces that your new system can digest. Over time, as you're moving more and more to production, your ACL will eventually die on the vine. The ACL will be running out there as a production application, possibly inside Cloud Foundry as its own deployment, and over time, as more and more new systems start using your new microservice, the old ones just kind of wither and die. That's the idea behind this strangulation. Over time, any new features, any new services that need your bounded context, your new business capability, can start going through your new API and take advantage of all the Cloud Foundry scalability, stateless transactions, things like that, and you never have to call back into your old system. That sounds very rosy. In practice, it's still very challenging; these are hard problems to solve. But this approach gives you the next level down of how you can start peeling things off. An example of another pattern here, and you'll see this kind of repetition, is that our goal really is to gain control over these seams. If, for example, one of the seams you identify is, hey, this application uses message queues, that's one of the easiest things we can work with: we can start shunting these messages to our new systems. What that allows us to do is say, OK, we have our new application, we've put it into production, we've gone through all of our testing, and we know this app has a queue-based API and does what we want it to.
So now let's just start consuming messages off of that queue and then possibly telling the old system, hey, you no longer need to consume these messages. You may do that even by tricking the old system into listening to a dummy queue, for example. You don't have to make code changes; you're just no longer giving it work to do, and eventually that system will wither and die. It just isn't used anymore. You can accomplish similar things with topics, or if you're using publish and subscribe. A common use case for this event shunting, by the way: we've had a lot of situations where people integrate with their mainframe using IBM MQ Series. Your event shunt may be consuming messages off of MQ and then putting them on RabbitMQ for your new service, and that's totally fine. Really, it's just: how can I be sneaky and make sure that the old system thinks it's working exactly the way it was, while I've translated things to my new north star environment? With topics and publish and subscribe, it can be the same exact thing. We have our new service, and it is subscribed to the topic, so now there are components getting the same message in two places. But at some point we know that the old service puts messages on a queue for downstream consumption. In this case, we simply say, great, let's just simulate that. We'll make sure that our service, at the end of its processing, continues on to that downstream queue. Really, it becomes a matter of black box: what came in before and what goes out is what we care about, so let's just simulate that same process. And now, again, we've shunted off all of the work from that component, that thin slice in your monolith, and it can wither and die. There are other ideas here for when, of course, we don't have message queues. So we may be looking at things like proxies, facades, and adapters.
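The event shunt just described can be sketched with Python's standard-library queues. This is a hypothetical, in-memory stand-in (queue names and the event shape are invented; in practice the source might be IBM MQ and the destination RabbitMQ): the producer keeps publishing where it always did, the old consumer gets pointed at a dummy queue so it simply stops receiving work, and the new service drains the real queue and continues the downstream flow.

```python
import queue

# Hypothetical in-memory shunt. In production the source and destination
# would be real brokers (e.g. MQ in, RabbitMQ out), but the mechanics
# are the same: same messages in, same messages out, new worker in between.
order_events = queue.Queue()   # the queue the monolith has always published to
dummy_queue = queue.Queue()    # what the OLD consumer now listens to (gets nothing)

def shunt(source, handle_event, downstream):
    """Drain events from the old queue through the new service, then
    publish the result downstream, black-box style."""
    processed = 0
    while not source.empty():
        event = source.get()
        downstream.put(handle_event(event))  # new service does the work
        processed += 1
    return processed
```

No code change is needed in the old consumer; reconfiguring it to read `dummy_queue` starves it, and it withers on the vine.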
All of these are very similar concepts, and I'll go through them one at a time. Essentially, we're trying to do the same thing we did with event shunting, but here we take the clients that currently point at a particular service and say, hey, let's inject a proxy. Let's put a piece in there so that when they make these REST or SOAP calls, we change the configuration to point to the new proxy, and the proxy is simply a pass-through to the old service. When we do that, it instantly gives us control. Now everything is going through our new proxy, and we can wire that up and say, awesome: as things come through this proxy, I can decide whether to route them to the old service or to the new one. There are a lot of deployment patterns that let us say, with this proxy, I want to route, say, 10% to my new system, just to make sure I know it's working the way it should, while the other 90% still gets routed to the old. Again, it gives us that capability of controlling the flow of the applications. A facade is similar to a proxy, but usually it has to do a lot more work to make things happen. In Pete's case, he has an application running in a WebLogic environment, and it uses RMI to talk to other components inside of this WebLogic system. That sounds very complicated: it uses RMI, and we can't easily make an RMI call into the cloud, so how do we handle this? Again, we have our microservices already out there, and we can create an RMI facade. An RMI facade in this case is a piece of code, a chunk of application, that implements the same interface that the existing RMI target does, and we point the old system at this new RMI component. So really, you're just throwing in a decoy.
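Backing up to the proxy for a moment, the 10%/90% routing idea can be sketched like this. It's a hypothetical example (the factory and request shape are invented): hashing a stable request key, rather than rolling a random number, keeps the split deterministic, so a given customer always lands on the same side while the canary runs.

```python
import zlib

# Hypothetical canary-routing proxy: send a fixed percentage of traffic
# to the new service, the rest to the old. Hashing the request key makes
# the routing deterministic per key (sticky), which simplifies debugging.
def make_canary_proxy(old_service, new_service, new_pct):
    def proxy(request):
        bucket = zlib.crc32(str(request["id"]).encode()) % 100
        if bucket < new_pct:
            return new_service(request)   # e.g. the 10% canary slice
        return old_service(request)       # the other 90% stays on the old path
    return proxy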
Now this RMI service, it may be necessary to have him running inside of the web logic environment, for example, and that's fine. Maybe it's an EJB that you deploy out there, that the EJB has code into, then make REST calls on your behalf. So that's all fine. It's just now you have that facade where, again, it's the place where your tech debt lives, and it's doing this transformation. One thing you'll note is, as we're doing this, we're able to now start injecting instrumentation and monitoring and logging in each of these patterns, each of these layers. You're starting to see that now that we have control, we can actually learn and understand more about how our applications use, because most of the time people's big, hairy monoliths don't have log aggregation and good instrumentation and monitoring. So we're getting control there. The next one is an adapter, which is also similar to Proxy and Facade, where in this case, let's say Pete has some downstream systems that he makes calls to, or maybe they're dependent systems, he's making a call to a customer application or even a CRM tool, something like that. And our microservice then also needs to consume data from that, similar to like a database. So our adapter can be put in place that speaks the same language as the downstream service, and then it can convert our more native REST calls or whatever we decide is our application protocol. And that adapter then is really just a wrapper around this external service. And the same benefits apply here, where that adapter has the consolidated logging, it's got all of the instrumentation. So now we're able to get some analytics out of the application as a side effect of being able to gain control here. So the last pattern I think I'll have time to talk about here is gateways. And gateways are similar to Proxies in that they will take traffic from an external source and route it through to your back end services. 
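The adapter just described can be sketched like this. It's a hypothetical example (the CRM field names and the `CrmAdapter` class are invented): the adapter speaks the downstream system's dialect on one side and our service's model on the other, and because every call funnels through it, logging and instrumentation come along as a side effect.

```python
import logging

# Hypothetical adapter around a downstream CRM lookup. It wraps the
# external call, converts the downstream format into our model, and is
# the single choke point where logging and metrics live.
log = logging.getLogger("crm-adapter")

class CrmAdapter:
    def __init__(self, legacy_crm_lookup):
        self._lookup = legacy_crm_lookup   # the downstream call we wrap
        self.calls = 0                     # stand-in for real instrumentation

    def get_customer(self, customer_id):
        self.calls += 1
        log.info("crm lookup for %s", customer_id)
        raw = self._lookup(customer_id)    # downstream's own format
        # Convert to the model our new service expects.
        return {"customer_id": raw["ID"], "name": raw["FULL_NAME"].title()}
```

Swapping the CRM later, or adding caching or retries, becomes a change inside the adapter rather than across every caller.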
Zuul, one of the Netflix OSS stack applications, is very popular for building your own gateways; there are also Apigee, MuleSoft, and Boomi. Basically, what we're doing with the gateway is this: where a consumer today may be talking to our system directly, or through our existing API layer, we may want to do the same thing we did with proxies and have that consumer talk to a gateway. You may have applications that already use a gateway like Apigee to secure endpoints and control access, but if not, we can plug in a gateway here, which means we're now intercepting these calls from the consumer. So we've gained control of that seam that we identified, and once we have control of it, we can let the gateway do the routing. Think of an end user or an external customer: say you're a cell phone provider and you have partners that you enable to access your systems to register new customers, things like that. Those APIs can't change for the customer. So we do the routing in the gateway layer, and our back end is now isolated, and you can continue your development and fast iteration behind it. In summary, the way I look at these patterns is similar to when you're driving and you notice they're widening the highway at an overpass, say to three lanes. What they typically do is build a wider bridge next to the traffic, then start routing the traffic onto the new bridge, and then they can tear down the old one. It's the same kind of thing. I've had people say to me, hey, it's like a heart transplant: you pull the heart out and you have to route the blood through a machine. I don't like using that analogy because it seems a little self-serving; we're not that important to compare ourselves to heart surgeons, but you get the idea.
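The gateway routing described above can be sketched like this. It's a hypothetical example (the routing table and paths are invented; a real deployment would use something like Zuul or Apigee): external partners keep calling the same stable paths, while a routing table behind the gateway decides which back end actually serves each one, so back-end refactoring never leaks out to consumers.

```python
# Hypothetical gateway: the external API stays fixed while the routing
# table is edited to shift paths from the monolith to new services.
def make_gateway(routes, default_backend):
    def handle(path, request):
        # First matching path prefix wins; everything else falls through
        # to the default back end (typically the monolith, initially).
        for prefix, backend in routes.items():
            if path.startswith(prefix):
                return backend(request)
        return default_backend(request)
    return handle
```

Moving a capability then means changing one routing entry, not every partner integration.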
So really, in summary, we're trying to gain control at the seams. We're using anti-corruption layers, which can be a facade or a proxy or an event shunt or something built specifically for translation. That helps us keep the tech debt isolated, and it helps minimize the probability that we have to change the system we're trying to strangle. Over time, your monolith gets strangled; it simply stops being used. You don't even necessarily have to pull code out. You just stop using that chunk of the application, and your dragon is tamed. And finally, the ACLs can just die on the vine over time because they are no longer used. In Pete's case, his result was, hey, we've got applications in Cloud Foundry. Most of the time, you can start within a couple of months and actually get large portions of your monolith strangled; our goal is a week or two to get usable code into production. But the bottom line is that this is a very hard problem to solve. The key to Pete's success was that he kept an eye on his guiding principles: we're going this direction. Sometimes you do things that look ugly, and there's a little dirtiness in there; let's just try to contain our dirtiness to some place where it's not going to bite us, or somebody else, down the road. And of course, be agile: iterate, learn, adapt, and break things into manageable chunks. But in Pete's case, the biggest key to his success was his company's commitment to making this work. And that's why the iteration is very important, because you can prove to those sea monsters in the C-suite that, hey, we know what we're doing. We actually have repeatable feedback, quickly; you don't have to wait eight months. And that's how Pete became successful. And that is it. So Pete's happy. Thank you.