Hi, everybody. Hi. Lunch -- wake up! Good. I know you're all going to be in a food coma in a few minutes, so we have to go, and we have to go fast. There's no way we can possibly complete this talk in 30 minutes, but we're going to try anyway.

Very quickly, me: Matt Stine. They call me a principal software engineer, or whatever that means. I do a lot of stuff at Pivotal around Cloud Foundry and Spring, and I spend a lot of time on airplanes, as it turns out. Managed to get here on time, which is good. I wrote a little book that you might have seen floating around. If you don't have a print copy of it yet and you want one, just talk to the fine people at the Pivotal booth; I think they have a few hundred of those. This is sort of my cloud-native application architecture propaganda that you should all consume and believe and go do.

So, Andrew Shafer gave a talk yesterday -- who went to Andrew's talk? Okay. So Andrew's talk was, without doing this on purpose, a great setup for my presentation. He actually helped me craft the abstract for this presentation, and these are his words. He said: now that you have Cloud Foundry, what are you going to do with it? It's kind of an important question, right?
We have this platform. How can we use it in a way that's actually going to give us what we want? The first thing that I would say is: if you're smart, don't try to do microservices on day one. You should actually start with a monolith. Honestly, if I could build everything I wanted to build as a monolith, and do continuous delivery, and be agile, and innovate, and get all of that right, I would, because it would be a whole heck of a lot easier than doing microservices. Microservices are hard stuff.

So start with a monolith while you're small, while you can, when you're building something new. This is where you want to begin: small teams, small monolith. Make it twelve-factor. If you have something small, it doesn't matter whether it's a microservice or a monolith or anything else -- making it twelve-factor, and having that contract between your application and the platform such that they get along well with one another, is relatively easy to do. It's a lot harder to take something that's big and has been around for a long time and turn it into a twelve-factor app, and I'm sure many folks in the room have felt that pain.

But we all know that eventually things get bigger, and we don't really have a good answer for the point at which something is too big. In fact, "microservices" is a terrible word. When we talk about microservices, we see this "micro" thing and we start to think: ooh, something that I can count. I can tell you how big it is, I can measure it, I can put a number on it, like lines of code, or number of operations, or things like that. Don't do that. That's all wrong.
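One concrete piece of the twelve-factor contract mentioned above is factor III: configuration comes from the environment, never from hard-coded values. Here is a minimal sketch in plain Java; the variable names and fallback are hypothetical, though Cloud Foundry does set a `PORT` variable for each app instance.

```java
// Factor III of the twelve-factor contract: read config from the
// environment rather than hard-coding it. Names here are hypothetical.
public class EnvConfig {

    // Returns the value of an environment variable, or a default
    // when the platform has not set one (e.g. when running locally).
    public static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        // Cloud Foundry sets PORT for every app instance; locally we fall back.
        String port = get("PORT", "8080");
        System.out.println("listening on port " + port);
    }
}
```

The point of the contract is exactly this: the same artifact runs unchanged on your laptop and on the platform, because the environment supplies the differences.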
Don't do that at all. It's not quantitative micro; it's qualitative micro: role, responsibility, capability, focus, scope. Those things are really hard to measure, because every person in this room, in every context you come from, is going to have a slightly different answer to the question of how big is too big -- the point at which it's time to start decomposing this thing into something that we're calling microservices.

So what are microservices? "Microservices" has a lot of definitions floating around, and lots of hype. I like Adrian's definition best. He uses the dreaded words "service-oriented architecture" in his definition. It's like -- I thought microservices were different than SOA? We're not doing SOA anymore; SOA is bad, microservices are good now. No -- that's wrong. Actually, if you go read the first five paragraphs or so of the Wikipedia article on service-oriented architecture, there's really great stuff in there, and when you read about what we were trying to achieve with it, it's actually all very good stuff. And then we went and implemented it, and we got it kind of wrong, because we missed the two things on either side of it that Adrian adds in his definition. One is the idea of loose coupling. Because what did we do when we did SOA?
Well, we went and found something really big to couple ourselves to, which was the ESB, and we put all the important stuff inside the ESB. Then any time we needed to change something, we had to go through the ESB, and all the enterprise architects clung to the ESB like barnacles and started putting up these barriers to change. It doesn't matter where your monolith is -- whether it's an application or a bus. We were talking this morning about how an F5 load balancer can become a monolith as well. Any time you start putting all of your stuff in one place, you can get into that situation.

So what does loose coupling mean? It means that I can deploy my service any time I want to, and I don't have to ask you if it's okay. Now, that sounds hard, and it is hard, but if I could get there, I think we would all agree that that would be very powerful. And one of the ways we can get there is this idea of a bounded context.

What's a bounded context? Eric Evans started writing about this in domain-driven design twelve years ago. Who's read the Domain-Driven Design book? Okay, you don't have to read the whole thing. Start at about, I think, chapter 13, where he starts talking about this thing called strategic design. You read about three chapters of that and you realize this is a textbook on how to do microservices -- except he never uses the word "microservices." Because honestly, what we're trying to do is just take principles and concepts that actually worked very well, and that arguably belong to the small set of things we can say are true about software engineering. There are very few things we can say are true versus false; there are a handful of principles out there, and these start to feel like they're in that category.

So you look at a domain. You know that if a domain gets too big, you can go from one side of it to the other, take the same term, and try to understand it.
What does that term mean in all of the different contexts along the way, from one side of the business to the other? How many different definitions do you come up with? If you come up with more than one, you don't have a bounded context, because the domain is not actually internally consistent across itself.

So pick whatever your central concept is. I was working with an airline customer, and we were talking about a reservation. I asked: how many different definitions of "reservation" do you have? Probably 17. Okay, well, if you've got 17 different definitions, can we all agree on one? Of course not; we all have slightly different connotations. Go to a different context -- whether it's an order, or a movie, or something else -- and you have these central concepts that everyone treats a little bit differently.

So what you do is find a context where, if I keep the boundary here, all the words mean the same thing from one side to the other. That is the thing that is either big or small or somewhere in between, and you're going to come up with a different answer for every context in your business, and different answers across every business. But if you can get that right, then you can bound that thing with an API, and you can say: nobody gets to know what's going on inside the box. You only get to know what's going on at the API level. And you can create these bounded contexts that are loosely coupled. You don't know what's going on inside.
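To make the "same word, different meaning" point concrete, here is a toy sketch in Java. Two hypothetical contexts each have their own notion of a reservation, and each exposes only a small API, so nothing outside can couple to the details behind it. All of the names are invented for illustration.

```java
// Two bounded contexts that both use the word "reservation" but mean
// different things by it. Each exposes only an API; the data behind
// it is a private detail. All names are hypothetical.
public class BoundedContexts {

    // In the booking context, a reservation is a held seat that may
    // not be paid for yet.
    public static class BookingContext {
        private final java.util.Map<String, Boolean> paidBySeat =
                new java.util.HashMap<>();

        public String reserveSeat(String seat) {
            paidBySeat.put(seat, false);
            return seat; // the API hands out only an identifier, never internals
        }
    }

    // In the operations context, a reservation is a passenger expected
    // at the gate -- a different concept behind the same word.
    public static class OperationsContext {
        private final java.util.Set<String> expectedPassengers =
                new java.util.HashSet<>();

        public void expectPassenger(String passengerId) {
            expectedPassengers.add(passengerId);
        }

        public boolean isExpected(String passengerId) {
            return expectedPassengers.contains(passengerId);
        }
    }
}
```

Because each context keeps its own model private, either one can change its internals -- or be redeployed -- without asking the other.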
You can't couple to details. You've got this wall, this barrier, keeping you from becoming too bound to the things you depend upon, and maybe you can start to deploy services whenever you want to.

Now, if that all sounded very complicated, it's because it is, and there's a sense in which you can't just start doing this. I think the way Martin described it -- this idea that you must be this tall to ride the microservices ride -- is probably one of the best explanations I've heard. If you can't do these four basic things -- if you can't provision new environments in seconds or minutes, if you can't monitor things reasonably well, if you can't deploy a new line of code very quickly, and if you don't have something that feels like a DevOps culture (Andrew talked at length about that, so I'll refer you to his talk if you want to know what I mean) -- then you probably need to come back next year. Fix those things first, and then maybe you can go do this.

Now, as it turns out, when you start to look at Cloud Foundry -- and I talked about this last year -- there is a nice relationship between microservices and Cloud Foundry, in that not just anything will run well on Cloud Foundry. As it turns out, the things we build that feel like microservices tend to run rather well on Cloud Foundry. And you have these issues you have to deal with when you start deploying microservices: provisioning new environments, provisioning new code.
That's something we sort of know how to do in this world. So there's a sense in which you bring these two things together. One doesn't require the other, but one can definitely help the other. We have all these great features in Cloud Foundry that help us deal with many of the concerns we run into with microservices. I can provision code quickly, repeatably, reliably. I can scale. I can let the Health Manager, or Diego, whichever flavor we're on right now, take care of making sure that when things die, they come back. I can deal with a lot of my routing and load-balancing concerns, and I can run all the data services I want to run -- all the services, you know, BOSH, all of the things -- and things are going to work pretty well in this microservices world. But that's not enough.

So Dave Syer, who works on the Spring Cloud project that I work on, made this statement at SpringOne last year: no microservice is an island. It doesn't matter that we can build small services. It's good, but it's not enough. Being able to build small services and deploy them is good, but it's not enough. Being able to build them, deploy them, run them, and keep them running is good, but it's not enough. Because as soon as we start to decompose a monolith, and as soon as we start to put network boundaries between the things we're building, we start to create these nasty things called distributed systems. And distributed systems are hard as well.

So we start to run into a lot of new challenges that maybe we didn't have when we were writing code inside a monolith. How do I get configuration information out to all of my microservices, and then to all of my scaled-out instances of those microservices, consistently and reliably? How do I discover where things are? Once I know where things are, how do I actually route traffic to them and do load balancing?
Cloud Foundry does some of this, but maybe we want to do things even more sophisticated than what Cloud Foundry can do today. And obviously, if I deploy more things, I have more things that can break. The more things you're running, the more likely something in your system is going to fail if you don't actually plan for that.

And something does fail -- and failure doesn't mean it died. Failure might mean that latency got to the point where I have enough load on an upstream service that I fill up all of my thread pools waiting on this thing to respond; and then that thing does die, and then something dependent on it dies, and we have this cascade effect. You don't run into that when you call a method that's running in-process with you and you get an exception. You do have failures, but they're very different types of failures -- a little bit easier to figure out what's going on.

And there's a sense in which monitoring becomes even more of a concern. If you have one app, I can plug a monitoring tool into it and pretty well tell you what's going on. If I have ten apps, I can plug a monitoring tool into ten apps and still pretty much say what's going on. But if I have a hundred microservices deployed that compose a distributed system -- and it's only the composition of those things that gives me the behavior I'm looking for -- where do I put the plug into that system to find out what it's doing, so that I can know how the system is behaving? There's no physical thing that represents the system. We have a bunch of little things running around, and the behavior that emerges from that composition is the system. How do you monitor that?
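The cascading-failure scenario just described is what a circuit breaker guards against; the talk returns to circuit breakers at the end. Here is a minimal, illustrative state machine -- a toy sketch, not Hystrix: it trips open after a fixed number of consecutive failures and deliberately omits the half-open retry state a production breaker needs.

```java
// A minimal circuit-breaker sketch. After `threshold` consecutive
// failures the breaker opens and rejects calls immediately, instead of
// letting callers queue up and exhaust their thread pools waiting on a
// slow or dead upstream service. Illustrative only: a real breaker
// (e.g. Hystrix) also has a HALF_OPEN state to probe for recovery.
public class CircuitBreaker {
    public enum State { CLOSED, OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int threshold;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public String call(java.util.function.Supplier<String> remote, String fallback) {
        if (state == State.OPEN) {
            return fallback; // fail fast: no thread waits on the broken service
        }
        try {
            String result = remote.get();
            consecutiveFailures = 0; // success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= threshold) {
                state = State.OPEN; // trip the breaker
            }
            return fallback;
        }
    }

    public State state() {
        return state;
    }
}
```

The key property is that an open breaker returns the fallback without touching the remote at all, which is what stops the cascade from propagating upstream.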
These are the types of questions we have to answer. So we need some sort of representation of the composite system. I started being involved in conversations about this several months ago, and the first thing we started working from was this idea of the big-A App as opposed to the little-a app. What's the little-a app? Those are the apps we deploy to Cloud Foundry. But everybody's got some set of apps that they deploy to Cloud Foundry and put a user interface in front of, and the customer knows that as an App -- even though there are actually lots of little Cloud Foundry apps behind it.

So we say: okay, I need a representation of that which I can manage and work with. The first tool we had to deal with that was a manifest, and a manifest can tell me a lot. To solve this problem, I can name several applications and the code that produces them, and I can say: deploy this thing and bind this thing to these services, and I get something that looks like what I want.

The problem is that it's very static. It's a point-in-time description of what the system should look like right now. If I need to change it, I have to go back to "go" and start again, and I probably end up deploying the whole unit again. That's fine, but when I get into the world of microservices, I want the ability to be a little bit more dynamic, and I'll tell you why.

So then I start thinking: well, what I really want is something like BOSH, but for apps, for microservices. BOSH is very good at this:
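A Cloud Foundry manifest of the kind described might look like the following; the application names, paths, and service names are hypothetical.

```yaml
# A hypothetical Cloud Foundry manifest: a static, point-in-time
# description of the composite. Names, paths, and services are
# illustrative only.
applications:
- name: greeting-ui
  instances: 2
  memory: 512M
  path: ./ui.jar
  services:
  - greeting-db
- name: greeting-service
  instances: 4
  memory: 512M
  path: ./service.jar
  services:
  - greeting-db
```

The static nature is visible right in the file: the instance counts and the service bindings are fixed at push time, and changing them means pushing the whole description again.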
I've got a cluster of things that happen to live on VMs, and I can describe that thing as a system and tell BOSH: go make this so. And it will go make it so. As things change in that environment, it will keep it the way it ought to be, the way I described it; it will converge to some desired state eventually. And then I can go into my manifest and say: make me a little bit more of this thing, apply it, and it will do that. And a Cloud Foundry manifest will do that to an extent.

But even then, BOSH wants to own the whole thing, right? Here's a cluster; make this thing exist. There's no concept of: okay, now I want to -- without redeploying anything -- split it into two pieces and have this team manage the left half and this team manage the right half. When you start to talk about microservices, you have this idea out there that it's not just about technology; it's about organization and people, and you have this very strong idea of decentralizing. So think about BOSH: BOSH is centralizing management of a cluster. I now want decentralized management of a cluster, and I want all the pieces that form my composite system to be able to act autonomously -- meaning I want every team to be able to say: I can deploy my service whenever I want. If I have a change, I can deploy it now; I don't have to wait on you. I can't really do that in that world.

So Andrew, you know, brought this up yesterday: this idea that if you write it, you run it. If I write it and I run it, that means I'm responsible for deploying it. So again, I'm back to square one: I have tens or hundreds of services, and tens or hundreds of teams managing and deploying those services, and somehow I need to get a system out of that. But all of the tools I've worked with to this point in the Cloud Foundry ecosystem aren't geared toward that. So how do we create a composite system?
So Netflix did this, and Netflix started from exactly the principle I just described. We'll have multiple teams; each team will own its own services -- owning them from build, to deploy, to run, to wearing the pager, and everything in between. But they still needed a way to compose that into a system, and they wanted to compose it in such a way that any of the components can fail at any time and the system keeps working. And then they did us the service of taking the components they used to build that composite system, battle-testing them in production -- you know, running, what is it, two-thirds of the evening traffic on the internet through them -- and then open-sourcing that stuff. So now not only do we get to hear about how they do things well; here's a lot of the code they used to do it, too. So go use that.

You can grab the Netflix code and start using it today, but you have to figure out how it works, how to run it, and how to deploy and manage it, sort of on your own. So on the Spring team, several months ago, the idea was had: well, what if we take these components, and we take the Spring programming model that a huge number of Java developers already know and understand and enjoy and are productive with, and we apply that programming model to the Netflix components? Then I don't have to relearn how to write my app. I can just say: oh, now I need these new distributed-systems patterns in my app, and if I just annotate things appropriately and configure things appropriately, I can go back to focusing on business code again, and not worry about whether all the distributed-systems goodness is going to work the way it should. So we now have not just deployment-level patterns, but application and service composition patterns.
It's this idea of deploying and running code -- that's kind of what Cloud Foundry does, but it doesn't really have an opinion about what the code I'm building is. Now, Cloud Foundry is very capable: we could run a composed, fault-tolerant distributed system on top of it, but Cloud Foundry doesn't really have an opinion about what that thing is. What these patterns do, when we take Spring Cloud and layer it on top of Cloud Foundry, is allow us to do exactly that.

So we have several components. I'm going to run through these quickly -- I know I'm running out of time already, but we'll see what happens. All of this also happens to work on Lattice, and all of my demos are going to be on Lattice. And by the way, if I change slides really quickly, they're already on the internet, and I'll tell you where they are. So if you don't get your picture, I'm sorry -- I've seen a couple of "oh, he changed it" -- you know, it'll be fine.

So the Config Server is a way for us to put all of our configuration information in a central place. In this case, we chose Git. What is Git really good at? Versioning things, and making an audit trail of things, and that's something you'd really like to have for your configuration. I want to know what changed; I want to know who changed it; I want to know when it changed; and I'd maybe like to know the reason for it. Git is very good at that, so why duplicate it? Let's just put a service in front of it that can distribute that configuration to applications. So we have a REST API in the Config Server, and then we have a client binding inside of a Spring application that knows how to take that information, reconcile it with any configuration that's local, and create something that's consistently configured across the whole system.
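The Git repository behind the Config Server just holds plain property files. A hypothetical file matching the demo that follows -- the `demo.yaml` with the greeting property -- might be as small as this:

```yaml
# demo.yaml -- a hypothetical property file in the git repository
# that backs the Config Server. Committing a change here is the
# audit trail: git records what changed, who changed it, and when.
greeting: Howdy
```

Clients fetch this over the Config Server's REST API and merge it with whatever configuration they have locally, which is what makes the central copy authoritative across all instances.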
So now, in my Git repository, I have a description of what the configuration for the composite should be, and I can distribute it appropriately to all the individual pieces. But then I want to update that in real time. I don't want to go through a deployment. What I'd really like to do is say: this piece of configuration should change, and that's going to affect these small components in these apps, and I'd like it to happen right now.

So we add a component called the Cloud Bus to make that happen. The Cloud Bus is just a management backplane, currently backed by RabbitMQ. When I send a refresh event to something that participates in that bus, it sends a message to the bus so that all of the other participating applications receive it.

So let's see if we can make this work very quickly. You'll notice that I have a bunch of stuff -- oops, I just turned off mirroring again; there we go -- a bunch of stuff running on my Lattice cluster here. Let's make that big. Here's my Config Server, and the important thing I want you to notice is that there is a greeting property in here. Let's make sure that's up to date. Okay, right now it says "Howdy," because I'm from the South and I need to do that. And I have another application here that says "Howdy World" right now.

So I want to do a couple of things. First of all, let's make this big as well. You'll see that I have only one instance of the app we're going to show off. Let's go ahead and scale that up to five instances, and then grab the logs once it's done. While that scaling is happening -- wrong file -- let's go into demo.yaml and change that greeting. Well, what do we want to say?
Oh, okay, we'll make it speak Spanish. Sounds good. Let's commit that to our repository. So that's out there. If we go back to our Config Server, we will see that the greeting has in fact updated to say "Hola." But when we go to our application, it still says "Howdy." So we've got another step: we need to send a refresh event. So we say `ltc logs` for the config app, and what I'm going to do is send a POST to this route: `/bus/refresh`. Not "bush" -- I make that mistake every time. There we go.

What you just saw in all those log events was each app receiving the message -- here's app number four saying "received remote refresh request." The result is that it now doesn't matter which of these apps I get routed to; that change is distributed across the cluster. Okay, let's keep moving.

So now we want to find out where things are. Eureka is a service registry that lets us do that, and it's a very simple model: the application registers; a consumer looks up what it wants to find and is able to connect to it directly. We have Eureka here, and you'll see I have a bunch of stuff registered right now, and if I refresh, you'll see the five instances of the producer running now.

Now, we had a problem here -- you can argue whether it's a problem or not -- in that we have this thing called the router, and the router wants all the traffic to go through it. We don't have these things talking to each other, so this idea of the consumer talking straight to the producer doesn't really happen. What we end up doing is registering the route for the producer in Eureka, and then we can do this trick with Ribbon, which is fully capable of load balancing -- but what it's going to do is load balance me to HAProxy, or whatever load balancer we have in front of this, then through the Gorouter, and then down to the producer. Well, in cf-release v195 we added some environment variables that you automatically get in your app environment, telling you what your DEA IP and port are; there's an equivalent on Diego for cell IP and port.
Then, in v204, we added the ability to allow host access on the DEA, so if I'm in a container I can actually talk to another container on the same DEA or cell I'm living on. So after v204 we're able to do this, and Lattice has an equivalent setup that allows consumers to talk straight to producers. This may end up being the last demo we're able to get to, but we will see.

So let's do an `ltc list` again, and let's take our producer. One thing I want you to note about the producer: you see there's no route assigned, so I can't even talk to this producer outside of the VPC in which this Lattice cluster is living. But I do have a consumer service, and the consumer is basically just going to tell me exactly what that producer is doing -- which, in this case, is producing an increasing counter sequence. So I only have one; the number keeps going up every time. Let's scale the producer out to, say, ten instances, and watch those start up. So we're filling up our cluster nicely.

Once all these start up, I want you to pay attention to the logs. Right now they're not that interesting -- just a bunch of startup stuff. Very close to done here. Maybe I should have done five; I'm flirting with the clock. Well, we should start to see some of what we want. Some of these are up and registered; I've got four instances of the producer up right now. So as that's happening, what I should see is that as I'm hitting this, I'm actually load balancing across the instances of the producer. And if you look in the logs -- and actually, now we're in a good spot -- you see that we are in fact hitting the producer app, but you don't see any router logs. We're not going through the router at all. So now we're able to do client-side load balancing inside the app.

I'm going to have to get out of the way of the next presenter, but very quickly, there are some other patterns.
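The client-side load balancing just demonstrated can be sketched in a few lines of plain Java. This is only a toy version of what Ribbon does: round-robin over the instance addresses a registry like Eureka returns, with no central router in the path. Ribbon itself adds health checks, zones, and retries; the names and addresses here are hypothetical.

```java
// A toy client-side load balancer: pick instances in round-robin
// order from a list of addresses, as a registry might return them.
// No central router sits in the request path -- the client itself
// decides where each call goes. Names are hypothetical.
public class ClientSideLoadBalancer {
    // e.g. "10.0.1.5:61001" entries obtained from a service registry
    private final java.util.List<String> instances;
    private final java.util.concurrent.atomic.AtomicInteger next =
            new java.util.concurrent.atomic.AtomicInteger(0);

    public ClientSideLoadBalancer(java.util.List<String> instances) {
        this.instances = instances;
    }

    // Pick the next instance in round-robin order; thread-safe.
    public String choose() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

In the demo, the fact that no router logs appear is exactly this: the choice of producer instance happens inside the consumer, not at a central routing tier.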
I wasn't able to show you circuit breakers, but a circuit breaker is a state machine that protects you from those cascading failures. And Zuul is a component that allows me to do intelligent routing. You can learn more about both on the Netflix website.

We have a lot of other things coming. We want to support alternative stacks like Consul, ZooKeeper, and etcd. JRugged is an interesting circuit-breaker library that Comcast has produced. We want to do distributed request tracing in the vein of what Dapper and Zipkin are able to do. Leader election, locks, state machines, stateful patterns. We really want to improve the developer workflow for microservices -- it's not great today. And then we also want to do a lot around switching patterns.

If you want to learn more, I've put up a bunch of links. The most important one: you can get to this talk on GitHub and download a PDF. It's also on SlideShare, and all of the demos I was able to show are in that second repository. So like I said, right after the talk I will tweet the links to all these things so you can go get the slides. Thank you very much.