All right. We're here to talk about microservices. Isn't that exciting? Don't you want to know more about microservices? Because I can tell you, if you do some microservices, not only will you be faster and more awesome, you're also going to be better looking, slimmer, younger. It's amazing. Nope. Not at all. Okay. Let's go ahead and dive into this particular presentation. It includes a series of slides I'll walk you through very quickly to talk about microservices and give you some context. And then I have a whole cloud here running on this laptop, where we'll show you a bunch of microservices running and different things you can do with them, like a blue-green deployment or a canary deployment as an example, just to walk you through the mechanics of dealing with your microservices. A microservice, by the way, is actually super simple to code. That's the nice thing about them. People are very excited about writing one, because writing one is actually very easy, aside from one little trick that we'll talk about, and that's called the circuit breaker. Because if you're going to do distributed computing and you're going to have A call B, call C, call D, what happens if C breaks in the middle of that chain? We'll show you an example of how that actually executes here using Hystrix, Netflix's Hystrix, which is the circuit breaker. So let's go ahead and dive in. One thing we should make a note of right up front is MicroProfile, which is, of course, now available at 1.0. We made it ready for JavaOne, and so that was the big announcement here. They have really cool MicroProfile t-shirts they've been giving away and a MicroProfile lunch that's available. But do go to microprofile.io and check out what's going on there.
And that's basically where Enterprise Java and the various groups within Enterprise Java, like Red Hat, IBM, Payara, and Tomitribe, including the London and Brazilian Java user groups as an example, came together to define a new specification around Java EE, specifically for microservice-type architecture. Okay, there's also a free book. I think we've given away most of the hard copies that are here, but you can also go to our website at developers.redhat.com and get the electronic version of the microservices book. It walks you through how to do Spring Boot, Dropwizard, and WildFly Swarm. They all work the same way. Basically, they all produce what's called a fat jar or uber jar, and it's basically an embeddable Tomcat, for all intents and purposes, within your microservice as an example. You can see how to do all that. So there's one thing that you should note about microservices: it's all about agility. Well, it might be more fun to code and a little more simple to code, aside from the circuit breaker aspect, but it is really about agility, business agility, making sure your organization can respond and change more quickly and more rapidly. If that is not of interest to you, don't do this just because it's the bright, shiny object. What I'm finding when I go speak to developers about microservices is they're super excited about it. They want to use the bright, shiny object, but they need to understand that it's really within the context of greater business agility. So it's at the peak of inflated expectations at this point. As I said earlier as a joke, it is going to be the next greatest thing ever, and you're going to be way cooler by doing it, but at the same time, it's just hype at this point. You will have to decide which use cases really matter for a microservices architecture, okay? So your journey to microservices should include things like self-service, on-demand, elastic infrastructure.
If you don't have cloud-like infrastructure, you're not going to have a lot of fun in microservices land. So if you're not really on the cloud yet, you might want to take care of that first and have elastic, self-service infrastructure. Here's the easiest test: if you're a developer and it takes you three weeks in your organization to get a new VM provisioned, fix that problem first. And for most people, three weeks would actually be fast for their company; it's really six weeks to get a VM, okay? The next thing is to think about dev and ops. If you're an organization that has a developer silo, a DBA silo, an operations silo, a compliance silo, and what the developers do is throw their junk over the wall to the poor operators who have to try to make it run all weekend when it falls over and dies, yeah, that's not going to help you here in microservices land either. You need to solve that problem first. Learn how to do DevOps, learn how to bring the culture of those two organizations together so that the developers own it in production. Developers own it in production. That's the critical aspect of this, okay? If you don't have any kind of automation, if you are basically still literally putting CDs in CD trays and feeding software into the server, you've got to solve that problem. You've got to have automated installation and automated infrastructure; look at Puppet, Chef, Ansible, and of course Linux containers. And then I like to talk about CI/CD and deployment pipelines. One of the demonstrations that Rafael will do later on is specifically around how to automate your full pipeline and do an end-to-end Jenkins run so that it actually deploys directly to production. And then once you do all that, you might be ready for microservices. So it's important that you understand there are a bunch of prerequisites to doing microservices.
I believe if you just go whole hog into microservices land with none of that operational efficiency, none of that operational excellence beforehand, you're going to suffer a lot of pain and not really enjoy the benefits of microservices. And of course, if you do all this, you could be like a Silicon Valley unicorn, right? You can be like your own Netflix or Uber or Facebook, you know, like all the cool kids. So let's put this in greater context. Typically, if you're a software person, you build something within the operating system. In the Java world, and I've been a Java guy for the last 20 years, it's going to go inside a Java virtual machine. You're probably going to use an application server. Then you have your EAR, right? In J2EE and Java EE, we had the EAR, the Enterprise Application Archive. Typically, that EAR has multiple WAR files. And I've seen projects with multi-gigabyte EAR files because they have several large WAR files inside. And you're going to have maybe a couple dozen to a couple hundred JARs. It's typically never less than a few dozen, and it's often going to be very large. So you're going to have numerous JARs. And here's the gotcha: everybody in this architecture has to agree on the application server we're using, the JVM version we're using, all the JARs we're using from Maven Central, the DTOs, the JPA entities, all of that we have to agree on. This is the monolithic architecture. And this is the specification that we've had for at least the last 15 years in Java EE land. So that's the way it's worked. And then you also have this big team of people who also sacrifice small animals in order to keep that monolithic architecture alive. And it takes all these people to agree to deploy that particular application.
So if the 18 programmers and the analysts and the operators and the QA people and the project managers don't all agree that the monolith is ready to go to production, they slow down and they test it some more. But this is why it takes so long to push it to production. This is why, for a lot of people, it takes three months, six months, nine months to push it to production. If you break it all up into individual microservices, though, and start thinking about decomposing that monolith, and you specifically have two-pizza teams where the developer is now responsible for it, they own it in production. You build it, you own it. That's the Amazon philosophy. Then you have the ability to move faster with each of these individual teams. To give a little more context on that: say we have a monolithic system that takes 24 weeks, right? That's eight three-week sprints. So you have developers working in three-week sprints, which is the average sprint time for most people, and it takes that long to push it down to production. But then you start doing things like automated testing. And most people really don't do that good a job of automated testing just yet; we have some, but we don't necessarily have the best code coverage, and we don't necessarily have the best integration tests. But that form of build automation, continuous integration... You've probably heard Jez Humble's test: how do you know if you're really doing continuous integration? Rule number one, trunk is always ready to go to production. It's always a release candidate. Number two, everybody checks in to trunk daily. And if you're not doing those two things, you're not even doing CI yet. Jenkins does not mean you're doing CI. Jenkins is just a tool. Continuous integration is actually a discipline. That's probably the way to say it.
But then you start looking at things like Linux containers, automation and orchestration solutions, infrastructure as code. You think about your continuous delivery pipeline. You think about zero-downtime deployment so you can actually go fast to production and actually test in production with a blue-green deployment or canary deployment. We'll show you that in a second. And then you have a really high-trust environment where everybody's working together. You can take that monolithic architecture, the same monolith, and get it down to one-week increments, believe it or not. We've actually talked to many customers who've simply gone through all this automation and all this cultural change, and it's still a monolith, going at one-week increments as opposed to three-month increments as an example. So it can still go very fast. The magic is when you have to break that three-week sprint barrier. So if our developers are operating at three-week increments and now you want to go below three weeks in terms of deployment time, you can't do that with one big-ass code base. You've got to break it up. That's kind of the idea. So if you get to The Phoenix Project (if you've not read it, I highly recommend it), they talk about 10 deploys per day. Well, how can you even do 10 deploys per day? You break up the monolithic architecture into separate subteams, right? Separate microservices, all with their own independent sprint cycles, all going to production based on their specific needs: when they have to patch their application, when they have to upgrade the operating system, when they have to upgrade business functionality and capability. That's really how the magic actually happens. So think about those independent teams. Those are critical. But let's just talk quickly about some high-level characteristics and then we'll dive into the demo. So deployment independence is absolutely, by far and away, the number one criterion in my mind.
You've got to be able to deploy your microservice individually at any given moment in time. If you can't do that, you're not doing microservices. So deployment independence is critical. It's also optimized for replacement. One of the rules people like to talk about is that you could rewrite the whole microservice, recode it, in a two- or three-week cycle, one sprint. So it's optimized for replacement, not for reuse, because if the business needs greater agility and you can rewrite the whole thing based on a new technological innovation, that's better for the business, right? So being able to change rapidly is what's so critical here. And many of us don't work on stuff that has to change all that often, but in this case, microservices is about greater agility and faster change, right? Organized around business capabilities: not siloed into dev and ops and other IT silos, but actually aligned with business units and business goals. Products, not projects. This is some of Martin Fowler's content. The team responsible for the microservice is a product team, not a project team. They are wholly responsible for their product in production. They're on the pager. They want to guarantee uptime and they want to guarantee customer service. They're doing that around their microservice, and that's definitely a different philosophy from how we normally think. API focused: you've got to have a great API, typically a REST API, which is how a lot of people like it. Focus on smart endpoints and dumb pipes. This means the logic is not out in some enterprise service bus with a centralized team of people managing it. The logic is in the endpoint itself. It's in your microservice. That's a critical aspect of it. Decentralized governance: there's no ivory tower enterprise architecture team that dictates all standards and how everything should be done and how all code should be written.
Each team, responsible to its business group, is going to do what it needs to do to get the job done and have greater agility. You can go on and on here. One that's actually a deal breaker for most people: decentralized data management. Everybody gets their own database. And I've talked to a lot of people on this topic, hundreds and hundreds of people, and the number one issue is "my Oracle DBAs would never let me do that." Right? And here we are at Oracle OpenWorld. But if they hold all the power in your organization, yeah, you probably won't get a chance to do too much in the world of microservices. It depends on who has the political authority there. So let's get into the demo, okay? Let me show you the demo. It's much more interesting than going on and on with slideware. I'm running the whole thing on Kubernetes with OpenShift. OpenShift is our Kubernetes for the enterprise; it's the Red Hat supported version of Kubernetes. These are all Docker containers. I've got a bunch running here. Let me zoom in. What we've done is several versions of Hello World as different microservices. And you might be thinking, well, that's kind of bogus, it's just Hello World. But we want to minimize the business logic and focus on the mechanics of how to do a microservice. You can always add a lot more business logic; that's easy. The mechanics are what's interesting. And each of these microservices is written to use invocations: A calls B calls C calls D. You have dependencies to manage, and you have the circuit breaker to deal with those dependencies. We'll show you that. So there's Aloha, which is Vert.x-based. There's a gateway that we'll show you; it's also set up for blue-green deployment. We have Bonjour, which is a Node.js service; it's set up for canary deployment. We have a front end, a user interface. We have a WildFly Swarm service.
We have Hystrix in here. We also have a Spring Boot microservice. So building a microservice is super easy. You can just go to a solution like the WildFly Swarm generator here, and all you've got to do is pick your dependencies. Like here, you can see I picked JAX-RS. You can generate a project, and that gives you a default project right out of the gate. So if you just want to build a simple microservice, you go out and specify it. You can see here I picked Web. Let me re-add them to make it easier. This is Spring Boot, right? Web, Actuator, JPA, generate project. I have a microservice. So that part's the easy part. Building a microservice is actually super easy. Connecting them into a robust architecture is what's hard, and that's why we recommend using Kubernetes for that. So here's what this application looks like. Let me show you what this is right here. We have numerous patterns we've implemented. This one is actually very popular with retailers as an example. I've gone out and spoken to a lot of retail organizations, and this is the architecture that they like. Basically, the browser is the aggregator. The browser invokes individual microservices through those REST APIs, just through Ajax commands, right? It's just an Ajax call. Hit A, call B, call C, call D. And that's how they'll populate their user interface. I wonder if I still have that in my slides, just to make that more concrete. I took it out of the slide deck. But if you go to the main slide deck, which you can get at the Bitly link, it's just bitly Hello World MSA. I just want to show you what I'm talking about from a retail standpoint, because it is a point of confusion for a lot of people. And it's fairly interesting. If you're dealing in the retail world, right, the pricing, the details, the images: are they based on your location, on what inventory you have in your local store? When I've talked to several retailers, this is their architecture, right?
It's individual service calls to the back end in many cases. And so that's what we've emulated here inside this demo, okay? This concept here. And we use different architectures, different applications for the microservices. We also have the concept of an API gateway, where the API gateway is the aggregator. So it's server-side aggregation, and the server does the invocation. And we also have the chaining concept here. This is literally where you have the browser invoke the gateway, which then calls a chain of microservices. This is more like your Netflix architecture and how they do it. But this is where you start seeing potential problems in resiliency. What happens if Aloha breaks? Does the whole system blow up? Or does just Aloha break? Okay, and that's really what you've got to think about. So let's do that real quick. We're going to go up to Aloha, and we're going to take it down to zero. We're basically shutting off that server. From a Java code standpoint, it thinks it's running on its own server, its own hardware, its own JVM, but it's running in a Linux container, all running in a local virtual machine on this laptop. I'm running all this on this laptop right now, and I'm running a lot; you can see I'm running, you know, three containers there and one there. But let's go back and look at Aloha here. So watch what happens. And I'm going to bring up the Hystrix user interface too. Let's go here first. When you hit refresh, see how it says "aloha fallback" there? I don't know if you can see that. Basically, the code is written with a circuit breaker that says: if I try to call that other resource, and it's dead and gone, or if it's too slow, and that's the more insidious aspect, I have a fallback position that I take, a fallback piece of business logic. That's what the circuit breaker is for.
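To make that concrete: Hystrix layers timeouts, thread isolation, and rolling metrics on top, but the core open/closed state machine behind a circuit breaker is small. Here's an illustrative sketch in plain Java, not the actual Hystrix API; the threshold and service names are made up for the example:

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: after `threshold` consecutive failures
// the circuit opens, and further calls go straight to the fallback
// without touching the remote service.
public class CircuitBreaker {
    private final int threshold;   // consecutive failures before opening
    private int failures = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    public String call(Supplier<String> remote, Supplier<String> fallback) {
        if (open) {
            return fallback.get(); // fast failure: don't pound the dead service
        }
        try {
            String result = remote.get();
            failures = 0;          // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) {
                open = true;       // trip the breaker
            }
            return fallback.get();
        }
    }

    public void reset() {          // the service came back: close the circuit
        open = false;
        failures = 0;
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        Supplier<String> deadAloha = () -> { throw new RuntimeException("connection refused"); };
        Supplier<String> fallback  = () -> "aloha fallback";

        for (int i = 0; i < 5; i++) {
            System.out.println(breaker.call(deadAloha, fallback));
        }
        System.out.println("circuit open: " + breaker.open);

        breaker.reset();           // container restarted: self-heal
        System.out.println(breaker.call(() -> "Aloha!", fallback));
    }
}
```

Once the breaker is open, the fallback comes back immediately without any network call at all, which is exactly the fast-failure behavior you'll see in the demo in a moment.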
And if I look over here at the Hystrix user interface, I'm going to bring it over here so we can zoom in a little bit more. Oh, that's not going to work. I'm trying to make this easier to see. It's not easy to see all this. There we go. Let's hit refresh. Okay. See what's happening there? See the orange timeouts? Those are requests that are timing out. So Hystrix is monitoring that connection. It's actually watching: are you hitting it? Are you hitting it? And what does a user do, by the way, if your server is down? They hit the refresh button, because they're helping you along, right? The more requests they jam in, the more likely you're going to come back up. So Hystrix protects you from that scenario. It's basically saying, look, please don't pound the dead thing further to death. And that's what it's going to do. So watch what happens. You notice now the circuit is open. And I guess we missed the moment; it happened while I was clicking the button, but the circuit is now open. Normally it says closed. See the green "closed" there? It's now open. (Our CSS is off; it was supposed to say open there.) But now, if you notice, it's a faster failure. You notice it's more responsive, because it's basically saying, don't beat the dead guy to death. It's already dead. Bounce off of it. And that's what you see here. But if I come back into my browser and spin up this container again (let me hit the right button there), okay, you can see it's coming back online. So again, it's rebooting that Linux container, that Docker container. And again, this is all Kubernetes with the OpenShift user interface on top of it. As it comes back online, my service will continue to work, or start working again, I should say. And you notice, too, that the first two services, Ola and Hola, worked fine. Aloha is the one that was failing. Let's see if we can get it back online there. It's working again, and you notice it went closed. So it's self-healed.
So as soon as we bring that service back online, the circuit goes closed, everything's good again, and the system's all working. And you can see how it's working out there; you can see the 11 requests that I put into it. So this is Hystrix and the kind of capability you get with it. And this is why it's so important to have a circuit breaker when you have dependencies on other network calls. Hystrix, by the way, works with JDBC drivers, any kind of network call. We're just using it for HTTP invocations in this case, but it works with any kind of network call. Okay, let me show you a couple of other items that are pretty cool. Here's our API gateway. Let's go over here to the API gateway so I can show you that one. Notice right now it says green. So the green one is the one that's active. I can also go update the blue one. The idea behind blue-green, by the way, is that there are two hot, active deployments all the time, blue and green. And when you're ready to roll out a new deployment, you roll it to the non-active one and you switch the router over to it. So let's do that. Let's go into the code for that one. Okay. And it says blue here. I'm going to make this... I hate that it does that. JavaOne. Okay, JavaOne 2016. Let's see if this works out or if I did something wrong. Hit save there. Go to the command line. Is this the right one? Nope, not the right one. I've got so many systems running here locally, it's hard to remember which one to jump into. Here we go. Okay, Maven package, right? You're going to have to run a Maven package there because it's got to compile the Java code. So it's compiling that Java code that I just edited. And then I'm going to kick off the oc start-build. This is actually going to do the Docker build. If you saw Rafael's session earlier, he showed you how to do a Docker build and then run it on Kubernetes. This is doing all that in one operation.
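The blue-green idea described above boils down to a pointer flip: two complete deployments are always running, you rebuild only the inactive one, and "deploying" is just switching the router. In the demo OpenShift's router does this for you; here's a tiny plain-Java sketch of the pattern only, with made-up version strings:

```java
import java.util.HashMap;
import java.util.Map;

// Blue-green in miniature: the route points at exactly one of two live
// deployments, and a "deploy" never touches the one users are hitting.
public class BlueGreenRouter {
    private final Map<String, String> backends = new HashMap<>();
    private String active = "green";

    public BlueGreenRouter() {
        backends.put("green", "api-gateway v1");
        backends.put("blue",  "api-gateway v1");
    }

    // Rebuild and redeploy the *inactive* color: zero downtime for users.
    public void deployToInactive(String newVersion) {
        String inactive = active.equals("green") ? "blue" : "green";
        backends.put(inactive, newVersion);
    }

    public void switchRoute() {        // flip the router to the other color
        active = active.equals("green") ? "blue" : "green";
    }

    public String handleRequest() {    // what users actually hit
        return backends.get(active);
    }

    public static void main(String[] args) {
        BlueGreenRouter router = new BlueGreenRouter();
        System.out.println(router.handleRequest()); // old version on green
        router.deployToInactive("api-gateway v2");  // build happens off to the side
        System.out.println(router.handleRequest()); // still old: no downtime
        router.switchRoute();
        System.out.println(router.handleRequest()); // new version on blue
        router.switchRoute();                       // didn't like it? roll back
        System.out.println(router.handleRequest());
    }
}
```

Rolling back is just flipping the route again, which is why the demo can undo the deployment in one click.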
And it's going through that process of actually compiling that code, doing the Docker build, which you can see right there, producing the new Docker image, hosting it in the registry, and then running it out inside Kubernetes. That's the process it's going through right now. And if you watch carefully here, you can see the build happening right there. And it's doing the redeployment. Now here's the crazy thing: it did the redeployment, but no one's using that service right now. So there was no downtime for anybody, even though we just went through the whole cycle of redeploying it. Because if I go back over here to my user interface, see, it still says green. It doesn't say JavaOne. Let's switch the route. And actually, one thing I'm going to do is run this poller over here, so you can see that the poller is actually hitting it. And let's go ahead and switch the route. We'll go to blue. And there it says JavaOne. So we switched the route that quickly and failed over, if you will, to the blue version. And, let me go back to my user interface over here, okay, there's the JavaOne there too. But if we decide that's not the change we wanted and we need to roll back, okay? We just update the route one more time to go back to the old version. That's the magic of the blue-green deployment. And I would argue, again, you see it's back to green now, and it was zero downtime for our users. So there we are, green. Again, if you're living in a microservices world, you're trying to do 10 deploys per day. You're trying to go super fast. You need something like a blue-green deployment. Now, blue-green deployment is all or nothing, right? It's all blue or all green. If you want to get a little bit fancier, you can do a canary deployment. And we actually have another deployment set up here for canary. I'm going to run my other poller, just to give you the canary version. So this is the Bonjour service.
Bonjour is actually a Node.js service. And let me bring up the code for that. Okay, this is the Bonjour one. Here we go. And so it's a canary; I'll call it the JavaOne canary. There, hit save. Go back over here. Same thing as we did before: I'm going to run my build. Yep. So we're running the build. Where's Bonjour? There it is. So it's going through its build cycle now. But again, we're building and deploying the non-active one, all right? The old one is still very active. You can see it's still serving customers, giving them the old version. And it's going through its build cycle right there. You can see the build running. I know it's very small, but zoom in on that. So the build is running. And once it finishes the build cycle, we can decide how much traffic we wish to move to that new canary-based deployment. So it's taking a moment here to do the build. There it goes. All right, looks like the build is done. Now watch what happens. Okay, let me make sure my screen is refreshed correctly. And here we go. So I'm going to now take 25% of my traffic to the new canary deployment. So there's the new canary. Let's see if we can get in here. Too far, too far, too far. Okay, the zoom-in trick messes things up right now. Let me hit the right button. Go to one. Here we go. Okay, so it's bringing the new canary online. And if we did things correctly, you're going to see "canary JavaOne 2016" in that mix. And let's look down there. There we go. You can see the canary one right there. So the canary is taking about 25% of the load. And if I want to alter that, I can drop this down and bring this up; now it's 50% of the load. Okay. So now you can see how you can partially roll out that canary across all users. If anything fails, you can roll it right back. So again, the blue-green deployment and the canary deployment are a nice way to deal with your microservices, doing 10 deploys per day, going super fast.
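A canary deployment is really just a weighted route: a fixed percentage of requests goes to the new version while the rest stay on the stable one, and you turn the weight up (or back down to zero) as confidence grows. Here's a minimal plain-Java sketch of that split, an illustration only; real routers do weighted load balancing for you, and the service names here are made up:

```java
// Canary in miniature: route `canaryPercent` out of every 100 requests
// to the new version; everything else stays on the stable one.
public class CanaryRouter {
    private int canaryPercent;
    private int counter = 0;

    public CanaryRouter(int canaryPercent) { this.canaryPercent = canaryPercent; }

    public void setCanaryPercent(int p) { this.canaryPercent = p; }

    public String route() {
        // Deterministic split for the sketch; a real router would use
        // weighted load balancing across the two backends.
        int slot = counter++ % 100;
        return slot < canaryPercent ? "bonjour canary" : "bonjour stable";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(25);  // 25% to the canary
        int canary = 0;
        for (int i = 0; i < 100; i++) {
            if (router.route().equals("bonjour canary")) canary++;
        }
        System.out.println("canary handled " + canary + " of 100 requests");

        router.setCanaryPercent(0);                  // something broke: roll back
        System.out.println(router.route());
    }
}
```

Sliding the weight from 25% to 50% to 100% is a gradual rollout; sliding it back to 0% is the instant rollback you saw in the demo.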
Again, you can use a pipeline and do all those things in production too, a deployment pipeline. That's part of this demo. But if you want a chance to run this demo for yourself, I encourage you to do so. Let me hop back over to the slide deck. If I go in here, I have links at the bottom. You can follow me on Twitter at @burrsutter. Email me. And also go sign up at developers.redhat.com. The bitly Hello World MSA link is this whole deck, plus all the other slides that I didn't show you, as well as links to the demonstration and video recordings of the demonstrations that you saw here, which Rafael and I have been creating. So please go check all that out and then have fun diving into microservices land, with circuit breakers, blue-green deployments, canary deployments, and, like I said, Jenkins pipelines are in there also. So, all the things you need to basically get started on your microservices journey. All right, thank you very much.