We're going to be talking about microservices. Who's heard of microservices? Oh, a couple of people. All right. We're going to be talking about microservices for Java developers, and we're at a Java conference, so how appropriate. The content comes from a book I wrote recently called Microservices for Java Developers. I have a copy right here, and there's a ton of copies over there. Today we're going to look at what a Java developer might go through on the journey of building microservices and getting into this architectural style. I wrote this book, I'm a frequent contributor to open-source projects, and I give talks. Rafael Benevides and I will be giving a much deeper talk on this same material tomorrow at 4 p.m., called Kubernetes for Java Developers, I think. No, no, no: Docker, Kubernetes and Jenkins. Sorry.

Okay, so why are we talking about microservices? Why are we talking about any of this stuff? Especially the folks who came from the SOA background, and before that client-server, know that we were constantly thinking up new names for distributed systems. If you look at microservices next to the things people talked about with SOA, they're very similar. The foundations are very similar, and so is how people went about implementing it. Let's not redo that. Microservices at its core is about speed: optimizing not just the technology but the business, optimizing the technology and the business for speed. Speed of what? Not raw execution speed, not "super fast"; we saw a reactive presentation earlier today. I'm talking about the speed of change: being able to change the application, being able to change the software fast. Now, why would you want to do that? A lot of our traditional enterprise companies are being disrupted by startups, and those startups are doing it by going to the cloud, learning Node.js and all kinds of programming languages, starting things up, experimenting, trying new things, and seeing what works. Out of that you get companies like Netflix and Uber and Amazon and Zappos. Traditional enterprise companies need to adopt that same way of thinking: experimenting, trying new things, and if something doesn't work, changing it, changing it fast, and figuring out what delivers business value. So one thing we've learned is that creating value with software is about experimenting, learning, and trying new things.

In the Java world, the next few things I'll talk about are all related to how we speed up our ability to develop and deliver software. For example, Java developers may touch Spring Boot, or they may touch WildFly Swarm. Spring Boot is the containerless story for the Spring framework: it simplifies configuration and dependencies, gives you built-in metrics and monitoring and so on, and packages it all up as an uber jar. That paradigm started with Dropwizard a few years ago, and more recently WildFly Swarm has adopted the same model, letting users who are familiar with Java EE technologies build microservices in the same pattern with straight Java EE.
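To ground that, here's a rough sketch of the kind of JAX-RS resource one of these generated projects gives you. The class name and message are illustrative, not the generator's exact output:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal JAX-RS endpoint, the kind of thing you package as an
// uber jar with WildFly Swarm (or the equivalent with Spring Boot).
@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Served at http://localhost:8080/hello when run locally
        return "Hello from an uber-jar microservice!";
    }
}
```

Run `mvn wildfly-swarm:run` and the plugin packages this, plus just the slice of Java EE you asked for, into a single runnable jar.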
And you can't talk about microservices and Java without talking about Netflix OSS: projects like Eureka, Ribbon, and Zuul that came out of the Netflix community. They were written about five or six years ago, when Netflix was building out a cloud-native platform on top of AWS, so a lot of what was associated with Netflix OSS when it was first released assumed AWS deployments. These projects are very valuable for building elastic applications: distributed configuration, service discovery, and so forth. When Netflix released them, there was an uptake in the Java open-source community and people started using them. But what about non-Java? Part of the reason for using microservices is to be able to deliver speed without being tied to your existing legacy ecosystem of tooling and languages. Node.js or Golang or Python services need these features too. And Adrian Cockcroft, the former chief architect at Netflix, made a very interesting point: if you're going to innovate, you can't just see what everybody else is doing and copy them, because that's not innovation. Just using these tools isn't going to make you cloud-native or make you microservices. Plus, they were written five or six years ago; if your cloud strategy is five years out and you're going to use five-year-old technology, you're going to be ten years behind when you actually get there.

The way Red Hat is looking at microservices, especially for Java developers, is: how do we take the experience of deploying and managing large-scale applications that Google has built up over the last ten years? Google has an internal cluster and container management system called Borg, and they wrote a paper on it. Google has written papers on lots of distributed systems concepts, but this time they said: you know what, we're sick of writing all these papers and watching people at other web companies rewrite everything in Java and put it out at Apache. This time we're going to write our own implementation of our paper, and we're going to open-source it. We're going to bring the broader community along with it, and we're going to bring Red Hat along, because they know a lot about driving open-source adoption and open-source communities. And we're going to call it Kubernetes.

What's interesting about this project is that a lot of the distributed systems concepts are baked right into the platform. Think about that for a second, and go back and look at Netflix. Netflix had to handcraft all their stuff and assemble it themselves. They had to run their own service discovery servers and complicated Java client libraries for load balancing and service discovery, and it was only available to Java. With Kubernetes, these things are built into the platform. Service discovery is a first-class citizen in Kubernetes, so it doesn't matter what application or language you're using: you get service discovery out of the box. Use whatever programming language you like. Versioning and routing and auto-scaling and self-healing, all the things you would otherwise have to try to bake into your application, are baked into the infrastructure. And if you're going to use containers, which you absolutely should if you're going to do microservices, then take a look at Kubernetes.
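To make "out of the box" concrete, here's a rough sketch of what calling another service from plain Java can look like inside a Kubernetes cluster, with no discovery library in sight. The service name vaultdb, which I'll come back to in a minute, and the port are hypothetical stand-ins:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class VaultDbClient {
    public static void main(String[] args) throws Exception {
        // Inside the cluster, "vaultdb" resolves through the built-in
        // Kubernetes DNS to the service's stable virtual IP.
        URL url = new URL("http://vaultdb:8080/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}
```

The same trick works identically from Node.js, Python, or anything else that can resolve a hostname.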
At Red Hat, we're investing all of our time in building our platform as a service as a Kubernetes distribution: OpenShift is Red Hat's distribution of Kubernetes. So here's an example of what service discovery looks like. You only need to understand a handful of concepts: pods, labels, and services. A pod, for the sake of discussion, is just your application: a Docker container (it can be more than one Docker container, but let's say it's one). Labels are key-value pairs, metadata you can assign to any of these objects, including pods. In this case, we're labeling some of these pods app=VaultDB and version=1.0. You can apply a lot of these labels and slice and dice and group your applications however you'd like. The last important concept is the Kubernetes service, which is the service discovery abstraction. When a client wants to talk to VaultDB, it talks to the Kubernetes service. The service uses a selector based on the labels to find and group the pods that back that service. So my client talks to VaultDB, that name is associated with my service, and Kubernetes does the lookup through DNS to find the pods on the back end.

Now, you may have heard from some of our other friends that DNS is no good, and in a traditional environment that may be true. But in Kubernetes, that DNS name maps to a static IP that never, ever changes. There are no caching problems, no lookup problems; it's always the same IP behind that same DNS name. So just use DNS, use the service discovery that's built into the platform. You don't need to stand up Eureka and Ribbon and all these things and try to cluster them and maintain them. Just use the Kubernetes service and the built-in DNS. Any app can do this: Node.js, Python, Ruby, Go, C++, whatever can use this abstraction for service discovery.

What if you do want client-side load balancing or client-side discovery? Then use the Kubernetes API and embed those libraries in your client when you need them. If you need to do complicated domain-level routing, bring those client libraries in and rely on the Kubernetes API, which knows about all of those pods and labels. Then you can do client-side load balancing if you need it, but you probably won't need it for the good majority of use cases.
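If you do go the client-side route, the sketch below shows one hypothetical way to do it from Java using the fabric8 kubernetes-client library, asking the API server for the pods behind a service by their labels. The labels match the earlier VaultDB example, and exact API details vary by client version:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class PodDiscovery {
    public static void main(String[] args) {
        // Inside a pod, this picks up the service account credentials
        // automatically; no extra configuration needed.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Select pods with the same labels the Kubernetes service uses.
            for (Pod pod : client.pods()
                                 .withLabel("app", "VaultDB")
                                 .withLabel("version", "1.0")
                                 .list().getItems()) {
                // Each pod IP is a candidate endpoint for client-side
                // load balancing.
                System.out.println(pod.getStatus().getPodIP());
            }
        }
    }
}
```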
So Kubernetes simplifies building, deploying, and maintaining microservices; I go into a lot more detail in the book. Another part of moving fast is continuing to improve: trying things out, seeing whether or not they work, taking metrics, and feeding that back into the loop, maybe redoing some things. There's an open-source community called fabric8 that builds developer tooling on top of Kubernetes and OpenShift to help you move fast and get that feedback loop, including things like one-click CI/CD: with one click we can install Jenkins, Nexus, and Git, all integrated together, with a nice UI over the whole thing. I'll show deeper demos of that tomorrow at my session at four. There's Chaos Monkey built in, and there's Java tooling in the form of Maven plugins that let Java developers build their microservice and deploy it into Kubernetes without having to touch any of the Kubernetes machinery: no kubectl, no Dockerfiles, none of that. Just use the Maven plugin, which is what all Java developers are very familiar with.

Okay, let's get to a demo, and I'll show you an example of what that looks like. We're going to start off building a very simple microservice. How many people have heard of WildFly Swarm? A couple of people? How many people have heard of Spring Boot? Okay. What time is it? We've got about ten minutes, so if I have time we'll create a microservice with WildFly Swarm and with Spring Boot. We're just going to use the tooling that each project provides, and then we'll see how the fabric8 Maven plugin lets us deploy it right into Kubernetes.

So here we're at the WildFly Swarm project generator. Let me refresh and make sure we have internet. We're going to add CDI, and we'll add JPA, just simple data sources. If you're familiar with Spring Boot's start.spring.io, it's very similar, and I'll show that one in a second. We generate the project, and it creates a demo.zip for us. We'll move Downloads/demo.zip over here and unzip it. So we have our very simple Maven project, a Java application. If I do mvn wildfly-swarm:run, this will build the project, build the Java EE application as an uber jar, just like Spring Boot and just like Dropwizard do, and run it locally. So it's up, it's running. Now if I curl port 8080 at /hello, it should just give us some hello response. It's a simple out-of-the-box application.

Now, let's say we add some more stuff, and then we want to package it and deploy it inside a Kubernetes environment. One way to do that is to build a Docker image ourselves, or we could attach it to OpenShift, for example; OpenShift has S2I, which will automatically build the Docker image and so on. But we're not going to do any of that. We're not going to touch OpenShift or Kubernetes directly. We're going to run the fabric8 setup goal. What does this do? We're using a Maven plugin and calling its setup goal, and all it does is add another Maven plugin with some configuration to the project, that's it. Then we'll be able to deploy into Kubernetes or OpenShift through Maven. So I hit setup. Pretty easy so far. Then, back in the WildFly Swarm demo, I do mvn fabric8:run, just like wildfly-swarm:run or spring-boot:run. This builds a Docker image for us, creates the OpenShift or Kubernetes manifest files for us, runs it inside Kubernetes, tails the logs, and shows it just as though it were running locally. Give it a second to run. Notice I didn't touch the kubectl or oc command-line tools or Dockerfiles or any of that stuff; we just used this Maven plugin. And we see it starting up: this is the WildFly Swarm output just like we saw a second ago, and it looks like it's up. We can look at our pods and see our demo pod running, and look at our services and see the Kubernetes service, which, remember, is how we can discover these pods. We'll expose this as an OpenShift route, which allows connections from outside the cluster to reach inside the cluster and talk to it. And when we curl it, we see the same response, except now it's packaged up as a Docker container running inside Kubernetes.

We can do the same thing with Spring Initializr for a Spring Boot project: build the project, then use that same fabric8 setup command.
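For reference, a minimal Spring Boot counterpart of the same hello service might look like the sketch below; start.spring.io generates the application class, and the endpoint is my illustrative addition:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    // Same idea as the WildFly Swarm endpoint:
    // curl http://localhost:8080/hello
    @GetMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```

`mvn spring-boot:run` runs it locally, and once the setup goal has run, the same `mvn fabric8:run` takes it into Kubernetes.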
Give it a second, and you can see all the logs, even from Kubernetes, right here in the command line. When I do a Ctrl-C, it shuts all that stuff down. So that's the setup command: it sets up your project, and then you can use Kubernetes pretty simply. Tomorrow when we demo this, I'm going to show you a more comprehensive set of tooling that lets us create a project and attach it directly to a CI/CD pipeline, check it out, and then do some more complicated stuff: make changes, watch rolling upgrades and canary releases, all of that. That's at our talk at four o'clock tomorrow; I think it's called Docker, Kubernetes and Jenkins. That's all I'm going to talk about for today, so if anybody has questions... All right. Yes.

So the question was: if you're running an existing version in production and you want to deploy a new version without taking downtime, how would you go about doing that? That's not a simple question or answer if your application is stateful, if you have databases running in the background. If it's not stateful, one answer is what's called a blue-green deployment: you have version one running, you stand up a version two, and while traffic is going to version one you run some smoke tests and poke at version two. When you think it's ready, you flip the router over to direct traffic to version two, and you leave version one up, so if there are problems you can switch back. That's one approach. Another is a canary release: version one is running, maybe with five instances, and you bring up a version two. You either put version two in the cluster alongside version one and just watch what it does, or you keep it off to the side and direct only a portion of traffic to it. If it looks good, you can initiate a rolling upgrade, taking one old instance down and bringing one new instance up at a time, or do a blue-green deployment or something like that. Any other questions? All right, well, thank you all for stopping by.