Hello, and welcome to another DevNation Live. It is 2018. We're going to kick off things with an incredible session by John Clingan and Edson Yanaga. Today, we're going to talk about MicroProfile, and I know you guys are super excited about microservices. Everyone's excited about microservices. And MicroProfile, of course, is going to give us a standardization around that technology, and these guys are going to deep dive into the topic right now. We're going to spend about 30 minutes together today. We do normally keep these sessions rather short, and that way we can kind of get into the topic pretty quickly, keep the fluff to a minimum, and try to dive right in. But I do appreciate you folks showing up on chat. We had a nice conversation right before we went live about where people were from and what the weather was like, because we got a lot of snow here in North Carolina, and that's uncommon right now. But let's go ahead and get this party started. If you do have questions, throw them into the chat box on the DevNation Live screen, and then I'll pick those up and verbally ask them of John and Edson at the appropriate time. So do put your questions out there. At this point, let's turn it over to John Clingan. All right, oops. Can everyone see my screen? Let me go full screen. Looks great. All right, so my name is John Clingan. I'm a product manager at Red Hat, and I'm responsible for WildFly Swarm as a runtime, and I'm also a co-founder and current co-project lead of MicroProfile. And if you think about it, I assume most of us are Java EE developers, and we've been doing it for a long time. The issue with Java EE is that it's kind of been, well, it's very mature for what it does, right? But the iterations have been slowing down, and we kind of recognize that as an industry. And so a bunch of us in the community, a bunch of vendors, kind of got together and thought, well, let's begin creating some specifications around microservices.
And the idea with that isn't necessarily to follow kind of the standards process. We actually wanted to move much more quickly. So we treat MicroProfile much more like an open-source project. We all collaborate around specifications, and then the various folks that are interested in creating implementations go off and create implementations of those specifications. So at the end of the day, it's a community of individuals, organizations like user groups, and vendors, all collaborating around microservices specifications. And this just gives you a snapshot of some of the vendors that are involved, and some of these vendors actually have implementations. So Red Hat is one of the co-founders, along with many of the other vendors, and we have a MicroProfile implementation that I'll be talking about. OK, so MicroProfile, as I mentioned, is a collection of specifications. In the first iteration of MicroProfile, what we did is we kind of set the foundation of the programming model. It's basically based on some of the Java EE specifications — CDI, JAX-RS, JSON processing — and a little bit more recently, we pulled in the Common Annotations specification. So that sets the foundation of the programming model upon which the other specifications layer. And over the last year, we've actually delivered — sorry, I'm counting on the screen — eight new specifications. And if you take a look, these are definitely geared toward microservices, toward microservices programming patterns. So Fault Tolerance is all about circuit breaking, bulkheads, retries, and timeouts. Config is about externalizing your configuration, so you don't have to tie configuration specifically to your application in one particular context. Health Check is a particularly interesting one, because the idea is you expose a health endpoint.
And if you're running in a cloud environment, like Kubernetes and OpenShift, those environments kind of take a look at health check endpoints and know if they should continue to direct traffic to that service. So as you can see here, we've actually got a pretty good swath of microservices programming patterns represented. And that's just what we've done basically over the last year. We actually launched in the summer of 2016, and we moved it into the Eclipse Foundation at the end of 2016. And that took a little bit of time, but it is a full Eclipse Foundation project. And throughout 2017, we actually created these eight specifications that you see here. Now, one thing I thought I would point out is that the specifications that are highlighted in this red box were released literally within the last month. So you will not really see these in implementations yet — they're still mainly specifications. But all the vendors and open-source projects that implement MicroProfile are kind of in the process of implementing these specifications. The other ones are available from multiple vendors, including Red Hat. So where are we at today? You've seen the specifications. And we've decided, for MicroProfile 1.4, which is the current release that we're working on — 1.3 is what we just finished — we're going to take a pause on features and maybe just do some opportunistic clarifications, or maybe a dot release of a specification. But we really want to focus on developer enablement. That means examples, better documentation, YouTube videos, that kind of approach. Because with a lot of functionality delivered, we realized, well, you know what? Let's help developers actually go off and implement their applications using these technologies. So in the first quarter of this calendar year, look for MicroProfile 1.4.
And you may even, throughout this quarter, be able to kind of take a look at the progress and actually start using some of our examples and how-tos and give us feedback. In fact, if you're interested in doing that, go to microprofile.io, click on the Discuss button, and you can join the MicroProfile discussion group, which is just a Google group. And tomorrow, we're actually having our first planning meeting around MicroProfile 1.4. And if you'd kind of like to see what's going on, or actually participate in creating demos and documentation around MicroProfile, you definitely have an opportunity to do that. All right, MicroProfile 2.0 we're scheduling for basically sometime in the first half of this calendar year, in the second quarter. And this is more around Java EE 8 alignment. So those specifications in that base programming model I mentioned — CDI and JAX-RS — will up-level to their Java EE 8 equivalents. So stay tuned for that. And we'll see what else is interesting that we might be able to put into MicroProfile 2.0. But again, join the discussion group and give us some of that feedback. All right, so the implementation that Red Hat delivers of MicroProfile is through WildFly Swarm. And WildFly Swarm is a rethinking of WildFly, the application server. WildFly, which is productized as JBoss EAP, is a highly modular application server. And what we've done with WildFly Swarm is we've taken that modularization and rethought it as kind of Maven dependencies, so that if you're only developing with a subset of the Java EE technologies, you can actually package just those technologies, along with your application, into a single JAR file and run it as an uber JAR, right? So you can just java -jar and run your application. Or, if you want, you can kind of create a customized app server.
And again, it comes back to choosing just the technologies that you want — maybe a subset of the Java EE technology and some of the MicroProfile technologies. You package them into a JAR file and put that into a Docker layer. In the WildFly Swarm world, we kind of call this a hollow jar: it's just the pieces of the app server without the app. And then you just apply your war file as a Docker layer on top of it. So really cool. Again, microservices is the focus of WildFly Swarm. It's rethinking Java EE as a microservices platform by adding in MicroProfile. We also optimize it for Kubernetes, so we map the features that are available in Java EE and MicroProfile to the features that are available in Kubernetes — I think Edson's gonna show some of that. So it's super lightweight, with uber JAR support and war file support as well. All right, I just thought I'd throw in one last point here: WildFly Swarm is actually a supported product at Red Hat under a product called Red Hat OpenShift Application Runtimes, which includes WildFly Swarm and Eclipse Vert.x, includes Spring Boot, and a Tech Preview of Node.js today. These are all supported within OpenShift Application Runtimes. If you wanna learn more about MicroProfile and the specifications, go to microprofile.io. If you wanna learn more about WildFly Swarm and actually begin coding with MicroProfile APIs along with the Java EE APIs, go to wildfly-swarm.io. And actually, one last point I'll make: what you'll find is that MicroProfile specifications are kind of being added to the Java EE application servers. A lot of vendors are actually augmenting their Java EE implementations with MicroProfile. So if you're wondering, do I have to use one or the other — really, at the end of the day, you'll probably be using both together.
And I think with that, enough talking — I'll let Edson show you some code. All right, let's see some code, because we're developers; we want to know how we can use all of these awesome features. So I'm screen sharing — can you see it? Looks great. Okay, so here I am at the wildfly-swarm.io website. This is my favorite MicroProfile implementation. And I just want to show how easy it is to start using the MicroProfile specifications and APIs. So I'll go to the project generator, and I want to create a new project. I'll just name the artifact devnation — we're doing something live. And the only dependency that I need to show you all of these features is the MicroProfile one. So I'm gonna choose the MicroProfile dependency, generate the project, unzip it, and open the project. So while my project is being configured, I want to open my endpoint. Here I have a hello world. I don't want to show you just this simple JAX-RS hello world, because we're going to show something different from MicroProfile, and we're going to start with the health checks. Health checks are a fundamental requirement these days if you want to create a cloud-native application, because we're not dealing with a single instance anymore — we have multiple instances running in production. You want to perform things like blue-green deployments and rolling upgrades, and you need a platform like Kubernetes or OpenShift for that. So you need something that we call a health check, and MicroProfile gives you that for free. And you can run this application just as it is right now — for my DevNation Live demo, I just run a mvn clean package to generate my WildFly Swarm fat JAR. Okay, I'm going to execute that; it's running. So if I go to my browser here, you can see that I still have my hello endpoint, but I also have my health endpoint, which gives me some statistics.
It shows me that my application is up and running, and a platform like Kubernetes or OpenShift would use the return code of this response — which is 200, okay — to say that, well, this application is up and running. But you can bet that just a plain endpoint doesn't give me much. I might want to check if I can make database connections, if I can successfully complete a query against my database. So I can implement my own health checks inside my MicroProfile application. How can we do that? I'm going to create a very simple health check here for the purpose of this demo — a class, SimpleHealthCheck. And yes, I know it's too small, so I'm going to increase the size of my fonts here. So what do I need to do? I just need to add the annotation @Health from MicroProfile, I need to say that this is application scoped, and this class needs to implement the HealthCheck interface. Then I implement its method, which is going to return the response — and you probably want to add your own custom business logic to check if everything is working properly. For example, you could check if you have enough disk space in your container to see if your application is going to run successfully. So I just have to return a health check response, name it "simple check", say that my application is up, and build the response. That's everything I need for a custom health check. So let's run our application again: I'm going to stop it, generate my fat JAR — up and running. Let's see my health check endpoint again. Now you can see that I've just added my custom health check here, named "simple check", and the status is up. If my health check wasn't going well and it went down, the outcome of the overall health check would be down, and my application would report as being unavailable to my platform, like Kubernetes or OpenShift.
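Putting the steps Edson just walked through together, the custom check looks roughly like this — a sketch against the MicroProfile Health 1.0 API, where the class and check names are simply the ones from the demo:

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

// Discovered automatically by the runtime and merged into the /health payload.
@Health
@ApplicationScoped
public class SimpleHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // A real check would test a DB connection, free disk space, etc.,
        // and call .down() when something is wrong.
        return HealthCheckResponse.named("simple-check")
                                  .up()
                                  .build();
    }
}
```

With that class deployed, the /health response lists this check by name alongside the overall outcome, and the endpoint returns a non-200 status once any check reports down.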
So you could use this status in your rolling upgrades or other deployment strategies that you might have, such as canary deployments. So this is the basic health check thing that I wanted to show you. And also, you can't have a truly proper cloud-native application if you don't have some kind of metrics that you can expose to a monitoring platform. Your application needs to provide these kinds of statistics so you can later check if your application is behaving properly in production. And again, MicroProfile gives you that for free. If I just access another endpoint called metrics, I can see that by default I get a lot of different metrics here, like CPU, garbage collector, thread count, heap, and all of the other things. And if you want to, you can also provide your own custom metrics, but it takes a bit more effort, so we're not going to show it today, because I have other interesting stuff to show you, OK? So that's basically health checking and metrics. Now let's start to dig into another MicroProfile feature, which is the Config API. You want to externalize your application's configuration. In my hello world endpoint, sometimes I need to get some information from property files, from environment variables, from Java system properties passed on my command line. How do I get this information, right? Instead of trying to implement something by yourself, or using some property-source library, you can use something that's provided for you, integrated with CDI. So let me get to my code. Suppose I need a property called greeting. So I'm going to get a String greeting, and I want that property to be injected automatically here through CDI. I'm going to say it's injected, and I'm going to use the @ConfigProperty annotation from MicroProfile and say that, well, this property is named greeting, and I can use that in my application. So instead of saying hello from WildFly Swarm, I just want to use the string that I just provided.
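As a sketch of what that injection looks like with the MicroProfile Config API — the greeting property is the one from the demo, while the farewell field is a hypothetical extra added here only to illustrate the Optional variant Edson also describes:

```java
import java.util.Optional;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
@Path("/hello")
public class HelloEndpoint {

    // Without defaultValue, deployment fails if "greeting" is not set anywhere
    // (property file, system property, or environment variable).
    @Inject
    @ConfigProperty(name = "greeting", defaultValue = "Hello")
    private String greeting;

    // Or make it Optional and decide the fallback in code instead.
    @Inject
    @ConfigProperty(name = "farewell")
    private Optional<String> farewell;

    @GET
    public String hello() {
        return greeting + " / " + farewell.orElse("Aloha");
    }
}
```

The defaultValue route is declarative; the Optional route keeps the fallback logic in code, which is handy when the default needs to be computed dynamically.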
So that's how it works. My WildFly Swarm application is running. And just in case you don't want to build the package every time, the WildFly Swarm Maven plugin can do this automatically for you — you can run your application inside Maven, which streamlines your workflow while you're in development. Oops, we got some error — and yes, the text is too big — but I got a deployment exception which states that this code can't go into production, because you said that you need a config property called greeting, and this property is not available anywhere. So I need to provide this configuration. If I didn't want this config property to be mandatory, I could say that, well, it could have a default value of hello. That's one option. But one of the interesting options that we have for this kind of thing is that it can be an Optional String. So if you want to solve this in code instead of declaratively in the annotation, or you want your fallback value to be dynamic, I could say that, well, if no value is provided for the property, use something like aloha — orElse, I'm going to say aloha. So it can be an optional property too. Now if I run the application with the Optional, it should run properly. OK, now it's ready. If I just hit my endpoint — there's the fallback value. Now let's see something different: I'll try to provide the value. The first option is for you to provide a property file, under resources, META-INF. So if I create a file, microprofile-config.properties, I'm going to say that now greeting should be olá, which is the Portuguese version. Now let's see the output if I provide a property file. It's ready — it's olá, because I provided a file. But just in case I don't want to use the information provided in my microprofile-config.properties file, I can provide this information on my command line. I can say that greeting should be salut, and run java -jar target/demo-swarm.jar. The command line property takes precedence over the property file, so we see this version of my application running: salut. And I can also export an environment variable called GREETING and say that it should be something else, and run this again. Sorry, it's not really a hello, but I'm out of greetings off the top of my head right now. So that's how you use the Config API inside your MicroProfile application. And the last thing I want to show you right now is the circuit breaker configuration. It's very simple. If you ever used or heard about Hystrix, for example — well, Netflix OSS provided us a great implementation of a circuit breaker. But now that we're in 2018, maybe we can use a standard way of providing this kind of capability in our applications. So we're going to use a circuit breaker, and I want to show that it's very, very simple for you to provide either configuration or fallback implementations for your failing methods. To demo this, I just created another REST endpoint, which is already running here at my localhost:8081. It's providing a JSON response with a list of persons. So I'm going to create a remote request for that. First, I want to create a new class here called Person. It's going to have an ID, it's going to have a name. Let's generate the getters and setters, and I'll add a constructor with all of these options, and also a default constructor. And those of you that follow me know that I'm not a fan of constructors, so I'm going to refactor and create a factory method. Back here in my endpoint, now I'm going to create another endpoint — maybe I can return a Response, or maybe I can reply directly. I want a list of Person: List<Person> getPeople().
I'm going to say that I want the path to be slash person, with a GET. And I want to make a remote call for that. I also want it to produce application/json. So what do I need to do? I'm going to use the JAX-RS client: ClientBuilder.newClient(). For the client, I'm going to target localhost:8081/person. I'm going to take this target — target.request().get() — and I want to get a new GenericType, which should be a List of Person. That should return my people. And maybe I don't want to return the full JSON list, but — well, let's return this for now; I want to perform some transformation later. OK, this should be OK. Now let's run this application with the Maven plugin. It's ready, so if I just access my endpoint here, slash person — OK, I get the full response. Now it's working properly. It's just mirroring the remote response here, but I want to show that I'm performing some transformation. So I'm going to return just a list of Strings: I want to return people.stream().map(Person::getName).collect(Collectors.toList()). Let's run again, just to show that I'm performing some kind of transformation. You see, now I have a JSON array consisting of only strings. And here I have my remote endpoint, which returns me the full Person objects, and I'm performing some transformation. Now, what happens if my remote endpoint goes down? My remote endpoint is not responding anymore, and my application now fails, because I don't have any circuit breaking. If I want to provide a circuit breaking implementation with MicroProfile, it's pretty easy. I just need to create another method returning a list of Strings — something like cachedNames, where you'd probably implement a cache approach for that. And I'm going to say that, well, I want to return a new list, and add something to it — it's going to be John. Sorry, John — just so you see that the fallback is different.
And I'm going to say that, well, just in case this thing fails, I want the fallback method to be cachedNames — because if I get an exception, I want to call this other method that I have locally; I don't want the exception to blow up in my user's face. So with that in mind, we're going to run our WildFly Swarm application again. My remote endpoint is still down, but if I access it right now, you can see that I have a fallback. And this is just the default behavior. The MicroProfile circuit breaker has sensible defaults, but if you want to customize it, you can use the @CircuitBreaker annotation — it has a lot of different parameters you can choose for how you want to configure the circuit breaker implementation. But if you want to stick to the defaults, just add the @Fallback annotation and you already have your fallback. You can use cached values or a fixed string, which is what I used. And that's what I wanted to show you today. You know there's much more to MicroProfile, but that's everything that we could show you in just 15 minutes. Yeah, we are definitely running out of time, but there are some great questions I want to make sure we hit you guys with. One is, Ilya specifically saw that you're using the health and metrics endpoints and you're returning JSON payloads in that specific case. Why not have something like JMX and being able to hook up JConsole? For instance, is there a fraction in Swarm to turn on JMX? And can I use JConsole to interact with that, or do I have to go through this health check, which is basically a REST endpoint with a JSON payload? Okay, you have a JMX endpoint available by default, so you don't even have to add that — WildFly Swarm already exposes it. And I just said JSON, but it's not only JSON: the Metrics API provides you plain text, provides you JSON, and also provides you a Prometheus format. Okay, no, that's fantastic. Hopefully that answers the basic question.
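For reference, the endpoint Edson built up — the JAX-RS client call plus the MicroProfile Fault Tolerance fallback — fits together roughly like this. This is a sketch: the port 8081 and the names come from the demo, and Person is assumed to be the POJO with a getName() getter created earlier:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.faulttolerance.Fallback;

@Path("/")
public class PersonEndpoint {

    // Calls the remote service and maps the full Person objects down to names.
    @GET
    @Path("person")
    @Produces(MediaType.APPLICATION_JSON)
    @Fallback(fallbackMethod = "cachedNames") // invoked when the remote call throws
    public List<String> getPeople() {
        List<Person> people = ClientBuilder.newClient()
                .target("http://localhost:8081/person")
                .request()
                .get(new GenericType<List<Person>>() {});
        return people.stream()
                     .map(Person::getName)
                     .collect(Collectors.toList());
    }

    // Must match getPeople()'s signature; a real app might serve cached data here.
    public List<String> cachedNames() {
        return Arrays.asList("John");
    }
}
```

With the remote endpoint down, GET /person then answers from the fallback instead of surfacing an error, and a @CircuitBreaker annotation from the same spec can be stacked on top when you want to tune failure thresholds and open/close behavior.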
There was also a point that on our generator page, there are apparently some duplicate artifacts in that list. I've not seen them, but that came up during the chat while people were watching the demonstration — they felt there were some duplicates there. By the way, if you guys see anything like that, please do open an issue for us and we'll definitely chase that down. And John, there's actually a great question for you. For a person who's already bought into JBoss EAP, Enterprise Application Platform, and they're not buying into the container universe just yet — how can they start taking advantage of things like MicroProfile and WildFly Swarm capabilities if they want to do microservices, but all they are is an EAP user today? Yeah, good question. So you don't have to use WildFly Swarm inside of containers, obviously, and what Edson showed was just running on the desktop. So you can continue to use Swarm in that manner. Right now, from a Red Hat perspective, if you would like to use the MicroProfile APIs, then WildFly Swarm is the way to do that. Upstream, WildFly Swarm has pretty much all of the Java EE technologies plus MicroProfile; in terms of product, WildFly Swarm includes the WildFly — sorry, the Web Profile technologies, not necessarily the full Java EE app server stuff yet. So provide feedback on how you might wanna use MicroProfile in EAP, or how you might wanna use WildFly Swarm outside of a containerized environment — we'd love to hear that feedback. Okay, and one more great question I'm looking at here, and I did ask a clarifying question myself. It's from Carlos, and it's related to the fact that if I'm using WildFly Swarm in an OpenShift or Kubernetes world, is there any special configuration to deploy multiple instances of WildFly Swarm in a cluster? And I assume in that case, they mean application-server-level clustering.
So John, what information do you have on that form of clustering in the context of a Kubernetes or OpenShift cluster? Yeah, the interesting piece there is that in a containerized environment like Kubernetes, where you have a bunch of administrative capabilities already available to you, it makes less sense to actually use the built-in application server kind of clustering. What you get with Kubernetes is the ability to automatically scale up and scale down your instances, and in that sense, they're already being managed as a cluster. What you don't get in that scenario is session replication. But in kind of the microservices world, what you would tend to use is a Redis or an Infinispan or something like that — some place to externalize your state. I believe that EAP might actually support clustering on top of Kubernetes; I'd have to check. Actually, I think one of you guys might know that a little bit better, but generally speaking, clustering is kind of managed by Kubernetes through scaling, auto scaling, and replica sets, and you externalize your state. Right, right. And that was really the clarifying question I asked of Carlos. If all you're after is failover and load balancing, you get that for free now in a Kubernetes world, an OpenShift world, right? It does that automatically across the pods that you have out there — multiple instances of the same pod, same application. And then if you're dealing with shared application state, externalize it to something else, is what we often talk about. We have an interesting demo out there that specifically does that with JBoss Data Grid and Infinispan, where we actually take a Spring Boot application and externalize all the application state to it. So you actually can see rolling updates happening in Kubernetes with no impact to the actual end user experience. So one other question, and we're running out of time: microservices with Java EE — is it important for the Android developer?
So does the Android community see anything here in this world? John, what do you think about that? I haven't even thought of that. That wasn't really the target audience of MicroProfile, for instance. But it gets back to: join the discussion group, right? Again, microprofile.io — go to the discussion group, join the Google group from there, and ask these questions and provide us use cases, right? Because this isn't really something that we had really kind of considered. Actually, one quick thing I thought I would bring up as you were watching Edson kind of writing some code: JBoss, or upstream WildFly, has a deployment scanner, right? So it can redeploy applications when the war file changes. You can actually do that with WildFly Swarm as well — there's a built-in deployment scanner. So you can create what we call a hollow jar — again, it's all the pieces that you want in your app, sorry, in the app server — and you pass in your war file kind of as an option. But the nice thing is, any time that war file changes, it'll get automatically redeployed, so you don't have to restart WildFly Swarm every time. So it's kind of neat. Okay, one last question, I think, because it's a pertinent one from Doug. He wanted to know if the WildFly Swarm-generated war file could also be deployed on EAP, and that's something I never thought of before. Have you looked at that? Yeah, what WildFly Swarm will do: hidden inside of the target directory that Edson had, there's both an uber jar that you can java -jar, and there's a dash-war file — sorry, a dot-war file — as well, that can be deployed to JBoss EAP. So if all you're doing is Java EE technologies, then the scope is basically "provided", and your war file will be pretty vanilla — just bytecode for your app. Anything that's not provided will be included in the war file.
So you could — although I haven't quite tried this yet — you could basically take the war file that Edson has with the MicroProfile stuff; the libraries will actually be included in the lib directory, and you could deploy that into EAP. Something worth trying out. Yeah, that's an interesting use case. I hadn't considered it. So Doug, thank you for bringing that up. And we are out of time for today, but I'll thank you guys for your time, your questions, your attention. As always, watch the URL for DevNation Live — we will continue putting on new content all the time. There'll be lots of great sessions coming down the road for 2018. John, Edson, thank you guys so much today for giving us a nice little intro to MicroProfile. It's always fun to watch Edson type live. Thank you. All right, well, thank you guys so much, and we'll see you out there in the Twitterverse, or always feel free to email me. You guys should all have my email address based on the announcement emails that have gone out about this session, but do check us out at DevNation Live, right? So developers.redhat.com slash DevNation Live. Thank you so much.