Hello, can you hear me? Okay, I will dive right into it, because I have a really long talk; there is a lot of stuff going on in MicroProfile right now. Just a quick show of hands: how many of you have heard about MicroProfile? Yeah, a lot. And how many of you have actually tried it? Okay, so at least you will have something new to learn. Quickly about me: my name is Martin Stefanko, I work at Red Hat on WildFly and the JBoss EAP application server, and for the last year or so I have also been an active committer to MicroProfile. You can find me on Twitter. So, to start: why MicroProfile? Let me just describe how enterprise Java has looked over the past 20 years. What we had was Java EE, currently called Jakarta EE, which is a set of 34 or 35 specifications aimed at web development in Java. The major problem with these specifications was the release cadence over the past decade or so, which was around three or four years for a basic release. Around 2014, I think, containers and the cloud came, so we in the community decided that we needed to do something about it. We could not wait for the next Java EE release, and that's why a group of vendors like Red Hat, IBM and Tomitribe got together, created a new initiative, put it under an Eclipse project, and basically aimed it at creating an open source specification for the optimization of Java microservices. These are all really nice and fancy words, but what it really is, is again a set of specifications. Currently we have 12 of them; some were adopted from Jakarta EE and some were added as brand new. This is the latest release, from October last year, where you can basically see everything that currently comprises MicroProfile. There are also a lot of topics currently under discussion. Just to mention some examples:
We are working on distributed transactions based on the saga pattern, reactive programming, service meshes (as opposed to the previous talk), or MicroProfile Concurrency. There is a really broad community of people contributing to MicroProfile, and there are also a lot of implementations that you can currently choose from. To name some differences from Java EE: we are doing, well, trying to do, everything open source and in the open. That means that everyone can be heard, every decision is made in public forums, and everyone can be part of the overall decisions. The code-first approach means that every specification that is to be included in MicroProfile needs to first have a working TCK and some implementation that passes this TCK. This is a different concept from the reference implementations of Java EE; there needs to be some working code first. We are currently doing three releases per year, in the mentioned months, and in the two and a half years since MicroProfile started we already have six releases, so you can see the difference from Java EE. The next release is planned for February 6th, I think, so in two weeks; MicroProfile 2.2 will again contain a bunch of updates, including new APIs. Everything that I have talked about so far is available on microprofile.io, and much more as blogs or videos, so please go and check it out. What I will try to do in this presentation is to show you each individual specification and give you an idea of how you can use it in your Java microservices, and for this reason
I created a simple application where we have three services: one is profiling, which is based on Thorntail, Red Hat's offering of MicroProfile; the second one, membership, is based on Payara Micro; and the third one is based on Open Liberty, which is the offering from IBM. So, without further ado, let's start with the whole bottom row, as these specifications come from Jakarta EE. I suppose that most of you are already familiar with them, so I will just skim through them really fast. JAX-RS, the Java API for RESTful web services, is basically the way we have been defining REST, or HTTP, endpoints in web applications for the past several years. All you need to do to define such endpoints is to create a class that extends the JAX-RS Application class and annotate it with @ApplicationPath. The value provided in this annotation is then used as the root resource path, which is prepended to every resource that you create. To create an actual resource, you need to do two things: add @Path annotations, which specify the individual parts of the path for your individual resources, and add some HTTP method annotations, which specify what HTTP method you need to invoke on this resource. In that sense, when you access, for instance, /api with the POST HTTP method, that method will get invoked. I hope that you can see it. JSON-P, or JSON Processing, is a specification that allows you to work with JSON documents in Java. Basically, all it is is this single Json class, which provides factory methods for each individual sub-functionality of the specification. In this example, we are creating a really simple JSON document with the builder pattern.
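As a rough sketch of what was just described, the Application subclass, the @Path and HTTP-method annotations, and a JSON-P builder might look like this; the class, path and field names here are made up for illustration, and it needs a JAX-RS and JSON-P implementation on the classpath to actually run:

```java
import javax.json.Json;
import javax.json.JsonObject;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// The @ApplicationPath value becomes the root resource path,
// prepended to every resource in the application.
@ApplicationPath("/api")
public class RestApplication extends Application {
}

// In its own file: POST /api/events invokes logEvent()
@Path("/events")
class EventResource {

    @POST
    public JsonObject logEvent() {
        // JSON-P: a really simple document built with the builder pattern
        return Json.createObjectBuilder()
                .add("service", "profiling")
                .add("points", 10)
                .build();
    }
}
```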
You can also do parsing based on events as you are going through your JSON documents, or make pointers directly into the nested structure of your JSON document and change, or patch, its individual parts. JSON-B, or JSON Binding, came into existence because JSON processing was missing a really important feature, and that's mapping Java classes directly to JSON objects, as is shown in this example. So JSON Binding really does only this one thing. It contains two classes: JsonbBuilder, which can be used for configuration of the document that you are going to output, which may be pretty formatting, for instance; and the Jsonb class, which only contains toJson and fromJson methods, where you can transform directly between a Java class and its JSON representation. However, I don't really think that you will ever be using this directly. Rather, you will be using it every time you specify @Consumes or @Produces, which set the Content-Type and Accept headers in your HTTP invocations, and you will return some classes directly, so the framework in the background will automatically marshal and unmarshal your classes to and from JSON documents. CDI, or Contexts and Dependency Injection, is basically a way to put together all of the individual parts of your application and frameworks in a declarative manner, so you only need to specify what dependencies your class requires, and you will get them injected, with @Inject annotations, directly at runtime. The runtime then guarantees you that this object will never be null, and you can directly use it in your application. The runtime knows what kind of object to inject depending on the type of the object that you are injecting, like for instance here.
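A sketch of that kind of injection, with a hypothetical Client bean (both class names are made up, and this needs a CDI runtime to actually work):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// Hypothetical bean: @ApplicationScoped means one shared instance
@ApplicationScoped
public class Client {
    public String call() {
        return "response";
    }
}

// In its own file: the runtime guarantees 'client' is never null here
@ApplicationScoped
class ProfilingService {

    @Inject
    Client client;

    public String profile() {
        return client.call();
    }
}
```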
We are injecting a Client object. However, you can have multiple beans that are able to be injected into a single type, and for that reason you can also use a CDI feature which is called qualifiers. These are custom annotations that you include both on the bean definitions and on the injection points to specify which instance you want to inject. This will be useful as we get to the MicroProfile specifications. So how can you define a bean? In two ways. You can annotate it with some scope; the most useful ones are @RequestScoped and @ApplicationScoped, and these basically just define how long the bean is going to exist. @RequestScoped will create a new bean for each individual server request. There are also other scopes that you can use, but I will only mention @Dependent, because this is the default one, and it just says that the bean has the same scope as the bean into which it is injected, so it can change depending on where the injection happens. The other way to create a bean is with the @Produces annotation, which can be placed on a field or on a method, where you can do any additional custom configuration that you may need for your object, and then this object is returned. So this would be everything that I have to say about the annotations that we took from Jakarta EE; let's dive into the specifications that we added on top of that. Config 1.3 is really a way to extract your configuration away from your application, to be able to change it dynamically, with this change reflected in your running application without the need to redeploy. How can you do this? Again by CDI injection and a custom qualifier, which is called @ConfigProperty. This annotation has only one required field, which is the name of the property that you are going to look up, and it can also have a default value. Then you can directly use it in your code; easy as that. There are three different ways you can inject configuration into your
application. The first one: if you inject the value directly as an object, you will need to somehow specify this value, so if you don't specify a default value and you try to deploy this application without the required property, you will get an exception on start. The second one is injecting an Optional value, which means that you can then get an Optional.empty(). And the third one, which is the most useful, is the CDI Provider, where each time you access the property, the runtime will reload it from your configuration. This is how you can actually update dynamically without repackaging and redeploying your application. This specification also allows you to inject the whole configuration directly as a Config class, where you can call getValue, getPropertyNames, and so on. There is also something which is called a config source, and a config source is a location where the runtime is going to look for your configuration. By default there are three config sources: we look first in the system properties; if the value is not found there, we look in the environment properties; and if we are not successful, we scan every META-INF/microprofile-config.properties file on the classpath. However, the specification also allows you to define your custom config sources, which can be, for instance, config maps or in-memory YAML files, etc. Doing something like this is really easy: you can just implement the ConfigSource interface where, apart from a set of default methods, you need to implement methods like getValue or getProperties. You can also override something like getOrdinal, which is the priority of your config source; for instance, this value means that it will be loaded even before the system properties. And you can optionally override the name of this config source, which will then be used for some logging purposes. If you again need some dynamic configuration of your config source, for instance if you need to open a YAML file somewhere and find
it somewhere, you can do that by implementing a ConfigSourceProvider, which has only one method, returning a collection of the config sources that you want to use, like this. The last feature that I want to talk about in Config is converters. So far I have only shown you string injection, which is straightforward because all the properties are defined as strings, but the Config spec also requires each implementation to define some default converters, which are shown on the left-hand side, along with their unboxed variants, and from version 1.3 it also requires you to be able, by default, to parse some collections like arrays, Lists and Sets. The default delimiter for these collections is the comma. But what happens if you want to inject your custom type, which is not known to the runtime? The specification also allows you to do that. For this, you just need to implement the Converter interface, which takes the type of the object that you actually want to convert to, and in its convert method you can do your own splitting of the string property and return your object. So this would be everything that I had to say about Config; let's move to Health Check. Health Check came into existence because we are targeting MicroProfile at cloud providers, and if you are familiar with Kubernetes and its liveness and readiness probes, this is the basic concept: how you can tell, from the application, whether the application is running or not. MicroProfile Health aims to be compatible with these cloud providers. It needs to be machine-to-machine understandable; that means it needs to map to some default protocol, which, for now in Kubernetes at least, is response codes. And it should provide enough information for the human administrator; we will get to what that means. So to define your custom health check, you really need to again implement another interface, which is called HealthCheck, and mark it as a CDI bean
with the @Health qualifier. This is a functional interface that contains only one method, which is called call(), and it returns a HealthCheckResponse. This object has a really nice builder interface, where you need to specify two required properties: the name of the health check and its final state, which is a boolean, up or down. It also provides you utility methods, which are up() and down(), and additionally you are able to specify, with the withData() method, some custom key-value pairs that you want to include in your health check. This is the information that you can add on top of the basic machine-to-machine understandable responses, so the human administrator can, for instance in case of an error, find out what actually happened. This check is then available at the /health endpoint, where you will by default get a JSON document which contains two fields: the first one is the global outcome of all the checks that are included, and the second one is the array of checks. On the right-hand side you can see the same check that we defined on the previous slide, with all the custom data that we specified with withData(). That would be it for the HealthCheck interface, and basically we are halfway through, so are there any questions so far?
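A health check like the one described might be sketched as follows; the check name and the data values are made up, and the org.eclipse.microprofile.health imports mean this only runs inside a MicroProfile runtime:

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

// A CDI bean marked with the @Health qualifier so the runtime
// includes it in the /health response.
@Health
@ApplicationScoped
public class DatabaseHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.named("database")  // required: the check name
                .withData("host", "localhost")        // optional key-value pairs
                .withData("port", 9200)               // for the human administrator
                .up();                                // required: the final state
    }
}
```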
I don't think that I understand the question. Yes, this is by default exposed at /health; anyone can call it. Okay, so if there are no other questions, let's move to Metrics. Metrics speaks for itself: it is a way to define custom application metrics in your application, so you can then collect them somewhere. The Metrics specification specifies three different scopes of metrics. The first one is base, which is something that each implementation has to provide. Then there is the vendor scope, which are optional vendor-specific metrics. And then, what is really important for the end users, is the application-scope metrics, which are something that you define in your code. Here, just for general knowledge, is a list of all the base metrics, where you have really generic stuff such as heap memory, thread counts or GC times. But what is really nice about this specification are the application metrics, the custom metrics which are specified by you as a developer. On this slide we have an example of two of them. The first one is @Timed; again, it speaks for itself, it just measures the time, in your specified units, of the execution of the method where it is placed. The second one, @Counted, again just counts how many times this method is invoked. So we already covered @Timed and @Counted, but there are also annotations for @Gauge, which is any custom metric that you want to use; this is the only one where you are required to specify the unit in which it is measured. @Metered is used to measure the frequency of invocations, so how many times per some time unit you are invoking a method. And @Metric is used to inject the whole scopes, or the individual metrics, as objects via CDI injection. I have an example on this slide where we are injecting a Counter directly, and this just means that we as developers are then responsible for increasing it or working with this value.
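Those annotations and the injected counter might be sketched together like this; the metric names are made up, and the org.eclipse.microprofile.metrics imports mean it only runs inside a MicroProfile runtime:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.metrics.Counter;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Metric;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
public class MembershipService {

    // Injected application-scope metric: the developer is responsible
    // for increasing it or otherwise working with its value.
    @Inject
    @Metric(name = "membership-lookups")
    Counter lookups;

    @Timed(name = "members-timer")      // measures execution time
    @Counted(name = "members-counter")  // counts invocations
    public void listMembers() {
        lookups.inc();
        // ... business logic ...
    }
}
```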
This metric will still be included in the general metrics that are collected. There are also other objects for which there are no annotations, like histograms and others, but you really need to have a use case to use that kind of metric. Again, similar to health check, we have a single endpoint where all the metrics are exposed: /metrics. But the specification also defines that you can work with paths and select individual scopes, and also access individual metrics. If you are accessing individual metrics, or basically all of them, two HTTP methods are supported. The first one is HTTP GET, where you actually get the value of the metric, and the second one is HTTP OPTIONS, where you get all the metadata associated with the metric; that is usually the type, unit, and some description and name. However, you need to explicitly ask for the JSON format that was shown on the previous slide, because by default /metrics will output something like this, which is a format that is native to Prometheus. And Prometheus is a software toolkit for collecting metrics and running easy queries on them. So if we add metrics to our application, we need to run a Prometheus instance somewhere that will periodically ping the /metrics endpoint on each individual microservice; we also added Grafana, which is a graphical dashboard for Prometheus metrics, and in that sense you can easily spin up something like this, where you can see, for instance, even a health check, or the number of requests to the membership service. With that I think I'm finished with metrics, so let's move to OpenAPI. MicroProfile OpenAPI is a specification that is based on yet another specification, which is called OpenAPI version 3. It just provides a Java, or Enterprise Java, binding of the annotations of this OpenAPI standard into your MicroProfile applications. It is based on the annotations from Swagger, if you are familiar with it, and basically it outputs a really
generic format which is easily readable both by machines and by humans, so it's used by many companies, as you can see on this slide. If you just include an OpenAPI implementation on your classpath, you automatically get an OpenAPI document exposed at your /openapi endpoint. It is really basic, without much information, but it can also be useful to someone, and then you can start augmenting your application with Swagger — sorry, OpenAPI — annotations, and there are really around 30 of them, so I will just mention some. You can specify, for instance, the title of your application, the version, and contact information, or the servers where you intend your application to run. On individual endpoints, you can specify the name of the endpoint and descriptions, and what is very useful is that you can specify API responses; that means you can add descriptions to the return codes, or response codes, that are being returned from your endpoints. You can also specify what content will be produced or is intended to be accepted. After you do all of this, you will get a really nice and long document with all of the stuff you configured. But what is really nice is that this is Swagger compatible, and you can easily create a UI, which is really simplistic but serves its purpose; and if you are not skilled in front ends, you can get a really nice clickable interface with just a Maven dependency. So this would be OpenAPI. Are there any questions? Okay, so let's move to another, simpler, but really important one: Rest Client. Rest Client is a specification that aims to provide a type-safe REST client on top of JAX-RS that you can use directly in your applications. What does that mean? If you are familiar with how a REST client invocation looks in JAX-RS, it can be something like this.
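Such a plain JAX-RS client invocation might look roughly like this; the URL, class name and status handling are made up for illustration, and it needs a JAX-RS implementation to actually run:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class VerboseMembershipClient {

    public String listMembers() {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                    .target("http://localhost:8080/api/members")
                    .request(MediaType.APPLICATION_JSON)
                    .get();
            // a really simple check that we got a valid response
            if (response.getStatus() != 200) {
                throw new IllegalStateException("Unexpected status: " + response.getStatus());
            }
            return response.readEntity(String.class);
        } finally {
            client.close();
        }
    }
}
```

With MicroProfile Rest Client, this boilerplate collapses into an annotated interface, as described next.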
I am doing here a really simple check of the status, whether we got a valid response, and what Rest Client allows you to do is to reduce this to this. It's really as easy as that; this is eventually what it boils down to. This is a normal JAX-RS resource definition, as we saw when we were talking about JAX-RS, and the way you define a rest client is basically to create a really similar, or the same, interface as this one and just add a single annotation: @RegisterRestClient. This annotation is used so your runtime actually knows that this interface shouldn't be parsed as a JAX-RS resource, but as a rest client. In this example, I am also registering a custom provider, which is something that is again special to MicroProfile Rest Client, and we will get to it in a few slides. So how can you then use this interface? In two ways. You can directly inject it with the @RestClient qualifier, and then you can directly use it in your code. However, you are required to specify the intended target, or the URL where you expect your service to be exposed, and this is done by a MicroProfile Config property, which is the fully qualified class name followed by /mp-rest/url, something like this. The other way to specify it is via RestClientBuilder, where you are required to explicitly specify the URL you are using. So, to get back to the provider that I mentioned two slides back: Rest Client provides something which is called ResponseExceptionMapper, which allows you to map HTTP response codes directly to exceptions that are going to be thrown in a method invocation of the client. It needs to be a JAX-RS provider, and it needs to override two methods. The first one just says which HTTP response codes should be mapped by this mapper, and the second one actually transforms the response, or the response code, into the exception that will be thrown in the method invocation of the client. When we put all of this together, we can basically mask, or make transparent, the fact that we are
using REST in the background: when we make a client invocation to a server, if the response is successful, we are transforming some POJO with JSON-B into JSON, and on the other side, again with JSON-B, we deserialize the JSON into a POJO. And if some error happens, we will be throwing an exception on the server, this will get mapped into a response code, and the same response code can again be mapped to the same exception on the client. Any questions? Okay, so let's move to JWT. JWT stands for JSON Web Token, and it's a way to do authentication and authorization in your MicroProfile applications. This is again based on a different standard, or specification, that is supported by many companies, which is called JWT, and basically what it boils down to is the usage of tokens, or security tokens. We don't really have space here to go into how JWT works in the background, but there is a really nice presentation from David Blevins from Tomitribe on this topic, so if you are interested, there will be a link in the slides at the end. If we want to authenticate a user with our application, we have for this reason a user service, which is using only basic authentication, while the other services are using JWT. So if a user wants to get a token, they need to first authenticate with the user service, which in turn returns a token back to them.
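To make the token itself concrete: a JWT is three Base64url-encoded parts joined by dots (header, payload with the claims, signature), and the claims can be taken apart with nothing but the JDK. In this sketch the claims and the signature are made up; a real token is signed by the issuer and the signature must be verified:

```java
import java.util.Base64;

public class JwtSketch {

    // Build a demo token: header.payload.signature, each part Base64url encoded.
    static String demoToken() {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header = "{\"typ\":\"JWT\",\"alg\":\"RS256\"}";
        String claims = "{\"upn\":\"jdoe@example.com\",\"groups\":[\"user\",\"admin\"]}";
        return enc.encodeToString(header.getBytes()) + "."
                + enc.encodeToString(claims.getBytes()) + "."
                + enc.encodeToString("fake-signature".getBytes());
    }

    // Any service in the call chain can decode the claims locally,
    // with no extra round trip to the token issuer.
    static String claimsOf(String token) {
        String payload = token.split("\\.")[1];
        return new String(Base64.getUrlDecoder().decode(payload));
    }

    public static void main(String[] args) {
        // prints {"upn":"jdoe@example.com","groups":["user","admin"]}
        System.out.println(claimsOf(demoToken()));
    }
}
```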
I also provided examples for all of the stuff that we will be doing from now on, but I don't really have the time here to go through all of them, so if you are interested, please be sure to check them out afterwards. The JWT that is returned is a Base64-encoded string containing three parts delimited by dots. The first one is the header, which basically contains only the type of the token, which here is JWT, and the used algorithm. The payload defines a set of claims, which is basically the only thing you are interested in in a JWT; these are key-value pairs. And the third part is the signature by the token issuer. This is a decoded version of that payload part, where we can see the set of key-value pairs that mean something to the services that are going to authenticate the users. MicroProfile JWT only specifies two required claims. The first one is upn, which uniquely identifies the user, or the principal, and the second one is an array of groups, which will be directly mapped to Java security roles in your application. I also included here, as a good practice, how to specify this in OpenAPI, but the JWT part is really just this, where you have a custom annotation, @LoginConfig, which is used to replace the login configuration from web.xml (if you are not using web.xml; if you are, that security configuration will take precedence), and where you declare the roles that are being used in your application. On your individual JAX-RS resources you can then directly specify the roles that are allowed to access each method; if the caller doesn't have them, you will get back a 401 Unauthorized. The only thing that is missing right now is how to specify a public key on the individual services, so that for instance profiling is able to verify the token that is received from the user, and this is again done by
integration with MicroProfile Config. There are two ways to specify it: directly with a public key property, or with a public key location, which can be a file or a URL. So, in that sense, if our user already has a token, they just send it to profiling; profiling will verify it, check that the user has a role that is able to access the resource, and if he or she has it, return the data. Again, I also provided an example for this, but I don't have time to go through it. What is really nice about JWT is that this token can be passed through the chain of your invocations, and each service on the path is able, by itself, to verify the token and check the claims. This is something that is really nicer than other ways of authentication, because you are not required to do any additional traffic just to authenticate the user. That would be everything for JWT; let's move to OpenTracing. OpenTracing is a specification which is used to track requests in your distributed system.
So basically, you will include some identification in your request, and you can see which services have actually been included in the processing of this request. This is particularly useful if you have hundreds of services; you don't know where the branching will happen, so you need to track where the errors are. It again builds on yet another specification with the same name, OpenTracing, and there are many implementations, or software compatible with this specification; the most used ones are Jaeger and Zipkin. So how can you use this in your code? If you just include a MicroProfile OpenTracing implementation on the classpath, you will directly get tracing for each JAX-RS method. But this specification also provides the @Traced annotation, where you can optionally disable tracing on some methods, or include methods which are not JAX-RS endpoints. Tracing works with the concept of spans, where a span is a basic logical tracing unit. By default, OpenTracing will start a span when you are entering a method and close, or finish, it when you are leaving the method. But MicroProfile OpenTracing also allows you to directly inject a configured Tracer instance, which gives you a programmatic way to control the spans, so this can be tied to the business execution in your method. What you get back is something like this, where you can see which services have been included in the processing, and for how long. This is a Zipkin console, and this is a similar example in Jaeger. That would be everything I have to say about OpenTracing, so let's just quickly move to Fault Tolerance as the last specification. Fault Tolerance is really about providing different strategies for resilient microservices. It provides you with different ways to handle failure states in your applications. Again, it is based on long-standing software projects like Hystrix
and Failsafe, which have been used in applications for several years, so it's not something brand new. What it aims to do is really to keep this configuration separate from your business logic, so you can do, for instance, timeouts and retries not in your business logic, but via another framework. Fault Tolerance consists of five annotations representing the five strategies, so let's just really quickly go through all of them. @Timeout speaks for itself; again, you just provide some timeout value and unit, and if the execution takes longer (for instance, here we have a method that will time out after 10 seconds), it will throw an exception that gets mapped to a 504. Again, we have examples, but I won't go through them. @Retry, the second strategy, is again pretty straightforward. If some service went down and some events come in and we need to call this service, we need to somehow react to getting an error code, so we can, for instance, retry after some delay. In this example, we are also using the @Asynchronous annotation, which is the sixth annotation of Fault Tolerance. It just marks this method as executed in a different thread. This is really important if you are using, for instance, retries, because the context is propagated from the incoming thread to the thread that is actually handling the request, so you will actually get at most five retries in total, and not five retries in each thread. The @Retry annotation is really basic.
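Putting the timeout, retry, and asynchronous pieces together might look like this sketch; the values mirror the ones mentioned in the talk, the class and method names are made up, and the org.eclipse.microprofile.faulttolerance imports mean it only runs inside a MicroProfile runtime:

```java
import java.time.temporal.ChronoUnit;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import org.eclipse.microprofile.faulttolerance.Asynchronous;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

public class EventClient {

    // Give up after 10 seconds; the exception gets mapped upstream (e.g. to a 504)
    @Timeout(value = 10, unit = ChronoUnit.SECONDS)
    // Retry after a delay, at most 5 retries in total
    @Retry(maxRetries = 5, delay = 2, delayUnit = ChronoUnit.SECONDS)
    // Execute in a different thread so the retries do not block the caller
    @Asynchronous
    public Future<String> callMembership() {
        // ... the actual remote call would go here ...
        return CompletableFuture.completedFuture("ok");
    }
}
```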
You just specify the delay and the maximum number of retries. Here we are actually calling the membership service, so in the similar example, we will try, and it's still down; then it will come up, and after the delay we will try again, succeed, and we are good to go and return the request. But what happens if you try again and again and you still don't get a valid response, so you actually reach the maximum retries? By default, you will throw an exception. However, MicroProfile Fault Tolerance also provides yet another strategy for dealing with failures, and this is the @Fallback annotation, where you can specify a String name of a method in the same class which will be used as the fallback, or an implementation of the FallbackHandler interface, which is again a functional interface with one method, where you can do some different execution of the request. That doesn't mean that you don't fail; you just return some different value, so the service which is calling you doesn't necessarily need to fail. You can, for instance, do some logging, send some email, or, in our case, we are actually just saving the requested event under a failure index in our Elasticsearch. Again, this example is shown in this GIF; I won't go through it. The fourth annotation, or the fourth strategy, is called circuit breaker. A circuit breaker is something which can be in three states.
The first one is closed. When the circuit breaker is closed, you are handling the traffic normally. If some error happens, the circuit breaker opens, and in that sense you will automatically start rejecting all incoming requests right away. This saves you from unnecessary timeouts, or from doing some valuable work which will eventually be thrown away. After some delay period, you put the circuit breaker into a half-open state, where you allow some number of requests to pass into your system, and depending on these requests: if they succeed, you close the circuit breaker again; if any of them fails, you keep it open. So in our example, if the Elasticsearch database goes down and requests keep coming, there is a three-second timeout for our Elasticsearch, after which the profiling service times out and returns an error. So what will happen if events keep coming and coming? There is no real reason to let all of the subsequent requests time out if we know that Elasticsearch is probably still down, so we can directly return an invalid response: fail fast, or fail preemptively. The definition of a circuit breaker is again a simple annotation. There is a bunch of properties that you need to set up for the circuit breaker; I don't want to go through all of them, you can look them up afterwards, and again this example is shown in a GIF at the end. And the last annotation that I'm going to talk about today is bulkhead. Bulkhead is a concept actually from boats, where each boat is split into compartments like these ones, so if you breach part of the boat, you don't get water in all of your boat, only in the part that was breached; this can save you from sinking. What bulkhead really is in MicroProfile is a specification of how many threads are able to concurrently execute your methods,
so if you are able to concurrently handle only two requests, every other request is rejected until a first request finishes, and then you are able to process again. You can specify it with a simple annotation which takes an integer number of requests that are able to concurrently execute the method. Here is an example using JMeter, where it is shown that if we try to access it with three threads concurrently, we will get back a Too Many Requests response. And that was the last specification that I have for today. So, just a few closing remarks. MicroProfile 2.2 (this is a screenshot from yesterday) is almost released; as I said, we are aiming for February 6th. Actually, even these two specifications have already been released, but people are just lazy to update the milestones. And a call for action: if you really like what I just showed you and you want to get involved, everything is evolving right now, so anything you don't like, or want to extend somehow, you can voice right now. The easiest way to do this is via our weekly and bi-weekly meetups, which are available at this link in a calendar. There is a meetup for every sub-specification, one hour directly just talking about that specification. Every decision is made on these hangouts or in the Google Groups.
We also have everything on GitHub: specifications, APIs and TCKs. And if you are just interested in trying it in your applications, you have many implementations to choose from. What I really want you to take away from this presentation is that, if you like what you see, creating really cloud-ready, resilient applications in Java is really simple, and all you need to do is include a single dependency. So that would be everything from me. If you have any questions, please go ahead. So, okay. Not yet, but I know that they already have this idea. There are other languages, or other teams, that are trying to include some of these specifications, but it's not that easy to include them directly, as most of them are mapped directly to Java annotations. But for instance, we are now working more on the long-running actions, which are distributed transactions, and I know that we already have some Node.js folks who would be interested to also contribute. So it's more on a spec-to-spec basis. So, okay, if there is nothing more, thank you for your attention.