Hello everybody, I am glad that you came to this presentation, even despite the final. I am Ondra Chaloupka, I work for Red Hat in the WildFly development team. I work particularly on Narayana, which is a transaction manager; that won't be the topic of the talk today, even though I will touch on transactions a little bit as well. I would like to introduce you to the approach that WildFly takes to provide container images and how those images are deployed to Kubernetes. This is the set of tools that I will present to you and the approach that is taken. I will be talking about Kubernetes, but in general some of the automation is done just for OpenShift, which makes the work with it easier. I will be showing in small steps what all needs to be done to get it running. I will talk about S2I, which is still used for building images, and about Galleon, which WildFly uses for provisioning, and then about the operator, which is the way to deploy to Kubernetes. This won't be about microservices. I will talk rather about when you have your beloved Java application and you want to put it onto the cloud, or into a Docker image. I will be switching a little bit between my slides and some commands that I will be typing, trying to show you what I am doing. Here I have a first demo, which is a simple Java application from the WildFly quickstarts. This is just a website with some injection of a service that then returns some information. When I have this application, the WildFly approach to get it containerized for Docker is using S2I. S2I is an abbreviation of source-to-image; it provides a way to create reproducible Docker images from source. This is the diagram that shows how this works in general: there is some source code that is passed to the S2I machinery, it is built with a prepared builder image (here we have one for WildFly as well), and then the resulting container image is created from that.
These are the images that are available on Quay.io, which is the place where you can consume the WildFly images for Docker builds. There are three. The last one is the WildFly operator, which I will be talking about later. The first one is the S2I builder image, wildfly-centos7. The second one, wildfly-runtime-centos7, is meant to be used for a chain build, where the first one creates the S2I-built Docker image and the second one is meant to be used for stripping the image down to a smaller size. It's important to say that this runtime-centos7 image does not contain any WildFly server itself. It's just a prepared environment, just an empty wrapper that is expected to be provided with a built WildFly server later on. So S2I is a tool; when you want to run it, you need to download it and run it. This is the command that I will first copy and run in my bash. I will try to make it a little bit bigger. While it is downloading the dependencies and building the sources, I will talk about what this is about. This is the S2I command, which takes the source code from somewhere; that could be a folder, that could be a GitHub repository. You can define which tag you go with, and you can define the directory that you will be building your application from. And you choose the wildfly-centos7 image as the builder image that S2I will work with. This is just the resulting image, just a tag in Docker. And with environment variables you can provide some configuration of the process. With this Maven option I'm just saying that a specific repository should be used. What's interesting here is this Galleon "provision default fat server" variable, which says that I want the output of the S2I build, as the final step of the whole building process, to be the thing that will then just be published. With this, S2I creates for me the final standalone WildFly server containing the built application. Why this Galleon?
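As a rough sketch, the build just described looks something like the following. The repository URL, context directory, and output tag are placeholders, and the exact environment-variable names should be checked against the wildfly-s2i image documentation:

```shell
# Build a WildFly application image from source with S2I.
# Repository, ref, and image names are illustrative placeholders.
s2i build https://github.com/example/helloworld.git \
    quay.io/wildfly/wildfly-centos7 \
    helloworld-wildfly \
    --ref master \
    --context-dir helloworld \
    -e MAVEN_MIRROR_URL=http://my-repo.example.com/maven \
    -e GALLEON_PROVISION_DEFAULT_FAT_SERVER=true
```

The `-e` flags pass environment variables into the build; the Maven mirror points the build at a specific repository, and the Galleon variable asks for a full standalone server in the resulting image.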
I will talk in a second about what that tool is about and how to work with it. From here it's just important that I took the source, built it, S2I deployed it to the container, and it's propagated as a standalone server into the image. With this, I hope that it's already done here. You can see there are some Maven commands; everything is already downloaded and built. I will be copying the commands here because there are a lot of arguments, but I will be saying what I'm doing. With this, I'm now going to run the chain build. I'm taking the S2I image, and this build will create the final image, which is stripped down, based on the WildFly runtime image. What's this about? I need to define a Dockerfile. This docker build runs on one Dockerfile, which takes the runtime image as a base. Then I say: here is my S2I hello-world container, which I built just a second ago; it takes the data from that container and copies it under the home directory. This is defined in the S2I runtime image, so it copies the deployment which was already built in there, and that provides me the runtime image. With this, I can just check that it's already done. I do have four images already prepared here, but the first two are what I want to mention right now. There is the hello-world wildfly-centos7 one, which is the S2I image with all the data that was needed for the build to be done, and then the stripped one, based on the runtime image, which was the empty wrapper that I have now provided with content through this chain build. This is just a picture of how the chain build works, how it takes the S2I build and produces the resulting artifact, which I already explained. This S2I image and this process were chosen mainly because WildFly is quite big in its feature set, and with the coming of Docker it's not usual that you just take your whole server and deploy it, or take it and put it into a Dockerfile.
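The chain-build Dockerfile the speaker describes can be sketched roughly like this; the image names are placeholders, and the path of the built server inside the builder container is an assumption to be verified against the runtime image documentation:

```dockerfile
# Chain build: start from the empty runtime wrapper image and copy the
# server that the S2I build produced (tagged helloworld-wildfly here)
# into the location the runtime image expects.
FROM quay.io/wildfly/wildfly-runtime-centos7:latest
COPY --from=helloworld-wildfly:latest /s2i-output/server $JBOSS_HOME
```

The result is an image that contains only the provisioned server and the deployment, without the Maven repository and build tooling that the S2I builder image carries.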
You would like to strip the size of WildFly as a server down to something smaller than what is usual when you download it. When I check, for example, the size of the standard WildFly, currently in Beta1, it's like about 100 megabytes, and it provides all the features that Java EE may provide you. This is not what I would like to have for Docker, where I would like to just build it and then run it. I don't expect that I will be deploying a different application into the Docker image; I will just build and run, and if I need to provide some other feature, I will build again. So this all comes from how the... OK, let me introduce this WildFly modules thing, which is connected to this. The point is that WildFly uses JBoss Modules, a library for class loading that does not load all the classes available to WildFly on a flat classpath at start; instead it provides a way to say that just some of the JARs, those that are really important and are going to be used during the application runtime, are loaded and used. Still, there is this modules... if I just show you, this is the standard folder layout when you download WildFly; there are some binaries and so on. What I'm talking about now is the modules folder, which contains all the JARs, all the features. And then this is the Java command which really executes the start of WildFly. Here I say: OK, take this JBoss Modules class-loading library, I provide the path where the modules, all the features, reside, and then I say what will be the startup module which boots the whole WildFly server. This is nice, but there are a lot of JARs. So, as I said, it would be good to have some way to strip that number of JARs down to the smaller number that I will really need when I run in Docker. This is what Galleon as a provisioning tool provides.
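In simplified form, stripped of the JVM options that the real standalone.sh script adds, the startup command described above is:

```shell
# Boot WildFly through the JBoss Modules class loader:
# -mp points at the modules directory holding all the feature JARs, and
# org.jboss.as.standalone is the module that bootstraps the server.
java -jar "$JBOSS_HOME/jboss-modules.jar" \
    -mp "$JBOSS_HOME/modules" \
    org.jboss.as.standalone
```

Only the modules actually referenced during boot and by the deployed application get loaded, even though the modules directory on disk contains the whole feature set.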
This is the tooling that is capable of taking just the capabilities which are needed for the application and building the application server for you with just those capabilities. Here this shows the command line, but it provides an API as well, which is in fact used during the WildFly build itself. So there is some definition of where the data should be taken from, then I define the layers that will be used (the definition of features), and the place where the stripped server will be created. The nice part is that this is already integrated in the S2I builder image. So what I can do here, I can just say: OK, I want to strip the application server down to just the features that I really need for my application. With this, only the CDI layer and the web-server layer will be used, because I know that in my application I use just some HTTP endpoint and CDI for injection. As it's now building and creating stuff, I think that's still the same as before. Just here, for comparison, the stripped one could be seen... and I think that I have some mistake here; there should be a difference. The stripped server should just be smaller, about 150 megabytes smaller than the original one: the runtime image which was built as a whole, without the Galleon layers being defined to strip the feature set, in comparison with the one where the Galleon definition of the feature set is taken. So, yeah, Galleon is used during the standard WildFly builds as well, when WildFly itself is built. OK, so this is the way you can build the container images for WildFly. And now I would like to move to the part about how to think about it, how to work with it, how to put that onto Kubernetes. I have, again, a small demo, which is simple but uses more capabilities of Java EE. This demo is separated into two parts: there is a client server, which calls the second server.
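The trimming via the CDI and web-server layers mentioned above can be sketched as the same S2I build as before with one extra variable; the variable and layer names follow the wildfly-s2i and Galleon conventions, while the repository and tags remain placeholders:

```shell
# Same S2I build, but provisioning only the Galleon layers the
# application needs instead of the full server.
s2i build https://github.com/example/helloworld.git \
    quay.io/wildfly/wildfly-centos7 \
    helloworld-wildfly-trimmed \
    -e GALLEON_PROVISION_LAYERS=cdi,web-server
```

Each named layer pulls in only the modules it needs, which is where the difference of roughly 150 megabytes against the full server comes from.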
The client server just receives incoming REST calls, then does some EJB business work, which here means just creating a JMS message and sending it, and then calling over EJB remoting to the second server, where an EJB again receives the call and saves something to the database. OK, let me show you the application here just for you to understand. This is the client side, which has some REST endpoint that can be called. Then there is a remote bean, which provides the capability of sending JMS data to the JMS broker, and then there is a remote call to the second server. On the second server there is, again, a bean which receives this call to persist something to the database. OK, maybe I'll try to make it a little bit bigger. And there is some transaction magic that makes it possible for me to fail the transaction, or the server, at the particular point in time that I need for showing you. So these are the two applications, and now I will show you how this works in a simple way, without Kubernetes. I will again take some commands here to help. I am taking WildFly 19 Beta, unpacking it, and copying it into three separate WildFly distributions. With this, I now need to configure it, because that's just plain WildFly. As I said, the first one is meant to be... OK, that's something that I copied to the wrong directory. Oh yeah, that's bad, sorry once again, because I am in the wrong place here and I need to fix it. OK, now it should be fine. This wildfly1 is meant to be the client server that calls the two other ones, which will be deployed as a cluster of servers; and for the first server, the client, to be able to call the second servers, the callees, I need to create a user with credentials that the first server can use to connect. So this is the command which creates for me the user that will be used for these remote EJB calls later.
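The user creation just mentioned, together with the connection configuration that follows, can be sketched like this. The user name, password, host, port, and resource names are all placeholders; the management resource addresses follow WildFly's remoting and Elytron subsystems:

```shell
# On the target servers: create an application-realm user that the
# client server will authenticate with for remote EJB calls.
$JBOSS_HOME/bin/add-user.sh -a -u ejbuser -p 'ejbpassword1!'

# On the client server (running): define where the remote server lives,
# which credentials to use, and a remote outbound connection that the
# EJB client code references.
$JBOSS_HOME/bin/jboss-cli.sh --connect <<'EOF'
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-ejb:add(host=localhost, port=8180)
/subsystem=elytron/authentication-configuration=ejb-auth:add(authentication-name=ejbuser, credential-reference={clear-text="ejbpassword1!"})
/subsystem=elytron/authentication-context=ejb-ctx:add(match-rules=[{authentication-configuration=ejb-auth}])
/subsystem=remoting/remote-outbound-connection=remote-ejb-connection:add(outbound-socket-binding-ref=remote-ejb, authentication-context=ejb-ctx)
EOF
```

The servers themselves can then be started side by side with a port offset and unique node names, along the lines of `./bin/standalone.sh -Djboss.socket.binding.port-offset=100 -Djboss.node.name=wildfly2 -Djboss.tx.node.id=wildfly2`.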
Now I need to configure the first server with the information about how to connect to the second server. This is where I use these CLI commands; it's just a simple set of CLI commands which creates the definition of how the first server connects to the second one. It says: OK, these are the credentials to use, these are the endpoints to connect to, and that's what's happening here now. So now it's time to start the servers. I need to take the application and copy it to the servers, so I compile it and deploy it by copying the WAR archives to the particular servers, and with this I can start the servers, one by one. I'm using the port offset to avoid the trouble of binding to occupied ports on the same machine. I have started the first server, the client one, and wildfly2 and wildfly3 connected to the cluster; once there is information from the group, the cluster was created. Just to point out, I started the servers with a command where I defined the node names, so the clustering knows how to connect the servers, and I also defined the transaction node ID to be unique, so that when the application servers connect to the same databases, to the same resources, the transactions can be differentiated. With this I will just quickly show the capabilities of EJB remoting. When I have a stateless bean with the transaction being propagated from one server to the other, the transaction context is passed over the wire and transaction affinity is defined, so all the calls from the client will hit the same server. If I do the same with a stateful bean, the affinity is defined as well, so with a stateful bean the client still tries to hit the same server even without a transaction. Without a transaction and with a stateless bean, there will be some load balancing of the EJB remote calls.
With every call, some user was created in the database; with this command I delete all of them, because now I want to show the last example, which is a failure. This is configured in a way that a special resource is created during the transaction processing: the transaction is started on the client, a JMS message is passed to the message broker, on the other side something is saved to the database, and then a failure, a crash of the server, happens during the commit processing. That means the expectation is that everything has already finished successfully, but a crash happens; then it's up to the transaction manager, the periodic recovery, to really fix the situation. So when I call this, I can see the failure: the failing call passes with success, it said that wildfly2 was hit, but I can see here that the container itself is down and there were some errors in the log saying the connection was closed. It's now on me as administrator to start the container again, and when it starts, the transaction manager from the first server tries to finish all the work that was not finished yet. To speed it up, because this is a process that takes some time, I just invoke it manually: now I am saying, please, transaction manager, finish all the unfinished work that you know about, and the transaction manager from the first server checks what was not finished on the second server. And as there is an endpoint that can tell me about the saved users on the second server, it shows me there is some record with ID number 13. So this is what happened; I missed showing that at the start of the server that user was really not there yet, and now, after the recovery process, the data was really saved into the database. That's what happened. Now I have this, which I would like to promote to Kubernetes. I have Minikube here as a way to work with Kubernetes on my laptop, and here again I will need to build
the Docker image, which I can then push to Kubernetes. This is the same way I did it before; the only thing that I would like to highlight is this parameter, the S2I image source mount, because S2I is configurable and you can provide different options via environment properties. With this I say: there is some folder in the source code, named extensions; please, S2I, take this folder and check what you can do with it. S2I checks if there are some specially named shell scripts in it; here that is install.sh, which is run during build time, and then postconfigure.sh, which is run during the startup of the Docker container. So what I am doing here is that during build time I am saying: please copy all the data that I have here in this directory; and then during runtime I have a post-configuration phase where the CLI is executed and can be used to configure the server. For sure, you can also provide the standalone.xml configuration directly, but yeah. So this should all be done; I have the image prepared. I will skip the building part because we have already seen it, and with this I can simply create a deployment. I hope that this will work... it seems to be working. OK, so now it's starting, executing the CLI commands. I do the same with the second server as well, and I create the services that I need to be able to access the particular pods. Let me just check: so the first server started, and the second server started as well. Now I would like to have this in two replicas, so I will change the replica number to 2; from one server I want to scale up to two. If I check the log again, there should be trouble here: JGroups, the layer that manages the clustering in WildFly, says that it's not capable of finding out what pods are available to connect with. The point is that WildFly uses JGroups to find the other applications around, and the
protocol which is used here (KUBE_PING) scans, via Kubernetes API calls, for the pods with the same labels in the same namespace. Because this is not allowed by default, there are no permissions for the pod to do that, so I need to provide the permission, the possibility for the container to check what pods are around. I provide here a binding to the cluster role "view", which makes it possible to list and get all the information about the objects in the namespace. So right now, when I check the server, it should already be somewhere in the loop: it started, they found each other, and there is the message that the members were rebalanced; two servers should be found in the cluster. It's kind of the same configuration as before, deployed on Kubernetes. I can execute the same commands here as before, where I just take the URL of the service of the client and execute the command. I can see again that it goes to the same server, and now I want to check what happens if one of the servers fails. That could be because of some failure (I simulate it here), or maybe because of some rescheduling to a different node that Kubernetes does automatically. With the Deployment, which I used as the first thing that came to my mind, there are several problematic things. The first is that there is no guarantee of IP or DNS persistence, so when the pod is restarted it could be bound to a different name, which is trouble for the recovery, or for any EJB remote call that goes from one server to the other; this is a stateful thing in the process. The second problematic thing is that there is no persistent storage; the Kubernetes Deployment object does not provide it by default, so anything that is saved during the time the pod is living is just erased when a new pod under a different name is started. This is something that is
problematic for WildFly, because it's a stateful application, especially because the transaction manager saves data about transactions in some folder. That could be persisted not just to disk but to a database as well, but still there is this statefulness in mind, so a Deployment is not a great way to really deploy WildFly. Now I have just cleaned up what I created. The way this is solved for WildFly is by using the WildFly operator, which is built with these things in mind: you just say what resource you want to deploy for WildFly, and all the objects that I was just trying to create manually, the deployment and the services, are created by the operator for you. If I am just looking down here: first I will start the PostgreSQL server, just so it is available, because I want to show you how to connect to it with the WildFly operator. And again I need to somehow provision, create the Docker images; for now, so I don't need to repeat this, I can just check what is needed for the operator to be prepared. The operator as such is defined by some CRDs, which define the fields the operator is capable of working with; there are things like the number of replicas that I want to deploy, information about the status, et cetera. This is something that I need to provide to Kubernetes so it knows what to expect, and then I provide the definition for the operator itself, which is deployed to Kubernetes using the WildFly operator image, which is again available on Quay.io. For the operator to work well, some permissions are needed again, some role bindings et cetera, so those things are summarized here in the WildFly operator run-minikube shell script, which does all the preparations that need to be done before I can really start the
application. So I do it now as well, to let Kubernetes know about those. OK, there is some trouble here... I know, just a second... there is some trouble... OK, I'm not sure what's wrong here, but now I have created all the necessary objects for the WildFly operator to work well; that was like a prerequisite. With this I can say: OK, here is the definition of my application, and it's done in this YAML file, which could be as simple as this: I say, OK, I have some Docker image that I already pushed, it should be started as one replica, and I can define some environment variables pointing to some secrets and so on. These are some environment variables for templates that were created, again, by Galleon, which creates for me some templates for data sources; by providing these environment variables I'm saying that the data source should be filled with this information, and then my application may use it for its work. And because I started the PostgreSQL database, I can use it right now with this definition. OK, let me again use these commands to push it to Minikube. With this, there should hopefully already be containers being created, and as you can see here, when I deploy the operator there is a special pod which manages all the work of creating the StatefulSet, the services, all the objects that are needed for WildFly to be running; it's getting the information and creating the StatefulSet, the services, and so on. I hope that it has already started. So now it was booting as before, and when this is done I just want to check that it really works. When I try to run the HTTP request, I can see that a response was returned from what was deployed with the operator. When I try to run the same command with the failure, now that I know that this hits server one, server one will be down for a while, until Kubernetes finds that it was down and
starts it again; you can see here the restart count of one, so it crashed and has now been restarted. And there is the guarantee that it will be started with the same DNS name, so that any unfinished remote calls can be finished afterwards, because this is persistent and guaranteed. Yeah, so that's mostly what I have prepared; maybe just two more points. There are some possibilities for debugging with the WildFly operator; there is again the way to define environment variables, and if I just try to set the debug flag, the client pod should be restarted and the server should then be available for me to connect to with the debugger at port 8787. As well, there should be much more information... and it's not the case now, I don't know exactly what happened, but there should be much more information about what's happening during the S2I build phase. Sorry, I'm right now not sure what's happening, and because I'm running out of time I will skip the debugging of the failure. In summary: for the process of building application images for WildFly, the approach we took is using S2I; you can use Galleon to strip the size of your server by defining layers, the features that you want your server to provide; and for Kubernetes you should use the WildFly operator, because it knows a lot of details about what needs to be provided for WildFly to work correctly in the Kubernetes cluster. If there are some questions, I'm happy to reply. If not... sorry, can you speak up? Do you mean these ones?
Sure, it's in the slides; I hope it's there as well. It's over here, in my GitHub repository, and it's linked in the slides at the part where the demo is presented. And sorry, that's... wow, that's a lot of slides here... OK, yeah, this is the presentation, and yeah, it's linked there, and there are notes in the slides, so you can check it on your own if you want. As well, there are blog posts on WildFly with information about Galleon and about S2I, with some more details on some parts. Yeah, I forgot to repeat the question: the question is whether I plan to write a blog post for WildFly. I would like to create a blog post from the summary of this presentation: how to get, for example, a hello world deployed via S2I to Docker, and then how to put it with the operator onto Kubernetes. I hope I will provide it. OK, anything else? If not, I hope that you learned something new, or at least interesting. Enjoy the rest of the conference.