Well, hello and welcome to another DevNation Live. Thank you all so much for joining us today. We're only going to spend about 30 minutes working through some things here with Rafael. I'm presently in New York City, which is why you see me in yet another hotel room; it seems like I'm all over the planet, and Rafael is down in Brazil. Is that right, Rafael? "I'm in Anápolis, a city in the interior of the state of Goiás, here in Brazil." Okay. Rafael is originally from Brazil, though he now lives in Orlando, Florida, and like me (I live in Wake Forest) he's traveling all the time, it seems. We have a really great session for you today. We're going to deep dive into the world of Docker Linux containers, specifically how cgroups will trash your JVM and the things you can do to get around it, to make sure that your Java Virtual Machine behaves much more happily in a Docker world. So at this point, Rafael, take it away. "Okay, perfect."
Let me start sharing my screen... okay. Today, as was said, we're going to talk about why Java fails inside Docker. We all know how containers have taken over the world, but sometimes things can go wrong, especially when Java is involved. The container format has helped us to place everything inside it: our software, our operating system, the file system, everything, even our Java application. But the problem happens when we lock the JVM, the Java application, inside a container. That doesn't make the Java application happy, and things can go totally wrong. The Java application likes to live in an open environment, without any limits or constraints on memory or CPU, and today we're going to see what causes this failure and how to deal with it, what we can do to make Java love containers.

To understand that, let me give you a brief history of containers. Containers are not something new, as most people think. Because we have been talking so much about containers in the last three or four years, people assume they're a new thing in the IT industry, but if we pay attention to this slide, we can see that containers really started around the year 2000, with jails added to FreeBSD. Then more things came, like generic process containers in 2006, which led to control groups. Control groups became a feature inside the kernel, and together with user namespaces they led us to the Linux Containers (LXC) project.
Then finally, in a PyCon lightning talk, dotCloud introduced the world to Docker containers, and everything after that was a big improvement, like Google's Kubernetes, which gave origin to OpenShift, and now everything related to containers has become a standard via the OCI and the CNCF.

But there is a problem: Java came before the year 2000. Java 1.0 was released in 1996, and J2EE, which is now EE4J, was released at the end of 1999. What happens is that behind the scenes of a container we have kernel features. A container exists because of cgroups and namespaces; both of them are features of the kernel. Cgroups, together with namespaces, create the illusion that you are running your piece of code inside a restricted namespace, with restricted limits on memory, and with its own devices. When it comes to memory, inside the cgroup file system, under /sys/fs/cgroup/memory, we have all these files, and the one here in bold, memory.limit_in_bytes, specifies the memory limit of the container itself.

But there is a problem related to Java. Let me give you an idea of what happens to Java, and to other kinds of software as well. I have used docker-machine to create two machines. This one, called docker-1024, uses VirtualBox, just to run our demo, and has one gig of RAM. I have another Docker machine that was created with eight gigs of RAM, so you can see how a Java application behaves in two different configurations. Let me do the following: let me run docker-machine ssh into this machine, docker-1024. If I run the command free -h, I can see that this machine has one gig of RAM and about 1.1 gigs of swap.
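The memory.limit_in_bytes file mentioned above can be inspected from inside a limited container; a sketch, assuming Docker is available and the host uses the cgroup v1 layout that was current at the time of the talk:

```shell
# Start a container with a 100 MB limit and read its own cgroup limit file.
# (On a cgroup v2 host the file is /sys/fs/cgroup/memory.max instead.)
docker run --rm -m 100m centos \
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# prints a value close to 104857600 (100 MB)
```

The kernel honors this file when enforcing the limit; the problem the talk describes is that older tools and JVMs never read it.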
Okay, now let's run software on this machine. I'm pointing my Docker client to this specific machine, and what I will do now is run a CentOS container with the following switches: I'm assigning a terminal, I'm dropping the container after I run it, and I'm limiting the memory. Let me show you something here: docker run --help says that the -m flag specifies the memory limit. So I'm specifying a limit of 100 megs, and a memory swap of 100 megs as well. I'm running the CentOS image, and I'm running the same command, free -h. When I run free -h in a container that has a memory limit of 100 megs, I still see one gig of memory. That seems like a problem, right? Because I restricted my container; I gave only 100 megs of memory to this container. And this is not a problem of the CentOS distribution: I can take another distribution here, like Ubuntu, and if I run the same command, I still see one gig of memory.

This is what happens with Java as well. Both of these commands predate cgroups; like the JVM, they don't look inside that file in the cgroup file system, /sys/fs/cgroup/memory. So you might think: what's the purpose of this -m flag, if my software can see the complete environment? To give you an idea of where it is used, I will run a container here in interactive mode with the name mywildfly, but I will restrict the memory with a limit of 50 megs. So I'm running JBoss WildFly here. Look what happens when the container tries to start with a maximum heap size of 512 megs. We can wait a couple of seconds and see what will happen to this container. If I do a docker ps here, we can see that it hit the limit, and then the container died; the JBoss process received the kill signal.
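The demo above, reduced to commands; a sketch assuming a Docker host with 1 GB of RAM, as in the docker-1024 machine:

```shell
# Limit the container to 100 MB of RAM and no extra swap, then ask it
# how much memory it has:
docker run -it --rm -m 100m --memory-swap 100m centos free -h
# free still reports the host's full 1 GB. It reads /proc/meminfo,
# which is not namespaced, so the cgroup limit is invisible to it.
```

The same happens with an Ubuntu image: the limit is enforced by the kernel, but nothing inside the container reports it through the traditional interfaces.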
We see the kill signal. Okay, this is very important, because when we specify the memory limit, we are saying to the kernel: if something tries to use more than 50 megs, kill the process. That's why the JBoss process received the kill signal. We can even explore that by using the command docker inspect with the container name, and I'm getting the output in JSON format because I want to see the state. If we run docker inspect and take a look at the State of this container, we can see that it exited: Running is false, it's not paused, it's not restarting, but OOMKilled is true, which means that it was killed because of out-of-memory. So when we specify the memory limit like we did here, we are saying to the kernel: anything that tries to use more than 50 megs, please kill it.

The problem is that Java itself doesn't look at this information. It doesn't know that it's running with restricted memory. And that's a problem not only with Docker. Let me show you something else here with Java; we will see why it fails when we move from the developer workstation to a production server with 64 gigs, but we'll see that later. Why does Java fail? The JVM sees all the host resources.
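The docker inspect check described above looks roughly like this; a sketch using the mywildfly container name from the demo:

```shell
# Ask Docker why the container stopped:
docker inspect --format '{{json .State}}' mywildfly
# The State object shows "Running": false and "OOMKilled": true when
# the kernel killed the process for exceeding the cgroup memory limit.
```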
It is not aware of cgroups. So if you implement software that does a System.out.println of the memory, using Runtime.getRuntime().maxMemory(), or gets all the available processors, it will see all the CPUs and all the memory of the host. And if you think that it's a problem just for your software, look here: this same code is used by different servers. Tomcat uses it in different parts of its servers; you can see parts of it in Catalina. It's also used in Netty, I think in a channel status transformer, even in WildFly when it's getting the processor info, and in Vert.x, inside the Vert.x options. So everything that's supposed to run in a container will not be aware of its limits.

So now that we know that the container gets killed, let's do a tricky thing with an application that I built for this specific demo. This application here is a Spring Boot application. When I access this endpoint, called /api/memory, it gets Runtime.getRuntime().maxMemory() and tries to allocate 80 percent of the memory with strings. We all know that strings are immutable, so we append string after string until we reach 80 percent of the total memory. I have built an image of this application, and I will now run this container with these switches: I will erase the container on exit, I will give it the name mycontainer150, because I'm limiting the memory to 150 megs, and I'm exposing port 8080. Also, let me show you something here in the software: in my Dockerfile, which uses OpenJDK, I enable PrintFlagsFinal and print the garbage collector details. That's why you are seeing this information in the log: allocation failure. Okay, my application has started; now I will access this application endpoint.
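The probe and the endpoint logic described above can be sketched in plain Java. A minimal stand-alone version of the idea (class and method names are mine; it allocates byte arrays rather than concatenating strings, which puts the same pressure on the heap without the quadratic copying of string appends):

```java
// Sketch of the demo's /api/memory logic: ask the JVM what it thinks the
// max heap is (on a cgroup-unaware JVM this reflects the HOST, not the
// container limit), then allocate until ~80% of that figure is reached.
import java.util.ArrayList;
import java.util.List;

public class MemoryConsumer {
    static final int CHUNK = 1024 * 1024; // 1 MB per allocation

    static long allocateFraction(double fraction) {
        long target = (long) (Runtime.getRuntime().maxMemory() * fraction);
        List<byte[]> held = new ArrayList<>(); // keep references so the GC can't reclaim them
        long allocated = 0;
        while (allocated + CHUNK <= target) {
            held.add(new byte[CHUNK]);
            allocated += CHUNK;
        }
        return allocated;
    }

    public static void main(String[] args) {
        System.out.println("Max memory (MB): "
                + Runtime.getRuntime().maxMemory() / CHUNK);
        System.out.println("Processors:      "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("Allocated (MB):  "
                + allocateFraction(0.8) / CHUNK);
    }
}
```

Run inside a memory-limited container with a cgroup-unaware JVM, maxMemory() is computed from the host's RAM, so allocateFraction(0.8) can easily blow past the container limit and trigger the kernel's OOM killer.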
I will do an http GET against the docker-machine IP, port 8080, /api/memory. Let's access that endpoint. You can see that it tried to allocate 80 percent of the memory: it allocated about 215 megs out of a max allowed memory size of 241 megs, and the application is still running. I can run that more than once; we see the same allocation size. But here we can start to ask questions. The first question: remember how I ran my container? If I do a docker stats, my memory limit is 150 megs, right? I have allocated a hundred percent of the RAM, but the application said that I allocated over 200 megs. So the first question is: how is that possible? How could I allocate 215 megs if I have a limit of 150?

The answer is that when I run a container (so, I just killed this container), I need to also specify the memory swap. If I don't specify the memory swap, Docker will create a container with 150 megs of RAM and 150 megs of swap. You cannot have a container with less memory swap than the memory that you specified. For example, if I specify here a container of 150 megs and a memory swap of 10 megs, you can see that the container will say: the minimum memory swap limit should be larger than the memory limit. So when I'm running a container like I did here with 150 megs, I have 150 megs for the program plus 150 megs of swap. That's why I could allocate 215 megs. Okay, and the second question... let me run the same command here again.
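The swap behaviour described above can be reproduced with two runs; a sketch, with a hypothetical image name myapp standing in for the demo image:

```shell
# With -m alone, Docker allows swap equal to the memory limit, so the
# container effectively has 150 MB of RAM plus 150 MB of swap.
docker run -d --name mycontainer150 -p 8080:8080 -m 150m myapp

# Asking for a total (--memory-swap is RAM + swap) smaller than the
# memory limit is rejected outright:
docker run -d -m 150m --memory-swap 10m myapp
# Error: Minimum memoryswap limit should be larger than memory limit
```

That extra implicit swap is exactly what let the demo allocate about 215 megs inside a 150-meg container.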
So it's trying to allocate... The first answer, about the 215 megs, is because we have 150 of RAM and 150 of swap. And the second question is: why did the JVM say that the maximum allowed memory is 241 megs? It said 241 because, if we go to the documentation of the garbage collector ergonomics, we will see that the default maximum heap size is one fourth of the physical memory. In our case, since the host, my docker-machine, has one gig of memory, one fourth of the memory would be around 250 megs. That's why the maximum heap size is close to 241 megs.

So now we know that the JVM doesn't look at cgroups, doesn't look at the memory limit; it uses the memory of the host. Let's see what happens when we do just like in the slides: suppose that you are developing on eight gigs of RAM and go to production on 64 gigs of RAM. In my case, I was running the application in a Docker machine with one gig of RAM; now I will point my Docker client to my machine with 8192 megs of RAM. Let me show you: docker-machine ssh docker-8192, then free -h: almost eight gigs of RAM plus almost three gigs of swap. And I will run the same application here, docker run, with a limit of 150 megs. Let's wait for the application to load. Now that the application is loaded, let me switch here and point the application client to the Docker machine with eight gigs of RAM, and let's take a look at what's happening: allocation failure, allocation failure, it's trying to allocate memory... and the application was killed. So this is very important.
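The ergonomics arithmetic above can be checked directly; a sketch of the sizing rule the demo keeps running into (the 64-bit server VM defaults to one quarter of physical RAM; the numbers here are illustrative, not read from a real host):

```java
// Back-of-the-envelope for JVM heap ergonomics: default max heap is
// one quarter of physical RAM on the 64-bit server VM.
public class ErgonomicsMath {
    static final long MB = 1024 * 1024;
    static final long GB = 1024 * MB;

    static long defaultMaxHeap(long physicalBytes) {
        return physicalBytes / 4;
    }

    public static void main(String[] args) {
        // 1 GB host: prints 256 (the demo observed ~241 after JVM overhead)
        System.out.println(defaultMaxHeap(1 * GB) / MB);
        // 8 GB host: prints 2048, i.e. the ~2 GB heap that doomed the
        // 150 MB container
        System.out.println(defaultMaxHeap(8 * GB) / MB);
    }
}
```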
Remember that when I ran the application inside Docker on the 1024-meg machine, I could run the application several times. Even with those allocation failures from the garbage collector, we could see that the application was able to load, and I could invoke my endpoint that consumes 80 percent of memory several times; I ran that endpoint several times and the application didn't die. But here, running the same application with the same parameters, with 150 megs, the application crashes. And it crashes because, with eight gigs of RAM, one fourth of eight gigs of RAM is almost two gigs, right? Let's see it here: the application started; let's try to access it... the application died. Let me see the memory here, the max heap size: it tried to allocate a maximum heap size of more than two gigs, because this is an eight-gig machine, and when the software tried to allocate that inside the container, of course it was not possible.

But now let's increase the memory of the container as well. Let's give it 800. Even with 800 megs of memory limit, it will fail. So it's not only when you increase the memory on the host: even if you increase the memory of the container, the application will also fail, because 800 is not enough. And why is it not enough? Because with 800 megs of memory limit, the container has 1.6 gigs in total: 800 of RAM and 800 of swap. You can see that it takes more time to die, but even so, the container will die, as you saw here, because it needs more than two gigs. So increasing the memory is not the solution. What should be the solution? The solution is to specify the Java options yourself.
So I have here another Dockerfile, where I create an environment variable called JAVA_OPTIONS, where I can set the maximum heap size myself. So I run the same container, on the same host with eight gigs, with a container limit of about 800 megs, and with a maximum heap size of 300 megs. docker run my container... docker logs... let's wait for the application to become available. So I have a container limit of 800, but the heap size was specified as 300, and now I can hit my application several times; the maximum heap size is close to 300, and my application is not dying anymore. So this is a good solution.

But the question is: can we do better? Of course, yes, because the fabric8 project already provides support for JVM ergonomics. Let me show you here: if we use the base image provided by fabric8, which is this other Dockerfile here (you can see that it comes from fabric8), I don't need to specify any JVM option. The fabric8 image itself reads the container limits: it reads the number of CPUs from the cgroups, it reads the memory from cgroups as well, and this information is used to calculate the memory settings. If we use that image from fabric8, I can run, for example, the container with no limit, and it will exec java -cp with the classpath and run your jar. If I specify a memory limit of 150 megs, it will automatically set a maximum heap size of 75 megs. If I specify a memory limit of 300, it will adjust itself to 150: always half of the memory limit. If I specify two gigs, the maximum heap size will be one gig.
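A sketch of the JAVA_OPTIONS approach described above; the base image tag and jar name are hypothetical, not the demo's actual artifacts:

```dockerfile
FROM openjdk:8-jdk
# Cap the heap explicitly so the JVM never sizes itself from host RAM.
ENV JAVA_OPTIONS="-Xmx300m"
COPY target/app.jar /app.jar
CMD java $JAVA_OPTIONS -jar /app.jar
```

The limit then travels with the image, but it is static: change the container's memory limit and you must remember to change JAVA_OPTIONS too, which is what the fabric8 approach automates.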
So it automatically creates the proper -Xmx parameter, respecting the limit of the container. We can take a look here at the documentation of this image: if you don't specify the max memory ratio, the default is 50, which means that 50 percent of the available memory is used as an upper boundary. Let's just remember that the heap is not the total memory used by the JVM: we need memory for the thread stacks, we need the Metaspace (formerly PermGen), so it's not only the heap. That's why people recommend 50 percent of the memory limit.

Now that we've talked about that, we can see that until update 121 of JDK 8, with a memory limit of 100, the maximum heap size was still close to two gigs, because I'm running here on the Docker machine with eight gigs of RAM. But starting from update 131, there are experimental VM options that you need to unlock using the flag -XX:+UnlockExperimentalVMOptions, together with -XX:+UseCGroupMemoryLimitForHeap. If we run with both, the JVM will read the memory limit and set the maximum heap size based on the container's cgroup. So in this case, if I run my container with 300 megs of memory limit, it will give me around 121 megs of maximum heap, because now I enabled the JVM to use cgroups for the memory limit. Okay? That gives us an idea of why Docker fails on Java. I'd like to share with you the link for this slide deck; I'll share it with you in the chat box.

"And we do have some questions, and actually more statements. People are saying: maybe it's just an operating system problem and not a JVM problem, or maybe: why are you using Docker with a limit to begin with? Can you address either of those points?" "Okay, sure. As we could see in the beginning..."
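The flags described above, as they would be passed on the command line; a sketch assuming the stock openjdk:8u131 image:

```shell
# JDK 8u131+: let the JVM size the heap from the cgroup limit instead of
# host RAM. (Experimental in JDK 8; superseded by -XX:+UseContainerSupport,
# which is on by default from JDK 10.)
docker run --rm -m 300m openjdk:8u131 java \
    -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap \
    -XX:+PrintFlagsFinal -version | grep MaxHeapSize
```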
This is not an OS problem. Let me get back to my machine with 1024 megs. In the beginning, I ran docker run with a memory limit of 100 megs, and I ran CentOS with free. free exists from before cgroups; that's why free also sees one gig of memory. I tried another Linux distribution, like Ubuntu, and free still showed me one gig of memory. So the problem is not the OS itself, but that tools like the JVM, free, and other applications do not understand cgroups. Containers are totally different from virtual machines; container limits are based on cgroups and namespaces.

"What's the other question? So, there's a conversation around Metaspace and Java 8 that could be very interesting. One point to think about, though, is that there's a 64-bit JVM issue and a 32-bit JVM issue. One of the things we're considering is recommending 32-bit JVMs for those situations where you are going to constrain your container down to, let's say, under one core and under a gig of RAM. It makes sense to use a 32-bit JVM for those scenarios, and you actually will save a ton of Metaspace, as an example: on a 64-bit JVM you use double the Metaspace." "Yeah, that's true." "And then there was a question about the number of cores. In a longer form of this presentation (you will see the link to the presentation) we talk about cores also, but at least the cores do not blow you up. So that's the thing to keep in mind." "Yeah, most of the blow-ups happen around memory. That's why we decided to cover only memory here at DevNation Live, mostly because we have just a few minutes to give you an idea, and also because CPUs are not as big a deal." "Okay, and we're at time. I did provide the link to the slide deck, where you guys can go follow up on that presentation and see all the links and embedded information in there."
There's a ton of good information in there. You also should read the notes associated with each slide, because in some cases we put in there: here's the information for the JVM ergonomics, or here's the information for the Kubernetes calculations and which Docker API it's using, and you want to make sure that you're configuring things appropriately; a Kubernetes setting versus a Docker setting are slightly different, and you want to check those things. The big issue, of course, the reason we're talking about this, is when you build in development and then deploy to a really large, let's say, EC2 instance or a really large production-level machine: the Java virtual machine, by default, even though it's inside a Docker container, will see all the memory of the production server and all the CPUs of the production server. It will over-report them to your application, and of course your application might just blow up, and in that case it silently fails. That's what we're talking about, and there are super easy workarounds based on what Rafael provided today. Rafael, awesome job on the demonstration; thank you so much for that. "Thanks so much, and thanks to everyone who joined us and everyone who will watch DevNation Live." Yeah, thank you guys so much for showing up today, and keep watching developers.redhat.com DevNation Live for future updates. We have more sessions to come, and for the session that was originally scheduled for today with James Rawlings, we'll make sure to get him on deck again when he becomes available. He has a really great talk on how to run continuous delivery at scale, based on a very interesting project. Well, thank you guys so much. Thank you all.