Hi, my name is Severin Gehwolf, and today I'm going to show you a short demo of OpenJDK's container detection support on a cgroups v2 system. The system I'm demoing this on is a Red Hat Enterprise Linux 9 distribution, which uses cgroups v2, also known as the unified hierarchy, by default. On this system I have installed minikube using the Podman driver. Let's get started.

First, I'm creating a simple Hello World HTTP deployment using OpenJDK 17, which supports cgroups v2 via its container detection code. This deployment is an Undertow-based web application that prints "Hello World" once its endpoint is requested by a browser. Here I'm using curl to demonstrate this. Hello World!

Looking at the deployment description, we see that it uses the Kubernetes resource limit feature. For example, the number of CPUs is hard-limited to one, and the memory limit for the deployment is set to 2 GB of RAM. There are also resource requests specified, which Kubernetes uses when scheduling a deployment on a node. Resource requests are usually slightly lower than the actual enforced hard limits. Note that Kubernetes resource limits translate directly into resource limits imposed on the underlying container running the Undertow application. Container resource limits, in turn, are enforced on Linux via the cgroups pseudo-filesystem.

Knowing this, let's see how those resource limits are detected by OpenJDK. When looking at OpenJDK's container trace logging output, we can see that it correctly detects this as a cgroups v2 system. We also see that the configured limits from Kubernetes are detected. Similar to OpenJDK's container trace logging, we can use the VM.info jcmd command to see the cgroups configuration of a running JVM. In this case, the JVM is running as PID 1. As you can see, the active processor count is 1, and the detected limits match the Kubernetes settings. Note that the detected resource limits affect how internal data structures within OpenJDK are sized.
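The demo inspects the detected limits from the outside via trace logging and jcmd. As a minimal sketch (the class name is mine, not from the demo), an application can also query the same detected values from inside the JVM; in a container with a 1-CPU hard limit, the processor count comes back as 1, and the reported maximum heap reflects the detected memory limit after heap-sizing ergonomics are applied:

```java
// Sketch: reading the CPU and memory values that OpenJDK's container
// detection code feeds into the runtime. Class name is illustrative.
public class ContainerLimits {
    public static void main(String[] args) {
        // Inside a container with a 1-CPU limit, this returns 1.
        int cpus = Runtime.getRuntime().availableProcessors();
        // Maximum heap size; derived from the detected container memory
        // limit via the JVM's heap-sizing ergonomics, not the raw limit.
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Active processor count: " + cpus);
        System.out.println("Max heap (bytes): " + maxHeapBytes);
    }
}
```

Running this in the pod gives a quick cross-check of what the container trace logging and VM.info report.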
For example, the garbage collector selected by default, when not otherwise specified, may differ for different sets of resource limits. Next, I'll show how container resource limits can affect which GC algorithm is selected. Remember, for this deployment we specified one CPU core and two gigabytes of RAM. When we look at the GC trace logging of the pod, we notice that Serial GC is selected.

What if we change the CPU settings of this deployment? First, we need to export the existing deployment and then change the CPU resource limit to 3. When we look at the deployment description again, we see that the new 3-CPU-core limit is now in place at the Kubernetes level. If we now look at the GC trace logging of the application again, we notice that G1 GC is selected. Looking at the container trace logging output of the changed deployment, we see that OpenJDK picked up the changed CPU core setting as well. Similarly, if we once again look at the running JVM via the VM.info jcmd command, we see that the active processor count changed to 3, which has the effect that G1 GC is used as the default garbage collector.

This concludes the demo of OpenJDK 17's container awareness on a cgroups v2 system. Thanks for watching.
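Besides GC trace logging, the selected collector can also be confirmed from inside the running JVM. A small sketch (class name is mine) that lists the active garbage collector MXBeans; HotSpot's ergonomics fall back to Serial GC on what it considers a small machine, roughly fewer than 2 available CPUs or less than about 1792 MB of memory, and otherwise pick G1 by default on JDK 17:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: listing the garbage collectors active in the current JVM.
// Under Serial GC the bean names are "Copy" and "MarkSweepCompact";
// under G1 they are "G1 Young Generation" and "G1 Old Generation".
public class ActiveGc {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC bean: " + gc.getName());
        }
    }
}
```

Running this before and after the CPU limit change makes the switch from Serial GC to G1 GC directly visible.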