Both issues could be caused by poor Docker support in Java 8 (which is still widely used).
Docker uses control groups (cgroups) to limit resources.
It’s definitely a good idea to limit memory and CPU when running applications in containers: it prevents a single application from consuming all of the available memory and/or CPU, which would make other containers on the same system unresponsive.
Limiting resources improves the reliability and stability of applications. It also allows you to plan hardware capacity.
It’s especially important when running containers on an orchestration system, like Kubernetes or DC/OS.
The problem

The JVM “sees” the whole memory and all CPU cores available on the host and aligns its resource usage to them.
By default, it sets the maximum heap size to 1/4 of the system memory and sizes some thread pools (for example, for GC) according to the number of physical cores.
Let’s run an example.
We’ll run a simple application which consumes as much memory as it can (found on this site):

We run it on a system with 64GB of memory, so let’s check the default maximum heap size:

As said, it’s 1/4 of the physical memory: 16GB.
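The original snippet is not reproduced here; a minimal sketch of such a memory eater (my own reconstruction, not the code from the linked site) could look like this. It first reports the default maximum heap, then allocates until the heap is exhausted:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reconstruction of a "memory eater" used for the experiment.
public class MemoryEater {
    public static void main(String[] args) {
        // With no -Xmx set, this is typically 1/4 of the memory the JVM "sees".
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");

        List<byte[]> hoard = new ArrayList<>();
        int chunks = 0;
        try {
            while (true) {
                hoard.add(new byte[16 * 1024 * 1024]); // grab 16 MB at a time
                chunks++;
            }
        } catch (OutOfMemoryError e) {
            hoard.clear(); // release the hoard so we can still log
            System.out.println("OutOfMemoryError after ~" + chunks * 16 + " MB");
        }
    }
}
```

Outside a container this ends with a plain `OutOfMemoryError`; inside a memory-limited container, an old JVM is typically killed by the kernel’s OOM killer before the error is ever thrown, as we are about to see.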
What will happen if we limit the memory using Docker cgroups? Let’s check:

The JVM process was killed.
Since it was a child process, the container itself survived; but normally, when java is the only process inside a container (with PID 1), the whole container crashes.
Let’s look into the system logs:

Failures like these can be very difficult to debug, because there is nothing in the application logs.
It can be especially difficult on managed systems like AWS ECS.
And what about CPUs? Let’s check again, running a small program which displays the number of available processors:

Let’s run it in a Docker container with the CPU count set to 1:

Not good: there are indeed 12 CPUs on this system, and the JVM reports all of them despite the limit.
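The check program itself is essentially one line of Java; presumably something like:

```java
// Prints how many processors the JVM believes it can use.
// An old JVM inside "docker run --cpus 1" still reports the host's core count.
public class AvailableProcessors {
    public static void main(String[] args) {
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}
```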
So even though the number of available processors is limited to 1, the JVM will try to use 12. For example, the number of GC threads is set by this formula:

On a machine with N hardware threads, where N is greater than 8, the parallel collector uses a fixed fraction of N as the number of garbage collector threads.
The fraction is approximately 5/8 for large values of N.
At values of N below 8, the number used is N.
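The formula above can be sketched in code. The commonly cited HotSpot heuristic is N itself for N ≤ 8, otherwise 8 plus roughly 5/8 of the remaining threads, truncated to an integer (treat the exact rounding as an implementation detail):

```java
// Approximation of the number of parallel GC threads HotSpot picks
// for a machine with n hardware threads.
public class GcThreads {
    static int gcThreads(int n) {
        return (n <= 8) ? n : 8 + (n - 8) * 5 / 8;
    }

    public static void main(String[] args) {
        System.out.println(gcThreads(1));   // 1
        System.out.println(gcThreads(12));  // 10, for a 12-core host like ours
        System.out.println(gcThreads(64));  // 43
    }
}
```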
In our case:

The Solution

OK, we are now aware of the problem.
Is there a solution? Fortunately, yes, there is! In the new Java versions (10 and above), Docker support is built-in.
But sometimes upgrading is not an option — for example if the application is incompatible with the new JVM.
The good news: Docker support was also backported to Java 8.
Let’s check the newest openjdk image tagged as 8u212.
We’ll limit the memory to 1G and use 1 CPU:

docker run -ti --cpus 1 -m 1G openjdk:8u212-jdk

The memory:

It’s 256M, exactly 1/4 of the allocated memory.
The CPU:

Exactly as we wanted.
Moreover, there are some new settings:

-XX:InitialRAMPercentage
-XX:MaxRAMPercentage
-XX:MinRAMPercentage

They allow you to fine-tune the heap size; the meaning of these settings is explained in this excellent answer on StackOverflow.
Please note: they set a percentage, not fixed values. Thanks to that, changing the Docker memory settings will not break anything.
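For instance, a container with 1 GB of memory and -XX:MaxRAMPercentage=50.0 should get roughly a 512 MB heap. The effect can be observed from inside the JVM with a trivial program (the flag value and launch command below are illustrative, and assume the compiled class is available inside the container):

```java
// Illustrative check: prints the heap limit the JVM actually settled on.
// Run with something like:
//   docker run -m 1G -v "$PWD":/app -w /app openjdk:8u212-jdk \
//     java -XX:MaxRAMPercentage=50.0 ShowHeap
// and the reported value should be roughly half the container's memory.
public class ShowHeap {
    public static void main(String[] args) {
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");
    }
}
```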
If for some reason the new JVM behaviour is not desired, it can be switched off using -XX:-UseContainerSupport.
Summary

It’s extremely important to set the correct heap size for JVM-based applications.
With the newest Java 8 versions you can rely on the default settings, which are safe (though quite conservative).
There is no longer any need for hacky workarounds in a Docker entrypoint, nor for setting Xmx to a fixed value.
Happy JVM-ing! :)