Java EE Support Patterns: JVM

9.21.2012

OutOfMemoryError: unable to create new native thread – Problem Demystified

As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread, an error thrown when the HotSpot JVM is unable to create any more Java threads.

This article will revisit this HotSpot VM error and provide you with recommendations and resolution strategies. 

If you are not familiar with the HotSpot JVM, I first recommend that you look at a high-level view of its internal memory spaces. This knowledge is important in order for you to understand OutOfMemoryError problems related to the native (C-Heap) memory space.


OutOfMemoryError: unable to create new native thread – what is it?

Let’s start with a basic explanation. This HotSpot JVM error is thrown when the internal JVM native code is unable to create a new Java thread. More precisely, it means that the JVM native code was unable to obtain a new “native” thread from the OS (Solaris, Linux, Mac OS, Windows...).

You can see this logic in the OpenJDK 1.6 and 1.7 implementations: when the underlying OS thread cannot be created, the HotSpot native code (the JVM_StartThread entry point) throws java.lang.OutOfMemoryError with the message “unable to create new native thread”.

Unfortunately, the JVM gives you no more detail than this error message, with no indication of why the OS was unable to create a new thread…


HotSpot JVM: 32-bit or 64-bit?

Before you go any further in the analysis, one fundamental fact that you must determine from your Java or Java EE environment is whether you are running a 32-bit or a 64-bit HotSpot VM.

Why is it so important? What you will learn shortly is that this JVM problem is very often related to native memory depletion, either at the JVM process or at the OS level. For now please keep in mind the following:

  • A 32-bit JVM process is in theory allowed to grow up to 4 GB (the limit is even lower on some older 32-bit Windows versions).
  • For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space, e.g. C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize).
  • A 64-bit JVM process is in theory allowed to use most of the OS virtual memory available, up to 16 EB (16 million TB).

As you can see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the native memory space capacity is reduced automatically, opening the door for JVM native memory allocation failures. A rough back-of-the-envelope estimate is sketched below.
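To make this more concrete, here is a minimal sketch (a hypothetical example, not from the original article) using placeholder values: a ~4 GB address space, a 2 GB Java Heap, a 256 MB PermGen and a 512 KB thread stack size (-Xss). Your actual OS limits and JVM defaults will differ.

public class ThreadCapacityEstimate {

        public static void main(String[] args) {
               // Hypothetical 32-bit JVM figures - adjust to your own environment
               long processAddressSpace = 4096L * 1024 * 1024; // ~4 GB (often only 2-3 GB on a 32-bit OS)
               long javaHeap = 2048L * 1024 * 1024;            // -Xms2048m -Xmx2048m
               long permGen = 256L * 1024 * 1024;              // -XX:MaxPermSize=256m
               long threadStackSize = 512L * 1024;             // -Xss512k

               // Whatever is left over is shared by the C-Heap, JIT code, native libraries
               // and, of course, the Java thread stacks
               long cHeapCapacity = processAddressSpace - javaHeap - permGen;

               System.out.println("Approximate C-Heap capacity: "
                              + (cHeapCapacity / (1024 * 1024)) + " MB");
               System.out.println("Theoretical upper bound on threads: "
                              + (cHeapCapacity / threadStackSize));
        }
}

In practice the real ceiling is lower, since the C-Heap is also consumed by the JIT compiler, GC structures and native libraries.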

For a 64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the capacity and availability of the OS physical, virtual and swap memory.


OK great but how does native memory affect Java threads creation?

Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created by the JVM require native memory from the OS. You should now start to see the source of your problem…

The high-level thread creation process is as follows:

  • A new Java thread is requested by the Java program & JDK
  • The JVM native code then attempts to create a new native thread from the OS
  • The OS then attempts to create a new native thread according to attributes which include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space, assuming the process has enough address space (e.g. for a 32-bit process) to honour the request
  • The OS will refuse any further native thread & memory allocation if the 32-bit Java process has depleted its memory address space, e.g. the 2 GB, 3 GB or 4 GB process size limit
  • The OS will also refuse any further thread & native memory allocation if the OS virtual memory is depleted (including Solaris swap space depletion, since thread access to the stack can generate a SIGBUS error and crash the JVM; see http://bugs.sun.com/view_bug.do?bug_id=6302804)

In summary:

  • Java thread creation requires native memory available from the OS, for both 32-bit & 64-bit JVM processes
  • For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap, or process address space
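To make the summary above concrete, below is a minimal, hypothetical reproducer (not part of the original article) that keeps creating parked threads until the JVM can no longer obtain a native thread from the OS. Be careful where you run it: it can destabilize a machine by exhausting native memory.

/**
 * NativeThreadOOMSimulator - hypothetical reproducer
 * Each started thread consumes one native thread and its stack (-Xss)
 * until the OS or the process address space refuses further allocation.
 */
public class NativeThreadOOMSimulator {

        public static void main(String[] args) {
               int createdThreads = 0;
               try {
                       while (true) {
                              Thread thread = new Thread(new Runnable() {
                                     public void run() {
                                            try {
                                                   // Park the thread forever so its native stack stays allocated
                                                   Thread.sleep(Long.MAX_VALUE);
                                            } catch (InterruptedException ignored) {
                                            }
                                     }
                              });
                              thread.setDaemon(true);
                              thread.start();
                              createdThreads++;
                       }
               } catch (OutOfMemoryError oom) {
                       // Typically: java.lang.OutOfMemoryError: unable to create new native thread
                       System.out.println("OutOfMemoryError after creating " + createdThreads
                                      + " threads: " + oom.getMessage());
               }
        }
}

On a 32-bit JVM you can move the failure point up or down by playing with -Xmx and -Xss, which nicely demonstrates the C-Heap race described above.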

Problem diagnostic

Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:

  1. Determine if you are using a HotSpot 32-bit or 64-bit JVM
  2. When the problem is observed, take a JVM Thread Dump and determine how many threads are active
  3. Monitor closely the Java process size before and during the OOM problem replication
  4. Monitor closely the OS virtual memory utilization before and during the OOM problem replication, including the swap memory space utilization if using the Solaris OS

Proper data gathering as per above will allow you to collect the data points required for the first level of investigation. The next step will be to look at the possible problem patterns and determine which one is applicable to your problem case.
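On HotSpot, a Thread Dump can typically be captured with jstack <pid> or kill -3 <pid>. If you can run a small harness inside the affected JVM (or query it over JMX), the standard ThreadMXBean is a simple way to track thread growth over time; the snippet below is only a sketch and monitors the JVM it runs in:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountMonitor {

        public static void main(String[] args) throws InterruptedException {
               ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
               while (true) {
                      // Correlate these figures with the Java process size & OS virtual memory usage
                      System.out.println("Live threads: " + threadBean.getThreadCount()
                                     + " | peak: " + threadBean.getPeakThreadCount()
                                     + " | total started: " + threadBean.getTotalStartedThreadCount());
                      Thread.sleep(5000);
               }
        }
}

For a remote Java EE container, the same MBean can be queried over JMX from your monitoring tool of choice.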

Problem pattern #1 – C-Heap depletion (32-bit JVM)

From my experience, OutOfMemoryError: unable to create new native thread is quite common for 32-bit JVM processes. This problem is often observed when too many threads are created relative to the available C-Heap capacity. JVM Thread Dump analysis and Java process size monitoring will allow you to determine if this is the cause.

Problem pattern #2 – OS virtual memory depletion (64-bit JVM)

In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes consuming a lot of memory, e.g. 10 GB+, and / or other rogue processes with a high memory footprint. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause.

Also, please verify that you are not hitting an OS-level threshold such as ulimit -u or NPROC (max user processes). Default limits are usually low and may prevent you from creating, say, more than 1024 threads per Java process.
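For example, on Linux you can typically check the limit for the user running the JVM as per below; the values shown are only placeholders and the proper way to raise the limit permanently (e.g. via /etc/security/limits.conf) depends on your distribution:

$ ulimit -u
1024
$ ulimit -u 8192   # raises the soft limit for the current shell session, if permitted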

Problem pattern #3 – OS virtual memory depletion (32-bit JVM)

The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point will be to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, e.g. rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for their thread creation process.

Please note that this problem can also manifest itself as a full JVM crash (as per the sample below) when running out of OS virtual memory or swap space on Solaris.

#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
#  Internal Error (allocation.cpp:166), pid=2290, tid=27
#  Error: ChunkPool::allocate
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) Server VM (19.1-b02 mixed mode solaris-sparc )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x003fa800):  JavaThread "CompilerThread1" daemon [_thread_in_native, id=27, stack(0x65380000,0x65400000)]

Stack: [0x65380000,0x65400000],  sp=0x653fd758,  free space=501k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
………………


Native memory depletion: symptom or root cause?

You now understand your problem and know which problem pattern you are dealing with. You are now ready to provide recommendations to address the problem…are you?

Your work is not done yet. Please keep in mind that this JVM OOM event is often just a “symptom” of the actual root cause of the problem. The root cause is typically much deeper, so before providing recommendations to your client I recommend that you perform a deeper analysis first. The last thing you want to do is to simply address and mask the symptoms. Solutions such as increasing OS physical / virtual memory or upgrading all your JVM processes to 64-bit should only be considered once you have a good view of the root cause and of the production environment capacity requirements.

The next fundamental question to answer is: how many threads were active at the time of the OutOfMemoryError? In my experience with Java EE production systems, the most common root cause is actually the application and / or Java EE container attempting to create too many threads at a given time when facing non-happy paths such as threads stuck in remote IO calls, thread race conditions, etc. In this scenario, the Java EE container can start creating too many threads when attempting to honour incoming client requests, increasing the pressure on the C-Heap and native memory allocation. Bottom line: before blaming the JVM, please perform your due diligence and determine if you are dealing with an application or Java EE container thread tuning problem as the root cause.
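As a purely illustrative sketch (your Java EE container manages its own thread pools, so this is only an application-level analogy), compare an unbounded thread-per-task approach with a bounded worker pool:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedThreadingExample {

        // A bounded worker pool caps the number of native threads (and their stacks),
        // no matter how fast requests arrive or how slow the downstream system is
        private static final ExecutorService WORKERS = Executors.newFixedThreadPool(50);

        public static void handle(Runnable task) {
               // Anti-pattern (do not do this): new Thread(task).start();
               // Under a traffic surge or a stuck remote call, the thread count grows
               // without bound and eventually depletes native memory
               WORKERS.submit(task);
        }

        public static void main(String[] args) {
               for (int i = 0; i < 1000; i++) {
                      handle(new Runnable() {
                             public void run() {
                                    // simulate a unit of work
                             }
                      });
               }
               WORKERS.shutdown();
        }
}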

Once you understand and address the root cause (the source of the thread surge), you can then work on tuning your JVM and OS memory capacity in order to make your environment more fault tolerant and better able to “survive” these sudden thread surge scenarios.

Recommendations:

  • First, quickly rule out any obvious OS memory (physical & virtual memory) or process capacity (e.g. ulimit -u / NPROC) problem.
  • Perform a JVM Thread Dump analysis and compare the source of all active threads against an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure.
  • Please ensure that your monitoring tools closely track both your Java VM process size & OS virtual memory. This crucial data will be required in order to perform a full root cause analysis. Please remember that a 32-bit Java process size is limited to 2 GB - 4 GB depending on your OS.
  • Look at all running processes and determine if your JVM processes are actually the source of the problem or the victim of other processes consuming all the virtual memory.
  • Revisit your Java EE container thread configuration & JVM thread stack size (-Xss). Determine if the Java EE container is allowed to create more threads than your JVM process and / or OS can handle.
  • Determine if the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling, or upgrading to a 64-bit JVM.

Capacity planning analysis to the rescue

As you may have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM & OS native memory requirements for your production environment, including impact measurements of "non-happy" paths. This approach will keep your production environment away from this type of problem and lead to better system scalability and stability in the long run.

Please provide any comment and share your experience with JVM native thread troubleshooting.

7.17.2012

5 tips for proper Java Heap size

Determination of proper Java Heap size for a production system is not a straightforward exercise. In my Java EE enterprise experience, I have seen multiple performance problem cases due to inadequate Java Heap capacity and tuning.

This article will provide you with 5 tips that can help you determine optimal Java heap size, as a starting point, for your current or new production environment. Some of these tips are also very useful regarding the prevention and resolution of java.lang.OutOfMemoryError problems; including memory leaks.

Please note that these tips are intended to “help you” determine proper Java Heap size. Since each IT environment is unique, you are actually in the best position to determine precisely the required Java Heap specifications of your client’s environment. Some of these tips may also not be applicable in the context of a very small standalone Java application, but I still recommend that you read the entire article.

Future articles will include tips on how to choose the proper Java VM garbage collector type for your environment and applications.

#1 – JVM: you always fear what you don't understand

How can you expect to configure, tune and troubleshoot something that you don’t understand? You may never have the chance to write and improve Java VM specifications but you are still free to learn its foundation in order to improve your knowledge and troubleshooting skills. Some may disagree, but from my perspective, the thinking that Java programmers are not required to know the internal JVM memory management is an illusion.

Java Heap tuning and troubleshooting can especially be a challenge for Java & Java EE beginners. Find below a typical scenario:

-        Your client’s production environment is facing OutOfMemoryError on a regular basis, causing a lot of business impact. Your support team is under pressure to resolve this problem
-        A quick Google search allows you to find examples of similar problems and you now believe (and assume) that you are facing the same problem
-        You then grab the JVM -Xms and -Xmx values from another person’s OutOfMemoryError problem case, hoping to quickly resolve your client’s problem
-        You then proceed and implement the same tuning in your environment. 2 days later you realize the problem is still happening (worse, or only slightly better)…the struggle continues…

What went wrong?

-        You failed to first acquire a proper understanding of the root cause of your problem
-        You may also have failed to properly understand your production environment at a deeper level (specifications, load situation etc.). Web searches are a great way to learn and share knowledge but you still have to perform your own due diligence and root cause analysis
-        You may also be lacking basic knowledge of the JVM and its internal memory management, preventing you from connecting all the dots

My #1 tip and recommendation to you is to learn and understand the basic JVM principles along with its different memory spaces. Such knowledge is critical as it will allow you to make valid recommendations to your clients and properly understand the possible impact and risk associated with future tuning considerations. Now find below a quick high level reference guide for the Java VM:

The Java VM memory is split into 3 memory spaces:

  • The Java Heap. Applicable to all JVM vendors, usually split between the YoungGen (nursery) & OldGen (tenured) spaces.
  • The PermGen (permanent generation). Applicable to the Sun HotSpot VM only (the PermGen space is scheduled to be removed as of Java 8).
  • The Native Heap (C-Heap). Applicable to all JVM vendors.


I recommend that you review each article below, including the Sun white paper on HotSpot Java memory management. I also encourage you to download and look at the OpenJDK implementation.
## Sun HotSpot VM

## IBM VM

## Oracle JRockit VM

## Sun (Oracle) – Java memory management white paper

## OpenJDK – Open-source Java implementation

As you can see, Java VM memory management is more complex than just setting the biggest value possible via -Xmx. You have to look at all angles, including your native and PermGen space requirements along with physical memory availability (and # of CPU cores) from your physical host(s).

It can get especially tricky for a 32-bit JVM since the Java Heap and the native Heap are in a race: the bigger your Java Heap, the smaller the native Heap. Attempting to set up a large Heap for a 32-bit VM, e.g. 2.5 GB+, increases the risk of native OutOfMemoryError depending on your application(s) footprint, number of threads, etc. A 64-bit JVM resolves this problem but you are still limited by physical resource availability and garbage collection overhead (the cost of major GC collections goes up with size). The bottom line is that bigger is not always better, so please do not assume that you can run all your 20 Java EE applications on a single 16 GB 64-bit JVM process.
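As a quick reference, find below a hypothetical HotSpot startup line showing which flag drives which memory space (the values and MyApp.jar are placeholders only, not a recommendation):

java -Xms2048m -Xmx2048m -XX:MaxPermSize=256m -Xss256k -jar MyApp.jar

-        -Xms / -Xmx: initial & maximum Java Heap size (YoungGen + OldGen)
-        -XX:MaxPermSize: maximum PermGen size (HotSpot only)
-        -Xss: thread stack size, allocated from the native (C-Heap) memory space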

#2 – Data and application is king: review your static footprint requirement

Your application(s) along with its associated data will dictate the Java Heap footprint requirement. By static memory, I mean “predictable” memory requirements as per below.

-        Determine how many different applications you are planning to deploy to a single JVM process, e.g. number of EAR files, WAR files, jar files etc. The more applications you deploy to a single JVM, the higher the demand on the native Heap
-        Determine how many Java classes will potentially be loaded at runtime, including third-party APIs. The more class loaders and classes you load at runtime, the higher the demand on the HotSpot VM PermGen space and internal JIT-related optimization objects
-        Determine the data cache footprint, e.g. internal cache data structures loaded by your application (and third-party APIs) such as cached data from a database, data read from a file etc. The more data caching you use, the higher the demand on the Java Heap OldGen space
-        Determine the number of threads that your middleware is allowed to create. This is very important since Java threads require enough native memory, otherwise an OutOfMemoryError will be thrown

For example, you will need much more native memory and PermGen space if you are planning to deploy 10 separate EAR applications on a single JVM process vs. only 2 or 3. Data caching not serialized to a disk or database will require extra memory from the OldGen space.

Try to come up with reasonable estimates of the static memory footprint requirement. This will be very useful to set some starting-point JVM capacity figures before your true measurement exercise (e.g. tip #4). For a 32-bit JVM, I usually do not recommend a Java Heap size higher than 2 GB (-Xms2048m, -Xmx2048m) since you need enough memory for the PermGen and native Heap for your Java EE applications and threads.

This assessment is especially important since too many applications deployed in a single 32-bit JVM process can easily lead to native Heap depletion, especially in a multi-threaded environment.

For a 64-bit JVM, a Java Heap size of 3 GB or 4 GB per JVM process is usually my recommended starting point.

#3 – Business traffic set the rules: review your dynamic footprint requirement

Your business traffic will typically dictate your dynamic memory footprint. Concurrent users & requests generate the JVM GC “heartbeat” that you can observe from various monitoring tools, due to the very frequent creation and garbage collection of short & long lived objects. As mentioned above, a typical ratio of YoungGen to OldGen is about 1:3, i.e. the YoungGen sized at roughly a third of the OldGen.

For a typical 32-bit JVM, a Java Heap set up at 2 GB (using a generational & concurrent collector) will typically allocate 500 MB to the YoungGen space and 1.5 GB to the OldGen space.
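On HotSpot, this split can typically be controlled via -Xmn or -XX:NewRatio. A hypothetical example matching the 2 GB figures above (values and MyApp.jar are illustrative only):

java -Xms2048m -Xmx2048m -Xmn512m -XX:+UseConcMarkSweepGC -jar MyApp.jar

-        -Xmn512m fixes the YoungGen at 512 MB (equivalent here to -XX:NewRatio=3, i.e. an OldGen 3 times the size of the YoungGen)
-        -XX:+UseConcMarkSweepGC is one example of a generational & concurrent collector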

Minimizing the frequency of major GC collections is a key aspect for optimal performance so it is very important that you understand and estimate how much memory you need during your peak volume.

Again, your type of application and data will dictate how much memory you need. Shopping-cart type applications (long lived objects) involving large, non-serialized session data typically need a large Java Heap and a lot of OldGen space. Stateless and XML-processing-heavy applications (lots of short lived objects) require a properly sized YoungGen space in order to minimize the frequency of major collections.

Example:

-        You have 5 EAR applications (~2,000 Java classes) to deploy (including the middleware code as well…)
-        Your native Heap requirement is estimated at 1 GB (it has to be large enough to handle thread creation etc.)
-        Your PermGen space is estimated at 512 MB
-        Your internal static data caching is estimated at 500 MB
-        Your total forecast traffic is 5000 concurrent users at peak hours
-        Each user session data footprint is estimated at 500 KB
-        Total footprint requirement for session data alone is 2.5 GB under peak volume
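Putting these hypothetical numbers together gives a rough, illustrative picture of the footprint:

-        Outside the Java Heap: ~1 GB native Heap + ~512 MB PermGen ≈ 1.5 GB
-        Inside the Java Heap (mostly OldGen pressure): ~500 MB static cache + ~2.5 GB session data ≈ 3 GB, before adding the YoungGen and GC headroom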

As you can see, with such requirements there is no way you can send all this traffic to a single 32-bit JVM process. A typical solution involves splitting (tip #5) the traffic across a few JVM processes and / or physical hosts (assuming you have enough hardware and CPU cores available).

However, for this example, given the high demand on static memory and to ensure a scalable environment in the long run, I would also recommend a 64-bit VM but with a smaller Java Heap as a starting point, such as 3 GB, to minimize the GC cost. You definitely want an extra buffer for the OldGen space, so I typically recommend keeping the memory footprint after a major collection at or below ~50%, in order to keep the Full GC frequency low and to leave enough buffer for fail-over scenarios.

Most of the time, your business traffic will drive most of your memory footprint, unless you need a significant amount of data caching to achieve proper performance, which is typical for portal (media) heavy applications. Too much data caching should raise a yellow flag that you may need to revisit some design elements sooner rather than later.

#4 – Don’t guess it, measure it!

At this point you should:

-        Understand the basic JVM principles and memory spaces
-        Have a deep view and understanding of all applications along with their characteristics (size, type, dynamic traffic, stateless vs. stateful objects, internal memory caches etc.)
-        Have a very good view or forecast of the business traffic (# of concurrent users etc.) for each application
-        Have some idea of whether you need a 64-bit VM and which JVM settings to start with
-        Have some idea of whether you need more than one JVM (middleware) process

But wait, your work is not done yet. While the above information is crucial for coming up with “best guess” Java Heap settings, it is always best and recommended to simulate your application(s) behaviour and validate the Java Heap memory requirement via proper profiling, load & performance testing.

You can learn and take advantage of tools such as JProfiler (future articles will include tutorials on JProfiler). From my perspective, learning how to use a profiler is the best way to properly understand your application memory footprint. Another approach I use for existing production environments is heap dump analysis using the Eclipse MAT tool. Heap dump analysis is very powerful: it allows you to view and understand the entire memory footprint of the Java Heap, including class loader related data, and is a must-do exercise in any memory footprint analysis, especially for memory leaks.
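For HotSpot JVMs, a heap dump for Eclipse MAT can typically be captured on demand with jmap, or generated automatically on an OutOfMemoryError via the HeapDumpOnOutOfMemoryError flag (the pid, file paths and MyApp.jar below are placeholders):

jmap -dump:format=b,file=/tmp/heap.hprof <pid>
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar MyApp.jar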



Java profilers and heap dump analysis tools allow you to understand and validate your application memory footprint, including the detection and resolution of memory leaks. Load and performance testing is also a must, since it will allow you to validate your earlier estimates by simulating your forecast number of concurrent users. It will also expose your application bottlenecks and allow you to further fine-tune your JVM settings. You can use tools such as Apache JMeter, which is very easy to learn and use, or explore other commercial products.

Finally, I have quite often seen Java EE environments running perfectly fine until the day one piece of the infrastructure starts to fail, e.g. a hardware failure. Suddenly the environment is running at reduced capacity (a reduced # of JVM processes) and the whole environment goes down. What happened?

There are many scenarios that can lead to domino effects but lack of JVM tuning and capacity to handle fail-over (short term extra load) is very common. If your JVM processes are running at 80%+ OldGen space capacity with frequent garbage collections, how can you expect to handle any fail-over scenario?

Your load and performance testing exercise performed earlier should simulate such a scenario, and you should adjust your tuning settings accordingly so your Java Heap has enough buffer to handle extra load (extra objects) in the short term. This is mainly applicable to the dynamic memory footprint, since fail-over means redirecting a certain % of your concurrent users to the available JVM processes (middleware instances).

#5 – Divide and conquer

At this point you have performed dozens of load testing iterations. You know that your JVM is not leaking memory. Your application memory footprint cannot be reduced any further. You have tried several tuning strategies, such as a large 64-bit Java Heap space of 10 GB+ and multiple GC policies, but you still do not find the performance level acceptable?

In my experience I have found that, with current JVM specifications, proper vertical and horizontal scaling, which involves creating a few JVM processes per physical host and across several hosts, will give you the throughput and capacity that you are looking for. Your IT environment will also be more fault tolerant if you break your application list into a few logical silos, each with its own JVM process, threads and tuning values.

This “divide and conquer” strategy involves splitting your application(s) traffic to multiple JVM processes and will provide you with:

-        Reduced Java Heap size per JVM process (both static & dynamic footprint)
-        Reduced complexity of JVM tuning
-        Reduced GC elapsed and pause time per JVM process
-        Increased redundancy and fail-over capabilities
-        Better alignment with the latest Cloud and IT virtualization strategies

The bottom line is that when you find yourself spending too much time tuning that single “elephant” 64-bit JVM process, it is time to revisit your middleware and JVM deployment strategy and take advantage of vertical & horizontal scaling. This implementation strategy is more taxing on the hardware but will really pay off in the long run.

Please provide any comment and share your experience on JVM Heap sizing and tuning.

4.09.2012

JRockit jrcmd tutorial

This article will provide you with an overview and tutorial on how to perform an initial analysis and problem isolation of a JRockit Java Heap problem using the jrcmd tool. A deeper analysis and tutorial using JRockit Mission Control and Heap Dump analysis (JRockit version R28+ only) will be covered in future articles.

For a quick overview of the JRockit Java Heap Space, please consult the article below:

JRCMD tool overview

jrcmd is a free tool that is available out-of-the-box in the JRockit binaries. It allows you to generate and collect crucial data from your running JRockit VM, such as:

-        Java process memory space breakdown (Java Heap vs Native memory spaces)
-        Java Heap diagnostic (histogram)
-        Java loaded classes
-        On-demand JRockit Heap Dump generation (version R28+ only)
-        Thread Dump generation
-        More…

For this article, we created a simple Java program leaking internally. We will use this program to demonstrate how you can leverage jrcmd to perform your initial analysis.

Sample Java memory leak program

This Java program keeps adding String data to a static HashMap, slowly leaking until the JVM runs out of Java Heap memory. It will allow you to visualize a slowly growing Java Heap leak via JRockit jrcmd. Please note that a Java Heap size of 128 MB (-Xms128m -Xmx128m) was used for this example.

import java.util.HashMap;
import java.util.Map;

/**
 * JavaHeapLeakSimulator
 * @author Pierre-Hugues Charbonneau
 * http://javaeesupportpatterns.blogspot.com
 */
public class JavaHeapLeakSimulator {

        private final static int NB_ITERATIONS = 500000000;

        // ~1 KB data footprint
        private final static String LEAKING_DATA_PREFIX = "datadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadatadata";

        // Map used to stored our leaking String instances
        private static Map<String, String> leakingMap;

        static {
               leakingMap = new HashMap<String, String>();
        }

        /**
         * @param args
         */
        public static void main(String[] args) {

               System.out.println("Java Heap Leak Simulator 1.0");
               System.out.println("Author: Pierre-Hugues Charbonneau");
               System.out.println("http://javaeesupportpatterns.blogspot.com/");

               try {

                       for (int i = 0; i < NB_ITERATIONS; i++) {

                              String data = LEAKING_DATA_PREFIX + i;
                             
                              // Add data to our leaking Map data structure...
                              leakingMap.put(data, data);
                             
                              // Slowdown the Java program so we can monitor the leak before the OutOfMemoryError condition
                              Thread.sleep(1);
                       }

               } catch (Throwable any) {
                       if (any instanceof java.lang.OutOfMemoryError) {
                              System.out.println("OutOfMemoryError triggered! "
                                             + any.getMessage() + " [" + any + "]");

                       } else {
                              System.out.println("Unexpected Exception! " + any.getMessage()
                                             + " [" + any + "]");
                       }
               }

               System.out.println("JavaHeapLeakSimulator done!");
        }
}


JRCMD - initial execution

JRCMD can be executed from the local server hosting the JVM that you want to monitor or remotely via JRockit Mission Control. The executable is located within the JRockit JDK that you are using:

<JRockit_JDK_HOME>/bin/jrcmd

The default jrcmd execution will return the list of active JRockit Java process Ids that you can monitor:

C:\Apps\Weblogic1035\jrockit_160_24_D1.1.2-4\bin>jrcmd
5360 org.ph.javaee.tool.oom.JavaHeapLeakSimulator
5952
6852 jrockit.tools.jrcmd.JrCmd

JRCMD - Java Heap monitoring

The next step is to start monitoring the Java Heap memory usage and histogram. A Java Heap histogram is a snapshot of the biggest pools of Java class instances, which allows you to pinpoint the leaking data type(s).

You can choose between print_object_summary (quick summary) and heap_diagnostics (complete breakdown).
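For a quick look, the summary variant can be invoked the same way against the process Id obtained earlier (sample invocation only):

C:\Apps\Weblogic1035\jrockit_160_24_D1.1.2-4\bin>jrcmd 5360 print_object_summary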

C:\Apps\Weblogic1035\jrockit_160_24_D1.1.2-4\bin>jrcmd 5360 heap_diagnostics

Invoked from diagnosticcommand
======== BEGIN OF HEAPDIAGNOSTIC =========================

Total memory in system: 8465022976 bytes
Available physical memory in system: 5279170560 bytes
-Xmx (maximal heap size) is 134217728 bytes
Heapsize: 134217728 bytes
Free heap-memory: 123592704 bytes


--------- Detailed Heap Statistics: ---------
90.9% 3948k     5468  +3948k [C
 3.0% 128k     5490   +128k java/lang/String
 2.1% 92k     3941    +92k java/util/HashMap$Entry
 1.2% 50k      461    +50k java/lang/Class
 0.8% 35k       21    +35k [Ljava/util/HashMap$Entry;
 0.6% 24k        7    +24k [B
 0.3% 15k      305    +15k [Ljava/lang/Object;
 0.3% 14k      260    +14k java/net/URL
 0.2% 6k      213     +6k java/util/LinkedHashMap$Entry
 0.1% 4k      211     +4k java/io/ExpiringCache$Entry
 0.1% 2k        4     +2k [Ljrockit/vm/FCECache$FCE;
 0.0% 1k       50     +1k [Ljava/lang/String;
 0.0% 1k       10     +1k java/lang/Thread
 0.0% 1k       61     +1k java/util/Hashtable$Entry
 0.0% 1k        7     +1k [I
 0.0% 0k       19     +0k java/util/HashMap
 0.0% 0k       19     +0k java/lang/ref/WeakReference
 0.0% 0k        7     +0k [Ljava/util/Hashtable$Entry;
 0.0% 0k       19     +0k java/util/Locale
 0.0% 0k       11     +0k java/lang/ref/SoftReference
 0.0% 0k        1     +0k [S
…………………………………………………

- The first column corresponds to the class object type contribution to the Java Heap footprint in %
- The second column corresponds to the class object type memory footprint in KB
- The third column corresponds to the # of class instances of a particular type
- The fourth column corresponds to the delta (- / +) of the memory footprint of a particular type

As you can see from the above snapshot, the biggest data types are [C (char arrays, which back our String data) and java.lang.String. In order to see which data types are leaking, you will need to generate several snapshots. The frequency will depend on the leak rate. In our example, find below another snapshot taken after 5 minutes:

# After 5 minutes
--------- Detailed Heap Statistics: ---------
93.9% 26169k    28746 +12032k [C
 2.4% 674k    28768   +295k java/lang/String
 2.3% 637k    27219   +295k java/util/HashMap$Entry
 0.9% 259k       21   +128k [Ljava/util/HashMap$Entry;
 0.2% 50k      462     +0k java/lang/Class
 0.1% 24k        7     +0k [B

# After 5 more minutes
--------- Detailed Heap Statistics: ---------
94.5% 46978k    50534 +20809k [C
 2.4% 1184k    50556   +510k java/lang/String
 2.3% 1148k    49007   +510k java/util/HashMap$Entry
 0.5% 259k       21     +0k [Ljava/util/HashMap$Entry;
 0.1% 50k      462     +0k java/lang/Class

The third & fourth columns show a constant increase. As you can see, the leaking data types in our case are [C, java.lang.String and java.util.HashMap$Entry, which together grew from ~4 MB to ~28 MB, then ~50 MB, and keep growing…

It is easy to pinpoint the leaking data type(s) with this approach, but what about the source (root cause) of the leaking data type(s)? This is where jrcmd is no longer sufficient. Deeper memory leak analysis will require you to use either JRockit Mission Control or Heap Dump analysis (JRockit R28+ only).

One final point: before concluding that you are facing a true Java Heap leak, please ensure that at least one Full GC occurred between the jrcmd captures (what you are interested in is an OldGen leak, i.e. Java objects surviving major GC collections).

JRCMD Thread Dump generation

Thread Dump analysis is crucial for stuck-thread related problems but can also be useful to troubleshoot certain types of Java Heap problems. For example, it can pinpoint the source of a sudden Java Heap increase by exposing the culprit thread(s) allocating a large amount of memory on the Java Heap in a short amount of time. A Thread Dump can be generated using the jrcmd print_threads option.

** Thread Dump captured from our sample Java program after removing the Thread.sleep() and increasing the Java Heap capacity **

C:\Apps\Weblogic1035\jrockit_160_24_D1.1.2-4\bin>jrcmd 5808 print_threads
5808:

===== FULL THREAD DUMP ===============
Mon Apr 09 09:08:08 2012
Oracle JRockit(R) R28.1.3-11-141760-1.6.0_24-20110301-1429-windows-ia32

"Main Thread" id=1 idx=0x4 tid=6076 prio=5 alive, native_blocked
    at jrockit/vm/Allocator.getNewTla(II)V(Native Method)
    at jrockit/vm/Allocator.allocObjectOrArray(Allocator.java:354)[optimized]
    at java/util/Arrays.copyOfRange(Arrays.java:3209)[inlined]
    at java/lang/String.<init>(String.java:215)[inlined]
    at java/lang/StringBuilder.toString(StringBuilder.java:430)[optimized]
    at org/ph/javaee/tool/oom/JavaHeapLeakSimulator.main(JavaHeapLeakSimulator.java:38)
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
    -- end of trace
……………………………………….

We can see that our sample Java program is creating a lot of java.lang.String objects from the “Main Thread” executing our JavaHeapLeakSimulator program.

Conclusion

I hope this article has helped you understand how you can leverage the JRockit jrcmd tool for quick Java Heap analysis. I'm looking forward to your comments and questions.

Future articles will include a deeper JRockit Java Heap and Heap Dump analysis tutorial.