/ Java EE Support Patterns: PermGen

8.10.2011

OutOfMemoryError Help Guide

This is part 1 of a series of posts that will provide you with a root cause analysis approach and resolution guide for Java EE OutOfMemoryError problems.

Part 1 will focus on how you can first isolate the problem and identify which JVM memory space ran out of memory.

OutOfMemoryError: What is it?

This is one of the most common problems you can face when supporting or developing Java EE production systems or a standalone Java application.

An OutOfMemoryError is thrown by the Java VM when it cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.

The actual exception thrown by the JVM is java.lang.OutOfMemoryError, which is a subclass of java.lang.Error. The error message provided by the VM will differ depending on your JVM vendor and version and on which memory space is depleted.

Find below an example of OutOfMemoryError thrown by a Java HotSpot VM 1.6 following the depletion of the Java Heap space.

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
       at java.util.Arrays.copyOf(Arrays.java:2882)
       at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
       at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
       at java.lang.StringBuilder.append(StringBuilder.java:119)
       at sun.security.util.ManifestDigester.<init>(ManifestDigester.java:117)
       at java.util.jar.JarVerifier.processEntry(JarVerifier.java:250)
       at java.util.jar.JarVerifier.update(JarVerifier.java:188)
       at java.util.jar.JarFile.initializeVerifier(JarFile.java:321)
       at java.util.jar.JarFile.getInputStream(JarFile.java:386)
       at org.jboss.virtual.plugins.context.zip.ZipFileWrapper.openStream(ZipFileWrapper.java:215)
       at org.jboss.virtual.plugins.context.zip.ZipEntryContext.openStream(ZipEntryContext.java:1084)
       at org.jboss.virtual.plugins.context.zip.ZipEntryHandler.openStream(ZipEntryHandler.java:154)
       at org.jboss.virtual.VirtualFile.openStream(VirtualFile.java:241)


Analysis step #1 - Identify which JVM memory space ran out of memory

The first step is to determine which memory space is depleted. The analysis approach will depend on which JVM vendor and version you are using. I have built a quick matrix guide to help with this task. Please simply review each of the affected memory spaces below and determine which one is applicable in your situation.

Please feel free to post any comment or question if you are still having doubts with these problem isolation steps.


JVM Memory Space: Java Heap


Applicable JVM vendors

  - Oracle Java HotSpot (any version)
  - IBM Java J9 (any version)
  - Oracle JRockit (any version)

Analysis Approach
  1. Review the OutOfMemoryError message. It should give you information such as java.lang.OutOfMemoryError: Java heap space.
  2. If you are not seeing any explicit error message, then you need to analyze the OutOfMemoryError stack trace. Look at the first 5 lines; they will give you the type of Java operation the thread was executing that led to the OOM error. Java Heap depletion will be triggered by Java operations such as populating a StringBuffer, adding objects to a HashMap data structure, etc.
  3. If you are not sure about either approach #1 or #2, then you will need to enable JVM verbose GC (-verbose:gc) in order to identify and confirm whether Java Heap depletion (Young / Old Gen) is your problem.

Problem Patterns


The Java Heap is the memory space where you will most commonly face an OutOfMemoryError, since it stores your Java program's short- and long-term object instances.

The most common problem is an insufficient maximum capacity (set via the -Xmx argument). Java Heap memory leaks are also quite common and will require you to analyze a JVM heap dump to pinpoint the root cause.
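The heap-leak pattern described above can be sketched in a few lines: a long-lived static collection that only ever grows. This is a minimal illustration (the class name and sizes are hypothetical), deliberately bounded so it completes without failing; with a capped heap (e.g. -Xmx32m) and an unbounded loop it would end in java.lang.OutOfMemoryError: Java heap space.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the most common Java Heap OOM pattern:
// a static (GC-root reachable) collection that grows without bound.
public class HeapRetentionDemo {

    // Long-lived root: nothing added here can ever be garbage collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        // Bounded here so the sketch terminates; a leaking application
        // keeps adding entries until the Java Heap is depleted.
        for (int i = 0; i < 1000; i++) {
            CACHE.add(new byte[1024]); // 1 KB retained per iteration
        }
        System.out.println("Retained entries: " + CACHE.size());
    }
}
```

In a real application the equivalent of CACHE is typically a session map, a home-grown cache without eviction, or a static registry that is never cleaned up.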


JVM Memory Space: PermGen

** NOTE: PermGen was replaced by the Metaspace starting with JDK 1.8 **


Applicable JVM vendors

  - Oracle Java HotSpot (JDK 1.7 and lower)

Analysis Approach


  1. Review the OutOfMemoryError message. It should give you information such as java.lang.OutOfMemoryError: PermGen space.
  2. If you are not seeing any explicit error message, then you need to analyze the OutOfMemoryError stack trace. Look at the first 5 lines; they will give you the type of Java operation the thread was executing that led to the PermGen depletion. Java PermGen space depletion will be triggered by JVM operations such as loading a class into a class loader, as per the example below.
  3. If you are not sure about either approach #1 or #2, then you will need to enable JVM verbose GC (-verbose:gc) in order to identify whether PermGen space depletion is your problem.
java.lang.OutOfMemoryError: PermGen space
       at java.lang.ClassLoader.defineClass1(Native Method)
       at java.lang.ClassLoader.defineClass(Unknown Source)
       ..........................................................................................


Problem Patterns

The Java HotSpot PermGen space is the second most common memory space where you will face an OutOfMemoryError, since it stores your Java program's class descriptors and related objects.

Configuring a PermGen space that is too small relative to your Java / Java EE program size is the most common problem. Other scenarios include class loader leaks, which can be triggered, for example, by too many deploy / undeploy cycles without a JVM restart.
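The class loader leak scenario can be sketched as follows. Each isolated loader that defines the same class produces a distinct Class object, and every copy consumes PermGen (or Metaspace on JDK 1.8+); keeping the old loaders reachable prevents their classes from ever being unloaded. This is an illustrative sketch, not Weblogic's actual redeploy mechanics, and it assumes the demo's own .class file is visible on java.class.path:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Sketch of a class loader leak: repeated redeploys create new loaders
// that re-define the same classes, and a lingering reference to each old
// loader keeps its duplicated class metadata alive in PermGen.
public class LoaderLeakDemo {

    // Long-lived references simulating the leak (e.g. a cached listener
    // still pointing at the class loader of an undeployed application).
    static final List<ClassLoader> LOADERS = new ArrayList<>();

    // Builds a loader over the current classpath with a null parent,
    // so classes are re-defined instead of delegated to the app loader.
    static ClassLoader newIsolatedLoader() throws Exception {
        List<URL> urls = new ArrayList<>();
        for (String entry : System.getProperty("java.class.path").split(File.pathSeparator)) {
            urls.add(new File(entry).toURI().toURL());
        }
        return new URLClassLoader(urls.toArray(new URL[0]), null);
    }

    public static void main(String[] args) throws Exception {
        ClassLoader l1 = newIsolatedLoader();
        ClassLoader l2 = newIsolatedLoader();
        LOADERS.add(l1); // never removed: classes can never be unloaded
        LOADERS.add(l2);
        try {
            Class<?> c1 = l1.loadClass("LoaderLeakDemo");
            Class<?> c2 = l2.loadClass("LoaderLeakDemo");
            // Same class name, two distinct Class objects in the JVM.
            System.out.println("Duplicated class metadata: " + (c1 != c2));
        } catch (ClassNotFoundException e) {
            // Happens if the demo .class file is not on java.class.path.
            System.out.println("Demo class not visible on the classpath");
        }
    }
}
```

Each redeploy cycle in a leaking server behaves like one more call to newIsolatedLoader() whose result is never released.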


JVM Memory Space: Native Heap (C-Heap)

Applicable JVM vendors

  - Oracle Java HotSpot (any version)
  - IBM Java J9 (any version)
  - Oracle JRockit (any version)

Analysis Approach

Native OutOfMemoryError messages are normally not very informative, so a deeper analysis of the OutOfMemoryError stack trace is required.

Native Heap depletion will be triggered by Java operations such as loading a JAR file (memory-mapped file) or trying to create a new Java Thread, which all require enough native memory available to the C-Heap. Find below an example of an OutOfMemoryError due to native memory depletion of a 32-bit VM:

java.lang.OutOfMemoryError
       at java.util.zip.ZipFile.open(Native Method)
       at java.util.zip.ZipFile.<init>(ZipFile.java:203)
       at java.util.jar.JarFile.<init>(JarFile.java:132)
       at java.util.jar.JarFile.<init>(JarFile.java:97)
       at weblogic.utils.jars.JarFileDelegate.<init>(JarFileDelegate.java:32)
       .......................................................................


Problem Patterns

An OutOfMemoryError resulting from Native Heap depletion is less common but can happen, for example, if your physical server is running out of virtual memory.

Other scenarios include memory leaks in third-party native libraries such as a monitoring agent, or trying to deploy too many applications (EAR files) or Java classes to a single 32-bit JVM.
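Another classic native-heap symptom, particularly on 32-bit JVMs running many threads, is a failure to allocate the native stack for a new thread. The HotSpot error message has the following well-known form (the Thread.java line number varies by JDK release):

```
java.lang.OutOfMemoryError: unable to create new native thread
       at java.lang.Thread.start0(Native Method)
       at java.lang.Thread.start(Thread.java:597)
```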


7.26.2011

Weblogic PermGen space

This article will provide you with some background on the Java HotSpot PermGen space and a 4-step process to monitor and tune the PermGen space of your Weblogic Java EE server.

Java HotSpot PermGen overview

The HotSpot PermGen (Permanent Generation) is a memory space specific to the Sun Java HotSpot VM, and is also applicable to any other Java EE server running on the HotSpot VM, such as Red Hat JBoss or IBM WAS. It is a special memory space for Class descriptors and other internal JVM objects.

Please refer to my other Blog article on the Java HotSpot VM memory space breakdown for a quick visual overview of all HotSpot memory spaces.

If you are facing any PermGen related problem such as OutOfMemoryError, please refer to my Blog article OutOfMemoryError PermGen patterns for description and troubleshooting of the most common Weblogic PermGen issues.

PermGen tuning step #1 – Identify your JVM vendor

First, please make sure that the PermGen space is actually applicable to your Java EE environment. JVMs such as the IBM SDK and Oracle JRockit do not have an actual PermGen space; for those VMs, the memory is allocated either to the Java Heap or to the native C-Heap.

** PermGen space is only applicable to the Sun Java HotSpot VM 1.4, 1.5 & 1.6. There are some requests for enhancement for JDK 1.7 that may eliminate the PermGen space and use native C memory instead for the VM's representation of the Java program (class descriptors / code). **

PermGen tuning step #2 – Review your current PermGen space settings

The minimum and maximum PermGen space configuration is achieved via the following parameters:

## Example: PermGen space with a minimum size of 256 MB
-XX:PermSize=256m

## Example: PermGen space with a maximum size of 512 MB
-XX:MaxPermSize=512m

The default Weblogic PermGen space settings are normally found in the commEnv.sh / commEnv.cmd scripts located under:

<WL11g HOME>/wlserver_10.3/common/bin/




When using Weblogic Nodemanager to start and stop your managed servers (used by most Weblogic production systems), PermGen settings can be updated via the Weblogic console under Home / Managed server 1...n / Server Start / Arguments field:


PermGen tuning step #3 – Review your current PermGen runtime footprint

PermGen space capacity and footprint can be analyzed by turning on verbose GC and monitoring the Full GC collections on a daily basis. This will allow you to determine your current PermGen footprint and identify any potential issue such as PermGen memory leak.

Verbose GC for the HotSpot VM can be enabled by simply adding the parameters below to your start-up VM parameters:

## Example: Enable verbose GC and print the output to a custom log
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/apps/domains/app/gc/gc_s1.log
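Once verbose GC is active, each Full GC entry reports the PermGen utilization in the form used->used(capacity) under the PSPermGen tag (parallel collector). The line below is illustrative only; all figures are hypothetical:

```
152.871: [Full GC [PSYoungGen: 12288K->0K(76800K)] [PSOldGen: 512314K->498771K(1048576K)] 524602K->498771K(1125376K) [PSPermGen: 262144K->262144K(524288K)], 2.1378940 secs]
```

A PermGen figure that keeps climbing across successive Full GC lines, day after day, is the signature of a class loader leak.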


You can also use the free open source tool GC Viewer to graphically review your Java Heap memory footprint from the GC output data.

A bigger PermGen space requirement is to be expected for applications using a significant amount of code and class loader trees, e.g. many EAR files and / or applications using frameworks that depend heavily on dynamic class loading / the Reflection API.

PermGen tuning step #4 – Tune your PermGen space, if required

Increasing the maximum capacity of the PermGen space is often required, but you first need to ensure that you are not facing a memory leak such as a class loader leak. Please review your verbose GC output very closely and ensure that it is not growing constantly on a daily basis. You may need several days to confirm a PermGen memory leak, since applications such as Weblogic Portal using dynamic JSP reload may increase the PermGen utilization over time (until all your JSPs are compiled and loaded to their class loaders).

In order to increase your PermGen space to let’s say 512 MB, simply edit the Weblogic commEnv.sh or commEnv.cmd script and change the default from 128m to 512m or add / modify the MaxPermSize to 512m (-XX:MaxPermSize=512m) via the Server Start arguments (as per step #2 Weblogic console snapshot) if using Nodemanager.

** A Weblogic managed server(s) restart will be required in order to activate your new PermGen space settings. **

2.21.2011

Java HotSpot VM PermGen space

I often get a lot of questions from other application support colleagues regarding the permanent generation space. This short post will provide you with both a high level summary and graphical view of this special Java HotSpot VM memory block.
 
PermGen space

The Java HotSpot VM permanent generation space is the JVM storage used mainly to store your Java Class objects. The Java Heap is the primary storage holding the actual short- and long-term instances of the classes described in the PermGen space.

The PermGen space is fairly static by nature unless you are using third-party tools and/or the Java Reflection API, which rely heavily on dynamic class loading.
It is important to note that this memory storage is applicable only to the Java HotSpot VM; other JVM vendors such as IBM and Oracle (JRockit) do not have such a fixed and configurable PermGen storage and use other techniques to manage the non-Java-Heap (native) memory.

Find below a graphical view of a JVM HotSpot Java Heap vs. PermGen space breakdown along with its associated attributes and capacity tuning arguments.

2.17.2011

java.lang.OutOfMemoryError: PermGen space patterns

A Java memory problem such as java.lang.OutOfMemoryError: PermGen space is one of the most frequent and complex problems a Java EE application support person can face on a production system. This article is part #1 of a series of posts that will focus on a particular OOM flavour: PermGen space depletion of the Java HotSpot VM.
Part #1's primary objective is to revisit the fundamentals of the permanent generation space and to teach you how to identify particular patterns of PermGen space problems and possible causes of PermGen memory leaks.

Please refer to HotSpot PermGen space for a high level overview of this special Java HotSpot VM memory storage.

A PermGen Troubleshooting Guide video is also available from our YouTube channel.

java.lang.OutOfMemoryError: PermGen space patterns

Find below some of the most common patterns of OutOfMemoryError due to the depletion of the PermGen space.

Pattern #1: OOM observed during or after a migration of a Java EE server to a newer version

Symptoms:
- OOM may be observed on server start-up at deployment time
- OOM may be observed very shortly after server start-up and after 1 or 2+ hours of production traffic

Possible root cause scenarios:
- A higher PermGen capacity is often required due to increased Java EE server vendor code and libraries

Resolution:
- Increase your PermGen space capacity via -XX:MaxPermSize

Pattern #2: OOM observed after a certain period of time

Symptoms:
- OOM observed after a longer but consistent period of time (days)
- PermGen space monitoring will show an hourly or daily increase during your application business hours

Possible root cause scenarios:
- There are many possible causes of a PermGen space memory leak. The most common is a class loader leak: an increasing number of Class objects over time
- Improper JVM arguments such as the -Xnoclassgc flag (turns OFF class garbage collection)

Resolution:
- Review your JVM HotSpot start-up arguments for any obvious problem such as the -Xnoclassgc flag
- Analyze the JVM HotSpot heap dump, as it can provide some hints on the source of a class loader leak
- Investigate any third-party API you are using for any potential class loader leak defect
- Investigate your application code for any improper use of the Reflection API and / or dynamic class loading

Pattern #3: OOM observed following a redeploy of your application code (EAR, WAR files...)

Symptoms:
- OOM may be observed during or shortly after your application redeploy process

Possible root cause scenarios:
- Unloading and reloading of your application code can lead to a PermGen leak (class loader leak) and deplete your PermGen space fairly quickly

Resolution:
- Open a ticket with your Java EE vendor for any known class loader leak issue
- Shut down and restart your server (JVM) post-deployment to clean up any class loader leak

Conclusion

I hope this short article has helped you understand the role of the JVM HotSpot permanent generation space and recognize some common OutOfMemoryError patterns.

Part #2 will focus on the diagnostic and deep dive analysis of your PermGen space problem. You can also refer to OutOfMemoryError PermGen space for a real case study on that problem.

2.06.2011

OutOfMemoryError PermGen using Weblogic 10.0 and Sun JDK 1.5

This case study describes the complete steps, from root cause analysis to resolution, of a native memory leak (PermGen space) problem experienced with a Weblogic 10.0 environment using Sun JDK 1.5.0.

This case study will show how a simple tuning mistake in the JVM settings can have severe consequences, as well as how Java heap dump analysis can sometimes help pinpoint the root cause of a native memory leak involving class loaders.

Environment specifications

- Java EE server: Weblogic 10.0
- OS: Solaris 10
- JDK: Sun JDK 1.5.0_14 (-XX:PermSize=512m -XX:MaxPermSize=512m -XX:+UseParallelGC -Xms1536m -Xmx1536m -Xnoclassgc -XX:+HeapDumpOnOutOfMemoryError)
- Platform type: Ordering Portal

Troubleshooting tools

- JVM Heap Dump (hprof format)
- VisualGC 3.0
- Eclipse Memory Analyser 0.6.0.2 (via IBM Support Assistant 4.1)

Problem overview

- Problem type: java.lang.OutOfMemoryError: PermGen space

This problem was brought to our attention because it was becoming unmanageable for the support team to support this production environment as it was constantly failing with this OutOfMemoryError. A task force was assigned to deep dive into the root cause and provide a permanent solution.

Problem mitigation did involve restarting all the Weblogic servers every 2 days.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

- Recent change of the affected platform? No
- Any recent traffic increase to the affected platform? Yes, the traffic of the environment has increased over the last few months
- How long has this problem been observed? The problem has been affecting the platform for about 1 year
- Is the permanent generation space growing suddenly or over time? It was observed using the VisualGC tool that the PermGen space is growing on a regular / daily basis
- Did a restart of the Weblogic server resolve the problem? No, restarting the Weblogic server is only used as a mitigation strategy to prevent the OutOfMemoryError

- Conclusion #1: The problem is related to a memory leak of the Sun JVM permanent generation space (PermGen)
- Conclusion #2: The recent load increase appears to be the trigger for the increased need to restart the Weblogic servers


PermGen memory monitoring (via VisualGC)

PermGen space monitoring was performed via the Sun VisualGC 3.0 monitoring tool.
http://java.sun.com/performance/jvmstat/visualgc.html

You can see below the evolution of the PermGen space leak over a 24-hour period.


- December 24, 2010 @ 00:00 AM: PermGen @ 255 MB
- December 24, 2010 @ 9:00 PM: PermGen @ 325 MB
- December 25, 2010 @ 5:00 AM: PermGen @ 330 MB
The review of the VisualGC data was quite conclusive on the fact that our application was leaking the PermGen space on a regular basis. Now the question was why.

JVM Heap Dump

A few JVM Heap Dumps (java_pid<xyz>.hprof format) were generated by the Sun JVM following some occurrences of the OutOfMemoryError.

Please note that a heap dump represents a snapshot of the Java Heap. PermGen is not normally part of a JVM heap dump, but the dump still provides some information on how many classes were loaded, how many class loaders exist, etc., since each java.lang.Class instance in the heap acts as a pointer (stub) to the real class metadata stored in the native memory space.

Heap Dump analysis

The heap dump was analysed using Eclipse Memory Analyser in order to try to get information on the loaded classes and classloaders.


The analysis revealed:

- A high number of instances of java.lang.Class loaded by the system class loader (leak suspect #2). These classes did not appear to be referenced anymore.
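As a complement to heap dump analysis, the HotSpot jmap utility (Sun JDK 5/6) can report class loader statistics directly from a live JVM, which helps confirm a growing loader or class count; <pid> below is a placeholder for the Java process id:

```
## Print PermGen class loader statistics from a live HotSpot JVM (Sun JDK 5/6)
## Output lists each class loader with its loaded class count and bytes
jmap -permstat <pid>
```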

JVM memory arguments review

Following the findings from the heap dump analysis, a review of the JVM start-up arguments was done in order to determine any configuration problem. We immediately found a suspect parameter: -Xnoclassgc.

By default the JVM unloads a class from the PermGen space when the class is no longer referenced and its class loader has become unreachable, but this can degrade performance in some scenarios. Turning off class garbage collection eliminates the overhead of loading and unloading the same class multiple times.

If a class is no longer needed, the space that it occupies on the heap is normally used for the creation of new objects. However, for an application that handles requests by creating a new instance of a class, the normal class garbage collection will clean up this class by freeing the PermGen space it occupied, only to have to re-instantiate the class when the next request comes along. In this situation you might want to use this option to disable the garbage collection of classes.

However, in the Java EE world this is normally a bad idea, since many applications create classes dynamically or use reflection; for this type of application, the use of this option can lead to native memory leaks and exhaustion.

Root cause

The analysis of the Heap Dump and JVM arguments review did confirm the following root cause:

- The addition of the -Xnoclassgc flag disabled the Sun JVM PermGen garbage collection and was leaking the PermGen space, since our application relies on frameworks making frequent use of dynamic class loading and Java Reflection.
- It was determined that a human error introduced this parameter during the load testing phase (performed about 1 year earlier), during an attempt to tune the garbage collection process of the Java Heap.

Solution and tuning

Two areas were looked at during the resolution phase:

- Removal of the -Xnoclassgc flag from one of our production servers, along with close monitoring.
- Ensuring that re-enabling the class garbage collector does not create any major overhead on the garbage collection and/or CPU utilization.

The removal of this JVM argument provided instant relief with no negative side effects. The VisualGC snapshots confirmed that the PermGen space was now quite stable, settling around the 200 MB mark and no longer leaking.

Recommendations and best practices

- When facing OutOfMemoryError problems, always start with the basics: first review your JVM start-up arguments for any obvious tuning problem before moving to deep dive analysis.
- Take advantage of the Eclipse Memory Analyser heap dump analysis tool, as it may be able to help you pinpoint certain types of native memory leak.