Valgrind Manual
Valgrind Documentation
Table of Contents
The Valgrind Quick Start Guide
Valgrind User Manual
Valgrind FAQ
Valgrind Technical Documentation
Valgrind Distribution Documents
GNU Licenses
Table of Contents
The Valgrind Quick Start Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. Preparing your program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
3. Running your program under Memcheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
4. Interpreting Memcheck's output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
5. Caveats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
6. More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Memcheck is the default tool. The --leak-check option turns on the detailed memory leak detector.
Your program will run much slower (eg. 20 to 30 times) than normal, and use a lot more memory. Memcheck will
issue messages about memory errors and leaks that it detects.
#include <stdlib.h>

void f(void)
{
    int* x = malloc(10 * sizeof(int));
    x[10] = 0;          // problem 1: heap block overrun
}                       // problem 2: memory leak -- x not freed

int main(void)
{
    f();
    return 0;
}
Most error messages look like the following, which describes problem 1, the heap block overrun:
==19182== Invalid write of size 4
==19182==    at 0x804838F: f (example.c:6)
==19182==    by 0x80483AB: main (example.c:11)
==19182==  Address 0x1BA45050 is 0 bytes after a block of size 40 alloc'd
==19182==    at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
==19182==    by 0x8048385: f (example.c:5)
==19182==    by 0x80483AB: main (example.c:11)
Things to notice:

• There is a lot of information in each error message; read it carefully.

• The 19182 is the process ID; it's usually unimportant.

• The first line ("Invalid write...") tells you what kind of error it is. Here, the program wrote to some memory it
should not have due to a heap block overrun.

• Below the first line is a stack trace telling you where the problem occurred. Stack traces can get quite large, and be
confusing, especially if you are using the C++ STL. Reading them from the bottom up can help. If the stack trace
is not big enough, use the --num-callers option to make it bigger.

• The code addresses (eg. 0x804838F) are usually unimportant, but occasionally crucial for tracking down weirder
bugs.

• Some error messages have a second component which describes the memory address involved. This one shows
that the written memory is just past the end of a block allocated with malloc() on line 5 of example.c.

It's worth fixing errors in the order they are reported, as later errors can be caused by earlier errors. Failing to do this
is a common cause of difficulty with Memcheck.
Memory leak messages look like this:

==19182== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1
==19182==    at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130)
==19182==    by 0x8048385: f (a.c:5)
==19182==    by 0x80483AB: main (a.c:11)

The stack trace tells you where the leaked memory was allocated. Memcheck cannot tell you why the memory leaked,
unfortunately. (Ignore the "vg_replace_malloc.c", that's an implementation detail.)
There are several kinds of leaks; the two most important categories are:

• "definitely lost": your program is leaking memory -- fix it!

• "probably lost": your program is leaking memory, unless you're doing funny things with pointers (such as moving
them to point to the middle of a heap block).
Memcheck also reports uses of uninitialised values, most commonly with the message "Conditional jump or move
depends on uninitialised value(s)". It can be difficult to determine the root cause of these errors. Try using the
--track-origins=yes option to get extra information. This makes Memcheck run slower, but the extra information
you get often saves a lot of time figuring out where the uninitialised values are coming from.
If you don't understand an error message, please consult Explanation of error messages from Memcheck in the
Valgrind User Manual, which has examples of all the error messages Memcheck produces.
5. Caveats
Memcheck is not perfect; it occasionally produces false positives, and there are mechanisms for suppressing these
(see Suppressing errors in the Valgrind User Manual). However, it is typically right 99% of the time, so you should be
wary of ignoring its error messages. After all, you wouldn't ignore warning messages produced by a compiler, right?
The suppression mechanism is also useful if Memcheck is reporting errors in library code that you cannot change.
The default suppression set hides a lot of these, but you may come across more.
Memcheck cannot detect every memory error your program has. For example, it can't detect out-of-range reads or
writes to arrays that are allocated statically or on the stack. But it should detect many errors that could crash your
program (eg. cause a segmentation fault).
Try to make your program so clean that Memcheck reports no errors. Once you achieve this state, it is much easier to
see when changes to the program cause Memcheck to report new errors. Experience from several years of Memcheck
use shows that it is possible to make even huge programs run Memcheck-clean. For example, large parts of KDE,
OpenOffice.org and Firefox are Memcheck-clean, or very close to it.
6. More information
Please consult the Valgrind FAQ and the Valgrind User Manual, which have much more information. Note that the
other tools in the Valgrind distribution can be invoked with the --tool option.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1. An Overview of Valgrind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2. How to navigate this manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. Using and understanding the Valgrind core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1. What Valgrind does with your program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2. Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3. The Commentary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4. Reporting of errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.5. Suppressing errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.6. Core Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6.1. Tool-selection Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6.2. Basic Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6.3. Error-related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.6.4. malloc-related Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6.5. Uncommon Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6.6. Debugging Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.6.7. Setting Default Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.7. Support for Threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.7.1. Scheduling and Multi-Thread Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.8. Handling of Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.9. Building and Installing Valgrind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.10. If You Have Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.11. Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.12. An Example Run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.13. Warning Messages You Might See . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3. Using and understanding the Valgrind core: Advanced Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1. The Client Request mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2. Debugging your program using Valgrind gdbserver and GDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.1. Quick Start: debugging in 3 steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2.2. Valgrind gdbserver overall organisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.3. Connecting GDB to a Valgrind gdbserver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.4. Connecting to an Android gdbserver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2.5. Monitor command handling by the Valgrind gdbserver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2.6. Valgrind gdbserver thread information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.7. Examining and modifying Valgrind shadow registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2.8. Limitations of the Valgrind gdbserver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.9. vgdb command line options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.10. Valgrind monitor commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3. Function wrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.1. A Simple Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3.2. Wrapping Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3.3. Wrapping Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3.4. Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.5. Limitations - control flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.6. Limitations - original function signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.7. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4. Memcheck: a memory error detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2. Explanation of error messages from Memcheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.1. Illegal read / Illegal write errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.2. Use of uninitialised values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.3. Use of uninitialised or unaddressable values in system calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
8.5. Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9. Massif: a heap profiler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2. Using Massif and ms_print . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.1. An Example Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.2. Running Massif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.3. Running ms_print . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.4. The Output Preamble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.5. The Output Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.6. The Snapshot Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.7. Forking Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.8. Measuring All Memory in a Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.2.9. Acting on Massif's Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.3. Massif Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4. Massif Monitor Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.5. Massif Client Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.6. ms_print Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.7. Massif's Output File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10. DHAT: a dynamic heap analysis tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.2. Understanding DHAT's output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.2.1. Interpreting the max-live, tot-alloc and deaths fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.2.2. Interpreting the acc-ratios fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.2.3. Interpreting "Aggregated access counts by offset" data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10.3. DHAT Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11. SGCheck: an experimental stack and global array overrun detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.2. SGCheck Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.3. How SGCheck Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.4. Comparison with Memcheck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.5. Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.6. Still To Do: User-visible Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
11.7. Still To Do: Implementation Tidying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12. BBV: an experimental basic block vector generation tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.2. Using Basic Block Vectors to create SimPoints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.3. BBV Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.4. Basic Block Vector File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.5. Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.6. Threaded Executable Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7. Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.8. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13. Lackey: an example tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
13.2. Lackey Command-line Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14. Nulgrind: the minimal Valgrind tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. Introduction
1.1. An Overview of Valgrind
Valgrind is an instrumentation framework for building dynamic analysis tools. It comes with a set of tools each of
which performs some kind of debugging, profiling, or similar task that helps you improve your programs. Valgrind's
architecture is modular, so new tools can be created easily and without disturbing the existing structure.
A number of useful tools are supplied as standard.
1. Memcheck is a memory error detector. It helps you make your programs, particularly those written in C and C++,
more correct.
2. Cachegrind is a cache and branch-prediction profiler. It helps you make your programs run faster.
3. Callgrind is a call-graph generating cache profiler. It has some overlap with Cachegrind, but also gathers some
information that Cachegrind does not.
4. Helgrind is a thread error detector. It helps you make your multi-threaded programs more correct.
5. DRD is also a thread error detector. It is similar to Helgrind but uses different analysis techniques and so may find
different problems.
6. Massif is a heap profiler. It helps you make your programs use less memory.
7. DHAT is a different kind of heap profiler. It helps you understand issues of block lifetimes, block utilisation, and
layout inefficiencies.
8. SGcheck is an experimental tool that can detect overruns of stack and global arrays. Its functionality is
complementary to that of Memcheck: SGcheck finds problems that Memcheck can't, and vice versa.
9. BBV is an experimental SimPoint basic block vector generator. It is useful to people doing computer architecture
research and development.
There are also a couple of minor tools that aren't useful to most users: Lackey is an example tool that illustrates
some instrumentation basics; and Nulgrind is the minimal Valgrind tool that does no analysis or instrumentation, and
is only useful for testing purposes.
Valgrind is closely tied to details of the CPU and operating system, and to a lesser extent, the compiler and basic C
libraries. Nonetheless, it supports a number of widely-used platforms, listed in full at http://www.valgrind.org/.
Valgrind is built via the standard Unix ./configure, make, make install process; full details are given in
the README file in the distribution.
Valgrind is licensed under the GNU General Public License, version 2. The valgrind/*.h headers that
you may wish to include in your code (eg. valgrind.h, memcheck.h, helgrind.h, etc.) are distributed under
a BSD-style license, so you may include them in your code without worrying about license conflicts. Some of
the PThreads test cases, pth_*.c, are taken from "Pthreads Programming" by Bradford Nichols, Dick Buttlar &
Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by O'Reilly & Associates, Inc.
If you contribute code to Valgrind, please ensure your contributions are licensed as "GPLv2, or (at your option) any
later version." This is so as to allow the possibility of easily upgrading the license to GPLv3 in future. If you want to
modify code in the VEX subdirectory, please also see the file VEX/HACKING.README in the distribution.
The most important option is --tool which dictates which Valgrind tool to run. For example, if you want to run the
command ls -l using the memory-checking tool Memcheck, issue this command:
valgrind --tool=memcheck ls -l
However, Memcheck is the default, so if you want to use it you can omit the --tool option.
Regardless of which tool is in use, Valgrind takes control of your program before it starts. Debugging information is
read from the executable and associated libraries, so that error messages and other outputs can be phrased in terms of
source code locations, when appropriate.
Your program is then run on a synthetic CPU provided by the Valgrind core. As new code is executed for the first
time, the core hands the code to the selected tool. The tool adds its own instrumentation code to this and hands the
result back to the core, which coordinates the continued execution of this instrumented code.
The amount of instrumentation code added varies widely between tools. At one end of the scale, Memcheck adds
code to check every memory access and every value computed, making it run 10-50 times slower than natively. At the
other end of the spectrum, the minimal tool, called Nulgrind, adds no instrumentation at all and causes in total "only"
about a 4 times slowdown.
Valgrind simulates every single instruction your program executes. Because of this, the active tool checks, or profiles,
not only the code in your application but also in all supporting dynamically-linked libraries, including the C library,
graphical libraries, and so on.
If you're using an error-detection tool, Valgrind may detect errors in system libraries, for example the GNU C or X11
libraries, which you have to use. You might not be interested in these errors, since you probably have no control
over that code. Therefore, Valgrind allows you to selectively suppress errors, by recording them in a suppressions
file which is read when Valgrind starts up. The build mechanism selects default suppressions which give reasonable
behaviour for the OS and libraries detected on your machine. To make it easier to write suppressions, you can use the
--gen-suppressions=yes option. This tells Valgrind to print out a suppression for each reported error, which
you can then copy into a suppressions file.
Different error-checking tools report different kinds of errors. The suppression mechanism therefore allows you to say
which tool or tool(s) each suppression applies to.
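For reference, a suppression entry is a brace-delimited block: a free-form name, a Tool:ErrorKind line (eg. Memcheck:Cond for the conditional-jump error), then patterns matching successive frames of the stack trace. The name and function below are invented for illustration; see Suppressing errors in the Valgrind User Manual for the authoritative syntax:

```
{
   ignore-libX11-cond-error
   Memcheck:Cond
   fun:XSomeInternalFunction
   obj:/usr/lib/libX11.so.*
}
```

Entries like this are exactly what --gen-suppressions=yes prints, ready to paste into a suppressions file.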
The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary,
and also easy to differentiate commentaries from different processes which have become merged together, for whatever
reason.
By default, Valgrind tools write only essential messages to the commentary, so as to avoid flooding you with
information of secondary importance. If you want more information about what is happening, re-run, passing the -v
option to Valgrind. A second -v gives yet more detail.
You can direct the commentary to three different places:
1. The default: send it to a file descriptor, which is by default 2 (stderr). So, if you give the core no options, it will
write commentary to the standard error stream. If you want to send it to some other file descriptor, for example
number 9, you can specify --log-fd=9.
This is the simplest and most common arrangement, but can cause problems when Valgrinding entire trees of
processes which expect specific file descriptors, particularly stdin/stdout/stderr, to be available for their own use.
2. A less intrusive option is to write the commentary to a file, which you specify by --log-file=filename.
There are special format specifiers that can be used to use a process ID or an environment variable name in the log
file name. These are useful/necessary if your program invokes multiple processes (especially for MPI programs).
See the basic options section for more details.
3. The least intrusive option is to send the commentary to a network socket. The socket is specified as an IP address
and port number pair, like this: --log-socket=192.168.0.1:12345 if you want to send the output to host
IP 192.168.0.1 port 12345 (note: we have no idea if 12345 is a port of pre-existing significance). You can also omit
the port number: --log-socket=192.168.0.1, in which case a default port of 1500 is used. This default is
defined by the constant VG_CLO_DEFAULT_LOGPORT in the sources.
Note, unfortunately, that you have to use an IP address here, rather than a hostname.
Writing to a network socket is pointless if you don't have something listening at the other end. We provide a simple
listener program, valgrind-listener, which accepts connections on the specified port and copies whatever
it is sent to stdout. Probably someone will tell us this is a horrible security risk. It seems likely that people will
write more sophisticated listeners in the fullness of time.
valgrind-listener can accept simultaneous connections from up to 50 Valgrinded processes. In front of
each line of output it prints the current number of active connections in round brackets.
portnumber
Changes the port it listens on from the default (1500). The specified port must be in the range 1024 to 65535. The
same restriction applies to port numbers specified by a --log-socket to Valgrind itself.
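The listener's behaviour can be sketched in a few lines of Python (an illustration only; valgrind-listener itself is a C program, and copy_to_output is a hypothetical name):

```python
import io
import socket

def copy_to_output(conn, active, out):
    # Illustration of the listener's core loop (not valgrind-listener's
    # actual code): read whatever a Valgrinded process sends over the
    # socket and copy it to `out`, printing the current number of active
    # connections in round brackets in front of each line.
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:          # connection closed by the sender
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            out.write("(%d) %s\n" % (active, line.decode("utf-8", "replace")))
    conn.close()
```

A real listener would accept up to 50 connections and track the active count; the sketch shows only the copy-and-prefix step.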
If a Valgrinded process fails to connect to a listener, for whatever reason (the listener isn't running, invalid or
unreachable host or port, etc), Valgrind switches back to writing the commentary to stderr. The same goes for
any process which loses an established connection to a listener. In other words, killing the listener doesn't kill the
processes sending data to it.
Here is an important point about the relationship between the commentary and profiling output from tools. The
commentary contains a mix of messages from the Valgrind core and the selected tool. If the tool reports errors, it will
report them to the commentary. However, if the tool does profiling, the profile data will be written to a file of some
kind, depending on the tool, and independent of what --log-* options are in force. The commentary is intended
to be a low-bandwidth, human-readable channel. Profiling data, on the other hand, is usually voluminous and not
meaningful without further processing, which is why we have chosen this arrangement.
This message says that the program did an illegal 4-byte read of address 0xBFFFF74C, which, as far as Memcheck
can tell, is not a valid stack address, nor corresponds to any current heap blocks or recently freed heap blocks. The
read is happening at line 45 of bogon.cpp, called from line 66 of the same file, etc. For errors associated with
an identified (current or freed) heap block, for example reading freed memory, Valgrind reports not only the location
where the error happened, but also where the associated heap block was allocated/freed.
Valgrind remembers all error reports. When an error is detected, it is compared against old reports, to see if it is a
duplicate. If so, the error is noted, but no further commentary is emitted. This avoids you being swamped with
bazillions of duplicate error reports.
If you want to know how many times each error occurred, run with the -v option. When execution finishes, all the
reports are printed out, along with, and sorted by, their occurrence counts. This makes it easy to see which errors have
occurred most frequently.
Errors are reported before the associated operation actually happens. For example, if you're using Memcheck and
your program attempts to read from address zero, Memcheck will emit a message to this effect, and your program will
then likely die with a segmentation fault.
In general, you should try and fix errors in the order that they are reported. Not doing so can be confusing. For
example, a program which copies uninitialised values to several memory locations, and later uses them, will generate
several error messages, when run on Memcheck. The first such error message may well give the most direct clue to
the root cause of the problem.
The process of detecting duplicate errors is quite an expensive one and can become a significant performance overhead
if your program generates huge quantities of errors. To avoid serious problems, Valgrind will simply stop collecting
errors after 1,000 different errors have been seen, or 10,000,000 errors in total have been seen. In this situation you
might as well stop your program and fix it, because Valgrind won't tell you anything else useful after this. Note that
the 1,000/10,000,000 limits apply after suppressed errors are removed. These limits are defined in m_errormgr.c
and can be increased if necessary.
To avoid this cutoff you can use the --error-limit=no option. Then Valgrind will always show errors, regardless
of how many there are. Use this option carefully, since it may have a bad effect on performance.
--1610-- used_suppression: 2 dl-hack3-cond-1 /usr/lib/valgrind/default.supp:1234
--1610-- used_suppression: 2 glibc-2.5.x-on-SUSE-10.2-(PPC)-2a /usr/lib/valgrind/default
Multiple suppressions files are allowed. Valgrind loads suppression patterns from $PREFIX/lib/valgrind/default.supp
unless --default-suppressions=no has been specified. You can ask to add suppressions from additional
files by specifying --suppressions=/path/to/file.supp one or more times.
If you want to understand more about suppressions, look at an existing suppressions file whilst reading the following
documentation. The file glibc-2.3.supp, in the source distribution, provides some good examples.
Each suppression has the following components:
First line: its name. This merely gives a handy name to the suppression, by which it is referred to in the summary
of used suppressions printed out when a program finishes. It's not important what the name is; any identifying
string will do.
Second line: name of the tool(s) that the suppression is for (if more than one, comma-separated), and the name of
the suppression itself, separated by a colon (n.b.: no spaces are allowed), eg:
tool_name1,tool_name2:suppression_name
Recall that Valgrind is a modular system, in which different instrumentation tools can observe your program whilst it
is running. Since different tools detect different kinds of errors, it is necessary to say which tool(s) the suppression
is meaningful to.
Tools will complain, at startup, if a tool does not understand any suppression directed to it. Tools ignore
suppressions which are not directed to them. As a result, it is quite practical to put suppressions for all tools
into the same suppression file.
Next line: a small number of suppression types have extra information after the second line (eg. the Param
suppression for Memcheck).
Remaining lines: This is the calling context for the error -- the chain of function calls that led to it. There can be
up to 24 of these lines.
Locations may be names of either shared objects or functions. They begin with obj: and fun: respectively. Function
and object names to match against may use the wildcard characters * and ?.
Important note: C++ function names must be mangled. If you are writing suppressions by hand, use the
--demangle=no option to get the mangled names in your error messages. An example of a mangled C++ name
is _ZN9QListView4showEv. This is the form that the GNU C++ compiler uses internally, and the form that
must be used in suppression files. The equivalent demangled name, QListView::show(), is what you see at
the C++ source code level.
A location line may also be simply "..." (three dots). This is a frame-level wildcard, which matches zero or more
frames. Frame level wildcards are useful because they make it easy to ignore varying numbers of uninteresting
frames in between frames of interest. That is often important when writing suppressions which are intended to be
robust against variations in the amount of function inlining done by compilers.
Finally, the entire suppression must be between curly braces. Each brace must be the first character on its own line.
A suppression only suppresses an error when the error matches all the details in the suppression. Here's an example:
{
__gconv_transform_ascii_internal/__mbrtowc/mbtowc
Memcheck:Value4
fun:__gconv_transform_ascii_internal
fun:__mbr*toc
fun:mbtowc
}
What it means is: for Memcheck only, suppress a use-of-uninitialised-value error, when the data size
is 4, when it occurs in the function __gconv_transform_ascii_internal, when that is called
from any function of name matching __mbr*toc, when that is called from mbtowc. It doesn't apply
under any other circumstances. The string by which this suppression is identified to the user is
__gconv_transform_ascii_internal/__mbrtowc/mbtowc.
(See Writing suppression files for more details on the specifics of Memchecks suppression kinds.)
Another example, again for the Memcheck tool:
{
libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
Memcheck:Value4
obj:/usr/X11R6/lib/libX11.so.6.2
obj:/usr/X11R6/lib/libX11.so.6.2
obj:/usr/X11R6/lib/libXaw.so.7.0
}
This suppresses any size 4 uninitialised-value error which occurs anywhere in libX11.so.6.2, when called from
anywhere in the same library, when called from anywhere in libXaw.so.7.0. The inexact specification of
locations is regrettable, but is about all you can hope for, given that the X11 libraries shipped on the Linux distro on
which this example was made have had their symbol tables removed.
Although the above two examples do not make this clear, you can freely mix obj: and fun: lines in a suppression.
Finally, here's an example using three frame-level wildcards:
{
a-contrived-example
Memcheck:Leak
fun:malloc
...
fun:ddd
...
fun:ccc
...
fun:main
}
This suppresses Memcheck memory-leak errors, in the case where the allocation was done by main calling (through
any number of intermediaries, including zero) ccc, calling onwards via ddd and eventually to malloc.
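The frame-matching rules described above ('*' and '?' within a single name, "..." matching zero or more whole frames, and outer frames beyond the pattern being irrelevant) can be sketched as follows; this is an illustration, not Valgrind's matcher:

```python
from fnmatch import fnmatchcase

def frames_match(pattern, frames):
    # Simplified sketch of suppression call-chain matching (illustrative,
    # not Valgrind's implementation).  `pattern` is the list of fun:/obj:
    # lines from a suppression, innermost frame first; `frames` is the
    # error's call chain in the same order.  '*' and '?' match within a
    # single name, while a "..." line matches zero or more whole frames.
    if not pattern:
        return True            # pattern exhausted; remaining outer frames are fine
    if pattern[0] == "...":
        # Frame-level wildcard: try skipping zero or more frames.
        return any(frames_match(pattern[1:], frames[k:])
                   for k in range(len(frames) + 1))
    if not frames:
        return False
    return (fnmatchcase(frames[0], pattern[0])
            and frames_match(pattern[1:], frames[1:]))
```

With the contrived example above, a leak allocated via main -> helper -> ccc -> ddd -> xalloc -> malloc would match, because each "..." absorbs the uninteresting intermediaries.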
Note that Valgrind does trace into the child of a fork (it would be difficult not to, since fork makes an identical
copy of a process), so this option is arguably badly named. However, most children of fork calls immediately call
exec anyway.
--trace-children-skip=patt1,patt2,...
This option only has an effect when --trace-children=yes is specified. It allows for some children to be
skipped. The option takes a comma separated list of patterns for the names of child executables that Valgrind should
not trace into. Patterns may include the metacharacters ? and *, which have the usual meaning.
This can be useful for pruning uninteresting branches from a tree of processes being run on Valgrind. But you should
be careful when using it. When Valgrind skips tracing into an executable, it doesn't just skip tracing that executable,
it also skips tracing any of that executable's child processes. In other words, the flag doesn't merely cause tracing to
stop at the specified executables -- it skips tracing of entire process subtrees rooted at any of the specified executables.
--trace-children-skip-by-arg=patt1,patt2,...
This is the same as --trace-children-skip, with one difference: the decision as to whether to trace into a
child process is made by examining the arguments to the child process, rather than the name of its executable.
--log-file=<filename>
Specifies that Valgrind should send all of its messages to the specified file. If the file name is empty, it causes an
abort. There are three special format specifiers that can be used in the file name.
%p is replaced with the current process ID. This is very useful for programs that invoke multiple processes. WARNING:
If you use --trace-children=yes and your program invokes multiple processes OR your program forks without
calling exec afterwards, and you don't use this specifier (or the %q specifier below), the Valgrind output from all those
processes will go into one file, possibly jumbled up, and possibly incomplete.
%q{FOO} is replaced with the contents of the environment variable FOO. If the {FOO} part is malformed, it causes an
abort. This specifier is rarely needed, but very useful in certain circumstances (eg. when running MPI programs). The
idea is that you specify a variable which will be set differently for each process in the job, for example BPROC_RANK
or whatever is applicable in your MPI setup. If the named environment variable is not set, it causes an abort. Note
that in some shells, the { and } characters may need to be escaped with a backslash.
%% is replaced with %.
If a % is followed by any other character, it causes an abort.
If the file name specifies a relative file name, it is put in the program's initial working directory: this is the current
directory when the program started its execution after the fork or after the exec. If it specifies an absolute file name
(ie. starts with /) then it is put there.
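As a sketch of how these specifiers behave (an illustration of the rules just described, not Valgrind's code; expand_log_name is a hypothetical helper):

```python
import os
import re

def expand_log_name(template, env=os.environ, pid=None):
    # Sketch of --log-file format-specifier expansion: %p -> process ID,
    # %q{VAR} -> contents of environment variable VAR, %% -> literal %.
    # A malformed %q{...}, an unset variable, or any other use of %
    # is an error (Valgrind aborts in those cases).
    pid = os.getpid() if pid is None else pid
    out, i = [], 0
    while i < len(template):
        c = template[i]
        if c != "%":
            out.append(c)
            i += 1
            continue
        nxt = template[i + 1:i + 2]
        if nxt == "p":
            out.append(str(pid))
            i += 2
        elif nxt == "%":
            out.append("%")
            i += 2
        elif nxt == "q":
            m = re.match(r"\{([^}]*)\}", template[i + 2:])
            if not m or m.group(1) not in env:
                raise ValueError("bad %q specifier in " + template)
            out.append(env[m.group(1)])
            i += 2 + m.end()
        else:
            raise ValueError("bad % specifier in " + template)
    return "".join(out)
```

So --log-file=vg.%p.log gives each process its own file, and --log-file=%q{BPROC_RANK}.log names files after an MPI rank variable.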
--log-socket=<ip-address:port-number>
Specifies that Valgrind should send all of its messages to the specified port at the specified IP address. The port
may be omitted, in which case port 1500 is used. If a connection cannot be made to the specified socket, Valgrind
falls back to writing output to the standard error (stderr). This option is intended to be used in conjunction with the
valgrind-listener program. For further details, see the commentary in the manual.
--xml-file=<filename>
Specifies that Valgrind should send its XML output to the specified file. It must be used in conjunction with
--xml=yes. Any %p or %q sequences appearing in the filename are expanded in exactly the same way as they
are for --log-file. See the description of --log-file for details.
--xml-socket=<ip-address:port-number>
Specifies that Valgrind should send its XML output to the specified port at the specified IP address. It must be used in
conjunction with --xml=yes. The form of the argument is the same as that used by --log-socket. See the
description of --log-socket for further details.
--xml-user-comment=<string>
Embeds an extra user comment string at the start of the XML output. Only works when --xml=yes is specified;
ignored otherwise.
--demangle=<yes|no> [default: yes]
Enable/disable automatic demangling (decoding) of C++ names. Enabled by default. When enabled, Valgrind will
attempt to translate encoded C++ names back to something approaching the original. The demangler handles symbols
mangled by g++ versions 2.X, 3.X and 4.X.
An important fact about demangling is that function names mentioned in suppressions files should be in their mangled
form. Valgrind does not demangle function names when searching for applicable suppressions, because to do otherwise
would make suppression file contents dependent on the state of Valgrind's demangling machinery, and also slow down
suppression matching.
--num-callers=<number> [default: 12]
Specifies the maximum number of entries shown in stack traces that identify program locations. Note that errors
are commoned up using only the top four function locations (the place in the current function, and that of its three
immediate callers). So this doesn't affect the total number of errors reported.
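The "commoning up" rule can be illustrated like this (a sketch, not Valgrind's implementation):

```python
from collections import Counter

def error_key(stack):
    # Sketch of how errors are "commoned up" (illustrative): only the top
    # four locations -- the error site and its three immediate callers --
    # identify an error, so --num-callers changes how much of the trace
    # is *shown*, not how many distinct errors are counted.
    return tuple(stack[:4])

def count_errors(traces):
    # Traces agreeing in their top four frames count as the same error.
    return Counter(error_key(t) for t in traces)
```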
The maximum value for this is 500. Note that higher settings will make Valgrind run a bit more slowly and take a bit
more memory, but can be useful when working with programs with deeply-nested call chains.
--unw-stack-scan-thresh=<number> [default: 0], --unw-stack-scan-frames=<number> [default: 5]
Stack-scanning support is available only on ARM targets.
These flags enable and control stack unwinding by stack scanning. When the normal stack unwinding mechanisms --
usage of Dwarf CFI records, and frame-pointer following -- fail, stack scanning may be able to recover a stack trace.
Note that stack scanning is an imprecise, heuristic mechanism that may give very misleading results, or none at all.
It should be used only in emergencies, when normal unwinding fails, and it is important to nevertheless have stack
traces.
Stack scanning is a simple technique: the unwinder reads words from the stack, and tries to guess which of them might
be return addresses, by checking to see if they point just after ARM or Thumb call instructions. If so, the word is
added to the backtrace.
The main danger occurs when a function call returns, leaving its return address exposed, and a new function is called,
but the new function does not overwrite the old address. The result of this is that the backtrace may contain entries for
functions which have already returned, and so be very confusing.
A second limitation of this implementation is that it will scan only the page (4KB, normally) containing the starting
stack pointer. If the stack frames are large, this may result in only a few (or not even any) being present in the trace.
Also, if you are unlucky and have an initial stack pointer near the end of its containing page, the scan may miss all
interesting frames.
By default stack scanning is disabled. The normal use case is to ask for it when a stack trace would otherwise be very
short. So, to enable it, use --unw-stack-scan-thresh=number. This requests Valgrind to try using stack
scanning to "extend" stack traces which contain fewer than number frames.
If stack scanning does take place, it will only generate at most the number of frames specified by
--unw-stack-scan-frames. Typically, stack scanning generates so many garbage entries that this value
is set to a low value (5) by default. In no case will a stack trace larger than the value specified by --num-callers
be created.
--error-limit=<yes|no> [default: yes]
When enabled, Valgrind stops reporting errors after 10,000,000 in total, or 1,000 different ones, have been seen. This
is to stop the error tracking machinery from becoming a huge performance overhead in programs with many errors.
--error-exitcode=<number> [default: 0]
Specifies an alternative exit code to return if Valgrind reported any errors in the run. When set to the default value
(zero), the return value from Valgrind will always be the return value of the process being simulated. When set to a
nonzero value, that value is returned instead, if Valgrind detects any errors. This is useful for using Valgrind as part
of an automated test suite, since it makes it easy to detect test cases for which Valgrind has reported errors, just by
inspecting return codes.
--sigill-diagnostics=<yes|no> [default: yes]
Enable/disable printing of illegal instruction diagnostics. Enabled by default, but defaults to disabled when --quiet
is given. The default can always be explicitly overridden by giving this option.
When enabled, a warning message will be printed, along with some diagnostics, whenever an instruction is encountered that Valgrind cannot decode or translate, before the program is given a SIGILL signal. Often an illegal instruction
indicates a bug in the program or missing support for the particular instruction in Valgrind. But some programs do
deliberately try to execute an instruction that might be missing and trap the SIGILL signal to detect processor features.
Using this flag makes it possible to avoid the diagnostic output that you would otherwise get in such cases.
--show-below-main=<yes|no> [default: no]
By default, stack traces for errors do not show any functions that appear beneath main because most of the time it's
uninteresting C library stuff and/or gobbledygook. Alternatively, if main is not present in the stack trace, stack traces
will not show any functions below main-like functions such as glibc's __libc_start_main. Furthermore, if
main-like functions are present in the trace, they are normalised as (below main), in order to make the output
more deterministic.
If this option is enabled, all stack trace entries will be shown and main-like functions will not be normalised.
--main-stacksize=<number> [default: use current 'ulimit' value]
Specifies the size of the main thread's stack.
To simplify its memory management, Valgrind reserves all required space for the main thread's stack at startup. That
means it needs to know the required stack size at startup.
By default, Valgrind uses the current "ulimit" value for the stack size, or 16 MB, whichever is lower. In many cases
this gives a stack size in the range 8 to 16 MB, which almost never overflows for most applications.
If you need a larger total stack size, use --main-stacksize to specify it. Only set it as high as you need, since
reserving far more space than you need (that is, hundreds of megabytes more than you need) constrains Valgrind's
memory allocators and may reduce the total amount of memory that Valgrind can use. This is only really of
significance on 32-bit machines.
On Linux, you may request a stack of size up to 2GB. Valgrind will stop with a diagnostic message if the stack cannot
be allocated.
--main-stacksize only affects the stack size for the program's initial thread. It has no bearing on the size of
thread stacks, as Valgrind does not allocate those.
You may need to use both --main-stacksize and --max-stackframe together. It is important to understand
that --main-stacksize sets the maximum total stack size, whilst --max-stackframe specifies the largest size
of any one stack frame. You will have to work out the --main-stacksize value for yourself (usually, if your
application segfaults). But Valgrind will tell you the needed --max-stackframe size, if necessary.
As discussed further in the description of --max-stackframe, a requirement for a large stack is a sign of potential
portability problems. You are best advised to place all large data in heap-allocated memory.
Enable some special magic needed when the program being run is itself Valgrind.
no-inner-prefix:
Disable printing a prefix > in front of each stdout or stderr output line in an inner
Valgrind being run by an outer Valgrind. This is useful when running Valgrind regression tests in an outer/inner
setup. Note that the prefix > will always be printed in front of the inner debug logging lines.
no-nptl-pthread-stackcache:
The GNU glibc pthread library (libpthread.so), which is used by pthread programs, maintains a cache of
pthread stacks. When a pthread terminates, the memory used for the pthread stack and some thread local storage
related data structure are not always directly released. This memory is kept in a cache (up to a certain size), and is
re-used if a new thread is started.
This cache causes the helgrind tool to report some false positive race condition errors on this cached memory, as
helgrind does not understand the internal glibc cache synchronisation primitives. So, when using helgrind, disabling
the cache helps to avoid false positive race conditions, in particular when using thread local storage variables (e.g.
variables using the __thread qualifier).
When using the memcheck tool, disabling the cache ensures the memory used by glibc to handle __thread variables
is directly released when a thread terminates.
Note: Valgrind disables the cache using some internal knowledge of the glibc stack cache implementation and by
examining the debug information of the pthread library. This technique is thus somewhat fragile and might not work
for all glibc versions. This has been successfully tested with various glibc versions (e.g. 2.11, 2.16, 2.18) on various
platforms.
--fair-sched=<no|yes|try> [default: no]
The --fair-sched option controls the locking mechanism used by Valgrind to serialise thread execution. The
locking mechanism controls the way the threads are scheduled, and different settings give different trade-offs between
fairness and performance. For more details about the Valgrind thread serialisation scheme and its impact on
performance and thread scheduling, see Scheduling and Multi-Thread Performance.
The value --fair-sched=yes activates a fair scheduler. In short, if multiple threads are ready to run, the
threads will be scheduled in a round robin fashion. This mechanism is not available on all platforms or Linux
versions. If not available, using --fair-sched=yes will cause Valgrind to terminate with an error.
You may find this setting improves overall responsiveness if you are running an interactive multithreaded program,
for example a web browser, on Valgrind.
The value --fair-sched=try activates fair scheduling if available on the platform. Otherwise, it will
automatically fall back to --fair-sched=no.
The value --fair-sched=no activates a scheduler which does not guarantee fairness between threads ready to
run, but which in general gives the highest performance.
--kernel-variant=variant1,variant2,...
Handle system calls and ioctls arising from minor variants of the default kernel for this platform. This is useful for
running on hacked kernels or with kernel modules which support nonstandard ioctls, for example. Use with caution.
If you don't understand what this option does then you almost certainly don't need it. Currently known variants are:
bproc: support the sys_broc system call on x86. This is for running on BProc, which is a minor variant of
standard Linux which is sometimes used for building clusters.
android-no-hw-tls: some versions of the Android emulator for ARM do not provide a hardware TLS (thread-local
state) register, and Valgrind crashes at startup. Use this variant to select software support for TLS.
android-gpu-sgx5xx: use this to support handling of proprietary ioctls for the PowerVR SGX 5XX series of
GPUs on Android devices. Failure to select this does not cause stability problems, but may cause Memcheck to
report false errors after the program performs GPU-specific ioctls.
android-gpu-adreno3xx: similarly, use this to support handling of proprietary ioctls for the Qualcomm
Adreno 3XX series of GPUs on Android devices.
--merge-recursive-frames=<number> [default: 0]
Some recursive algorithms, for example balanced binary tree implementations, create many different stack traces, each
containing cycles of calls. A cycle is defined as two identical program counter values separated by zero or more other
program counter values. Valgrind may then use a lot of memory to store all these stack traces. This is a poor use
of memory considering that such stack traces contain repeated uninteresting recursive calls instead of more interesting
information such as the function that has initiated the recursive call.
The option --merge-recursive-frames=<number> instructs Valgrind to detect and merge recursive call
cycles having a size of up to <number> frames. When such a cycle is detected, Valgrind records the cycle in
the stack trace as a unique program counter.
The value 0 (the default) causes no recursive call merging. A value of 1 will cause stack traces of simple recursive
algorithms (for example, a factorial implementation) to be collapsed. A value of 2 will usually be needed to collapse
stack traces produced by recursive algorithms such as binary trees, quick sort, etc. Higher values might be needed for
more complex recursive algorithms.
Note: recursive calls are detected by analysis of program counter values. They are not detected by looking at function
names.
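The merging behaviour can be sketched as follows (illustrative only; Valgrind's real implementation works on stored stack traces of program counter values, and merge_recursive_frames here is a hypothetical helper):

```python
def merge_recursive_frames(trace, max_cycle):
    # Sketch of --merge-recursive-frames=<number>: scan the trace and
    # collapse any immediately repeated cycle of up to `max_cycle`
    # program counter values, keeping a single copy of the cycle.
    out = []
    for pc in trace:
        out.append(pc)
        for n in range(1, max_cycle + 1):
            # If the last n entries repeat the n entries before them,
            # drop the repeat; the recursion collapses step by step.
            if len(out) >= 2 * n and out[-n:] == out[-2 * n:-n]:
                del out[-n:]
                break
    return out
```

With a value of 1, simple self-recursion (e.g. a factorial) collapses; a value of 2 also collapses two-function cycles such as those produced by binary-tree traversals.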
--require-text-symbol=:sonamepatt:fnnamepatt
When a shared object whose soname matches sonamepatt is loaded into the process, examine all the text symbols
it exports. If none of those match fnnamepatt, print an error message and abandon the run. This makes it possible
to ensure that the run does not continue unless a given shared object contains a particular function name.
Both sonamepatt and fnnamepatt can be written using the usual ? and * wildcards. For example:
":*libc.so*:foo?bar". You may use characters other than a colon to separate the two patterns. It is
only important that the first character and the separator character are the same. For example, the above example could
also be written "Q*libc.so*Qfoo?bar". Multiple --require-text-symbol flags are allowed, in which
case shared objects that are loaded into the process will be checked against all of them.
The purpose of this is to support reliable usage of marked-up libraries. For example, suppose we have a
version of GCC's libgomp.so which has been marked up with annotations to support Helgrind. It is only
too easy and confusing to load the wrong, un-annotated libgomp.so into the application. So the idea is:
add a text symbol in the marked-up library, for example annotated_for_helgrind_3_6, and then give
the flag --require-text-symbol=:*libgomp*so*:annotated_for_helgrind_3_6 so that when
libgomp.so is loaded, Valgrind scans its symbol table, and if the symbol isn't present the run is aborted, rather
than continuing silently with the un-marked-up library. Note that you should put the entire flag in quotes to stop
shells expanding up the * and ? wildcards.
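The first-character-is-the-separator rule, and the check performed when an object is loaded, can be sketched as follows (hypothetical helper names, not Valgrind's code):

```python
from fnmatch import fnmatchcase

def parse_spec(spec):
    # Sketch of splitting a --require-text-symbol argument: the first
    # character is the separator between sonamepatt and fnnamepatt,
    # so ":a:b" and "QaQb" are equivalent.
    sep = spec[0]
    parts = spec[1:].split(sep)
    if len(parts) != 2:
        raise ValueError("malformed --require-text-symbol spec: " + spec)
    return parts[0], parts[1]

def object_ok(spec, soname, text_symbols):
    # Apply one spec to a freshly loaded shared object: if the soname
    # matches, at least one exported text symbol must match too,
    # otherwise the run would be abandoned.
    sonamepatt, fnnamepatt = parse_spec(spec)
    if not fnmatchcase(soname, sonamepatt):
        return True  # spec does not apply to this object
    return any(fnmatchcase(s, fnnamepatt) for s in text_symbols)
```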
--soname-synonyms=syn1=pattern1,syn2=pattern2,...
When a shared library is loaded, Valgrind checks for functions in the library that must be replaced or wrapped. For
example, Memcheck replaces all malloc related functions (malloc, free, calloc, ...) with its own versions. Such
replacements are done by default only in shared libraries whose soname matches a predefined soname pattern (e.g.
libc.so* on linux). By default, no replacement is done for a statically linked library or for alternative libraries
such as tcmalloc. In some cases, the replacements allow --soname-synonyms to specify one additional synonym
pattern, giving flexibility in the replacement.
Currently, this flexibility is only allowed for the malloc related functions, using the synonym somalloc. This
synonym is usable for all tools doing standard replacement of malloc related functions (e.g. memcheck, massif, drd,
helgrind, exp-dhat, exp-sgcheck).
Alternate malloc library: to replace the malloc related functions in an alternate library with soname
mymalloclib.so, give the option --soname-synonyms=somalloc=mymalloclib.so. A pattern can
be used to match multiple libraries sonames. For example, --soname-synonyms=somalloc=*tcmalloc*
will match the soname of all variants of the tcmalloc library (native, debug, profiled, ... tcmalloc variants).
Note: the soname of an ELF shared library can be retrieved using the readelf utility.
Replacements in a statically linked library are done by using the NONE pattern.
For example, if you link with libtcmalloc.a, memcheck will properly work when you give the option
--soname-synonyms=somalloc=NONE. Note that a NONE pattern will match the main executable
and any shared library having no soname.
To run a "default" Firefox build for Linux, in which JEMalloc is linked in to the main executable, use
--soname-synonyms=somalloc=NONE.
These are processed in the given order, before the command-line options. Options processed later override those
processed earlier; for example, options in ./.valgrindrc will take precedence over those in ~/.valgrindrc.
Please note that the ./.valgrindrc file is ignored if it is marked as world writeable or not owned by the current
user. This is because the ./.valgrindrc can contain options that are potentially harmful or can be used by a local
attacker to execute code under your user account.
Any tool-specific options put in $VALGRIND_OPTS or the .valgrindrc files should be prefixed with the tool
name and a colon. For example, if you want Memcheck to always do leak checking, you can put the following entry
in ~/.valgrindrc:
--memcheck:leak-check=yes
This will be ignored if any tool other than Memcheck is run. Without the memcheck: part, this will cause problems
if you select other tools that don't understand --leak-check=yes.
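The tool-prefix rule can be sketched as follows (an illustration, not Valgrind's actual option parser):

```python
def options_for_tool(options, tool):
    # Sketch of how tool-prefixed options behave (illustrative):
    # "--memcheck:leak-check=yes" is passed to Memcheck as
    # "--leak-check=yes" and silently dropped for any other tool;
    # unprefixed options apply to every tool.
    selected = []
    for opt in options:
        body = opt[2:] if opt.startswith("--") else opt
        prefix, sep, rest = body.partition(":")
        # Only treat it as a tool prefix if the colon comes before any
        # '=' (so "--log-socket=1.2.3.4:1500" is left untouched).
        if sep and "=" not in prefix and opt.startswith("--"):
            if prefix == tool:
                selected.append("--" + rest)
            continue
        selected.append(opt)
    return selected
```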
The fairness of the futex based locking produces better reproducibility of thread scheduling for different executions of
a multithreaded application. This better reproducibility is particularly helpful when using Helgrind or DRD.
Valgrind's use of thread serialisation implies that only one thread at a time may run. On a multiprocessor/multicore
system, the running thread is assigned to one of the CPUs by the OS kernel scheduler. When a thread acquires the
lock, sometimes the thread will be assigned to the same CPU as the thread that just released the lock. Sometimes, the
thread will be assigned to another CPU. When using pipe based locking, the thread that just acquired the lock will
usually be scheduled on the same CPU as the thread that just released the lock. With the futex based mechanism, the
thread that just acquired the lock will more often be scheduled on another CPU.
Valgrind's thread serialisation and CPU assignment by the OS kernel scheduler can interact badly with the CPU
frequency scaling available on many modern CPUs. To decrease power consumption, the frequency of a CPU or
core is automatically decreased if the CPU/core has not been used recently. If the OS kernel often assigns the thread
which just acquired the lock to another CPU/core, it is quite likely that this CPU/core is currently at a low frequency.
The frequency of this CPU will be increased after some time. However, during this time, the (only) running thread
will have run at the low frequency. Once this thread has run for some time, it will release the lock. Another thread
will acquire this lock, and might be scheduled again on another CPU whose clock frequency was decreased in the
meantime.
The futex based locking causes threads to change CPUs/cores more often. So, if CPU frequency scaling is activated,
the futex based locking might significantly decrease the performance of a multithreaded app running under Valgrind.
Performance losses of up to 50% have been observed, as compared to running on a machine for which
CPU frequency scaling has been disabled. The pipe based locking scheme also interacts badly with CPU
frequency scaling, with performance losses in the range 10..20% having been observed.
To avoid such performance degradation, you should indicate to the kernel that all CPUs/cores should always run at
maximum clock speed. Depending on your Linux distribution, CPU frequency scaling may be controlled using a
graphical interface or using command-line tools such as cpufreq-selector or cpufreq-set.
An alternative way to avoid these problems is to tell the OS scheduler to tie a Valgrind process to a specific (fixed)
CPU using the taskset command. This should ensure that the selected CPU does not fall below its maximum
frequency setting so long as any thread of the program has work to do.
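As a sketch of the second approach (program name hypothetical), pinning the whole Valgrind process to one core keeps the single running thread on a core that stays busy, and hence at full clock speed:

```shell
# Pin Valgrind and all threads of the client program to CPU 0:
taskset -c 0 valgrind --tool=helgrind ./mythreadedprog
```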
In addition to the usual --prefix=/path/to/install/tree, there are three options which affect how Valgrind
is built:
--enable-inner
This builds Valgrind with some special magic hacks which make it possible to run it on a standard build of Valgrind
(what the developers call "self-hosting"). Ordinarily you should not use this option as various kinds of safety
checks are disabled.
--enable-only64bit
--enable-only32bit
On 64-bit platforms (amd64-linux, ppc64-linux, amd64-darwin), Valgrind is by default built in such a way that both
32-bit and 64-bit executables can be run. Sometimes this cleverness is a problem for a variety of reasons. These
two options allow for single-target builds in this situation. If you issue both, the configure script will complain.
Note they are ignored on 32-bit-only platforms (x86-linux, ppc32-linux, arm-linux, x86-darwin).
The configure script tests the version of the X server currently indicated by the current $DISPLAY. This is a
known bug. The intention was to detect the version of the current X client libraries, so that correct suppressions could
be selected for them, but instead the test checks the server version. This is just plain wrong.
If you are building a binary package of Valgrind for distribution, please read README_PACKAGERS. It contains
some important information.
Apart from that, there's not much excitement here. Let us know if you have build problems.
2.11. Limitations
The following list of limitations seems long. However, most programs actually work fine.
Valgrind will run programs on the supported platforms subject to the following constraints:
On x86 and amd64, there is no support for 3DNow! instructions. If the translator encounters these, Valgrind will
generate a SIGILL when the instruction is executed. Apart from that, on x86 and amd64, essentially all instructions
are supported, up to and including AVX and AES in 64-bit mode and SSSE3 in 32-bit mode. 32-bit mode does in
fact support the bare minimum SSE4 instructions needed to run programs on Mac OS X 10.6 on 32-bit targets.
On ppc32 and ppc64, almost all integer, floating point and Altivec instructions are supported. Specifically: integer
and FP insns that are mandatory for PowerPC, the "General-purpose optional" group (fsqrt, fsqrts, stfiwx), the
"Graphics optional" group (fre, fres, frsqrte, frsqrtes), and the Altivec (also known as VMX) SIMD instruction
set, are supported. Also, instructions from the Power ISA 2.05 specification, as present in POWER6 CPUs, are
supported.
On ARM, essentially the entire ARMv7-A instruction set is supported, in both ARM and Thumb mode. ThumbEE
and Jazelle are not supported. NEON, VFPv3 and ARMv6 media support is fairly complete.
If your program does its own memory management, rather than using malloc/new/free/delete, it should still work,
but Memcheck's error checking won't be so effective. If you describe your program's memory management
scheme using "client requests" (see The Client Request mechanism), Memcheck can do better. Nevertheless, using
malloc/new and free/delete is still the best approach.
Valgrind's signal simulation is not as robust as it could be. Basic POSIX-compliant sigaction and sigprocmask
functionality is supplied, but it's conceivable that things could go badly awry if you do weird things with signals.
Workaround: don't. Programs that do non-POSIX signal tricks are in any case inherently unportable, so should be
avoided if possible.
Machine instructions, and system calls, have been implemented on demand. So it's possible, although unlikely,
that a program will fall over with a message to that effect. If this happens, please report all the details printed out,
so we can try and implement the missing feature.
Memory consumption of your program is majorly increased whilst running under Valgrind's Memcheck tool. This
is due to the large amount of administrative information maintained behind the scenes. Another cause is that
Valgrind dynamically translates the original executable. Translated, instrumented code is 12-18 times larger than
the original so you can easily end up with 150+ MB of translations when running (eg) a web browser.
Valgrind can handle dynamically-generated code just fine. If you regenerate code over the top of old code (i.e.
at the same memory addresses), and the code is on the stack, Valgrind will realise the code has changed, and work
correctly. This is necessary to handle the trampolines GCC uses to implement nested functions. If you regenerate
code somewhere other than the stack, and you are running on a 32- or 64-bit x86 CPU, you will need to use the
--smc-check=all option, and Valgrind will run more slowly than normal. Or you can add client requests that
tell Valgrind when your program has overwritten code.
On other platforms (ARM, PowerPC) Valgrind observes and honours the cache invalidation hints that programs are
obliged to emit to notify new code, and so self-modifying-code support should work automatically, without the need
for --smc-check=all.
Valgrind has the following limitations in its implementation of x86/AMD64 floating point relative to IEEE754.
Precision: There is no support for 80-bit arithmetic. Internally, Valgrind represents all such "long double" numbers
in 64 bits, and so there may be some differences in results. Whether or not this is critical remains to be seen. Note,
the x86/amd64 fldt/fstpt instructions (read/write 80-bit numbers) are correctly simulated, using conversions to/from
64 bits, so that in-memory images of 80-bit numbers look correct if anyone wants to see.
The impression observed from many FP regression tests is that the accuracy differences aren't significant. Generally
speaking, if a program relies on 80-bit precision, there may be difficulties porting it to non-x86/amd64 platforms
which only support 64-bit FP precision. Even on x86/amd64, the program may get different results depending on
whether it is compiled to use SSE2 instructions (64-bits only), or x87 instructions (80-bit). The net effect is to
make FP programs behave as if they had been run on a machine with 64-bit IEEE floats, for example PowerPC.
On amd64 FP arithmetic is done by default on SSE2, so amd64 looks more like PowerPC than x86 from an FP
perspective, and there are far fewer noticeable accuracy differences than with x86.
Rounding: Valgrind does observe the 4 IEEE-mandated rounding modes (to nearest, to +infinity, to -infinity, to
zero) for the following conversions: float to integer, integer to float where there is a possibility of loss of precision,
and float-to-float rounding. For all other FP operations, only the IEEE default mode (round to nearest) is supported.
Numeric exceptions in FP code: IEEE754 defines five types of numeric exception that can happen: invalid operation
(sqrt of negative number, etc), division by zero, overflow, underflow, inexact (loss of precision).
For each exception, two courses of action are defined by IEEE754: either (1) a user-defined exception handler may
be called, or (2) a default action is defined, which "fixes things up" and allows the computation to proceed without
throwing an exception.
Currently Valgrind only supports the default fixup actions. Again, feedback on the importance of exception support
would be appreciated.
When Valgrind detects that the program is trying to exceed any of these limitations (setting exception handlers,
rounding mode, or precision control), it can print a message giving a traceback of where this has happened, and
continue execution. This behaviour used to be the default, but the messages are annoying and so showing them is
now disabled by default. Use --show-emwarns=yes to see them.
The above limitations define precisely the IEEE754 default behaviour: default fixup on all exceptions, round-to-nearest operations, and 64-bit precision.
Valgrind has the following limitations in its implementation of x86/AMD64 SSE2 FP arithmetic, relative to
IEEE754.
Essentially the same: no exceptions, and limited observance of rounding mode. Also, SSE2 has control bits which
make it treat denormalised numbers as zero (DAZ) and a related action, flush denormals to zero (FTZ). Both of
these cause SSE2 arithmetic to be less accurate than IEEE requires. Valgrind detects, ignores, and can warn about,
attempts to enable either mode.
Valgrind has the following limitations in its implementation of ARM VFPv3 arithmetic, relative to IEEE754.
Essentially the same: no exceptions, and limited observance of rounding mode. Also, switching the VFP unit into
vector mode will cause Valgrind to abort the program -- it has no way to emulate vector uses of VFP at a reasonable
performance level. This is no big deal given that non-scalar uses of VFP instructions are in any case deprecated.
Valgrind has the following limitations in its implementation of PPC32 and PPC64 floating point arithmetic, relative
to IEEE754.
Scalar (non-Altivec): Valgrind provides a bit-exact emulation of all floating point instructions, except for "fre" and
"fres", which are done more precisely than required by the PowerPC architecture specification. All floating point
operations observe the current rounding mode.
However, fpscr[FPRF] is not set after each operation. That could be done but would give measurable performance
overheads, and so far no need for it has been found.
As on x86/AMD64, IEEE754 exceptions are not supported: all floating point exceptions are handled using the
default IEEE fixup actions. Valgrind detects, ignores, and can warn about, attempts to unmask the 5 IEEE FP
exception kinds by writing to the floating-point status and control register (fpscr).
Vector (Altivec, VMX): essentially as with x86/AMD64 SSE/SSE2: no exceptions, and limited observance of
rounding mode. For Altivec, FP arithmetic is done in IEEE/Java mode, which is more accurate than the Linux
default setting. "More accurate" means that denormals are handled properly, rather than simply being flushed to
zero.
Programs which are known not to work are:
emacs starts up but immediately concludes it is out of memory and aborts. It may be that Memcheck does not
provide a good enough emulation of the mallinfo function. Emacs works fine if you build it to use the standard
malloc/free routines.
The GCC folks fixed this about a week before GCC 3.0 shipped.
After 100 different errors have been shown, Valgrind becomes more conservative about collecting them. It then
requires only the program counters in the top two stack frames to match when deciding whether or not two errors
are really the same one. Prior to this point, the PCs in the top four frames are required to match. This hack has
the effect of slowing down the appearance of new errors after the first 100. The 100 constant can be changed by
recompiling Valgrind.
After 1000 different errors have been detected, Valgrind ignores any more. It seems unlikely that collecting even
more different ones would be of practical help to anybody, and it avoids the danger that Valgrind spends more
and more of its time comparing new errors against an ever-growing collection. As above, the 1000 number is a
compile-time constant.
Warning:
Valgrind spotted such a large change in the stack pointer that it guesses the client is switching to a different stack.
At this point it makes a kludgey guess where the base of the new stack is, and sets memory permissions accordingly.
At the moment "large change" is defined as a change of more than 2000000 in the value of the stack pointer register.
If Valgrind guesses wrong, you may get many bogus error messages following this and/or have crashes in the
stack trace recording code. You might avoid these problems by informing Valgrind about the stack bounds using the
VALGRIND_STACK_REGISTER client request.
Warning:
Valgrind doesn't allow the client to close the logfile, because you'd never see any diagnostic information after that
point. If you see this message, you may want to use the --log-fd=<number> option to specify a different
logfile file-descriptor number.
Warning:
Valgrind observed a call to one of the vast family of ioctl system calls, but did not modify its memory status
info (because nobody has yet written a suitable wrapper). The call will still have gone through, but you may get
spurious errors after this as a result of the non-update of the memory info.
Warning:
Diagnostic message, mostly for benefit of the Valgrind developers, to do with memory permissions.
VALGRIND_DISCARD_TRANSLATIONS:
Discards translations of code in the specified address range. Useful if you are debugging a JIT compiler or some
other dynamic code generation system. After this call, attempts to execute code in the invalidated address range will
cause Valgrind to make new translations of that code, which is probably the semantics you want. Note that code
invalidations are expensive because finding all the relevant translations quickly is very difficult, so try not to call it
often. Note that you can be clever about this: you only need to call it when an area which previously contained code is
overwritten with new code. You can choose to write code into fresh memory, and just call this occasionally to discard
large chunks of old code all at once.
Alternatively, for transparent self-modifying-code support, use --smc-check=all, or run on ppc32/Linux,
ppc64/Linux or ARM/Linux.
VALGRIND_COUNT_ERRORS:
Returns the number of errors found so far by Valgrind. Can be useful in test harness code when combined with
the --log-fd=-1 option; this runs Valgrind silently, but the client program can detect when errors occur. Only
useful for tools that report errors, e.g. it's useful for Memcheck, but for Cachegrind it will always return zero because
Cachegrind doesn't report errors.
VALGRIND_MALLOCLIKE_BLOCK:
If your program manages its own memory instead of using the standard malloc / new / new[], tools that track
information about heap blocks will not do nearly as good a job. For example, Memcheck won't detect nearly as
many errors, and the error messages won't be as informative. To improve this situation, use this macro just after your
custom allocator allocates some new memory. See the comments in valgrind.h for information on how to use it.
VALGRIND_FREELIKE_BLOCK:
This should be used in conjunction with VALGRIND_MALLOCLIKE_BLOCK. Again, see valgrind.h for information on how to use it.
VALGRIND_RESIZEINPLACE_BLOCK:
Informs a Valgrind tool that the size of an allocated block has been modified but not its address. See valgrind.h
for more information on how to use it.
VALGRIND_CREATE_MEMPOOL,
VALGRIND_DESTROY_MEMPOOL,
VALGRIND_MEMPOOL_ALLOC,
VALGRIND_MEMPOOL_FREE,
VALGRIND_MOVE_MEMPOOL,
VALGRIND_MEMPOOL_CHANGE,
VALGRIND_MEMPOOL_EXISTS:
These are similar to VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK but are tailored
towards code that uses memory pools. See Memory Pools for a detailed description.
VALGRIND_NON_SIMD_CALL[0123]:
Executes a function in the client program on the real CPU, not the virtual CPU that Valgrind normally runs code on.
The function must take an integer (holding a thread ID) as the first argument and then 0, 1, 2 or 3 more arguments
(depending on which client request is used). These are used in various ways internally to Valgrind. They might be
useful to client programs.
Warning: Only use these if you really know what you are doing. They aren't entirely reliable, and can cause Valgrind
to crash. See valgrind.h for more details.
VALGRIND_PRINTF(format, ...):
Print a printf-style message to the Valgrind log file. The message is prefixed with the PID between a pair of **
markers. (Like all client requests, nothing is output if the client program is not running under Valgrind.) Output is not
produced until a newline is encountered, or subsequent Valgrind output is printed; this allows you to build up a single
line of output over multiple calls. Returns the number of characters output, excluding the PID prefix.
VALGRIND_PRINTF_BACKTRACE(format, ...):
Like VALGRIND_PRINTF (in particular, the return value is identical), but prints a stack backtrace immediately
afterwards.
VALGRIND_MONITOR_COMMAND(command):
Execute the given monitor command (a string). Returns 0 if command is recognised. Returns 1 if command
is not recognised. Note that some monitor commands provide access to a functionality also accessible via a
specific client request. For example, memcheck leak search can be requested from the client program using
VALGRIND_DO_LEAK_CHECK or via the monitor command "leak_search". Note that the syntax of the command
string is only verified at run-time. So, if it exists, it is preferable to use a specific client request, to get better
compile-time checking of the arguments.
VALGRIND_STACK_REGISTER(start, end):
Registers a new stack. Informs Valgrind that the memory range between start and end is a unique stack. Returns a
stack identifier that can be used with other VALGRIND_STACK_* calls.
Valgrind will use this information to determine whether a change to the stack pointer is an item pushed onto the stack
or a change over to a new stack. Use this if you're using a user-level thread package and are noticing crashes in stack
trace recording or spurious errors from Valgrind about uninitialized memory reads.
Warning: Unfortunately, this client request is unreliable and best avoided.
VALGRIND_STACK_DEREGISTER(id):
Deregisters a previously registered stack. Informs Valgrind that previously registered memory range with stack id id
is no longer a stack.
Warning: Unfortunately, this client request is unreliable and best avoided.
VALGRIND_STACK_CHANGE(id, start, end):
Changes a previously registered stack. Informs Valgrind that the previously registered stack with stack id id has
changed its start and end values. Use this if your user-level thread package implements stack growth.
Warning: Unfortunately, this client request is unreliable and best avoided.
You can now debug your program e.g. by inserting a breakpoint and then using the GDB continue command.
This quick start information is enough for basic usage of the Valgrind gdbserver. The sections below describe
more advanced functionality provided by the combination of Valgrind and GDB. Note that the command line flag
--vgdb=yes can be omitted, as this is the default value.
The Valgrind gdbserver is invoked at startup and indicates it is waiting for a connection from a GDB:
==2418== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==2418==   /path/to/gdb ./prog
==2418== and then give GDB the following command
==2418==   target remote | /path/to/vgdb --pid=2418
==2418== --pid is optional if only one valgrind process is running
==2418==
GDB (in another shell) can then be connected to the Valgrind gdbserver. For this, GDB must be started on the program
prog:
gdb ./prog
You then indicate to GDB that you want to debug a remote target:
(gdb) target remote | vgdb
GDB then starts a vgdb relay application to communicate with the Valgrind embedded gdbserver:
(gdb) target remote | vgdb
Remote debugging using | vgdb
relaying data between gdb and process 2418
Reading symbols from /lib/ld-linux.so.2...done.
Reading symbols from /usr/lib/debug/lib/ld-2.11.2.so.debug...done.
Loaded symbols for /lib/ld-linux.so.2
[Switching to Thread 2418]
0x001f2850 in _start () from /lib/ld-linux.so.2
(gdb)
Note that vgdb is provided as part of the Valgrind distribution. You do not need to install it separately.
If vgdb detects that there are multiple Valgrind gdbservers that can be connected to, it will list all such servers and
their PIDs, and then exit. You can then reissue the GDB "target" command, but specifying the PID of the process you
want to debug:
Once GDB is connected to the Valgrind gdbserver, it can be used in the same way as if you were debugging the
program natively:
Breakpoints can be inserted or deleted.
Variables and register values can be examined or modified.
Signal handling can be configured (printing, ignoring).
Execution can be controlled (continue, step, next, stepi, etc).
Program execution can be interrupted using Control-C.
And so on. Refer to the GDB user manual for a complete description of GDBs functionality.
GDB will use a local tcp/ip connection to connect to the Android adb forwarder. Adb will establish a relay connection
between the host system and the Android target system. Be sure to use the GDB delivered in the Android NDK system
(typically, arm-linux-androideabi-gdb), as the host GDB is probably not able to debug Android arm applications. Note
that the local port number (used by GDB) need not be equal to the port number used by vgdb: adb can forward
tcp/ip between different port numbers.
In the current release, the GDB server is not enabled by default for Android, due to problems in establishing a suitable
directory in which Valgrind can create the necessary FIFOs (named pipes) for communication purposes. You can still
try to use the GDB server, but you will need to explicitly enable it using the flag --vgdb=yes or --vgdb=full.
Additionally, you will need to select a temporary directory which is (a) writable by Valgrind, and (b) supports FIFOs.
This is the main difficult point. Often, /sdcard satisfies requirement (a), but fails for (b) because it is a VFAT file
system and VFAT does not support pipes. Possibilities you could try are /data/local, /data/local/Inst (if
you installed Valgrind there), or /data/data/name.of.my.app, if you are running a specific application and it
has its own directory of that form. This last possibility may have the highest probability of success.
You can specify the temporary directory to use either via the --with-tmpdir= configure time flag, or by setting
environment variable TMPDIR when running Valgrind (on the Android device, not on the Android NDK development
host). Another alternative is to specify the directory for the FIFOs using the --vgdb-prefix= Valgrind command
line option.
We hope to have a better story for temporary directory handling on Android in the future. The difficulty is that, unlike
in standard Unixes, there is no single temporary file directory that reliably works across all devices and scenarios.
An example of a tool specific monitor command is the Memcheck monitor command leak_check full
reachable any. This requests a full reporting of the allocated memory blocks. To have this leak check executed,
use the GDB command:
(gdb) monitor leak_check full reachable any
GDB will send the leak_check command to the Valgrind gdbserver. The Valgrind gdbserver will execute the
monitor command itself, if it recognises it to be a Valgrind core monitor command. If it is not recognised as such, it
is assumed to be tool-specific and is handed to the tool for execution. For example:
(gdb) monitor leak_check full reachable any
==2418== 100 bytes in 1 blocks are still reachable in loss record 1 of 1
==2418==    at 0x4006E9E: malloc (vg_replace_malloc.c:236)
==2418==    by 0x804884F: main (prog.c:88)
==2418==
==2418== LEAK SUMMARY:
==2418==    definitely lost: 0 bytes in 0 blocks
==2418==    indirectly lost: 0 bytes in 0 blocks
==2418==      possibly lost: 0 bytes in 0 blocks
==2418==    still reachable: 100 bytes in 1 blocks
==2418==         suppressed: 0 bytes in 0 blocks
==2418==
(gdb)
As with other GDB commands, the Valgrind gdbserver will accept abbreviated monitor command names and
arguments, as long as the given abbreviation is unambiguous. For example, the above leak_check command
can also be typed as:
(gdb) mo l f r a
The letters mo are recognised by GDB as being an abbreviation for monitor. So GDB sends the string l f r a to
the Valgrind gdbserver. The letters provided in this string are unambiguous for the Valgrind gdbserver. This therefore
gives the same output as the unabbreviated command and arguments. If the provided abbreviation is ambiguous, the
Valgrind gdbserver will report the list of commands (or argument values) that can match:
(gdb) mo v. n
v. can match v.set v.info v.wait v.kill v.translate v.do
(gdb) mo v.i n
n_errs_found 0 n_errs_shown 0 (vgdb-error 0)
(gdb)
Instead of sending a monitor command from GDB, you can also send these from a shell command line. For example,
the following command lines, when given in a shell, will cause the same leak search to be executed by the process
3145:
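Following the standalone vgdb usage described later in this manual (vgdb options, then the monitor command), the invocations would look like this, in full and in abbreviated form:

```shell
vgdb --pid=3145 leak_check full reachable any
vgdb --pid=3145 l f r a
```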
Note that the Valgrind gdbserver automatically continues the execution of the program after a standalone invocation of
vgdb. Monitor commands sent from GDB do not cause the program to continue: the program execution is controlled
explicitly using GDB commands such as "continue" or "next".
Float shadow registers are shown by GDB as unsigned integer values instead of float values, as it is expected that these
shadow values are mostly used for Memcheck validity bits.
Intel/amd64 AVX registers ymm0 to ymm15 also have their shadow registers. However, GDB presents the shadow
values using two "half" registers. For example, the half shadow registers for ymm9 are xmm9s1 (lower half for set 1),
ymm9hs1 (upper half for set 1), xmm9s2 (lower half for set 2), ymm9hs2 (upper half for set 2). Note the inconsistent
notation for the names of the half registers: the lower part starts with an x, the upper part starts with a y and has an
h before the shadow postfix.
The special presentation of the AVX shadow registers is due to the fact that GDB independently retrieves the lower
and upper half of the ymm registers. GDB does not however know that the shadow half registers have to be shown
combined.
Note that --vgdb=full (+500%, see above Precision of "stop-at" commands) automatically activates
--vex-iropt-register-updates=allregs-at-each-insn.
Hardware watchpoint support by the Valgrind gdbserver.
The Valgrind gdbserver can simulate hardware watchpoints if the selected tool provides support for them. Currently,
only Memcheck provides hardware watchpoint simulation. The hardware watchpoint simulation provided by
Memcheck is much faster than GDB software watchpoints, which are implemented by GDB checking the value
of the watched zone(s) after each instruction. Hardware watchpoint simulation also provides read watchpoints.
The hardware watchpoint simulation by Memcheck has some limitations compared to real hardware watchpoints.
However, the number and length of simulated watchpoints are not limited.
Typically, the number of (real) hardware watchpoints is limited. For example, the x86 architecture supports a
maximum of 4 hardware watchpoints, each watchpoint watching 1, 2, 4 or 8 bytes. The Valgrind gdbserver does
not have any limitation on the number of simulated hardware watchpoints. It also has no limitation on the length of
the memory zone being watched. Using GDB version 7.4 or later allows full use of the flexibility of the Valgrind
gdbserver's simulated hardware watchpoints. Previous GDB versions do not understand that Valgrind gdbserver
watchpoints have no length limit.
Memcheck implements hardware watchpoint simulation by marking the watched address ranges as being
unaddressable. When a hardware watchpoint is removed, the range is marked as addressable and defined. Hardware
watchpoint simulation of addressable-but-undefined memory zones works properly, but has the undesirable side
effect of marking the zone as defined when the watchpoint is removed.
Write watchpoints might not be reported at the exact instruction that writes the monitored area, unless option
--vgdb=full is given. Read watchpoints will always be reported at the exact instruction reading the watched
memory.
It is better to avoid using hardware watchpoints on memory that is not (yet) addressable: in such a case, GDB will fall
back to extremely slow software watchpoints. Also, if you do not quit GDB between two debugging sessions, the
hardware watchpoints of the previous sessions will be re-inserted as software watchpoints if the watched memory
zone is not addressable at program startup.
Stepping inside shared libraries on ARM.
For unknown reasons, stepping inside shared libraries on ARM may fail. A workaround is to use the ldd command
to find the list of shared libraries and their loading address and inform GDB of the loading address using the GDB
command "add-symbol-file". Example:
The Valgrind gdbserver supports inferior function calls. Whilst an inferior call is running, the Valgrind tool will
report errors as usual. If you do not want to have such errors stop the execution of the inferior call, you can use
v.set vgdb-error to set a big value before the call, then manually reset it to its original value when the call is
complete.
To execute inferior calls, GDB changes registers such as the program counter, and then continues the execution
of the program. In a multithreaded program, all threads are continued, not just the thread instructed to make the
inferior call. If another thread reports an error or encounters a breakpoint, the evaluation of the inferior call is
abandoned.
Note that inferior function calls are a powerful GDB feature, but should be used with caution. For example, if the
program being debugged is stopped inside the function "printf", forcing a recursive call to printf via an inferior call
will very probably create problems. The Valgrind tool might also add another level of complexity to inferior calls,
e.g. by reporting tool errors during the inferior call or due to the instrumentation done.
vgdb [OPTION]... [[-c] COMMAND]...
vgdb ("Valgrind to GDB") is a small program that is used as an intermediary between Valgrind and GDB or a shell.
Therefore, it has two usage modes:
1. As a standalone utility, it is used from a shell command line to send monitor commands to a process running under
Valgrind. For this usage, the vgdb OPTION(s) must be followed by the monitor command to send. To send more
than one command, separate them with the -c option.
2. In combination with GDB "target remote |" command, it is used as the relay application between GDB and the
Valgrind gdbserver. For this usage, only OPTION(s) can be given, but no COMMAND can be given.
vgdb accepts the following options:
--pid=<number>
Specifies the PID of the process to which vgdb must connect. This option is useful in case more than one Valgrind
gdbserver can be connected to. If the --pid argument is not given and multiple Valgrind gdbserver processes are
running, vgdb will report the list of such processes and then exit.
--vgdb-prefix
Must be given to both Valgrind and vgdb if you want to change the default prefix for the FIFOs (named pipes) used
for communication between the Valgrind gdbserver and vgdb.
--wait=<number>
Instructs vgdb to search for available Valgrind gdbservers for the specified number of seconds. This makes it possible
to start a vgdb process before starting the Valgrind gdbserver with which you intend the vgdb to communicate. This
option is useful when used in conjunction with a --vgdb-prefix that is unique to the process you want to wait for.
Also, if you use the --wait argument in the GDB "target remote" command, you must set the GDB remotetimeout
to a value bigger than the --wait argument value. See option --max-invoke-ms (just below) for an example of
setting the remotetimeout value.
--max-invoke-ms=<number>
Gives the number of milliseconds after which vgdb will force the invocation of gdbserver embedded in Valgrind. The
default value is 100 milliseconds. A value of 0 disables forced invocation. The forced invocation is used when vgdb is
connected to a Valgrind gdbserver, and the Valgrind process has all its threads blocked in a system call.
If you specify a large value, you might need to increase the GDB "remotetimeout" value from its default value of
2 seconds. You should ensure that the timeout (in seconds) is bigger than the --max-invoke-ms value. For
example, for --max-invoke-ms=5000, the following GDB command is suitable:
(gdb) set remotetimeout 6
--cmd-time-out=<number>
Instructs a standalone vgdb to exit if the Valgrind gdbserver it is connected to does not process a command in the
specified number of seconds. The default value is to never time out.
--port=<portnr>
Instructs vgdb to use tcp/ip and listen for GDB on the specified port number rather than using a pipe to communicate
with GDB. Using tcp/ip makes it possible to have GDB running on one computer while debugging a Valgrind process
running on another target computer. Example:
# On the target computer, start your program under valgrind using
valgrind --vgdb-error=0 prog
# and then in another shell, run:
vgdb --port=1234
-l
Instructs a standalone vgdb to report the list of the Valgrind gdbserver processes running and then exit.
-D
Instructs a standalone vgdb to show the state of the shared memory used by the Valgrind gdbserver. vgdb will exit
after having shown the Valgrind gdbserver shared memory state.
-d
Instructs vgdb to produce debugging output. Give multiple -d args to increase the verbosity. When giving -d to a
relay vgdb, it is better to redirect the standard error (stderr) of vgdb to a file, to avoid interference between the GDB
output and the vgdb debugging output.
help [debug] instructs Valgrind's gdbserver to give the list of all monitor commands of the Valgrind core and
of the tool. The optional "debug" argument also requests help for the monitor commands aimed at Valgrind
internals debugging.
v.info all_errors shows all errors found so far.
v.info last_error shows the last error found.
v.info location <addr> outputs information about the location <addr>: it may describe global variables,
local (stack) variables, allocated or freed blocks, ... The information produced depends
on the tool and on the options given to valgrind. Some tools (e.g. memcheck and helgrind) produce more detailed
information for client heap blocks; for example, these tools show the stack trace where the heap block was allocated.
If a tool does not replace the malloc/free/... functions, then client heap blocks will not be described. Use the option
--read-var-info=yes to obtain more detailed information about global or local (stack) variables.
v.info n_errs_found [msg] shows the number of errors found so far, the number of errors shown so far and the
current value of the --vgdb-error argument. The optional msg (one or more words) is appended. Typically,
this can be used to insert markers in a process output file between several tests executed in sequence by a process
started only once. This makes it possible to associate the errors reported by Valgrind with the specific test that
produced these errors.
v.info open_fds shows the list of open file descriptors and details related to each file descriptor. This only
works if --track-fds=yes was given at Valgrind startup.
v.set {gdb_output | log_output | mixed_output} allows redirection of the Valgrind output (e.g.
the errors detected by the tool). The default setting is mixed_output.
With mixed_output, the Valgrind output goes to the Valgrind log (typically stderr) while the output of the
interactive GDB monitor commands (e.g. v.info last_error) is displayed by GDB.
With gdb_output, both the Valgrind output and the interactive GDB monitor commands output are displayed by
GDB.
With log_output, both the Valgrind output and the interactive GDB monitor commands output go to the Valgrind
log.
v.wait [ms (default 0)] instructs the Valgrind gdbserver to sleep "ms" milliseconds and then continue.
When sent from a standalone vgdb, if this is the last command, the Valgrind process will continue the execution of
the guest process. The typical usage of this is to use vgdb to send a "no-op" command to a Valgrind gdbserver so as
to continue the execution of the guest process.
v.kill requests the gdbserver to kill the process. This can be used from a standalone vgdb to properly kill a
Valgrind process which is currently expecting a vgdb connection.
v.set vgdb-error <errornr> dynamically changes the value of the --vgdb-error argument. A typical
usage of this is to start with --vgdb-error=0 on the command line, then set a few breakpoints, set the vgdb-error
value to a huge value and continue execution.
The following Valgrind monitor commands are useful for investigating the behaviour of Valgrind or its gdbserver in
case of problems or bugs.
v.do expensive_sanity_check_general executes various sanity checks. In particular, the sanity of the
Valgrind heap is verified. This can be useful if you suspect that your program and/or Valgrind has a bug corrupting
Valgrind's data structures. It can also be used when a Valgrind tool reports a client error to the connected GDB, in
order to verify the sanity of Valgrind before continuing the execution.
v.info gdbserver_status shows the gdbserver status. In case of problems (e.g. of communications),
this shows the values of some relevant Valgrind gdbserver internal variables. Note that the variables related to
breakpoints and watchpoints (e.g. the number of breakpoint addresses and the number of watchpoints) will be
zero, as GDB by default removes all watchpoints and breakpoints when execution stops, and re-inserts them when
resuming the execution of the debugged process. You can change this GDB behaviour by using the GDB command
set breakpoint always-inserted on.
v.info memory [aspacemgr] shows the statistics of Valgrind's internal heap management. If option
--profile-heap=yes was given, detailed statistics will be output. With the optional argument aspacemgr,
the segment list maintained by Valgrind's address space manager will be output. Note that this list of segments is
always output on the Valgrind log.
v.info exectxt shows information about the "executable contexts" (i.e. the stack traces) recorded by
Valgrind. For some programs, Valgrind can record a very high number of such stack traces, causing a high
memory usage. This monitor command shows all the recorded stack traces, followed by some statistics. This can
be used to analyse the reason for having a large number of stack traces. Typically, you will use this command if
v.info memory has shown significant memory usage by the "exectxt" arena.
v.info scheduler shows various information about threads. First, it outputs the host stack trace, i.e. the
Valgrind code being executed. Then, for each thread, it outputs the thread state. For non-terminated threads, the
state is followed by the guest (client) stack trace. Finally, for each active thread or for each terminated thread slot
not yet re-used, it shows the maximum usage of the Valgrind stack.
Showing the client stack traces makes it possible to compare the stack traces produced by the Valgrind unwinder with
the stack traces produced by GDB+Valgrind gdbserver. Note that GDB and the Valgrind scheduler status have their
own thread numbering schemes. To make the link between the GDB thread number and the corresponding Valgrind
scheduler thread number, use the GDB command info threads. The output of this command shows the GDB
thread number and the valgrind tid. The tid is the thread number output by v.info scheduler. When
using the callgrind tool, the callgrind monitor command status outputs internal callgrind information about the
stack/call graph it maintains.
v.info stats shows various Valgrind core and tool statistics. With this, Valgrind and tool statistics can be
examined while running, even without option --stats=yes.
v.set debuglog <intvalue> sets the Valgrind debug log level to <intvalue>. This makes it possible to
change the Valgrind log level dynamically, e.g. when a problem is detected.
v.set hostvisibility [yes*|no] The value "yes" indicates to gdbserver that GDB can look at the
Valgrind host (internal) status/memory. "no" disables this access. When hostvisibility is activated, GDB can
e.g. look at Valgrind global variables. As an example, to examine a Valgrind global variable of the memcheck tool
on an x86, do the following setup:
(gdb) p /x vgPlain_threads[1].os_state
$3 = {lwpid = 0x4688, threadgroup = 0x4688, parent = 0x0,
valgrind_stack_base = 0x62e78000, valgrind_stack_init_SP = 0x62f79fe0,
exitcode = 0x0, fatalsig = 0x0}
(gdb) p vex_control
$5 = {iropt_verbosity = 0, iropt_level = 2,
iropt_register_updates = VexRegUpdUnwindregsAtMemAccess,
iropt_unroll_thresh = 120, guest_max_insns = 60, guest_chase_thresh = 10,
guest_chase_cond = 0 '\000'}
(gdb)
v.translate <address> [<traceflags>] shows the translation of the block containing address with
the given trace flags. The traceflags value bit patterns have similar meaning to Valgrind's --trace-flags
option. It can be given in hexadecimal (e.g. 0x20), decimal (e.g. 32) or binary (e.g. 0b00100000).
The default value of the traceflags is 0b00100000, corresponding to "show after instrumentation". The output of
this command always goes to the Valgrind log.
The additional bit flag 0b100000000 (bit 8) has no equivalent in the --trace-flags option. It enables tracing of
the gdbserver specific instrumentation. Note that this bit 8 can only enable the addition of gdbserver instrumentation
in the trace. Setting it to 0 will not disable the tracing of the gdbserver instrumentation if it is active for some other
reason, for example because there is a breakpoint at this address or because gdbserver is in single stepping mode.
A wrapper is a function of identical type, but with a special name which identifies it as the wrapper for foo. Wrappers
need to include supporting macros from valgrind.h. Here is a simple wrapper which prints the arguments and
return value:
#include <stdio.h>
#include "valgrind.h"
int I_WRAP_SONAME_FNNAME_ZU(NONE,foo)( int x, int y )
{
   int    result;
   OrigFn fn;
   VALGRIND_GET_ORIG_FN(fn);
   printf("foo's wrapper: args %d %d\n", x, y);
   CALL_FN_W_WW(result, fn, x,y);
   printf("foo's wrapper: result %d\n", result);
   return result;
}
To become active, the wrapper merely needs to be present in a text section somewhere in the same process address
space as the function it wraps, and for its ELF symbol name to be visible to Valgrind. In practice, this means either
compiling to a .o and linking it in, or compiling to a .so and LD_PRELOADing it in. The latter is more convenient
in that it doesn't require relinking.
All wrappers have approximately the above form. There are three crucial macros:
I_WRAP_SONAME_FNNAME_ZU: this generates the real name of the wrapper. This is an encoded name which
Valgrind notices when reading symbol table information. What it says is: I am the wrapper for any function named
foo which is found in an ELF shared object with an empty ("NONE") soname field. The specification mechanism is
powerful in that wildcards are allowed for both sonames and function names. The details are discussed below.
VALGRIND_GET_ORIG_FN: once in the wrapper, the first priority is to get hold of the address of the original (and
any other supporting information needed). This is stored in a value of opaque type OrigFn. The information is
acquired using VALGRIND_GET_ORIG_FN. It is crucial to make this macro call before calling any other wrapped
function in the same thread.
CALL_FN_W_WW: eventually we will want to call the function being wrapped. Calling it directly does not work, since
that just gets us back to the wrapper and leads to an infinite loop. Instead, the result lvalue, OrigFn and arguments
are handed to one of a family of macros of the form CALL_FN_*. These cause Valgrind to call the original and avoid
recursion back to the wrapper.
Each wrapper has a name which, in the most general case says: I am the wrapper for any function whose name matches
FNPATT and whose ELF "soname" matches SOPATT. Both FNPATT and SOPATT may contain wildcards (asterisks)
and other characters (spaces, dots, @, etc) which are not generally regarded as valid C identifier names.
This flexibility is needed to write robust wrappers for POSIX pthread functions, where typically we are not completely
sure of either the function name or the soname, or alternatively we want to wrap a whole set of functions at once.
For example, pthread_create in GNU libpthread is usually a versioned symbol - one whose name ends in, e.g.,
@GLIBC_2.3. Hence we are not sure what its real name is. We also want to cover any soname of the form
libpthread.so*. So the header of the wrapper will be
int I_WRAP_SONAME_FNNAME_ZZ(libpthreadZdsoZd0,pthreadZucreateZAZa)
( ... formals ... )
{ ... body ... }
In order to write unusual characters as valid C function names, a Z-encoding scheme is used. Names are written
literally, except that a capital Z acts as an escape character, with the following encoding:

     Za   encodes   *
     Zp             +
     Zc             :
     Zd             .
     Zu             _
     Zh             -
     Zs             (space)
     ZA             @
     ZZ             Z
     ZL             (    # only in valgrind 3.3.0 and later
     ZR             )    # only in valgrind 3.3.0 and later
The ability for a wrapper to replace an infinite family of functions is powerful but brings complications in situations
where ELF objects appear and disappear (are dlopen'd and dlclose'd) on the fly. Valgrind tries to maintain sensible
behaviour in such situations.
For example, suppose a process has dlopened (an ELF object with soname) object1.so, which contains
function1. It starts to use function1 immediately.
After a while it dlopens wrappers.so, which contains a wrapper for function1 in (soname) object1.so. All
subsequent calls to function1 are rerouted to the wrapper.
If wrappers.so is later dlclosed, calls to function1 are naturally routed back to the original.
Alternatively, if object1.so is dlclosed but wrappers.so remains, then the wrapper exported by
wrappers.so becomes inactive, since there is no way to get to it - there is no original to call any more.
However, Valgrind remembers that the wrapper is still present. If object1.so is eventually dlopen'd again, the
wrapper will become active again.
In short, Valgrind inspects all code loading/unloading events to ensure that the set of currently active wrappers remains
consistent.
A second possible problem is that of conflicting wrappers. It is easily possible to load two or more wrappers, both of
which claim to be wrappers for some third function. In such cases Valgrind will complain about conflicting wrappers
when the second one appears, and will honour only the first one.
3.3.4. Debugging
Figuring out what's going on given the dynamic nature of wrapping can be difficult. The --trace-redir=yes
option makes this possible by showing the complete state of the redirection subsystem after every mmap/munmap
event affecting code (text).
There are two central concepts:
A "redirection specification" is a binding of a (soname pattern, fnname pattern) pair to a code address. These
bindings are created by writing functions with names made with the I_WRAP_SONAME_FNNAME_{ZZ,ZU}
macros.
An "active redirection" is a code-address to code-address binding currently in effect.
The state of the wrapping-and-redirection subsystem comprises a set of specifications and a set of active bindings.
The specifications are acquired/discarded by watching all mmap/munmap events on code (text) sections. The active
binding set is (conceptually) recomputed from the specifications, and all known symbol names, following any change
to the specification set.
--trace-redir=yes shows the contents of both sets following any such event.
-v prints a line of text each time an active specification is used for the first time.
Hence for maximum debugging effectiveness you will need to use both options.
One final comment. The function-wrapping facility is closely tied to Valgrind's ability to replace (redirect) specified
functions, for example to redirect calls to malloc to its own implementation. Indeed, a replacement function can be
regarded as a wrapper function which does not call the original. However, to make the implementation more robust,
the two kinds of interception (wrapping vs replacement) are treated differently.
--trace-redir=yes shows specifications and bindings for both replacement and wrapper functions. To
differentiate the two, replacement bindings are printed using R-> whereas wraps are printed using W->.
CALL_FN_v_W    -- call an original of type  void fn ( long )
CALL_FN_W_W    -- call an original of type  long fn ( long )
CALL_FN_v_WW   -- call an original of type  void fn ( long, long )
CALL_FN_W_WW   -- call an original of type  long fn ( long, long )
...
CALL_FN_W_6W   -- call an original of type  long fn ( long, long, long, long, long, long )
and so on, up to
CALL_FN_W_12W
The set of supported types can be expanded as needed. It is regrettable that this limitation exists. Function
wrapping has proven difficult to implement, with a certain apparently unavoidable level of ickiness. After several
implementation attempts, the present arrangement appears to be the least-worst tradeoff. At least it works reliably in
the presence of dynamic linking and dynamic code loading/unloading.
You should not attempt to wrap a function of one type signature with a wrapper of a different type signature.
Such trickery will surely lead to crashes or strange behaviour. This is not a limitation of the function wrapping
implementation, merely a reflection of the fact that it gives you sweeping powers to shoot yourself in the foot if you
are not careful. Imagine the instant havoc you could wreak by writing a wrapper which matched any function name
in any soname - in effect, one which claimed to be a wrapper for all functions in the process.
3.3.7. Examples
In the source tree, memcheck/tests/wrap[1-8].c provide a series of examples, ranging from very simple to
quite advanced.
mpi/libmpiwrap.c is an example of wrapping a big, complex API (the MPI-2 interface). This file defines almost
300 different wrappers.
4.1. Overview
Memcheck is a memory error detector. It can detect the following problems that are common in C and C++ programs.
Accessing memory you shouldn't, e.g. overrunning and underrunning heap blocks, overrunning the top of the stack,
and accessing memory after it has been freed.
Using undefined values, i.e. values that have not been initialised, or that have been derived from other undefined
values.
Incorrect freeing of heap memory, such as double-freeing heap blocks, or mismatched use of malloc/new/new[]
versus free/delete/delete[].
Overlapping src and dst pointers in memcpy and related functions.
Passing a fishy (presumably negative) value to the size parameter of a memory allocation function.
Memory leaks.
Problems like these can be difficult to find by other means, often remaining undetected for long periods, then causing
occasional, difficult-to-diagnose crashes.
This happens when your program reads or writes memory at a place which Memcheck reckons it shouldn't. In
this example, the program did a 4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied library
libpng.so.2.1.0.9, which was called from somewhere else in the same library, called from line 326 of qpngio.cpp,
and so on.
Memcheck tries to establish what the illegal address might relate to, since that's often useful. So, if it points
into a block of memory which has already been freed, you'll be informed of this, and also where the block was
freed. Likewise, if it should turn out to be just off the end of a heap block, a common result of off-by-one
errors in array subscripting, you'll be informed of this fact, and also where the block was allocated. If you use
the --read-var-info option Memcheck will run more slowly but may give a more detailed description of any
illegal address.
In this example, Memcheck can't identify the address. Actually the address is on the stack, but, for some reason, this
is not a valid stack address -- it is below the stack pointer and that isn't allowed. In this particular case it's probably
caused by GCC generating invalid code, a known bug in some ancient versions of GCC.
Note that Memcheck only tells you that your program is about to access memory at an illegal address. It can't stop the
access from happening. So, if your program makes an access which normally would result in a segmentation fault,
your program will still suffer the same fate -- but you will get a message from Memcheck immediately prior to this. In
this particular example, reading junk on the stack is non-fatal, and the program stays alive.
An uninitialised-value use error is reported when your program uses a value which hasn't been initialised -- in other
words, is undefined. Here, the undefined value is used somewhere inside the printf machinery of the C library.
This error was reported when running the following small program:
#include <stdio.h>

int main()
{
  int x;
  printf ("x = %d\n", x);
}
It is important to understand that your program can copy around junk (uninitialised) data as much as it likes.
Memcheck observes this and keeps track of the data, but does not complain. A complaint is issued only when
your program attempts to make use of uninitialised data in a way that might affect your program's externally-visible
behaviour. In this example, x is uninitialised. Memcheck observes the value being passed to _IO_printf and
thence to _IO_vfprintf, but makes no comment. However, _IO_vfprintf has to examine the value of x so it
can turn it into the corresponding ASCII string, and it is at this point that Memcheck complains.
Sources of uninitialised data tend to be:
Local variables in procedures which have not been initialised, as in the example above.
The contents of heap blocks (allocated with malloc, new, or a similar function) before you (or a constructor) write
something there.
To see information on the sources of uninitialised data in your program, use the --track-origins=yes option.
This makes Memcheck run more slowly, but can make it much easier to track down the root causes of uninitialised
value errors.
... because the program has (a) written uninitialised junk from the heap block to the standard output, and (b) passed an
uninitialised value to exit. Note that the first error refers to the memory pointed to by buf (not buf itself), but the
second error refers directly to exit's argument arr2[0].
Memcheck keeps track of the blocks allocated by your program with malloc/new, so it knows exactly whether
or not the argument to free/delete is legitimate. Here, this test program has freed the same block twice.
As with the illegal read/write errors, Memcheck attempts to make sense of the address freed. If, as here, the address
is one which has previously been freed, you will be told that -- making duplicate frees of the same block easy to spot.
You will also get this message if you try to free a pointer that doesn't point to the start of a heap block.
In C++ it's important to deallocate memory in a way compatible with how it was allocated. The deal is:
If allocated with malloc, calloc, realloc, valloc or memalign, you must deallocate with free.
If allocated with new, you must deallocate with delete.
If allocated with new[], you must deallocate with delete[].
The worst thing is that on Linux apparently it doesn't matter if you do mix these up, but the same program may then
crash on a different platform, Solaris for example. So it's best to fix it properly. According to the KDE folks "it's
amazing how many C++ programmers don't know this".
The reason behind the requirement is as follows. In some C++ implementations, delete[] must be used for objects
allocated by new[] because the compiler stores the size of the array and the pointer-to-member to the destructor of
the array's content just before the pointer actually returned. delete doesn't account for this and will get confused,
possibly corrupting the heap.
You don't want the two blocks to overlap because one of them could get partially overwritten by the copying.
You might think that Memcheck is being overly pedantic reporting this in the case where dst is less than src.
For example, the obvious way to implement memcpy is by copying from the first byte to the last. However, the
optimisation guides of some architectures recommend copying from the last byte down to the first. Also, some
implementations of memcpy zero dst before copying, because zeroing the destination's cache line(s) can improve
performance.
The moral of the story is: if you want to write truly portable code, don't make any assumptions about the language
implementation.
==32233== Argument size of function malloc has a fishy (possibly negative) value: -3
In earlier Valgrind versions those values were referred to as "silly arguments" and no back-trace was included.
You can optionally activate heuristics to use during the leak search to detect the interior pointers corresponding to the
stdstring, length64, newarray and multipleinheritance cases. If the heuristic detects that an interior
pointer corresponds to such a case, the block will be considered as reachable by the interior pointer. In other words,
the interior pointer will be treated as if it were a start pointer.
With that in mind, consider the nine possible cases described by the following figure.
     Pointer chain            AAA Leak Case   BBB Leak Case
     -------------            -------------   -------------
(1)  RRR ------------> BBB                    DR
(2)  RRR ---> AAA ---> BBB    DR              IR
(3)  RRR               BBB                    DL
(4)  RRR      AAA ---> BBB    DL              IL
(5)  RRR ------?-----> BBB                    (y)DR, (n)DL
(6)  RRR ---> AAA -?-> BBB    DR              (y)IR, (n)DL
(7)  RRR -?-> AAA ---> BBB    (y)DR, (n)DL    (y)IR, (n)IL
(8)  RRR -?-> AAA -?-> BBB    (y)DR, (n)DL    (y,y)IR, (n,y)IL, (_,n)DL
(9)  RRR      AAA -?-> BBB    DL              (y)IL, (n)DL
Every possible case can be reduced to one of the above nine. Memcheck merges some of these cases in its output,
resulting in the following four leak kinds.
"Still reachable". This covers cases 1 and 2 (for the BBB blocks) above. A start-pointer or chain of start-pointers
to the block is found. Since the block is still pointed at, the programmer could, at least in principle, have freed
it before program exit. "Still reachable" blocks are very common and arguably not a problem. So, by default,
Memcheck won't report such blocks individually.
"Definitely lost". This covers case 3 (for the BBB blocks) above. This means that no pointer to the block can be
found. The block is classified as "lost", because the programmer could not possibly have freed it at program exit,
since no pointer to it exists. This is likely a symptom of having lost the pointer at some earlier point in the program.
Such cases should be fixed by the programmer.
"Indirectly lost". This covers cases 4 and 9 (for the BBB blocks) above. This means that the block is lost, not
because there are no pointers to it, but rather because all the blocks that point to it are themselves lost. For example,
if you have a binary tree and the root node is lost, all its children nodes will be indirectly lost. Because the problem
will disappear if the definitely lost block that caused the indirect leak is fixed, Memcheck won't report such blocks
individually by default.
"Possibly lost". This covers cases 5--8 (for the BBB blocks) above. This means that a chain of one or more
pointers to the block has been found, but at least one of the pointers is an interior-pointer. This could just be a
random value in memory that happens to point into a block, and so you shouldn't consider this ok unless you know
you have interior-pointers.
(Note: This mapping of the nine possible cases onto four leak kinds is not necessarily the best way that leaks could be
reported; in particular, interior-pointers are treated inconsistently. It is possible the categorisation may be improved
in the future.)
Furthermore, if a suppression exists for a block, it will be reported as "suppressed" no matter which of the above
four kinds it belongs to.
The following is an example leak summary.
LEAK SUMMARY:
definitely lost: 48 bytes in 3 blocks.
indirectly lost: 32 bytes in 2 blocks.
possibly lost: 96 bytes in 6 blocks.
still reachable: 64 bytes in 4 blocks.
suppressed: 0 bytes in 0 blocks.
If heuristics have been used to consider some blocks as reachable, the leak summary details, per heuristic, the
heuristically reachable subset of the still reachable bytes. In the example below, of the 95 bytes still reachable,
87 bytes (56+16+7+8) have been considered heuristically reachable.
LEAK SUMMARY:
definitely lost: 4 bytes in 1 blocks
indirectly lost: 0 bytes in 0 blocks
possibly lost: 0 bytes in 0 blocks
still reachable: 95 bytes in 6 blocks
of which reachable via heuristic:
  stdstring          : 56 bytes in 2 blocks
  length64           : 16 bytes in 1 blocks
  newarray           : 7 bytes in 1 blocks
  multipleinheritance: 8 bytes in 1 blocks
suppressed: 0 bytes in 0 blocks
If --leak-check=full is specified, Memcheck will give details for each definitely lost or possibly lost block,
including where it was allocated.
(Actually, it merges results for all blocks that have the same leak kind and
sufficiently similar stack traces into a single "loss record". The --leak-resolution option lets you control the
meaning of "sufficiently similar".) It cannot tell you when or how or why the pointer to a leaked block was lost; you have to
work that out for yourself. In general, you should attempt to ensure your programs do not have any definitely lost or
possibly lost blocks at exit.
For example:
The first message describes a simple case of a single 8 byte block that has been definitely lost. The second case
mentions another 8 byte block that has been definitely lost; the difference is that a further 80 bytes in other blocks are
indirectly lost because of this lost block. The loss records are not presented in any notable order, so the loss record
numbers aren't particularly meaningful. The loss record numbers can be used in the Valgrind gdbserver to list the
addresses of the leaked blocks and/or give more details about how a block is still reachable.
The option --show-leak-kinds=<set> controls the set of leak kinds to show when --leak-check=full
is specified.
The <set> of leak kinds is specified in one of the following ways:
a comma separated list of one or more of definite indirect possible reachable.
all to specify the complete set (all leak kinds).
none for the empty set.
The default value for the leak kinds to show is --show-leak-kinds=definite,possible.
To also show the reachable and indirectly lost blocks in addition to the definitely and possibly lost blocks,
you can use --show-leak-kinds=all.
To only show the reachable and indirectly lost blocks, use
--show-leak-kinds=indirect,reachable. The reachable and indirectly lost blocks will then be presented as shown in the following two examples.
64 bytes in 4 blocks are still reachable in loss record 2 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:74)
32 bytes in 2 blocks are indirectly lost in loss record 1 of 4
at 0x........: malloc (vg_replace_malloc.c:177)
by 0x........: mk (leak-cases.c:52)
by 0x........: main (leak-cases.c:80)
Because there are different kinds of leaks with different severities, an interesting question is: which leaks should be
counted as true "errors" and which should not?
The answer to this question affects the numbers printed in the ERROR SUMMARY line, and also the effect of the
--error-exitcode option. First, a leak is only counted as a true "error" if --leak-check=full is specified.
Then, the option --errors-for-leak-kinds=<set> controls the set of leak kinds to consider as errors. The
default value is --errors-for-leak-kinds=definite,possible. As with --show-leak-kinds, the
<set> may also be given as all to specify the complete set (all leak kinds, equivalent to
definite,indirect,possible,reachable) or as none for the empty set.
--errors-for-leak-kinds=<set> [default: definite,possible]
Specifies the leak kinds to count as errors in a full leak search. The <set> is specified similarly to
--show-leak-kinds.
--leak-check-heuristics=<set> [default: none]
Specifies the set of leak check heuristics to be used during leak searches. The heuristics control which interior pointers
to a block cause it to be considered as reachable. The heuristic set is specified in one of the following ways:
a comma separated list of one or more of stdstring length64 newarray multipleinheritance.
--keep-stacktraces=alloc|free|alloc-and-free|alloc-then-free|none [default:
alloc-then-free]
Controls which stack trace(s) to keep for malloc'd and/or freed blocks.
With alloc-then-free, a stack trace is recorded at allocation time, and is associated with the block. When the
block is freed, a second stack trace is recorded, and this replaces the allocation stack trace. As a result, any "use after
free" errors relating to this block can only show a stack trace for where the block was freed.
With alloc-and-free, both the allocation and deallocation stack traces for the block are stored. Hence a "use
after free" error will show both, which may make the error easier to diagnose. Compared to alloc-then-free,
this setting slightly increases Valgrind's memory use as the block contains two references instead of one.
With alloc, only the allocation stack trace is recorded (and reported). With free, only the deallocation stack trace
is recorded (and reported). These values somewhat decrease Valgrind's memory and CPU usage. They can be useful
depending on the error types you are searching for and the level of detail you need to analyse them. For example, if
you are only interested in memory leak errors, it is sufficient to record the allocation stack traces.
With none, no stack traces are recorded for malloc and free operations. If your program allocates a lot of blocks
and/or allocates/frees from many different stack traces, this can significantly decrease the CPU and/or memory required.
Of course, few details will be reported for errors related to heap blocks.
Note that once a stack trace is recorded, Valgrind keeps the stack trace in memory even if it is not referenced
by any block. Some programs (for example, recursive algorithms) can generate a huge number of stack traces.
If Valgrind uses too much memory in such circumstances, you can reduce the memory required with the options
--keep-stacktraces and/or by using a smaller value for the option --num-callers.
--freelist-vol=<number> [default: 20000000]
When the client program releases memory using free (in C) or delete (C++), that memory is not immediately made
available for re-allocation. Instead, it is marked inaccessible and placed in a queue of freed blocks. The purpose
is to defer as long as possible the point at which freed-up memory comes back into circulation. This increases the
chance that Memcheck will be able to detect invalid accesses to blocks for some significant period of time after they
have been freed.
This option specifies the maximum total size, in bytes, of the blocks in the queue. The default value is twenty million
bytes. Increasing this increases the total amount of memory used by Memcheck but may detect invalid uses of freed
blocks which would otherwise go undetected.
--freelist-big-blocks=<number> [default: 1000000]
When making blocks from the queue of freed blocks available for re-allocation, Memcheck will in priority re-circulate
the blocks with a size greater or equal to --freelist-big-blocks. This ensures that freeing big blocks (in
particular freeing blocks bigger than --freelist-vol) does not immediately lead to a re-circulation of all (or a lot
of) the small blocks in the free list. In other words, this option increases the likelihood to discover dangling pointers
for the "small" blocks, even when big blocks are freed.
Setting a value of 0 means that all the blocks are re-circulated in a FIFO order.
--workaround-gcc296-bugs=<yes|no> [default: no]
When enabled, Memcheck assumes that reads and writes some small distance below the stack pointer are due to bugs in
GCC 2.96, and does not report them. The "small distance" is 256 bytes by default. Note that GCC 2.96 is the default compiler
on some ancient Linux distributions (RedHat 7.X) and so you may need to use this option. Do not use it if you do not
have to, as it can cause real errors to be overlooked. A better alternative is to use a more recent GCC in which this
bug is fixed.
You may also need to use this option when working with GCC 3.X or 4.X on 32-bit PowerPC Linux. This is because
GCC generates code which occasionally accesses below the stack pointer, particularly for floating-point to/from integer
conversions. This is in violation of the 32-bit PowerPC ELF specification, which makes no provision for locations
below the stack pointer to be accessible.
where <set> specifies which leak kinds are matched by this suppression entry. <set> is specified in the same way
as with the option --show-leak-kinds, that is, one of the following:
a comma separated list of one or more of definite indirect possible reachable.
all to specify the complete set (all leak kinds).
none for the empty set.
If this optional extra line is not present, the suppression entry will match all leak kinds.
Be aware that leak suppressions that are created using --gen-suppressions will contain this optional extra line,
and therefore may match fewer leaks than you expect. You may want to remove the line before using the generated
suppressions.
The other Memcheck error kinds do not have extra lines.
If you give the -v option, Valgrind will print the list of used suppressions at the end of execution. For a leak
suppression, this output gives the number of different loss records that match the suppression, and the number of
bytes and blocks suppressed by the suppression. If the run contains multiple leak checks, the number of bytes and
blocks are reset to zero before each new leak check. Note that the number of different loss records is not reset to zero.
In the example below, in the last leak search, 7 blocks and 96 bytes have been suppressed by a suppression with the
name some_leak_suppression:
--21041-- used_suppression:
--21041-- used_suppression:
For ValueN and AddrN errors, the first line of the calling context is either the name of the function in which the error
occurred, or, failing that, the full path of the .so file or executable containing the error location. For Free errors, the
first line is the name of the function doing the freeing (eg, free, __builtin_vec_delete, etc). For Overlap
errors, the first line is the name of the function with the overlapping arguments (eg. memcpy, strcpy, etc).
The last part of any suppression specifies the rest of the calling context that needs to be matched.
Read this section if you want to know, in detail, exactly what and how Memcheck is checking.
Memcheck emits no complaints about this, since it merely copies uninitialised values from a[] into b[], and doesn't
use them in a way which could affect the behaviour of the program. However, if the loop is changed to:
for ( i = 0; i < 10; i++ ) {
j += a[i];
}
if ( j == 77 )
printf("hello there\n");
then Memcheck will complain, at the if, that the condition depends on uninitialised values. Note that it doesn't
complain at the j += a[i];, since at that point the undefinedness is not "observable". It's only when a decision
has to be made as to whether or not to do the printf -- an observable action of your program -- that Memcheck
complains.
Most low level operations, such as adds, cause Memcheck to use the V bits for the operands to calculate the V bits for
the result. Even if the result is partially or wholly undefined, it does not complain.
Checks on definedness only occur in three places: when a value is used to generate a memory address, when a control
flow decision needs to be made, and when a system call is detected, in which case Memcheck checks the definedness of
parameters as required.
If a check should detect undefinedness, an error message is issued. The resulting value is subsequently regarded as
well-defined. To do otherwise would give long chains of error messages. In other words, once Memcheck reports an
undefined value error, it tries to avoid reporting further errors derived from that same undefined value.
This sounds overcomplicated. Why not just check all reads from memory, and complain if an undefined value is
loaded into a CPU register? Well, that doesn't work well, because perfectly legitimate C programs routinely copy
uninitialised values around in memory, and we don't want endless complaints about that. Here's the canonical
example. Consider a struct like this:
struct S { int x; char c; };
struct S s1, s2;
s1.x = 42;
s1.c = 'z';
s2 = s1;
The question to ask is: how large is struct S, in bytes? An int is 4 bytes and a char one byte, so perhaps a
struct S occupies 5 bytes? Wrong. All non-toy compilers we know of will round the size of struct S up to
a whole number of words, in this case 8 bytes. Not doing this forces compilers to generate truly appalling code for
accessing arrays of struct Ss on some architectures.
So s1 occupies 8 bytes, yet only 5 of them will be initialised. For the assignment s2 = s1, GCC generates code
to copy all 8 bytes wholesale into s2 without regard for their meaning. If Memcheck simply checked values as they
came out of memory, it would yelp every time a structure assignment like this happened. So the more complicated
behaviour described above is necessary. This allows GCC to copy s1 into s2 any way it likes, and a warning will
only be emitted if the uninitialised values are later used.
When the stack pointer register (SP) moves up or down, A bits are set. The rule is that the area from SP up to
the base of the stack is marked as accessible, and below SP is inaccessible. (If that sounds illogical, bear in mind
that the stack grows down, not up, on almost all Unix systems, including GNU/Linux.) Tracking SP like this has
the useful side-effect that the section of stack used by a function for local variables etc is automatically marked
accessible on function entry and inaccessible on exit.
When doing system calls, A bits are changed appropriately. For example, mmap magically makes files appear in the
process address space, so the A bits must be updated if mmap succeeds.
Optionally, your program can tell Memcheck about such changes explicitly, using the client request mechanism
described above.
calloc: returned memory is marked both addressable and valid, since calloc clears the area to zero.
realloc: if the new size is larger than the old, the new section is addressable but invalid, as with malloc. If the
new size is smaller, the dropped-off section is marked as unaddressable. You may only pass to realloc a pointer
previously issued to you by malloc/calloc/realloc.
free/delete/delete[]: you may only pass to these functions a pointer previously issued to you by the
corresponding allocation function. Otherwise, Memcheck complains. If the pointer is indeed valid, Memcheck
marks the entire area it points at as unaddressable, and places the block in the freed-blocks-queue. The aim is
to defer as long as possible reallocation of this block. Until that happens, all attempts to access it will elicit an
invalid-address error, as you would hope.
(gdb) p &string10
$4 = (char (*)[10]) 0x8049e28
(gdb) monitor get_vbits 0x8049e28 10
ff00ff00 ff00ff00 ff00
(gdb)
The command get_vbits cannot be used with registers. To get the validity bits of a register, you must start Valgrind
with the option --vgdb-shadow-registers=yes. The validity bits of a register can be obtained by printing
the shadow 1 register corresponding to it. In the x86 example below, the register eax has all its bits undefined, while
the register ebx is fully defined.
(gdb) p /x $eaxs1
$9 = 0xffffffff
(gdb) p /x $ebxs1
$10 = 0x0
(gdb)
check_memory [addressable|defined] <addr> [<len>] checks that the range of <len> (default 1)
bytes at <addr> has the specified accessibility. It then outputs a description of <addr>. In the following example, a
detailed description is available because the option --read-var-info=yes was given at Valgrind startup:
should be shown, regardless of any increase or decrease. If increased or changed are specified, the
leak report entries will show the delta relative to the previous leak report.
The following example shows usage of the leak_check monitor command on the memcheck/tests/leak-cases.c
regression test. The first command outputs one entry having an increase in the leaked bytes. The second command
is the same as the first command, but uses the abbreviated forms accepted by GDB and the Valgrind gdbserver. It
only outputs the summary information, as there was no increase since the previous leak search.
Note that when using Valgrind's gdbserver, it is not necessary to rerun with --leak-check=full
--show-reachable=yes to see the reachable blocks. You can obtain the same information without rerunning
by using the GDB command monitor leak_check full reachable any (or, using abbreviation: mo
l f r a).
block_list <loss_record_nr> shows the list of blocks belonging to <loss_record_nr>.
A leak search merges the allocated blocks in loss records: a loss record re-groups all blocks having the same state
(for example, Definitely Lost) and the same allocation backtrace. Each loss record is identified in the leak search
result by a loss record number. The block_list command shows the loss record information followed by the
addresses and sizes of the blocks which have been merged in the loss record.
If a directly lost block causes some other blocks to be indirectly lost, the block_list command will also show these
indirectly lost blocks. The indirectly lost blocks will be indented according to the level of indirection between the
directly lost block and the indirectly lost block(s). Each indirectly lost block is followed by the reference of its loss
record.
The block_list command can be used on the results of a leak search as long as no block has been freed after this
leak search: as soon as the program frees a block, a new leak search is needed before block_list can be used again.
In the below example, the program leaks a tree structure by losing the pointer to the block A (top of the tree). So,
the block A is directly lost, causing an indirect loss of blocks B to G. The first block_list command shows the loss
record of A (a definitely lost block with address 0x4028028, size 16). The addresses and sizes of the indirectly lost
blocks due to block A are shown below the block A. The second command shows the details of one of the indirect
loss records output by the first command.
      A
     / \
    B   C
   / \ / \
  D  E F  G
(gdb) bt
#0 main () at leak-tree.c:69
(gdb) monitor leak_check full any
==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
==19552==    at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552==    by 0x80484D5: mk (leak-tree.c:28)
==19552==    by 0x80484FC: f (leak-tree.c:41)
==19552==    by 0x8048856: main (leak-tree.c:63)
==19552== LEAK SUMMARY:
==19552==    definitely lost: 16 bytes in 1 blocks
==19552==    indirectly lost: 96 bytes in 6 blocks
==19552==      possibly lost: 0 bytes in 0 blocks
==19552==    still reachable: 0 bytes in 0 blocks
==19552==         suppressed: 0 bytes in 0 blocks
==19552==
(gdb) monitor block_list 7
==19552== 112 (16 direct, 96 indirect) bytes in 1 blocks are definitely lost in loss record 7 of 7
==19552==    at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552==    by 0x80484D5: mk (leak-tree.c:28)
==19552==    by 0x80484FC: f (leak-tree.c:41)
==19552==    by 0x8048856: main (leak-tree.c:63)
==19552== 0x4028028[16]
==19552==   0x4028068[16] indirect loss record 1
==19552==     0x40280E8[16] indirect loss record 3
==19552==     0x4028128[16] indirect loss record 4
==19552==   0x40280A8[16] indirect loss record 2
==19552==     0x4028168[16] indirect loss record 5
==19552==     0x40281A8[16] indirect loss record 6
(gdb) mo b 2
==19552== 16 bytes in 1 blocks are indirectly lost in loss record 2 of 7
==19552==    at 0x40070B4: malloc (vg_replace_malloc.c:263)
==19552==    by 0x80484D5: mk (leak-tree.c:28)
==19552==    by 0x8048519: f (leak-tree.c:43)
==19552==    by 0x8048856: main (leak-tree.c:63)
==19552== 0x40280A8[16]
==19552==   0x4028168[16] indirect loss record 5
==19552==   0x40281A8[16] indirect loss record 6
(gdb)
who_points_at <addr> [<len>] shows all the locations where a pointer to addr is found. If len is equal
to 1, the command only shows the locations pointing exactly at addr (i.e. the "start pointers" to addr). If len is > 1,
"interior pointers" pointing at the len first bytes will also be shown.
The locations searched are the same as the locations used in the leak search. So, who_points_at can, among other
things, be used to show why the leak search can still reach a block, or to search for dangling pointers to a freed
block. Each location pointing at addr (or pointing inside addr if interior pointers are being searched for) will be
described.
In the below example, the pointers to the tree block A (see the example for the command block_list) are shown before
the tree was leaked. The descriptions are detailed as the option --read-var-info=yes was given at Valgrind
startup. The second call shows the pointers (start and interior pointers) to block G. The block G (0x40281A8) is
reachable via block C (0x40280a8) and register ECX of tid 1 (tid is the Valgrind thread id). It is "interior reachable"
via the register EBX.
When who_points_at finds an interior pointer, it will report the heuristic(s) with which this interior
pointer will be considered as reachable. Note that this is done independently of the value of the option
--leak-check-heuristics. In the below example, the loss record 6 indicates a possibly lost block.
who_points_at reports that there is an interior pointer pointing in this block, and that the block can be considered reachable using the heuristic multipleinheritance.
"pool"
(anchor address)
|
v
+--------+---+
| header | o |
+--------+-|-+
|
v
superblock
+------+---+--------------+---+------------------+
|
|rzB| allocation |rzB|
|
+------+---+--------------+---+------------------+
^
^
|
|
"addr"
"addr"+"size"
Note that the header and the superblock may be contiguous or discontiguous, and there may be multiple superblocks
associated with a single header; such variations are opaque to Memcheck. The API only requires that your allocation
scheme can present sensible values of "pool", "addr" and "size".
Typically, before making client requests related to mempools, a client program will have allocated
such a header and superblock for their mempool, and marked the superblock NOACCESS using the
VALGRIND_MAKE_MEM_NOACCESS client request.
When dealing with mempools, the goal is to maintain a particular invariant condition: that Memcheck believes the
unallocated portions of the pool's superblock (including redzones) are NOACCESS. To maintain this invariant, the
client program must ensure that the superblock starts out in that state; Memcheck cannot make it so, since Memcheck
never explicitly learns about the superblock of a pool, only the allocated chunks within the pool.
Once the header and superblock for a pool are established and properly marked, there are a number of client requests
programs can use to inform Memcheck about changes to the state of a mempool:
VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed): This request registers the address pool as the
anchor address for a memory pool. It also provides a size rzB, specifying how large the redzones placed around
chunks allocated from the pool should be. Finally, it provides an is_zeroed argument that specifies whether the
pool's chunks are zeroed (more precisely: defined) when allocated.
Upon completion of this request, no chunks are associated with the pool. The request simply tells Memcheck that
the pool exists, so that subsequent calls can refer to it as a pool.
VALGRIND_DESTROY_MEMPOOL(pool): This request tells Memcheck that a pool is being torn down. Memcheck
then removes all records of chunks associated with the pool, as well as its record of the pool's existence. While
destroying its records of a mempool, Memcheck resets the redzones of any live chunks in the pool to NOACCESS.
VALGRIND_MEMPOOL_ALLOC(pool, addr, size): This request informs Memcheck that a size-byte
chunk has been allocated at addr, and associates the chunk with the specified pool. If the pool was created
with nonzero rzB redzones, Memcheck will mark the rzB bytes before and after the chunk as NOACCESS. If
the pool was created with the is_zeroed argument set, Memcheck will mark the chunk as DEFINED, otherwise
Memcheck will mark the chunk as UNDEFINED.
VALGRIND_MEMPOOL_FREE(pool, addr): This request informs Memcheck that the chunk at addr should
no longer be considered allocated. Memcheck will mark the chunk associated with addr as NOACCESS, and
delete its record of the chunk's existence.
VALGRIND_MEMPOOL_TRIM(pool, addr, size): This request trims the chunks associated with pool.
The request only operates on chunks associated with pool. Trimming is formally defined as:
All chunks entirely inside the range addr..(addr+size-1) are preserved.
All chunks entirely outside the range addr..(addr+size-1) are discarded, as though
VALGRIND_MEMPOOL_FREE had been called on them.
All other chunks must intersect with the range addr..(addr+size-1); areas outside the intersection are
marked as NOACCESS, as though they had been independently freed with VALGRIND_MEMPOOL_FREE.
This is a somewhat rare request, but can be useful in implementing the type of mass-free operations common in
custom LIFO allocators.
VALGRIND_MOVE_MEMPOOL(poolA, poolB): This request informs Memcheck that the pool previously
anchored at address poolA has moved to anchor address poolB. This is a rare request, typically only needed
if you realloc the header of a mempool.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size): This request informs Memcheck that the
chunk previously allocated at address addrA within pool has been moved and/or resized, and should be changed
to cover the region addrB..(addrB+size-1). This is a rare request, typically only needed if you realloc a
superblock or wish to extend a chunk without changing its memory-status bits.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_EXISTS(pool): This request informs the caller whether or not Memcheck is currently
tracking a mempool at anchor address pool. It evaluates to 1 when there is a mempool associated with that address,
0 otherwise. This is a rare request, only useful in circumstances when client code might have lost track of the set of
active mempools.
If it says no, your mpicc has failed to compile and link a test MPI2 program.
If the configure test succeeds, continue in the usual way with make and make install. The install tree
should then contain libmpiwrap-<platform>.so.
Compile up a test MPI program (eg, MPI hello-world) and try this:
LD_PRELOAD=$prefix/lib/valgrind/libmpiwrap-<platform>.so
mpirun [args] $prefix/bin/valgrind ./hello
repeated for every process in the group. If you do not see these, there is a build/installation problem of some kind.
The MPI functions to be wrapped are assumed to be in an ELF shared object with soname matching libmpi.so*.
This is known to be correct at least for Open MPI and Quadrics MPI, and can easily be changed if required.
4.9.4. Functions
All MPI2 functions except MPI_Wtick, MPI_Wtime and MPI_Pcontrol have wrappers. The first two are not
wrapped because they return a double, which Valgrind's function-wrap mechanism cannot handle (but it could easily
be extended to do so). MPI_Pcontrol cannot be wrapped as it has variable arity: int MPI_Pcontrol(const
int level, ...)
Most functions are wrapped with a default wrapper which does nothing except complain or abort if it is called,
depending on settings in MPIWRAP_DEBUG listed above. The following functions have "real", do-something-useful
wrappers:
A few functions such as PMPI_Address are listed as HAS_NO_WRAPPER. They have no wrapper at all as there is
nothing worth checking, and giving a no-op wrapper would reduce performance for no reason.
Note that the wrapper library itself can generate large numbers of calls to the MPI implementation, especially when
walking complex types. The most commonly called functions are PMPI_Extent, PMPI_Type_get_envelope,
PMPI_Type_get_contents, and PMPI_Type_free.
4.9.5. Types
MPI-1.1 structured types are supported, and walked exactly. The currently supported combiners are
MPI_COMBINER_NAMED, MPI_COMBINER_CONTIGUOUS, MPI_COMBINER_VECTOR, MPI_COMBINER_HVECTOR,
MPI_COMBINER_INDEXED, MPI_COMBINER_HINDEXED and MPI_COMBINER_STRUCT.
This should cover all MPI-1.1 types. The mechanism (function walk_type) should extend easily to cover MPI2
combiners.
MPI defines some named structured types (MPI_FLOAT_INT, MPI_DOUBLE_INT, MPI_LONG_INT, MPI_2INT,
MPI_SHORT_INT, MPI_LONG_DOUBLE_INT) which are pairs of some basic type and a C int. Unfortunately the
MPI specification makes it impossible to look inside these types and see where the fields are. Therefore these
wrappers assume the types are laid out as struct { float val; int loc; } (for MPI_FLOAT_INT), etc,
and act accordingly. This appears to be correct at least for Open MPI 1.0.2 and for Quadrics MPI.
If strict is an option specified in MPIWRAP_DEBUG, the application will abort if an unhandled type is encountered.
Otherwise, the application will print a warning message and continue.
Some effort is made to mark/check memory ranges corresponding to arrays of values in a single pass. This is
important for performance since asking Valgrind to mark/check any range, no matter how small, carries quite a large
constant cost. This optimisation is applied to arrays of primitive types (double, float, int, long, long long,
short, char, and long double on platforms where sizeof(long double) == 8). For arrays of all other
types, the wrappers handle each element individually and so there can be a very large performance cost.
5.1. Overview
Cachegrind simulates how your program interacts with a machine's cache hierarchy and (optionally) branch predictor.
It simulates a machine with independent first-level instruction and data caches (I1 and D1), backed by a unified
second-level cache (L2). This exactly matches the configuration of many modern machines.
However, some modern machines have three or four levels of cache. For these machines (in the cases where
Cachegrind can auto-detect the cache configuration) Cachegrind simulates the first-level and last-level caches. The
reason for this choice is that the last-level cache has the most influence on runtime, as it masks accesses to main
memory. Furthermore, the L1 caches often have low associativity, so simulating them can detect cases where the
code interacts badly with this cache (eg. traversing a matrix column-wise with the row length being a power of 2).
Therefore, Cachegrind always refers to the I1, D1 and LL (last-level) caches.
Cachegrind gathers the following statistics (the abbreviation used for each statistic is given in parentheses):
I cache reads (Ir, which equals the number of instructions executed), I1 cache read misses (I1mr) and LL cache
instruction read misses (ILmr).
D cache reads (Dr, which equals the number of memory reads), D1 cache read misses (D1mr), and LL cache data
read misses (DLmr).
D cache writes (Dw, which equals the number of memory writes), D1 cache write misses (D1mw), and LL cache
data write misses (DLmw).
Conditional branches executed (Bc) and conditional branches mispredicted (Bcm).
Indirect branches executed (Bi) and indirect branches mispredicted (Bim).
Note that D1 total misses is given by D1mr + D1mw, and that LL total misses is given by ILmr + DLmr + DLmw.
These statistics are presented for the entire program and for each function in the program. You can also annotate each
line of source code in the program with the counts that were caused directly by it.
On a modern machine, an L1 miss will typically cost around 10 cycles, an LL miss can cost as much as 200 cycles,
and a mispredicted branch costs in the region of 10 to 30 cycles. Detailed cache and branch profiling can be very
useful for understanding how your program interacts with the machine and thus how to make it faster.
Also, since one instruction cache read is performed per instruction executed, you can find out how many instructions
are executed per line, which can be useful for traditional profiling.
First off, as for normal Valgrind use, you probably want to compile with debugging info (the -g option). But
by contrast with normal Valgrind use, you probably do want to turn optimisation on, since you should profile your
program as it will be normally run.
Then, you need to run Cachegrind itself to gather the profiling information, and then run cg_annotate to get a detailed
presentation of that information. As an optional intermediate step, you can use cg_merge to sum together the outputs
of multiple Cachegrind runs into a single file which you then use as the input for cg_annotate. Alternatively, you
can use cg_diff to difference the outputs of two Cachegrind runs into a single file which you then use as the input for
cg_annotate.
The program will execute (slowly). Upon completion, summary statistics that look like this will be printed:
==31751== I   refs:      27,742,716
==31751== I1  misses:           276
==31751== LLi misses:           275
==31751== I1  miss rate:        0.0%
==31751== LLi miss rate:        0.0%
==31751==
==31751== D   refs:      15,430,290  (10,955,517 rd + 4,474,773 wr)
==31751== D1  misses:        41,185  (    21,905 rd +    19,280 wr)
==31751== LLd misses:        23,085  (     3,987 rd +    19,098 wr)
==31751== D1  miss rate:        0.2% (       0.1%   +       0.4%  )
==31751== LLd miss rate:        0.1% (       0.0%   +       0.4%  )
==31751==
==31751== LL misses:         23,360  (     4,262 rd +    19,098 wr)
==31751== LL miss rate:         0.0% (       0.0%   +       0.4%  )
Cache accesses for instruction fetches are summarised first, giving the number of fetches made (this is the number of
instructions executed, which can be useful to know in its own right), the number of I1 misses, and the number of LL
instruction (LLi) misses.
Cache accesses for data follow. The information is similar to that of the instruction fetches, except that the values are
also shown split between reads and writes (note each row's rd and wr values add up to the row's total).
Combined instruction and data figures for the LL cache follow that. Note that the LL miss rate is computed relative
to the total number of memory accesses, not the number of L1 misses. That is, it is (ILmr + DLmr + DLmw) /
(Ir + Dr + Dw), not (ILmr + DLmr + DLmw) / (I1mr + D1mr + D1mw).
Branch prediction statistics are not collected by default. To do so, add the option --branch-sim=yes.
can be changed with the --cachegrind-out-file option. This file is human-readable, but is intended to be
interpreted by the accompanying program cg_annotate, described in the next section.
The default .<pid> suffix on the output file name serves two purposes. Firstly, it means you don't have to rename
old log files that you don't want to overwrite. Secondly, and more importantly, it allows correct profiling with the
--trace-children=yes option of programs that spawn child processes.
The output file can be big, many megabytes for large applications built with full debugging information.
Event sort order: the sort order in which functions are shown. For example, in this case the functions are sorted
from highest Ir counts to lowest. If two functions have identical Ir counts, they will then be sorted by I1mr
counts, and so on. This order can be adjusted with the --sort option.
Note that this dictates the order the functions appear. It is not the order in which the columns appear; that is dictated
by the "events shown" line (and can be changed with the --show option).
Threshold: cg_annotate by default omits functions with very low counts to avoid drowning you in information.
In this case, cg_annotate summarises the functions that account for 99% of the Ir counts; Ir is chosen as the
threshold event since it is the primary sort event. The threshold can be adjusted with the --threshold option.
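The selection rule can be sketched as follows (a Python illustration of the idea, not cg_annotate's actual implementation; the function names and counts are hypothetical):

```python
# Keep taking functions, highest primary-sort-event count first, until the
# functions taken so far account for the threshold percentage of the total.
def functions_to_show(counts_by_func, threshold_pct=99.0):
    total = sum(counts_by_func.values())
    shown, cumulative = [], 0
    for func, count in sorted(counts_by_func.items(), key=lambda kv: -kv[1]):
        if cumulative >= total * threshold_pct / 100.0:
            break   # everything below this point is omitted from the output
        shown.append(func)
        cumulative += count
    return shown

ir_counts = {"getc.c:_IO_getc": 8_821_482, "concord.c:get_word": 5_222_023,
             "concord.c:hash": 2_521_927, "rare.c:cold_path": 10}
print(functions_to_show(ir_counts))  # the cold function is omitted
```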
Chosen for annotation: names of files specified manually for annotation; in this case none.
Auto-annotation: whether auto-annotation was requested via the --auto=yes option. In this case no.
These are similar to the summary provided when Cachegrind finishes running.
Then comes function-by-function statistics:
--------------------------------------------------------------------------------
        Ir I1mr ILmr        Dr  D1mr  DLmr        Dw D1mw DLmw  file:function
--------------------------------------------------------------------------------
 8,821,482    5    5 2,242,702 1,621    73 1,794,230    0    0  getc.c:_IO_getc
 5,222,023    4    4 2,276,334    16    12   875,959    1    1  concord.c:get_word
 2,649,248    2    2 1,344,810 7,326 1,385         .    .    .  vg_main.c:strcmp
 2,521,927    2    2   591,215     0     0   179,398    0    0  concord.c:hash
 2,242,740    2    2 1,046,612   568    22   448,548    0    0  ctype.c:tolower
 1,496,937    4    4   630,874 9,000 1,400   279,388    0    0  concord.c:insert
   897,991   51   51   897,831    95    30        62    1    1  ???:???
   598,068    1    1   299,034     0     0   149,517    0    0  ../sysdeps/generic/lockfile.c:__flockfile
[The remaining rows did not survive extraction intact. They had progressively
smaller Ir counts (from 598,068 down to 85,440) and covered, among others:
../sysdeps/generic/lockfile.c:__funlockfile, vg_clientmalloc.c:malloc,
vg_clientmalloc.c:vg_trap_here_WRAPPER, concord.c:init_hash_table,
concord.c:create, ???:tolower@@GLIBC_2.0, ???:fgetc@@GLIBC_2.0,
concord.c:new_word_node and vg_clientmalloc.c:vg_bogus_epilogue.]
Each function is identified by a file_name:function_name pair. If a column contains only a dot it means the
function never performs that event (e.g. the third row shows that strcmp() contains no instructions that write to
memory). The name ??? is used if the file name and/or function name could not be determined from debugging
information. If most of the entries have the form ???:??? the program probably wasn't compiled with -g.
It is worth noting that functions will come both from the profiled program (e.g. concord.c) and from libraries (e.g.
getc.c).
        Ir I1mr ILmr     Dr D1mr DLmr     Dw D1mw DLmw

[The per-line counts of this annotated extract of concord.c did not survive
extraction. The annotated code included the declarations (FILE *file_ptr;
Word_Info *data; int line = 1, i;), the hash-table initialisation loop
(table[i] = NULL;), and the file handling:

    /* Open file, check it. */
    file_ptr = fopen(file_name, "r");
    if (!(file_ptr)) {
        fprintf(stderr, "Couldn't open %s.\n", file_name);
        exit(EXIT_FAILURE);
    }
    ...
    insert(data->word, data->line, table);
    free(data);
    fclose(file_ptr);
]
(Although column widths are automatically minimised, a wide terminal is clearly useful.)
Each source file is clearly marked (User-annotated source) as having been chosen manually for annotation.
If the file was found in one of the directories specified with the -I/--include option, the directory and file are both
given.
Each line is annotated with its event counts. Events not applicable for a line are represented by a dot. This is useful
for distinguishing between an event which cannot happen, and one which can but did not.
Sometimes only a small section of a source file is executed. To minimise uninteresting output, Cachegrind only shows
annotated lines and lines within a small distance of annotated lines. Gaps are marked with the line numbers so you
know which part of a file the shown code comes from, eg:
The amount of context to show around annotated lines is controlled by the --context option.
To get automatic annotation, use the --auto=yes option. cg_annotate will automatically annotate every source file
it can find that is mentioned in the function-by-function summary. Therefore, the files chosen for auto-annotation
are affected by the --sort and --threshold options. Each source file is clearly marked (Auto-annotated
source) as being chosen automatically. Any files that could not be found are mentioned at the end of the output, eg:
------------------------------------------------------------------
The following files chosen for auto-annotation could not be found:
------------------------------------------------------------------
  getc.c
  ctype.c
  ../sysdeps/generic/lockfile.c
This is quite common for library files, since libraries are usually compiled with debugging information, but the source
files are often not present on a system. If a file is chosen for annotation both manually and automatically, it is marked
as User-annotated source. Use the -I/--include option to tell Valgrind where to look for source files if
the filenames found from the debugging information aren't specific enough.
Beware that cg_annotate can take some time to digest large cachegrind.out.<pid> files, e.g. 30 seconds or
more. Also beware that auto-annotation can produce a lot of output if your program is large!
If a source file is more recent than the cachegrind.out.<pid> file, the annotations may be misleading. This is because the information in
cachegrind.out.<pid> is only recorded with line numbers, so if the line numbers change at all in the source
(e.g. lines added, deleted, swapped), any annotations will be incorrect.
Information may be recorded about line numbers past the end of a file. This can be caused by the above problem, i.e.
shortening the source file while using an old cachegrind.out.<pid> file. If this happens, the figures for the
bogus lines are printed anyway (clearly marked as bogus) in case they are important.
 Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw
  1    0    0  .    .    .  .    .    .      leal -12(%ebp),%eax
  1    0    0  .    .    .  1    0    0      movl %eax,84(%ebx)
  2    0    0  0    0    0  1    0    0      movl $1,-20(%ebp)
  .    .    .  .    .    .  .    .    .      .align 4,0x90
  1    0    0  .    .    .  .    .    .      movl $.LnrB,%eax
  1    0    0  .    .    .  1    0    0      movl %eax,-16(%ebp)
How can the third instruction be executed twice when the others are executed only once? As it turns out, it isn't.
Here's a dump of the executable, using objdump -d:
8048f25:       8d 45 f4                lea    0xfffffff4(%ebp),%eax
8048f28:       89 43 54                mov    %eax,0x54(%ebx)
8048f2b:       c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
8048f32:       89 f6                   mov    %esi,%esi
8048f34:       b8 08 8b 07 08          mov    $0x8078b08,%eax
8048f39:       89 45 f0                mov    %eax,0xfffffff0(%ebp)
Notice the extra mov %esi,%esi instruction. Where did this come from? The GNU assembler inserted it to
serve as the two bytes of padding needed to align the movl $.LnrB,%eax instruction on a four-byte boundary,
but pretended it didn't exist when adding debug information. Thus when Valgrind reads the debug info it thinks
that the movl $0x1,0xffffffec(%ebp) instruction covers the address range 0x8048f2b--0x8048f33 by itself,
and attributes the counts for the mov %esi,%esi to it.
Sometimes, the same filename might be represented with a relative name and with an absolute name in different parts
of the debug info, eg: /home/user/proj/proj.h and ../proj.h. In this case, if you use auto-annotation,
the file will be annotated twice with the counts split between the two.
Files with more than 65,535 lines cause difficulties for the Stabs-format debug info reader. This is because the
line number in the struct nlist defined in a.out.h under Linux is only a 16-bit value. Valgrind can handle
some files with more than 65,535 lines correctly by making some guesses to identify line number overflows. But
some cases are beyond it, in which case you'll get a warning message explaining that annotations for the file might
be incorrect.
If you are using GCC 3.1 or later, this is most likely irrelevant, since GCC switched to using the more modern
DWARF2 format by default at version 3.1. DWARF2 does not have any such limitations on line numbers.
If you compile some files with -g and some without, some events that take place in a file without debug info could
be attributed to the last line of a file with debug info (whichever one gets placed before the non-debug-info file in
the executable).
This list looks long, but these cases should be fairly rare.
It reads and checks file1, then reads and checks file2 and merges it into the running totals, then the same with
file3, etc. The final results are written to outputfile, or to standard output if no output file is specified.
Costs are summed on a per-function, per-line and per-instruction basis. Because of this, the order in which the input
files are specified does not matter, although you should take care to mention each file only once, since any file mentioned twice will
be added in twice.
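The summing can be sketched in Python (assumed data shapes and hypothetical counts; cg_merge itself is a standalone program, not this code):

```python
# Each profile maps a (file, function, line) key to a vector of event counts;
# merging simply adds the vectors element-wise.
def merge_profiles(*profiles):
    merged = {}
    for profile in profiles:
        for key, events in profile.items():
            acc = merged.setdefault(key, [0] * len(events))
            for i, n in enumerate(events):
                acc[i] += n
    return merged

run1 = {("concord.c", "hash", 5): [600, 2, 2]}
run2 = {("concord.c", "hash", 5): [400, 1, 0],
        ("concord.c", "insert", 9): [100, 0, 0]}
merged = merge_profiles(run1, run2)
print(merged[("concord.c", "hash", 5)])   # [1000, 3, 2]
# Passing the same file twice really does double its contribution:
print(merge_profiles(run1, run1)[("concord.c", "hash", 5)])  # [1200, 4, 4]
```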
cg_merge does not attempt to check that the input files come from runs of the same executable. It will happily merge
together profile files from completely unrelated programs. It does however check that the Events: lines of all the
inputs are identical, so as to ensure that the addition of costs makes sense. For example, it would be nonsensical for it
to add a number indicating D1 read references to a number from a different file indicating LL write misses.
A number of other syntax and sanity checks are done whilst reading the inputs. cg_merge will stop and attempt to
print a helpful error message if any of the input files fail these checks.
It reads and checks file1, then reads and checks file2, then computes the difference (effectively file1 - file2).
The final results are written to standard output.
Costs are summed on a per-function basis. Per-line costs are not summed, because doing so is too difficult. For
example, consider differencing two profiles, one from a single-file program A, and one from the same program A
where a single blank line was inserted at the top of the file. Every single per-line count has changed. In comparison,
the per-function counts have not changed. The per-function count differences are still very useful for determining
differences between programs. Note that because the result is the difference of two profiles, many of the counts will
be negative; this indicates that the counts for the relevant function are larger in the second version than those in the
first version.
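A sketch of the per-function differencing in Python (assumed data shapes, hypothetical counts):

```python
# Each profile maps "file:function" to a vector of event counts summed over
# the whole function; the diff is file1's vector minus file2's.
def diff_profiles(profile1, profile2):
    result = {}
    for key in sorted(set(profile1) | set(profile2)):
        a = profile1.get(key, [0, 0])   # missing functions count as zero
        b = profile2.get(key, [0, 0])
        result[key] = [x - y for x, y in zip(a, b)]
    return result

v1 = {"prog.c:f": [1000, 50], "prog.c:g": [400, 10]}
v2 = {"prog.c:f": [800, 50],  "prog.c:g": [450, 10]}
print(diff_profiles(v1, v2))
# f's first event count dropped by 200 between the runs; g's grew by 50.
```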
cg_diff does not attempt to check that the input files come from runs of the same executable. It will happily compute
the difference between profile files from completely unrelated programs. It does however check that the Events: lines of both
inputs are identical, so as to ensure that the differencing of costs makes sense. For example, it would be nonsensical for it
to subtract a number indicating D1 read references from a number from a different file indicating LL write misses.
A number of other syntax and sanity checks are done whilst reading the inputs. cg_diff will stop and attempt to print
a helpful error message if any of the input files fail these checks.
Sometimes you will want to compare Cachegrind profiles of two versions of a program that you have sitting side-by-side. For example, you might have version1/prog.c and version2/prog.c, where the second is slightly
different to the first. A straight comparison of the two will not be useful -- because functions are qualified with
filenames, a function f will be listed as version1/prog.c:f for the first version but version2/prog.c:f
for the second version.
When this happens, you can use the --mod-filename option. Its argument is a Perl search-and-replace expression
that will be applied to all the filenames in both Cachegrind output files. It can be used to remove minor differences in
filenames. For example, the option --mod-filename=s/version[0-9]/versionN/ will suffice for this
case.
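The effect of the expression can be mimicked in Python with re.sub (the file and function names here are the hypothetical ones from the example above):

```python
import re

# Apply the equivalent of --mod-filename=s/version[0-9]/versionN/ to the
# filename part of a "file:function" name.
def mod_filename(qualified_name, pattern, replacement):
    filename, func = qualified_name.split(":", 1)
    return re.sub(pattern, replacement, filename) + ":" + func

a = mod_filename("version1/prog.c:f", r"version[0-9]", "versionN")
b = mod_filename("version2/prog.c:f", r"version[0-9]", "versionN")
print(a, b)  # both are now versionN/prog.c:f, so the two versions line up
```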
Similarly, sometimes compilers auto-generate certain functions and give them randomized names. For example,
GCC sometimes auto-generates functions with names like T.1234, and the suffixes vary from build to build.
You can use the --mod-funcname option to remove small differences like these; it works in the same way as
--mod-filename.
--cachegrind-out-file=<file>
Write the profile data to file rather than to the default output file, cachegrind.out.<pid>. The %p and %q
format specifiers can be used to embed the process ID and/or the contents of an environment variable in the name, as
is the case for the core option --log-file.
-o outfile
Write the profile data to outfile rather than to standard output.
enum E { A, B, C };
enum E e;
int i;
...
switch (e)
{
case A: i += 1; break;
case B: i += 2; break;
case C: i += 3; break;
}
This is obviously a contrived example, but the basic principle applies in a wide variety of situations.
In short, Cachegrind can tell you where some of the bottlenecks in your code are, but it can't tell you how to fix them.
You have to work that out for yourself. But at least you have the information!
The cache configuration simulated (cache size, associativity and line size) is determined automatically using the x86
CPUID instruction. If you have a machine that (a) doesn't support the CPUID instruction, or (b) supports it in an early
incarnation that doesn't give any cache information, then Cachegrind will fall back to using a default configuration
(that of a model 3/4 Athlon). Cachegrind will tell you if this happens. You can manually specify one, two or all
three levels (I1/D1/LL) of the cache from the command line using the --I1, --D1 and --LL options. For cache
parameters to be valid for simulation, the number of sets (with associativity being the number of cache lines in each
set) has to be a power of two.
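The validity rule can be sketched as follows (a Python illustration; the derivation of the set count from size, associativity and line size is the standard one, assumed here):

```python
# A cache of `size` bytes with `assoc`-way sets of `line_size`-byte lines has
# size / (assoc * line_size) sets; the simulation requires a power-of-2 count.
def valid_cache_params(size, assoc, line_size):
    sets, remainder = divmod(size, assoc * line_size)
    return remainder == 0 and sets > 0 and (sets & (sets - 1)) == 0

print(valid_cache_params(65536, 2, 64))   # 512 sets: OK
print(valid_cache_params(49152, 3, 64))   # 256 sets, 3-way: also OK
print(valid_cache_params(60000, 2, 64))   # no whole power-of-2 set count
```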
On PowerPC platforms Cachegrind cannot automatically determine the cache configuration, so you will need to specify
it with the --I1, --D1 and --LL options.
Other noteworthy behaviour:
References that straddle two cache lines are treated as follows:
If both blocks hit --> counted as one hit
If one block hits, the other misses --> counted as one miss.
If both blocks miss --> counted as one miss (not two)
Instructions that modify a memory location (e.g. inc and dec) are counted as doing just a read, i.e. a single data
reference. This may seem strange, but since the write can never cause a miss (the read guarantees the block is in
the cache) it's not very interesting.
Thus it measures not the number of times the data cache is accessed, but the number of times a data cache miss
could occur.
If you are interested in simulating a cache with different properties, it is not particularly hard to write your own cache
simulator, or to modify the existing ones in cg_sim.c. We'd be interested to hear from anyone who does.
See Hennessy and Patterson's classic text "Computer Architecture: A Quantitative Approach", 4th edition (2007),
Section 2.3 (pages 80-89) for background on modern branch predictors.
5.8.3. Accuracy
Valgrind's cache profiling has a number of shortcomings:
It doesn't account for kernel activity -- the effect of system calls on the cache and branch predictor contents is
ignored.
It doesn't account for other process activity. This is probably desirable when considering a single program.
It doesn't account for virtual-to-physical address mappings. Hence the simulation is not a true representation of
what's happening in the cache. Most caches and branch predictors are physically indexed, but Cachegrind simulates
caches using virtual addresses.
It doesn't account for cache misses not visible at the instruction level, e.g. those arising from TLB misses, or
speculative execution.
Valgrind will schedule threads differently from how they would be when running natively. This could warp the
results for threaded programs.
The x86/amd64 instructions bts, btr and btc will incorrectly be counted as doing a data read if both the
arguments are registers, eg:
The file format is fairly straightforward, basically giving the cost centre for every line, grouped by files and functions.
It's also totally generic and self-describing, in the sense that it can be used for any events that can be counted on a
line-by-line basis, not just cache and branch predictor events. For example, earlier versions of Cachegrind didn't have
a branch predictor simulation. When this was added, the file format didn't need to change at all. So the format (and
consequently, cg_annotate) could be used by other tools.
The file format:
file         ::= desc_line* cmd_line events_line data_line+ summary_line
desc_line    ::= "desc:" ws? non_nl_string
cmd_line     ::= "cmd:" ws? cmd
events_line  ::= "events:" ws? (event ws)+
data_line    ::= file_line | fn_line | count_line
file_line    ::= "fl=" filename
fn_line      ::= "fn=" fn_name
count_line   ::= line_num ws? (count ws)+
summary_line ::= "summary:" ws? (count ws)+
count        ::= num | "."
Where:
non_nl_string is any string not containing a newline.
cmd is a string holding the command line of the profiled program.
event is a string containing no whitespace.
filename and fn_name are strings.
num and line_num are decimal numbers.
ws is whitespace.
The contents of the "desc:" lines are printed out at the top of the summary. This is a generic way of providing
simulation specific information, e.g. for giving the cache configuration for cache simulation.
More than one line of info can be presented for each file/fn/line number. In such cases, the counts for the named events
will be accumulated.
Counts can be "." to represent zero. This makes the files easier for humans to read.
The number of counts in each line and the summary_line should not exceed the number of events in the
events_line. If the number in each line is less, cg_annotate treats those missing as though they were a "."
entry. This saves space.
A file_line changes the current file name. A fn_line changes the current function name. A count_line
contains counts that pertain to the current filename/fn_name. A file_line and a fn_line must appear
before any count_lines to give the context of the first count_lines.
Each file_line will normally be immediately followed by a fn_line. But it doesn't have to be.
The summary line is redundant, because it just holds the total counts for each event. But it serves as a useful sanity
check of the data; if the totals for each event don't match the summary line, something has gone wrong.
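To illustrate the format and the summary-line sanity check, here is a minimal hypothetical profile and a Python sketch that re-derives the totals (not Cachegrind's own code):

```python
# A minimal, made-up profile in the format described above.
PROFILE = """\
desc: I1 cache: 65536 B, 64 B, 2-way associative
cmd: ./concord concord.in
events: Ir I1mr ILmr
fl=concord.c
fn=hash
5 600 2 2
6 1200 0 1
fn=insert
9 300
summary: 2100 2 3
"""

def recompute_summary(text):
    n_events, totals, summary = 0, [], None
    for line in text.splitlines():
        if line.startswith("events:"):
            n_events = len(line.split()) - 1
            totals = [0] * n_events
        elif line.startswith("summary:"):
            summary = [int(x) for x in line.split()[1:]]
        elif line[:1].isdigit():                      # a count_line
            counts = [0 if x == "." else int(x) for x in line.split()[1:]]
            counts += [0] * (n_events - len(counts))  # missing counts are "."
            totals = [t + c for t, c in zip(totals, counts)]
    return totals, summary

totals, summary = recompute_summary(PROFILE)
print(totals, summary)   # the recomputed totals match the summary line
```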
6.1. Overview
Callgrind is a profiling tool that records the call history among functions in a program's run as a call-graph. By default,
the collected data consists of the number of instructions executed, their relationship to source lines, the caller/callee
relationship between functions, and the numbers of such calls. Optionally, cache simulation and/or branch prediction
(similar to Cachegrind) can produce further information about the runtime behavior of an application.
The profile data is written out to a file at program termination. For presentation of the data, and interactive control of
the profiling, two command line tools are provided:
callgrind_annotate
This command reads in the profile data, and prints a sorted list of functions, optionally with source annotation.
For graphical visualization of the data, try KCachegrind, which is a KDE/Qt based GUI that makes it easy to navigate
the large amount of data that Callgrind produces.
callgrind_control
This command enables you to interactively observe and control the status of a program currently running under
Callgrind's control, without stopping the program. You can get statistics information as well as the current stack
trace, and you can request zeroing of counters or dumping of profile data.
6.1.1. Functionality
Cachegrind collects flat profile data: event counts (data reads, cache misses, etc.) are attributed directly to the function
they occurred in. This cost attribution mechanism is called self or exclusive attribution.
Callgrind extends this functionality by propagating costs across function call boundaries. If function foo calls bar,
the costs from bar are added into foo's costs. When applied to the program as a whole, this builds up a picture of
so-called inclusive costs, that is, where the cost of each function includes the costs of all functions it called, directly or
indirectly.
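The propagation can be sketched in Python (an illustration with a made-up, cycle-free call tree; Callgrind's real implementation must also handle recursion and cycles):

```python
# Inclusive cost of a function = its self (exclusive) cost plus the inclusive
# costs of every function it calls, directly or indirectly.
# Assumes an acyclic call tree -- recursion/cycles need extra handling.
def inclusive_cost(func, self_cost, callees):
    return self_cost[func] + sum(
        inclusive_cost(callee, self_cost, callees)
        for callee in callees.get(func, ()))

self_cost = {"main": 10, "foo": 100, "bar": 1000}
callees = {"main": ["foo"], "foo": ["bar"]}
print(inclusive_cost("main", self_cost, callees))  # 1110: nearly everything
print(inclusive_cost("bar", self_cost, callees))   # 1000: a leaf's inclusive
                                                   # cost equals its self cost
```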
As an example, the inclusive cost of main should be almost 100 percent of the total program cost. Because of costs
arising before main is run, such as initialization of the run time linker and construction of global C++ objects, the
inclusive cost of main is not exactly 100 percent of the total program cost.
Together with the call graph, this allows you to find the specific call chains starting from main in which the majority
of the program's costs occur. Caller/callee cost attribution is also useful for profiling functions called from multiple
call sites, and where optimization opportunities depend on changing code in the callers, in particular by reducing the
call count.
Callgrind's cache simulation is based on that of Cachegrind. Read the documentation for Cachegrind: a cache and
branch-prediction profiler first. The material below describes the features supported in addition to Cachegrind's
features.
Callgrind's ability to detect function calls and returns depends on the instruction set of the platform it is run on. It
works best on x86 and amd64, and unfortunately currently does not work so well on PowerPC, ARM, Thumb or MIPS
code. This is because there are no explicit call or return instructions in these instruction sets, so Callgrind has to rely
on heuristics to detect calls and returns.
This will print out the current backtrace. To annotate the backtrace with event counts, run
callgrind_control -e -b
After program termination, a profile data file named callgrind.out.<pid> is generated, where pid is the process
ID of the program being profiled. The data file contains information about the calls made in the program among the
functions executed, together with Instruction Read (Ir) event counts.
To generate a function-by-function summary from the profile data file, use
callgrind_annotate [options] callgrind.out.<pid>
This summary is similar to the output you get from a Cachegrind run with cg_annotate: the list of functions is ordered
by the exclusive cost of the functions, which are also the ones that are shown. Important for the additional features of Callgrind
are the following two options:
--inclusive=yes: Instead of using exclusive cost of functions as sorting order, use and show inclusive cost.
--tree=both: Interleave into the top-level list of functions information on the callers and the callees of each
function. In these lines, which represent executed calls, the cost gives the number of events spent in the call.
Indented, above each function, there is the list of callers, and below, the list of callees. The sum of events in calls to
a given function (caller lines), as well as the sum of events in calls from the function (callee lines) together with the
self cost, gives the total inclusive cost of the function.
Use --auto=yes to get annotated source code for all relevant functions for which the source can be found. In
addition to source annotation as produced by cg_annotate, you will see the annotated call sites with call counts.
For all other options, consult the (Cachegrind) documentation for cg_annotate.
For a better call-graph browsing experience, it is highly recommended to use KCachegrind. If your code has a significant
fraction of its cost in cycles (sets of functions calling each other in a recursive manner), you have to use KCachegrind,
as callgrind_annotate currently does not do any cycle detection, which is important to get correct results in
this case.
If you are additionally interested in measuring the cache behavior of your program, use Callgrind with the option
--cache-sim=yes. For branch prediction simulation, use --branch-sim=yes. Expect a further slowdown
of approximately a factor of 2.
If the program section you want to profile is somewhere in the middle of the run, it is beneficial to fast forward to
this section without any profiling, and then enable profiling. This is achieved by using the command line option
--instr-atstart=no and running, in a shell: callgrind_control -i on just before the interesting
code section is executed. To exactly specify the code position where profiling should start, use the client request
CALLGRIND_START_INSTRUMENTATION.
If you want to be able to see assembly code level annotation, specify --dump-instr=yes. This will produce
profile data at instruction granularity. Note that the resulting profile data can only be viewed with KCachegrind. For
assembly annotation, it also is interesting to see more details of the control flow inside of functions, i.e. (conditional)
jumps. This will be collected by further specifying --collect-jumps=yes.
where pid is the PID of the running program, part is a number incremented on each dump (".part" is skipped for the
dump at program termination), and threadID is a thread identification ("-threadID" is only used if you request dumps
of individual threads with --separate-threads=yes).
There are different ways to generate multiple profile dumps while a program is running under Callgrinds supervision.
Nevertheless, all methods trigger the same action, which is "dump all profile information since the last dump or
program start, and zero cost counters afterwards". To allow for zeroing cost counters without dumping, there is a
second action "zero all cost counters now". The different methods are:
Dump on program termination. This method is the standard way and doesn't need any special action on your
part.
and off by specifying "off" instead of "on". Furthermore, instrumentation state can be programmatically changed with
the macros CALLGRIND_START_INSTRUMENTATION; and CALLGRIND_STOP_INSTRUMENTATION;.
In addition to enabling instrumentation, you must also enable event collection for the parts of your program you are
interested in. By default, event collection is enabled everywhere. You can limit collection to a specific function by
using --toggle-collect=function. This will toggle the collection state on entering and leaving the specified
functions. When this option is in effect, the default collection state at program start is "off". Only events happening
while running inside of the given function will be collected. Recursive calls of the given function do not trigger any
action.
It is important to note that with instrumentation disabled, the cache simulator cannot see any memory access events,
and thus, any simulated cache state will be frozen and wrong without instrumentation. Therefore, to get useful cache
events (hits/misses) after switching on instrumentation, the cache first must warm up, probably leading to many cold
misses which would not have happened in reality. If you do not want to see these, start event collection a few million
instructions after you have enabled instrumentation.
quite capable of avoiding cycles, it has to be used carefully so as not to cause a symbol explosion. The latter imposes large
memory requirements on Callgrind, with possible out-of-memory conditions, and big profile data files.
A further possibility to avoid cycles in Callgrinds profile data output is to simply leave out given functions in the
call graph. Of course, this also skips any call information from and to an ignored function, and thus can break
a cycle. Candidates for this typically are dispatcher functions in event driven code. The option to ignore calls
to a function is --fn-skip=function. Aside from possibly breaking cycles, this is used in Callgrind to skip
trampoline functions in the PLT sections for calls to functions in shared libraries. You can see the difference if you
profile with --skip-plt=no. If a call is ignored, its cost events will be propagated to the enclosing function.
If you have a recursive function, you can distinguish the first 10 recursion levels by specifying
--separate-recs10=function. Or for all functions with --separate-recs=10, but this will give
you much bigger profile data files. In the profile data, you will see the recursion levels of "func" as the different
functions with names "func", "func'2", "func'3" and so on.
If you have call chains "A > B > C" and "A > C > B" in your program, you usually get a "false" cycle "B <> C". Use
--separate-callers2=B --separate-callers2=C, and functions "B" and "C" will be treated as different
functions depending on the direct caller. Using the apostrophe for appending this "context" to the function name, you
get "A > B'A > C'B" and "A > C'A > B'C", and there will be no cycle. Use --separate-callers=2 to get a
2-caller dependency for all functions. Note that doing this will increase the size of profile data files.
--skip-direct-rec=<no|yes> [default: yes]
Ignore direct recursions.
--fn-skip=<function>
Ignore calls to/from a given function. E.g. if you have a call chain A > B > C, and you specify function B to be
ignored, you will only see A > C.
This is very convenient to skip functions handling callback behaviour. For example, with the signal/slot mechanism
in the Qt graphics library, you only want to see the function emitting a signal to call the slots connected to that signal.
First, determine the real call chain to see the functions needed to be skipped, then use this option.
instrumentation [on|off] requests to set (if parameter on/off is given) or get the current instrumentation
state.
status requests to print out some status information.
CALLGRIND_START_INSTRUMENTATION
Start full Callgrind instrumentation if not already enabled. When cache simulation is done, this will flush the simulated
cache and lead to an artificial cache warmup phase afterwards, with cache misses which would not have happened in
reality. See also option --instr-atstart.
CALLGRIND_STOP_INSTRUMENTATION
Stop full Callgrind instrumentation if not already disabled. This flushes Valgrind's translation cache, and does no
additional instrumentation afterwards: it effectively will run at the same speed as Nulgrind, i.e. at minimal slowdown.
Use this to speed up the Callgrind run for uninteresting code parts. Use CALLGRIND_START_INSTRUMENTATION
to enable instrumentation again. See also option --instr-atstart.
--tree=<none|caller|calling|both> [default: none]
Print for each function their callers, the called functions or both.
-I, --include=<dir>
Add dir to the list of directories to search for source files.
--instr=<on|off>
Switch instrumentation mode on or off. If a Callgrind run has instrumentation disabled, no simulation is done and no
events are counted. This is useful to skip uninteresting program parts, as there is much less slowdown (same as with
the Valgrind tool "none"). See also the Callgrind option --instr-atstart.
--vgdb-prefix=<prefix>
Specify the vgdb prefix to use by callgrind_control. callgrind_control internally uses vgdb to find and control the
active Callgrind runs. If the --vgdb-prefix option was used for launching valgrind, then the same option must be
given to callgrind_control.
7.1. Overview
Helgrind is a Valgrind tool for detecting synchronisation errors in C, C++ and Fortran programs that use the POSIX
pthreads threading primitives.
The main abstractions in POSIX pthreads are: a set of threads sharing a common address space, thread creation,
thread joining, thread exit, mutexes (locks), condition variables (inter-thread event notifications), reader-writer locks,
spinlocks, semaphores and barriers.
Helgrind can detect three classes of errors, which are discussed in detail in the next three sections:
1. Misuses of the POSIX pthreads API.
2. Potential deadlocks arising from lock ordering problems.
3. Data races -- accessing memory without adequate locking or synchronisation.
Problems like these often result in unreproducible, timing-dependent crashes, deadlocks and other misbehaviour, and
can be difficult to find by other means.
Helgrind is aware of all the pthread abstractions and tracks their effects as accurately as it can. On x86 and amd64
platforms, it understands and partially handles implicit locking arising from the use of the LOCK instruction prefix.
On PowerPC/POWER and ARM platforms, it partially handles implicit locking arising from load-linked and store-conditional instruction pairs.
Helgrind works best when your application uses only the POSIX pthreads API. However, if you want to use
custom threading primitives, you can describe their behaviour to Helgrind using the ANNOTATE_* macros defined in
helgrind.h.
Following those is a section containing hints and tips on how to get the best out of Helgrind.
Then there is a summary of command-line options.
Finally, there is a brief summary of areas in which Helgrind could be improved.
Reported errors always contain a primary stack trace indicating where the error was detected. They may also contain
auxiliary stack traces giving additional information. In particular, most errors relating to mutexes will also tell you
where that mutex first came to Helgrind's attention (the "was first observed at" part), so you have a chance
of figuring out which mutex it is referring to. For example:
Thread #1 unlocked a not-locked lock at 0x7FEFFFA90
at 0x4C2408D: pthread_mutex_unlock (hg_intercepts.c:492)
by 0x40073A: nearly_main (tc09_bad_unlock.c:27)
by 0x40079B: main (tc09_bad_unlock.c:50)
Lock at 0x7FEFFFA90 was first observed
at 0x4C25D01: pthread_mutex_init (hg_intercepts.c:326)
by 0x40071F: nearly_main (tc09_bad_unlock.c:23)
by 0x40079B: main (tc09_bad_unlock.c:50)
Helgrind has a way of summarising thread identities, as you see here with the text "Thread #1". This is so that it
can speak about threads and sets of threads without overwhelming you with details. See below for more information
on interpreting error messages.
In this section, and in general, to "acquire" a lock simply means to lock that lock, and to "release" a lock means to
unlock it.
Helgrind monitors the order in which threads acquire locks. This allows it to detect potential deadlocks which could
arise from the formation of cycles of locks. Detecting such inconsistencies is useful because, whilst actual deadlocks
are fairly obvious, potential deadlocks may never be discovered during testing and could later lead to hard-to-diagnose
in-service failures.
The simplest example of such a problem is as follows.
Imagine some shared resource R, which, for whatever reason, is guarded by two locks, L1 and L2, which must both
be held when R is accessed.
Suppose a thread acquires L1, then L2, and proceeds to access R. The implication of this is that all threads in the
program must acquire the two locks in the order first L1 then L2. Not doing so risks deadlock.
The deadlock could happen if two threads -- call them T1 and T2 -- both want to access R. Suppose T1 acquires
L1 first, and T2 acquires L2 first. Then T1 tries to acquire L2, and T2 tries to acquire L1, but those locks are both
already held. So T1 and T2 become deadlocked.
Helgrind builds a directed graph indicating the order in which locks have been acquired in the past. When a thread
acquires a new lock, the graph is updated, and then checked to see if it now contains a cycle. The presence of a cycle
indicates a potential deadlock involving the locks in the cycle.
In general, Helgrind will choose two locks involved in the cycle and show you how their acquisition ordering has
become inconsistent. It does this by showing the program points that first defined the ordering, and the program points
which later violated it. Here is a simple example involving just two locks:
Thread #1: lock order "0x7FF0006D0 before 0x7FF0006A0" violated
Observed (incorrect) order is: acquisition of lock at 0x7FF0006A0
at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
by 0x400825: main (tc13_laog1.c:23)
followed by a later acquisition of lock at 0x7FF0006D0
at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
by 0x400853: main (tc13_laog1.c:24)
Required order was established by acquisition of lock at 0x7FF0006D0
at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
by 0x40076D: main (tc13_laog1.c:17)
followed by a later acquisition of lock at 0x7FF0006A0
at 0x4C2BC62: pthread_mutex_lock (hg_intercepts.c:494)
by 0x40079B: main (tc13_laog1.c:18)
When there are more than two locks in the cycle, the error is equally serious. However, at present Helgrind does
not show the locks involved, sometimes because that information is not available, but also so as to avoid flooding you
with information. For example, a naive implementation of the famous Dining Philosophers problem involves a cycle
of five locks (see helgrind/tests/tc14_laog_dinphils.c). In this case Helgrind has detected that all 5
philosophers could simultaneously pick up their left fork and then deadlock whilst waiting to pick up their right forks.
The problem is there is nothing to stop var being updated simultaneously by both threads. A correct program would
protect var with a lock of type pthread_mutex_t, which is acquired before each access and released afterwards.
Helgrind's output for this program is:
Thread #1 is the program's root thread
Thread #2 was created
at 0x511C08E: clone (in /lib64/libc-2.8.so)
by 0x4E333A4: do_clone (in /lib64/libpthread-2.8.so)
by 0x4E33A30: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.8.so)
by 0x4C299D4: pthread_create@* (hg_intercepts.c:214)
by 0x400605: main (simple_race.c:12)
Possible data race during read of size 4 at 0x601038 by thread #1
Locks held: none
at 0x400606: main (simple_race.c:13)
This conflicts with a previous write of size 4 by thread #2
Locks held: none
at 0x4005DC: child_fn (simple_race.c:6)
by 0x4C29AFF: mythread_wrapper (hg_intercepts.c:194)
by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
by 0x511C0CC: clone (in /lib64/libc-2.8.so)
Location 0x601038 is 0 bytes inside global var "var"
declared at simple_race.c:3
This is quite a lot of detail for an apparently simple error. The last clause is the main error message. It says there is a
race as a result of a read of size 4 (bytes), at 0x601038, which is the address of var, happening in function main at
line 13 in the program.
Two important parts of the message are:
Helgrind shows two stack traces for the error, not one. By definition, a race involves two different threads accessing
the same location in such a way that the result depends on the relative speeds of the two threads.
The first stack trace follows the text "Possible data race during read of size 4 ..." and the
second trace follows the text "This conflicts with a previous write of size 4 ...". Helgrind is usually able to show both accesses involved in a race. At least one of these will be a write (since two
concurrent, unsynchronised reads are harmless), and they will of course be from different threads.
By examining your program at the two locations, you should be able to get at least some idea of what the root cause
of the problem is. For each location, Helgrind shows the set of locks held at the time of the access. This often
makes it clear which thread, if any, failed to take a required lock. In this example neither thread holds a lock during
the access.
For races which occur on global or stack variables, Helgrind tries to identify the name and defining point
of the variable. Hence the text "Location 0x601038 is 0 bytes inside global var "var"
declared at simple_race.c:3".
Showing names of stack and global variables carries no run-time overhead once Helgrind has your program up
and running. However, it does require Helgrind to spend considerable extra time and memory at program startup
to read the relevant debug info. Hence this facility is disabled by default. To enable it, you need to give the
--read-var-info=yes option to Helgrind.
The following section explains Helgrind's race detection algorithm in more detail.
Parent thread:                    Child thread:

int var;

// create child thread
pthread_create(...)

var = 20;                         var = 10;
                                  exit
// wait for child
pthread_join(...)
printf("%d\n", var);
The parent thread creates a child. Both then write different values to some variable var, and the parent then waits
for the child to exit.
What is the value of var at the end of the program, 10 or 20? We don't know. The program is considered buggy (it
has a race) because the final value of var depends on the relative rates of progress of the parent and child threads. If
the parent is fast and the child is slow, then the child's assignment may happen later, so the final value will be 10; and
vice versa if the child is faster than the parent.
The relative rates of progress of parent vs child are not something the programmer can control, and will often change
from run to run. They depend on factors such as the load on the machine, what else is running, the kernel's scheduling
strategy, and many other factors.
The obvious fix is to use a lock to protect var. It is however instructive to consider a somewhat more abstract
solution, which is to send a message from one thread to the other:
Parent thread:                    Child thread:

int var;

// create child thread
pthread_create(...)

var = 20;
// send message to child          // wait for message to arrive
                                  var = 10;
                                  exit
// wait for child
pthread_join(...)
printf("%d\n", var);
Now the program reliably prints "10", regardless of the speed of the threads. Why? Because the child's assignment
cannot happen until after it receives the message. And the message is not sent until after the parent's assignment is
done.
The message transmission creates a "happens-before" dependency between the two assignments: var = 20; must
now happen-before var = 10;. And so there is no longer a race on var.
Note that it's not significant that the parent sends a message to the child. Sending a message from the child (after
its assignment) to the parent (before its assignment) would also fix the problem, causing the program to reliably print
"20".
Helgrind's algorithm is (conceptually) very simple. It monitors all accesses to memory locations. If a location -- in
this example, var -- is accessed by two different threads, Helgrind checks to see if the two accesses are ordered by the
happens-before relation. If so, that's fine; if not, it reports a race.
It is important to understand that the happens-before relation creates only a partial ordering, not a total ordering. An
example of a total ordering is comparison of numbers: for any two numbers x and y, either x is less than, equal to,
or greater than y. A partial ordering is like a total ordering, but it can also express the concept that two elements are
neither equal, less nor greater, but merely unordered with respect to each other.
In the fixed example above, we say that var = 20; "happens-before" var = 10;. But in the original version,
they are unordered: we cannot say that either happens-before the other.
What does it mean to say that two accesses from different threads are ordered by the happens-before relation? It
means that there is some chain of inter-thread synchronisation operations which cause those accesses to happen in a
particular order, irrespective of the actual rates of progress of the individual threads. This is a required property for a
reliable threaded program, which is why Helgrind checks for it.
The happens-before relations created by standard threading primitives are as follows:
When a mutex is unlocked by thread T1 and later (or immediately) locked by thread T2, then the memory accesses
in T1 prior to the unlock must happen-before those in T2 after it acquires the lock.
The same idea applies to reader-writer locks, although with some complication so as to allow correct handling of
reads vs writes.
When a condition variable (CV) is signalled on by thread T1 and some other thread T2 is thereby released from a
wait on the same CV, then the memory accesses in T1 prior to the signalling must happen-before those in T2 after
it returns from the wait. If no thread was waiting on the CV then there is no effect.
If instead T1 broadcasts on a CV, then all of the waiting threads, rather than just one of them, acquire a
happens-before dependency on the broadcasting thread at the point it did the broadcast.
A thread T2 that continues after completing sem_wait on a semaphore that thread T1 posts on, acquires a
happens-before dependence on the posting thread, a bit like the dependencies caused by mutex unlock-lock pairs.
However, since a semaphore can be posted on many times, it is unspecified from which of the post calls the wait
call gets its happens-before dependency.
For a group of threads T1 .. Tn which arrive at a barrier and then move on, each thread after the call has a
happens-after dependency from all threads before the barrier.
A newly-created child thread acquires an initial happens-after dependency on the point where its parent created it.
That is, all memory accesses performed by the parent prior to creating the child are regarded as happening-before
all the accesses of the child.
Similarly, when an exiting thread is reaped via a call to pthread_join, once the call returns, the reaping thread
acquires a happens-after dependency relative to all memory accesses made by the exiting thread.
In summary: Helgrind intercepts the above listed events, and builds a directed acyclic graph representing the collective
happens-before dependencies. It also monitors all memory accesses.
If a location is accessed by two different threads, but Helgrind cannot find any path through the happens-before graph
from one access to the other, then it reports a race.
There are a couple of caveats:
Helgrind doesn't check for a race in the case where both accesses are reads. That would be silly, since concurrent
reads are harmless.
Two accesses are considered to be ordered by the happens-before dependency even through arbitrarily long chains of
synchronisation events. For example, if T1 accesses some location L, and then pthread_cond_signals T2,
which later pthread_cond_signals T3, which then accesses L, then a suitable happens-before dependency
exists between the first and second accesses, even though it involves two different inter-thread synchronisation
events.
Helgrind first announces the creation points of any threads referenced in the error message. This is so it can speak
concisely about threads without repeatedly printing their creation point call stacks. Each thread is only ever announced
once, the first time it appears in any Helgrind error message.
The main error message begins at the text "Possible data race during read". At the start is information
you would expect to see -- address and size of the racing access, whether a read or a write, and the call stack at the
point it was detected.
A second call stack is presented starting at the text "This conflicts with a previous write". This
shows a previous access which also accessed the stated address, and which is believed to be racing against the access
in the first call stack. Note that this second call stack is limited to a maximum of 8 entries to limit the memory usage.
Finally, Helgrind may attempt to give a description of the raced-on address in source level terms. In this example, it
identifies it as a local variable, shows its name, declaration point, and in which frame (of the first call stack) it lives.
Note that this information is only shown when --read-var-info=yes is specified on the command line. That's
because reading the DWARF3 debug information in enough detail to capture variable type and location information
makes Helgrind much slower at startup, and also requires considerable amounts of memory for large programs.
Once you have your two call stacks, how do you find the root cause of the race?
The first thing to do is examine the source locations referred to by each call stack. They should both show an access
to the same location, or variable.
Now figure out how that location should have been made thread-safe:
Perhaps the location was intended to be protected by a mutex? If so, you need to lock and unlock the mutex at both
access points, even if one of the accesses is reported to be a read. Did you perhaps forget the locking at one or other
of the accesses? To help you do this, Helgrind shows the set of locks held by each thread at the time it accessed
the raced-on location.
Alternatively, perhaps you intended to use some other scheme to make it safe, such as signalling on a condition
variable. In all such cases, try to find a synchronisation event (or a chain thereof) which separates the earlier-observed
access (as shown in the second call stack) from the later-observed access (as shown in the first call stack).
In other words, try to find evidence that the earlier access "happens-before" the later access. See the previous
subsection for an explanation of the happens-before relation.
The fact that Helgrind is reporting a race means it did not observe any happens-before relation between the two
accesses. If Helgrind is working correctly, it should also be the case that you cannot find any such relation,
even on detailed inspection of the source code. Hopefully, though, your inspection of the code will show where the
missing synchronisation operation(s) should have been.
Qt version 4.X. Qt 3.X is harmless in that it only uses POSIX pthreads primitives. Unfortunately Qt 4.X has its
own implementation of mutexes (QMutex) and thread reaping. Helgrind 3.4.x contains direct support for Qt 4.X
threading, which is experimental but is believed to work fairly well. A side effect of supporting Qt 4 directly is
that Helgrind can be used to debug KDE4 applications. As this is an experimental feature, we would particularly
appreciate feedback from folks who have used Helgrind to successfully debug Qt 4 and/or KDE4 applications.
Runtime support library for GNU OpenMP (part of GCC), at least for GCC versions 4.2 and 4.3. The GNU
OpenMP runtime library (libgomp.so) constructs its own synchronisation primitives using combinations of
atomic memory instructions and the futex syscall, which causes total chaos in Helgrind since it cannot "see"
those.
Fortunately, this can be solved using a configuration-time option (for GCC). Rebuild GCC from source, and
configure using --disable-linux-futex. This makes libgomp.so use the standard POSIX threading
primitives instead. Note that this was tested using GCC 4.2.3 and has not been re-tested using more recent
GCC versions. We would appreciate hearing about any successes or failures with more recent versions.
If you must implement your own threading primitives, there are a set of client request macros in helgrind.h to
help you describe your primitives to Helgrind. You should be able to mark up mutexes, condition variables, etc,
without difficulty.
It is also possible to mark up the effects of thread-safe reference counting using the ANNOTATE_HAPPENS_BEFORE,
ANNOTATE_HAPPENS_AFTER and ANNOTATE_HAPPENS_BEFORE_FORGET_ALL macros. Thread-safe
reference counting using an atomically incremented/decremented refcount variable causes Helgrind problems
because a one-to-zero transition of the reference count means the accessing thread has exclusive ownership of the
associated resource (normally, a C++ object) and can therefore access it (normally, to run its destructor) without
locking. Helgrind doesn't understand this, and markup is essential to avoid false positives.
Here are recommended guidelines for marking up thread safe reference counting in C++. You only need to mark
up your release methods -- the ones which decrement the reference count. Given a class like this:
class MyClass {
   unsigned int mRefCount;

   void Release ( void ) {
      unsigned int newCount = atomic_decrement(&mRefCount);
      if (newCount == 0) {
         delete this;
      }
   }
};
the release method should be marked up as follows:
void Release ( void ) {
   unsigned int newCount = atomic_decrement(&mRefCount);
   if (newCount == 0) {
      ANNOTATE_HAPPENS_AFTER(&mRefCount);
      ANNOTATE_HAPPENS_BEFORE_FORGET_ALL(&mRefCount);
      delete this;
   } else {
      ANNOTATE_HAPPENS_BEFORE(&mRefCount);
   }
}
There are a number of complex, mostly-theoretical objections to this scheme. From a theoretical standpoint it
appears to be impossible to devise a markup scheme which is completely correct in the sense of guaranteeing to
remove all false races. The proposed scheme however works well in practice.
Signaller:                    Waiter:

lock(mx)                      lock(mx)
b = True                      while (b == False)
signal(cv)                       wait(cv,mx)
unlock(mx)                    unlock(mx)
Assume b is False most of the time. If the waiter arrives at the rendezvous first, it enters its while-loop, waits for
the signaller to signal, and eventually proceeds. Helgrind sees the signal, notes the dependency, and all is well.
If the signaller arrives first, b is set to true, and the signal disappears into nowhere. When the waiter later arrives, it
does not enter its while-loop and simply carries on. But even in this case, the waiter code following the while-loop
cannot execute until the signaller sets b to True. Hence there is still the same inter-thread dependency, but this
time it is through an arbitrary in-memory condition, and Helgrind cannot see it.
By comparison, Helgrind's detection of inter-thread dependencies caused by semaphore operations is believed to
be exactly correct.
As far as I know, a solution to this problem that does not require source-level annotation of condition-variable wait
loops is beyond the current state of the art.
4. Make sure you are using a supported Linux distribution. At present, Helgrind only properly supports glibc-2.3
or later. This in turn means we only support glibc's NPTL threading implementation. The old LinuxThreads
implementation is not supported.
5. If your application is using thread local variables, Helgrind might report false positive race conditions on these
variables, despite their being very probably race free. On Linux, you can use
--sim-hints=deactivate-pthread-stack-cache-via-hack to avoid such false positive error messages (see --sim-hints).
6. Round up all finished threads using pthread_join. Avoid detaching threads: don't create threads in the
detached state, and don't call pthread_detach on existing threads.
Using pthread_join to round up finished threads provides a clear synchronisation point that both Helgrind and
programmers can see. If you don't call pthread_join on a thread, Helgrind has no way to know when it
finishes, relative to any significant synchronisation points for other threads in the program. So it assumes that the
thread lingers indefinitely and can potentially interfere indefinitely with the memory state of the program. It has
every right to assume that -- after all, it might really be the case that, for scheduling reasons, the exiting thread did
run very slowly in the last stages of its life.
7. Perform thread debugging (with Helgrind) and memory debugging (with Memcheck) together.
Helgrind tracks the state of memory in detail, and memory management bugs in the application are liable to cause
confusion. In extreme cases, applications which do many invalid reads and writes (particularly to freed memory)
have been known to crash Helgrind. So, ideally, you should make your application Memcheck-clean before using
Helgrind.
It may be impossible to make your application Memcheck-clean unless you first remove threading bugs. In
particular, it may be difficult to remove all reads and writes to freed memory in multithreaded C++ destructor
sequences at program termination. So, ideally, you should make your application Helgrind-clean before using
Memcheck.
Since this circularity is obviously unresolvable, at least bear in mind that Memcheck and Helgrind are to some
extent complementary, and you may need to use them together.
8. POSIX requires that implementations of standard I/O (printf, fprintf, fwrite, fread, etc) are thread safe.
Unfortunately GNU libc implements this by using internal locking primitives that Helgrind is unable to intercept.
Consequently Helgrind generates many false race reports when you use these functions.
Helgrind attempts to hide these errors using the standard Valgrind error-suppression mechanism. So, at least for
simple test cases, you don't see any. Nevertheless, some may slip through. Just something to be aware of.
9. Helgrind's error checks do not work properly inside the system threading library itself (libpthread.so), and it
usually observes large numbers of (false) errors in there. Valgrind's suppression system then filters these out, so
you should not see them.
If you see any race errors reported where libpthread.so or ld.so is the object associated with the innermost
stack frame, please file a bug report at http://www.valgrind.org/.
If you give the option --read-var-info=yes, then more information will be provided about the lock location,
such as the global variable or the heap block that contains the lock:
Lock ga 0x8049a20 {
Location 0x8049a20 is 0 bytes inside global var "s_rwlock"
declared at rwlock_race.c:17
kind rdwr
{ R1:thread #3 tid 3 }
}
ANNOTATE_HAPPENS_BEFORE
ANNOTATE_HAPPENS_AFTER
ANNOTATE_NEW_MEMORY
ANNOTATE_RWLOCK_CREATE
ANNOTATE_RWLOCK_DESTROY
ANNOTATE_RWLOCK_ACQUIRED
ANNOTATE_RWLOCK_RELEASED
These are used to describe to Helgrind the behaviour of custom (non-POSIX) synchronisation primitives, which it
otherwise has no way to understand. See comments in helgrind.h for further documentation.
Dont update the lock-order graph, and dont check for errors, when a "try"-style lock operation happens (e.g.
pthread_mutex_trylock). Such calls do not add any real restrictions to the locking order, since they can
always fail to acquire the lock, resulting in the caller going off and doing Plan B (presumably it will have a Plan B).
Doing such checks could generate false lock-order errors and confuse users.
Performance can be very poor. Reducing this slowdown is an obvious target for future performance improvements.
8.1. Overview
DRD is a Valgrind tool for detecting errors in multithreaded C and C++ programs. The tool works for any program that
uses the POSIX threading primitives or that uses threading concepts built on top of the POSIX threading primitives.
A shared address space. All threads running within the same process share the same address space. All data, whether
shared or not, is identified by its address.
Regular load and store operations, which allow threads to read values from or write values to the memory shared by
all threads running in the same process.
Atomic store and load-modify-store operations. While these are not mentioned in the POSIX threads standard, most
microprocessors support atomic memory operations.
Threads. Each thread represents a concurrent activity.
Synchronization objects and operations on these synchronization objects. The following types of synchronization
objects have been defined in the POSIX threads standard: mutexes, condition variables, semaphores, reader-writer
synchronization objects, barriers and spinlocks.
Which source code statements generate which memory accesses depends on the memory model of the programming
language being used. There is not yet a definitive memory model for the C and C++ languages. For a draft memory
model, see also the document WG21/N2338: Concurrency memory model compiler consequences.
For more information about POSIX threads, see also the Single UNIX Specification version 3, also known as IEEE
Std 1003.1.
2. Synchronization operations determine certain ordering constraints on memory operations performed by different
threads. These ordering constraints are called the synchronization order.
The combination of program order and synchronization order is called the happens-before relationship. This concept
was first defined by S. Adve et al in the paper Detecting data races on weak memory systems, ACM SIGARCH
Computer Architecture News, v.19 n.3, p.234-243, May 1991.
Two memory operations conflict if both operations are performed by different threads, refer to the same memory
location and at least one of them is a store operation.
A multithreaded program is data-race free if all conflicting memory accesses are ordered by synchronization
operations.
A well known way to ensure that a multithreaded program is data-race free is to ensure that a locking discipline is
followed. It is e.g. possible to associate a mutex with each shared data item, and to hold a lock on the associated mutex
while the shared data is accessed.
All programs that follow a locking discipline are data-race free, but not all data-race free programs follow a locking
discipline. There exist multithreaded programs where access to shared data is arbitrated via condition variables,
semaphores or barriers. As an example, a certain class of HPC applications consists of a sequence of computation
steps separated in time by barriers, and where these barriers are the only means of synchronization. Although there
are many conflicting memory accesses in such applications and although such applications do not make use of mutexes,
most of these applications do not contain data races.
There exist two different approaches for verifying the correctness of multithreaded programs at runtime. The approach
of the so-called Eraser algorithm is to verify whether all shared memory accesses follow a consistent locking strategy.
And the happens-before data race detectors verify directly whether all inter-thread memory accesses are ordered by
synchronization operations. While the latter approach is more complex to implement, and while it is more sensitive to
OS scheduling, it is a general approach that works for all classes of multithreaded programs. An important advantage
of happens-before data race detectors is that these do not report any false positives.
DRD is based on the happens-before algorithm.
--trace-semaphore=<yes|no> [default: no]
Trace all semaphore activity.
Below you can find an example of a message printed by DRD when it detects a data race:
$ valgrind --tool=drd --read-var-info=yes drd/tests/rwlock_race
...
==9466== Thread 3:
==9466== Conflicting load by thread 3 at 0x006020b8 size 4
==9466==    at 0x400B6C: thread_func (rwlock_race.c:29)
==9466==    by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
==9466==    by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
==9466==    by 0x53250CC: clone (in /lib64/libc-2.8.so)
==9466== Location 0x6020b8 is 0 bytes inside local var "s_racy"
==9466== declared at rwlock_race.c:18, in frame #0 of thread 3
==9466== Other segment start (thread 2)
==9466==    at 0x4C2847D: pthread_rwlock_rdlock* (drd_pthread_intercepts.c:813)
==9466==    by 0x400B6B: thread_func (rwlock_race.c:28)
==9466==    by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
==9466==    by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
==9466==    by 0x53250CC: clone (in /lib64/libc-2.8.so)
==9466== Other segment end (thread 2)
==9466==    at 0x4C28B54: pthread_rwlock_unlock* (drd_pthread_intercepts.c:912)
==9466==    by 0x400B84: thread_func (rwlock_race.c:30)
==9466==    by 0x4C291DF: vg_thread_wrapper (drd_pthread_intercepts.c:186)
==9466==    by 0x4E3403F: start_thread (in /lib64/libpthread-2.8.so)
==9466==    by 0x53250CC: clone (in /lib64/libc-2.8.so)
...
1. Start at the bottom of both call stacks, and count the number of stack frames with identical function name, file name
and line number. In the above example the three bottommost frames are identical (clone, start_thread
and vg_thread_wrapper).
2. The next higher stack frame in both call stacks now tells you in which source code region the other
memory access happened. The above output tells you that the other memory access involved in the data race happened
between source code lines 28 and 30 in file rwlock_race.c.
The hold_lock test program holds a lock as long as specified by the -i (interval) argument. The DRD output
reports that the lock acquired at line 51 in source file hold_lock.c and released at line 55 was held for 503 ms,
while a threshold of 10 ms was specified to DRD.
Calling pthread_cond_wait on a mutex that is not locked, that is locked by another thread or that has been
locked recursively.
Associating two different mutexes with a condition variable through pthread_cond_wait.
Destruction or deallocation of a condition variable that is being waited upon.
Destruction or deallocation of a locked reader-writer synchronization object.
Attempts to unlock a reader-writer synchronization object that was not locked by the calling thread.
Attempts to recursively lock a reader-writer synchronization object exclusively.
Attempts to pass the address of a user-defined reader-writer synchronization object to a POSIX threads function.
Attempts to pass the address of a POSIX reader-writer synchronization object to one of the annotations for user-defined reader-writer synchronization objects.
Reinitialization of a mutex, condition variable, reader-writer lock, semaphore or barrier.
Destruction or deallocation of a semaphore or barrier that is being waited upon.
Missing synchronization between barrier wait and barrier destruction.
Exiting a thread without first unlocking the spinlocks, mutexes or reader-writer synchronization objects that were
locked by that thread.
Passing an invalid thread ID to pthread_join or pthread_cancel.
The macro ANNOTATE_BARRIER_WAIT_BEFORE(barrier) tells DRD that waiting for a barrier will start.
The macro ANNOTATE_BARRIER_WAIT_AFTER(barrier) tells DRD that waiting for a barrier has finished.
The macro ANNOTATE_BENIGN_RACE_SIZED(addr, size, descr) tells DRD that any races detected
on the specified address are benign and hence should not be reported. The descr argument is ignored but can be
used to document why data races on addr are benign.
The macro ANNOTATE_BENIGN_RACE_STATIC(var, descr) tells DRD that any races detected on the
specified static variable are benign and hence should not be reported. The descr argument is ignored but can
be used to document why data races on var are benign. Note: this macro can only be used in C++ programs and
not in C programs.
The macro ANNOTATE_IGNORE_READS_BEGIN tells DRD to ignore all memory loads performed by the current
thread.
The macro ANNOTATE_IGNORE_READS_END tells DRD to stop ignoring the memory loads performed by the
current thread.
The macro ANNOTATE_IGNORE_WRITES_BEGIN tells DRD to ignore all memory stores performed by the
current thread.
The macro ANNOTATE_IGNORE_WRITES_END tells DRD to stop ignoring the memory stores performed by the
current thread.
The macro ANNOTATE_IGNORE_READS_AND_WRITES_BEGIN tells DRD to ignore all memory accesses
performed by the current thread.
The macro ANNOTATE_IGNORE_READS_AND_WRITES_END tells DRD to stop ignoring the memory accesses
performed by the current thread.
The macro ANNOTATE_NEW_MEMORY(addr, size) tells DRD that the specified memory range has been
allocated by a custom memory allocator in the client program and that the client program will start using this
memory range.
The macro ANNOTATE_THREAD_NAME(name) tells DRD to associate the specified name with the current thread
and to include this name in the error messages printed by DRD.
The macros VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK from the Valgrind core
are implemented; they are described in The Client Request mechanism.
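As a sketch of how these annotations are typically used (assuming <valgrind/drd.h> is available; the no-op fallback definitions below are only so the snippet builds without it, and the function name init_progress_reporting is invented for this illustration), a deliberately unsynchronized progress counter can be marked benign and the current thread given a name:

```c
/* Use the real DRD annotations when the header is available; otherwise fall
   back to no-ops so the program still builds outside a Valgrind setup. */
#if defined(__has_include)
# if __has_include(<valgrind/drd.h>)
#  include <valgrind/drd.h>
# endif
#endif
#ifndef ANNOTATE_BENIGN_RACE_SIZED
# define ANNOTATE_BENIGN_RACE_SIZED(addr, size, descr) do {} while (0)
# define ANNOTATE_THREAD_NAME(name)                    do {} while (0)
#endif

/* A counter several threads bump without locking; the program only reads it
   for an approximate progress display, so any race on it is benign. */
static unsigned long approx_progress;

void init_progress_reporting(void)
{
    /* Name the thread so it appears as "progress-reporter" in DRD output. */
    ANNOTATE_THREAD_NAME("progress-reporter");
    /* Suppress reports for races on this specific memory range. */
    ANNOTATE_BENIGN_RACE_SIZED(&approx_progress, sizeof(approx_progress),
                               "approximate progress counter; races are OK");
    approx_progress = 0;
}
```

Outside Valgrind the annotations expand to nothing, so there is no runtime cost in a normal build.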
Note: if you compiled Valgrind yourself, the header file <valgrind/drd.h> will have been installed in the
directory /usr/include by the command make install. If you obtained Valgrind by installing it as a package
however, you will probably have to install another package with a name like valgrind-devel before Valgrind's
header files are available.
#include <valgrind/drd.h>
#define _GLIBCXX_SYNCHRONIZATION_HAPPENS_BEFORE(addr) ANNOTATE_HAPPENS_BEFORE(addr)
#define _GLIBCXX_SYNCHRONIZATION_HAPPENS_AFTER(addr) ANNOTATE_HAPPENS_AFTER(addr)
Download the gcc source code and, from source file libstdc++-v3/src/c++11/thread.cc, copy the
implementation of the execute_native_thread_routine() and std::thread::_M_start_thread()
functions into a source file that is linked with your application. Make sure that the
_GLIBCXX_SYNCHRONIZATION_HAPPENS_*() macros are also defined properly in this source file.
For more information, see also The GNU C++ Library Manual, Debugging Support (http://gcc.gnu.org/onlinedocs/libstdc++/manual/deb
As an example, the OpenMP test program drd/tests/omp_matinv triggers a data race when the option -r
has been specified on the command line. The data race is triggered by the following code:
#pragma omp parallel for private(j)
for (j = 0; j < rows; j++)
{
  if (i != j)
  {
    const elem_t factor = a[j * cols + i];
    for (k = 0; k < cols; k++)
    {
      a[j * cols + k] -= a[i * cols + k] * factor;
    }
  }
}
The above code is racy because the variable k has not been declared private. DRD will print the following error
message for the above code:
In the above output the function name gj.omp_fn.0 has been generated by GCC from the function name gj. The
allocation context information shows that the data race has been caused by modifying the variable k.
Note: for GCC versions before 4.4.0, no allocation context information is shown. With these GCC versions the most
usable information in the above output is the source file name and the line number where the data race has been
detected (omp_matinv.c:203).
For more information about OpenMP, see also openmp.org.
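The fix for this kind of race can be sketched as follows (hypothetical code modelled on the loop above, not copied from omp_matinv): declaring k inside the parallel region makes it private to each thread, which removes the race. Without -fopenmp the pragma is simply ignored and the loop runs serially.

```c
#define ROWS 3
#define COLS 3
typedef double elem_t;

/* Eliminate column i from all rows other than row i. The corrected loop
   declares k inside the loop body, so each thread gets its own copy
   (equivalently, k could be added to the private(...) clause). */
static void eliminate_column(elem_t *a, int i)
{
    int j;
    #pragma omp parallel for private(j)
    for (j = 0; j < ROWS; j++) {
        if (i != j) {
            const elem_t factor = a[j * COLS + i];
            int k;                       /* private per thread now */
            for (k = 0; k < COLS; k++)
                a[j * COLS + k] -= a[i * COLS + k] * factor;
        }
    }
}
```

Running this corrected version under DRD should no longer produce a report for k.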
It is essential for correct operation of DRD that the tool knows about memory allocation and deallocation events. When
analyzing a client program with DRD that uses a custom memory allocator, either instrument the custom memory
allocator with the VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK macros or disable the
custom memory allocator.
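For illustration, here is a sketch of what such instrumentation can look like for a toy bump allocator (pool_alloc and pool_free are invented names; the no-op macro fallbacks are only so the snippet builds without the Valgrind headers installed):

```c
#include <stddef.h>

#if defined(__has_include)
# if __has_include(<valgrind/valgrind.h>)
#  include <valgrind/valgrind.h>
# endif
#endif
#ifndef VALGRIND_MALLOCLIKE_BLOCK   /* no-op stubs when headers are absent */
# define VALGRIND_MALLOCLIKE_BLOCK(addr, size, rz, zeroed) do {} while (0)
# define VALGRIND_FREELIKE_BLOCK(addr, rz)                 do {} while (0)
#endif

static char pool[4096];
static size_t pool_used;

/* Carve a block out of the static pool and tell Valgrind (and hence DRD)
   that a heap-like block now exists at that address. */
void *pool_alloc(size_t size)
{
    if (pool_used + size > sizeof pool)
        return NULL;
    void *p = pool + pool_used;
    pool_used += size;
    VALGRIND_MALLOCLIKE_BLOCK(p, size, /*redzone=*/0, /*zeroed=*/0);
    return p;
}

void pool_free(void *p)
{
    /* A real allocator would recycle the block; the key point is the
       matching FREELIKE annotation so the tool sees the deallocation. */
    VALGRIND_FREELIKE_BLOCK(p, /*redzone=*/0);
}
```

Outside Valgrind the client-request macros cost almost nothing, so they can be left in production builds.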
As an example, the GNU libstdc++ library can be configured to use standard memory allocation functions instead
of memory pools by setting the environment variable GLIBCXX_FORCE_NEW. For more information, see also the
libstdc++ manual.
So which tool should be run first? In case both DRD and Memcheck complain about a program, a possible approach
is to run both tools alternately and to fix as many errors as possible after each run of each tool, until neither
tool prints any more error messages.
8.4. Limitations
DRD currently has the following limitations:
DRD, just like Memcheck, will refuse to start on Linux distributions where all symbol information has been removed
from ld.so. This is e.g. the case for the PPC editions of openSUSE and Gentoo. You will have to install the glibc
debuginfo package on these platforms before you can use DRD. See also openSUSE bug 396197 and Gentoo bug
214065.
With gcc 4.4.3 and before, DRD may report data races on the C++ class std::string in a multithreaded program.
This is a known libstdc++ issue -- see also GCC bug 40518 for more information.
If you compile the DRD source code yourself, you need GCC 3.0 or later. GCC 2.95 is not supported.
Of the two POSIX threads implementations for Linux, only the NPTL (Native POSIX Thread Library) is supported.
The older LinuxThreads library is not supported.
8.5. Feedback
If you have any comments, suggestions, feedback or bug reports about DRD, feel free to either post a message on the
Valgrind users mailing list or to file a bug report. See also http://www.valgrind.org/ for more information.
9.1. Overview
Massif is a heap profiler. It measures how much heap memory your program uses. This includes both the useful
space, and the extra bytes allocated for book-keeping and alignment purposes. It can also measure the size of your
program's stack(s), although it does not do so by default.
Heap profiling can help you reduce the amount of memory your program uses. On modern machines with virtual
memory, this provides the following benefits:
It can speed up your program -- a smaller program will interact better with your machine's caches and avoid paging.
If your program uses lots of memory, it will reduce the chance that it exhausts your machine's swap space.
Also, there are certain space leaks that aren't detected by traditional leak-checkers, such as Memcheck's. That's
because the memory isn't ever actually lost -- a pointer remains to it -- but it's not in use. Programs that have leaks
like this can unnecessarily increase the amount of memory they are using over time. Massif can help identify these
leaks.
Importantly, Massif tells you not only how much heap memory your program is using, it also gives very detailed
information that indicates which parts of your program are responsible for allocating the heap memory.
 1   #include <stdlib.h>
 2
 3   void g(void)
 4   {
 5      malloc(4000);
 6   }
 7
 8   void f(void)
 9   {
10      malloc(2000);
11      g();
12   }
13
14   int main(void)
15   {
16      int i;
17      int* a[10];
18
19      for (i = 0; i < 10; i++) {
20         a[i] = malloc(1000);
21      }
22
23      f();
24
25      g();
26
27      for (i = 0; i < 10; i++) {
28         free(a[i]);
29      }
30
31      return 0;
32   }
The program will execute (slowly). Upon completion, no summary statistics are printed to Valgrind's commentary;
all of Massif's profiling data is written to a file. By default, this file is called massif.out.<pid>, where <pid>
is the process ID, although this filename can be changed with the --massif-out-file option.
ms_print massif.out.12345
ms_print will produce (a) a graph showing the memory consumption over the program's execution, and (b) detailed
information about the responsible allocation sites at various points in the program, including the point of peak memory
allocation. The use of a separate script for presenting the results is deliberate: it separates the data gathering from its
presentation, and means that new methods of presenting the data can be added in the future.
    KB
19.63^                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                       #
     |                                                                      :#
     |                                                                      :#
     |                                                                      :#
   0 +----------------------------------------------------------------------->ki
Number of snapshots: 25
Detailed snapshots: [9, 14 (peak), 24]
Why is most of the graph empty, with only a couple of bars at the very end? By default, Massif uses "instructions
executed" as the unit of time. For very short-run programs such as the example, most of the executed instructions
involve the loading and dynamic linking of the program. The execution of main (and thus the heap allocations) only
occurs at the very end. For a short-running program like this, we can use the --time-unit=B option to specify that
we want the time unit to instead be the number of bytes allocated/deallocated on the heap and stack(s).
If we re-run the program under Massif with this option, and then re-run ms_print, we get this more useful graph:
    KB
19.63^ [graph: with --time-unit=B, memory usage climbs in steps to the
       19.63 KB peak at snapshot 14 (drawn with # characters), then falls
       away; normal snapshots are drawn with : and detailed ones with @]
   0 +----------------------------------------------------------------------->KB
Number of snapshots: 25
Detailed snapshots: [9, 14 (peak), 24]
The size of the graph can be changed with ms_print's --x and --y options. Each vertical bar represents a snapshot,
i.e. a measurement of the memory usage at a certain point in time. If the next snapshot is more than one column away,
a horizontal line of characters is drawn from the top of the snapshot to just before the next snapshot column. The text
at the bottom shows that 25 snapshots were taken for this program, which is one per heap allocation/deallocation, plus
a couple of extras. Massif starts by taking snapshots for every heap allocation/deallocation, but as a program runs for
longer, it takes snapshots less frequently. It also discards older snapshots as the program goes on; when it reaches
the maximum number of snapshots (100 by default, although changeable with the --max-snapshots option) half
of them are deleted. This means that a reasonable number of snapshots are always maintained.
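The culling behaviour just described can be sketched as follows (an illustrative model of the policy, not Massif's actual implementation): when the snapshot buffer fills up, every second old snapshot is dropped, so the count always stays between the maximum and half of it.

```c
#define MAX_SNAPSHOTS 100

static long snapshots[MAX_SNAPSHOTS];   /* timestamps of retained snapshots */
static int n_snapshots;

/* Record a snapshot; when the buffer is full, delete every second entry,
   mirroring the "half of them are deleted" behaviour described above. */
void take_snapshot(long time_now)
{
    if (n_snapshots == MAX_SNAPSHOTS) {
        int i;
        for (i = 0; i < MAX_SNAPSHOTS / 2; i++)
            snapshots[i] = snapshots[2 * i + 1];   /* keep every 2nd one */
        n_snapshots = MAX_SNAPSHOTS / 2;
    }
    snapshots[n_snapshots++] = time_now;
}
```

Each culling pass doubles the effective interval between retained snapshots, which is why long runs end up with evenly spread, less frequent snapshots.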
Most snapshots are normal, and only basic information is recorded for them. Normal snapshots are represented in the
graph by bars consisting of : characters.
Some snapshots are detailed. Information about where allocations happened is recorded for these snapshots, as we
will see shortly. Detailed snapshots are represented in the graph by bars consisting of @ characters. The text at the
bottom shows that 3 detailed snapshots were taken for this program (snapshots 9, 14 and 24). By default, every 10th
snapshot is detailed, although this can be changed via the --detailed-freq option.
Finally, there is at most one peak snapshot. The peak snapshot is a detailed snapshot, and records the point where
memory consumption was greatest. The peak snapshot is represented in the graph by a bar consisting of # characters.
The text at the bottom shows that snapshot 14 was the peak.
Massif's determination of when the peak occurred can be wrong, for two reasons.
Peak snapshots are only ever taken after a deallocation happens. This avoids lots of unnecessary peak snapshot
recordings (imagine what happens if your program allocates a lot of heap blocks in succession, hitting a new peak
every time). But it means that if your program never deallocates any blocks, no peak will be recorded. It
also means that if your program does deallocate blocks but later allocates to a higher peak without subsequently
deallocating, the reported peak will be too low.
Even with this behaviour, recording the peak accurately is slow. So by default Massif records a peak whose size
is within 1% of the size of the true peak. This inaccuracy in the peak measurement can be changed with the
--peak-inaccuracy option.
The following graph is from an execution of Konqueror, the KDE web browser. It shows what the graphs for larger
programs look like.
    MB
3.952^ [graph: Konqueror's memory usage over the run, rising in stages to a
       3.952 MB peak (snapshot 57) near the end]
   0 +----------------------------------------------------------------------->Mi
     0                                                                   626.4
Number of snapshots: 63
Detailed snapshots: [3, 4, 10, 11, 15, 16, 29, 33, 34, 36, 39, 41,
42, 43, 44, 49, 50, 51, 53, 55, 56, 57 (peak)]
Note that the larger size units are KB, MB, GB, etc. As is typical for memory measurements, these are based on a
multiplier of 1024, rather than the standard SI multiplier of 1000. Strictly speaking, they should be written KiB, MiB,
GiB, etc.
--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
  0              0                0                0             0            0
  1          1,008            1,008            1,000             8            0
  2          2,016            2,016            2,000            16            0
  3          3,024            3,024            3,000            24            0
  4          4,032            4,032            4,000            32            0
  5          5,040            5,040            5,000            40            0
  6          6,048            6,048            6,000            48            0
  7          7,056            7,056            7,000            56            0
  8          8,064            8,064            8,000            64            0
The next snapshot is detailed. As well as the basic counts, it gives an allocation tree which indicates exactly which
pieces of code were responsible for allocating heap memory:
  9          9,072            9,072            9,000            72            0
99.21% (9,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->99.21% (9,000B) 0x804841A: main (example.c:20)
The allocation tree can be read from the top down. The first line indicates all heap allocation functions such as
malloc and C++ new. All heap allocations go through these functions, and so all 9,000 useful bytes (which is
99.21% of all allocated bytes) go through them. But how were malloc and new called? At this point, every
allocation so far has been due to line 20 inside main, hence the second line in the tree. The -> indicates that main
(line 20) called malloc.
Let's see what the subsequent output shows happened next:
--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 10         10,080           10,080           10,000            80            0
 11         12,088           12,088           12,000            88            0
 12         16,096           16,096           16,000            96            0
 13         20,104           20,104           20,000           104            0
 14         20,104           20,104           20,000           104            0
99.48% (20,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->49.74% (10,000B) 0x804841A: main (example.c:20)
|
->39.79% (8,000B) 0x80483C2: g (example.c:5)
| ->19.90% (4,000B) 0x80483E2: f (example.c:11)
| | ->19.90% (4,000B) 0x8048431: main (example.c:23)
| |
| ->19.90% (4,000B) 0x8048436: main (example.c:25)
|
->09.95% (2,000B) 0x80483DA: f (example.c:10)
  ->09.95% (2,000B) 0x8048431: main (example.c:23)
The first four snapshots are similar to the previous ones. But then the global allocation peak is reached, and a detailed
snapshot (number 14) is taken. Its allocation tree shows that 20,000B of useful heap memory has been allocated, and
the lines and arrows indicate that this is from three different code locations: line 20, which is responsible for 10,000B
(49.74%); line 5, which is responsible for 8,000B (39.79%); and line 10, which is responsible for 2,000B (9.95%).
We can then drill down further in the allocation tree. For example, of the 8,000B asked for by line 5, half of it was
due to a call from line 11, and half was due to a call from line 25.
In short, Massif collates the stack trace of every single allocation point in the program into a single tree, which gives
a complete picture at a particular point in time of how and why all heap memory was allocated.
Note that the tree entries correspond not to functions, but to individual code locations. For example, if function A
calls malloc, and function B calls A twice, once on line 10 and once on line 11, then the two calls will result in two
155
distinct stack traces in the tree. In contrast, if B calls A repeatedly from line 15 (e.g. due to a loop), then each of those
calls will be represented by the same stack trace in the tree.
Note also that each tree entry with children in the example satisfies an invariant: the entry's size is equal to the sum
of its children's sizes. For example, the first entry has size 20,000B, and its children have sizes 10,000B, 8,000B,
and 2,000B. In general, this invariant almost always holds. However, in rare circumstances stack traces can be
malformed, in which case a stack trace can be a sub-trace of another stack trace. This means that some entries in the
tree may not satisfy the invariant -- the entry's size will be greater than the sum of its children's sizes. This is not
a big problem, but could make the results confusing. Massif can sometimes detect when this happens; if it does, it
issues a warning:
Warning: Malformed stack trace detected. In Massif's output,
the size of an entry's child entries may not sum up
to the entry's size as they normally do.
However, Massif does not detect and warn about every such occurrence. Fortunately, malformed stack traces are rare
in practice.
Returning now to ms_print's output, the final part is similar:
--------------------------------------------------------------------------------
  n        time(B)         total(B)   useful-heap(B) extra-heap(B)    stacks(B)
--------------------------------------------------------------------------------
 15         21,112           19,096           19,000            96            0
 16         22,120           18,088           18,000            88            0
 17         23,128           17,080           17,000            80            0
 18         24,136           16,072           16,000            72            0
 19         25,144           15,064           15,000            64            0
 20         26,152           14,056           14,000            56            0
 21         27,160           13,048           13,000            48            0
 22         28,168           12,040           12,000            40            0
 23         29,176           11,032           11,000            32            0
 24         30,184           10,024           10,000            24            0
99.76% (10,000B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->79.81% (8,000B) 0x80483C2: g (example.c:5)
| ->39.90% (4,000B) 0x80483E2: f (example.c:11)
| | ->39.90% (4,000B) 0x8048431: main (example.c:23)
| |
| ->39.90% (4,000B) 0x8048436: main (example.c:25)
|
->19.95% (2,000B) 0x80483DA: f (example.c:10)
| ->19.95% (2,000B) 0x8048431: main (example.c:23)
|
->00.00% (0B) in 1+ places, all below ms_print's threshold (01.00%)
The final detailed snapshot shows how the heap looked at termination. The 00.00% entry represents the code locations
for which memory was allocated and then freed (line 20 in this case, the memory for which was freed on line 28).
However, no code location details are given for this entry; by default, Massif only records the details for code locations
responsible for more than 1% of useful memory bytes, and ms_print likewise only prints the details for code locations
responsible for more than 1%. The entries that do not meet this threshold are aggregated. This avoids filling up the
output with large numbers of unimportant entries. The thresholds can be changed with the --threshold option
that both Massif and ms_print support.
The stack traces in the output may be more difficult to read, and interpreting them may require some detailed
understanding of the lower levels of a program like the memory allocators. But for some programs having the
full information about memory usage can be very useful.
--alloc-fn=<name>
Functions specified with this option will be treated as though they were a heap allocation function such as malloc.
This is useful for functions that are wrappers to malloc or new, which can fill up the allocation trees with
uninteresting information. This option can be specified multiple times on the command line, to name multiple
functions.
Note that the named function will only be treated this way if it is the top entry in a stack trace, or just below
another function treated this way.
For example, if you have a function malloc1 that wraps malloc, and
malloc2 that wraps malloc1, just specifying --alloc-fn=malloc2 will have no effect. You need to specify
--alloc-fn=malloc1 as well. This is a little inconvenient, but the reason is that checking for allocation functions
is slow, and it saves a lot of time if Massif can stop looking through the stack trace entries as soon as it finds one that
doesn't match, rather than having to continue through all the entries.
Note that C++ names are demangled. Note also that overloaded C++ names must be written in full. Single quotes
may be necessary to prevent the shell from breaking them up. For example:
--alloc-fn=operator new(unsigned, std::nothrow_t const&)
--ignore-fn=<name>
Any direct heap allocation (i.e. a call to malloc, new, etc, or a call to a function named by an --alloc-fn option)
that occurs in a function specified by this option will be ignored. This is mostly useful for testing purposes. This
option can be specified multiple times on the command line, to name multiple functions.
Any realloc of an ignored block will also be ignored, even if the realloc call does not occur in an ignored
function. This avoids the possibility of negative heap sizes if ignored blocks are shrunk with realloc.
The rules for writing C++ function names are the same as for --alloc-fn above.
--threshold=<m.n> [default: 1.0]
The significance threshold for heap allocations, as a percentage of total memory size. Allocation tree entries that
account for less than this will be aggregated. Note that this should be specified in tandem with ms_print's option of
the same name.
--peak-inaccuracy=<m.n> [default: 1.0]
Massif does not necessarily record the actual global memory allocation peak; by default it records a peak only when
the global memory allocation size exceeds the previous peak by at least 1.0%. This is because there can be many local
allocation peaks along the way, and doing a detailed snapshot for every one would be expensive and wasteful, as all
but one of them will be later discarded. This inaccuracy can be changed (even to 0.0%) via this option, but Massif
will run drastically slower as the number approaches zero.
--time-unit=<i|ms|B> [default: i]
The time unit used for the profiling. There are three possibilities: instructions executed (i), which is good for most
cases; real (wallclock) time (ms, i.e. milliseconds), which is sometimes useful; and bytes allocated/deallocated on the
heap and/or stack (B), which is useful for very short-run programs, and for testing purposes, because it is the most
reproducible across different machines.
--detailed-freq=<n> [default: 10]
Frequency of detailed snapshots. With --detailed-freq=1, every snapshot is detailed.
--max-snapshots=<n> [default: 100]
The maximum number of snapshots recorded. If set to N, for all programs except very short-running ones, the final
number of snapshots will be between N/2 and N.
--x=<4..1000> [default: 72]
Width of the graph, in columns.
--y=<4..1000> [default: 20]
Height of the graph, in rows.
10.1. Overview
DHAT is a tool for examining how programs use their heap allocations.
It tracks the allocated blocks, and inspects every memory access to find which block, if any, it is to. The following
data is collected and presented per allocation point (allocation stack):
Total allocation (number of bytes and blocks)
maximum live volume (number of bytes and blocks)
average block lifetime (number of instructions between allocation and freeing)
average number of reads and writes to each byte in the block ("access ratios")
for allocation points which always allocate blocks only of one size, and that size is 4096 bytes or less: counts
showing how often each byte offset inside the block is accessed.
Using these statistics it is possible to identify allocation points with the following characteristics:
potential process-lifetime leaks: blocks allocated by the point just accumulate, and are freed only at the end of the
run.
excessive turnover: points which chew through a lot of heap, even if it is not held onto for very long
excessively transient: points which allocate very short lived blocks
useless or underused allocations: blocks which are allocated but not completely filled in, or are filled in but not
subsequently read.
blocks with inefficient layout -- areas never accessed, or with hot fields scattered throughout the block.
As with the Massif heap profiler, DHAT measures program progress by counting instructions, and so presents all
age/time related figures as instruction counts. This sounds a little odd at first, but it makes runs repeatable in a way
which is not possible if CPU time is used.
Over the entire run of the program, this stack (allocation point) allocated 29,520 blocks in total, containing 1,904,700
bytes in total. By looking at the max-live data, we see that not many blocks were simultaneously live, though: at the
peak, there were 63,490 allocated bytes in 984 blocks. This tells us that the program is steadily freeing such blocks
as it runs, rather than hanging on to all of them until the end and freeing them all.
The deaths entry tells us that 29,520 blocks allocated by this stack died (were freed) during the run of the program.
Since 29,520 is also the number of blocks allocated in total, that tells us that all allocated blocks were freed by the end
of the program.
It also tells us that the average age at death was 22,227,424 instructions. From the summary statistics we see that
the program ran for 1,045,339,534 instructions, and so the average age at death is about 2% of the program's total run
time.
There are two tell-tale signs that this might be a process-lifetime leak. Firstly, the max-live and tot-alloc numbers are
identical. The only way that can happen is if these blocks are all allocated and then all deallocated.
Secondly, the average age at death (300 million insns) is 71% of the total program lifetime (419 million insns), hence
this is not a transient allocation-free spike -- rather, it is spread out over a large part of the entire run. One interpretation
is, roughly, that all 254 blocks were allocated in the first half of the run, held onto for the second half, and then freed
just before exit.
The acc-ratios field tells us that each byte in the blocks allocated here is read an average of 2.13 times before the
block is deallocated. Given that the blocks have an average age at death of 34,611,026 instructions, that's roughly one
read of each byte every 15 million instructions. So from that standpoint the blocks aren't "working" very hard.
More interesting is the write ratio: each byte is written an average of 0.91 times. This tells us that some parts of the
allocated blocks are never written, at least 9% on average. To completely initialise the block would require writing
each byte at least once, and that would give a write ratio of 1.0. The fact that some block areas are evidently unused
might point to data alignment holes or other layout inefficiencies.
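The access ratios quoted here are simple quotients (a sketch of the arithmetic, not DHAT's internal code): total bytes read, or written, across all blocks from an allocation point, divided by the total bytes allocated there.

```c
/* Bytes-read (or bytes-written) divided by bytes-allocated gives the
   average number of times each byte in the blocks was accessed. */
double access_ratio(unsigned long bytes_accessed, unsigned long bytes_allocated)
{
    return (double)bytes_accessed / (double)bytes_allocated;
}
```

A write ratio below 1.0 therefore means some bytes were never written, since full initialisation requires at least one write per byte.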
Well, at least all the blocks are freed (24,240 allocations, 24,240 deaths).
If all the blocks had been the same size, DHAT would also show the access counts by block offset, so we could see
where exactly these unused areas are. However, that isn't the case: the blocks have varying sizes, so DHAT can't
perform such an analysis. We can see that they must have varying sizes since the average block size, 61.13, isn't a
whole number.
Here, both the read and write access ratios are zero. Hence this point is allocating blocks which are never used,
neither read nor written. Indeed, they are also not freed ("deaths: none") and are simply leaked. So, here is 180k of
completely useless allocation that could be removed.
Re-running with Memcheck does indeed report the same leak. What DHAT can tell us, that Memcheck can't, is that
not only are the blocks leaked, they are also never used.
In the previous two examples, it is easy to see blocks that are never written to, or never read from, or some combination
of both. Unfortunately, in C++ code, the situation is less clear. That's because an object's constructor will write to
the underlying block, and its destructor will read from it. So the block's read and write ratios will be non-zero even if
the object, once constructed, is never used, but only eventually destructed.
Really, what we want is to measure only memory accesses in between the end of an object's construction and the start
of its destruction. Unfortunately I do not know of a reliable way to determine when those transitions are made.
max-live:   317,408 in 5,668 blocks
tot-alloc:  317,408 in 5,668 blocks (avg size 56.00)
deaths:     5,668, at avg age 622,890,597
acc-ratios: 1.03 rd, 1.28 wr  (327,642 b-read, 408,172 b-written)
   at 0x4C275B8: malloc (vg_replace_malloc.c:236)
   by 0x5440C16: QDesignerPropertySheetPrivate::ensureInfo (qhash.h:515)
   by 0x544350B: QDesignerPropertySheet::setVisible (qdesigner_propertysh...)
   by 0x5446232: QDesignerPropertySheet::QDesignerPropertySheet (qdesigne...)
Aggregated access counts by offset:
[ 0]  ...
[ 8]  ...
[16]  ...
[24]  ...
[32]  ...
[36]  ...
[48]  ...
This is fairly typical for C++ code running on a 64-bit platform. Here, we have aggregated access statistics for 5668
blocks, all of size 56 bytes. Each byte has been accessed at least 5668 times, except for offsets 12--15, 36--39 and
52--55. These are likely to be alignment holes.
Careful interpretation of the numbers reveals useful information. Groups of N consecutive identical numbers that
begin at an N-aligned offset, for N being 2, 4 or 8, are likely to indicate an N-byte object in the structure at that point.
For example, the first 32 bytes of this object are likely to have the layout
[0 ]  64-bit type
[8 ]  32-bit type
[12]  32-bit alignment hole
[16]  64-bit type
[24]  64-bit type
As a counterexample, it's also clear that, whatever is at offset 32, it is not a 32-bit value, because the last
number of the group (37422) is not the same as the first three (18883, 18883, 18883).
This example leads one to enquire (by reading the source code) whether the zeroes at 12--15 and 52--55 are alignment
holes, and whether 48--51 is indeed a 32-bit type. If so, it might be possible to place what's at 48--51 at 12--15
instead, which would reduce the object size from 56 to 48 bytes.
Bear in mind that the above inferences are all only "maybes". That's because they are based on dynamic data, not
static analysis of the object layout. For example, the zeroes might not be alignment holes, but rather just parts of the
structure which were not used at all for this particular run. Experience shows that's unlikely to be the case, but it
could happen.
--sort-by=max-bytes-live    sort by maximum live bytes [default]
--sort-by=tot-bytes-allocd  sort by total bytes allocated
--sort-by=max-blocks-live   sort by maximum live blocks
This controls the order in which allocation points are displayed. You can choose to look at allocation points with the
highest maximum liveness, or the highest total turnover, or by the highest number of live blocks. These give usefully
different pictures of program behaviour. For example, sorting by maximum live blocks tends to show up allocation
points creating large numbers of small objects.
One important point to note is that each allocation stack counts as a separate allocation point. Because stacks by
default have 12 frames, this tends to spread data out over multiple allocation points. You may want to use the flag
--num-callers=4 or some such small number, to reduce the spreading.
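For instance, an invocation combining these flags might look like the following (exp-dhat being the name under which this tool is selected in this release; ./myprog is a placeholder for your program):

```
valgrind --tool=exp-dhat --sort-by=max-blocks-live --num-callers=4 ./myprog
```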
11.1. Overview
SGCheck is a tool for finding overruns of stack and global arrays. It works by using a heuristic approach derived
from an observation about the likely forms of stack and global array accesses.
At run time we will know the precise address of a[] on the stack, and so we can observe that the first store resulting
from a[i] = 42 writes a[], and we will (correctly) assume that that instruction is intended always to access a[].
Then, on the 11th iteration, it accesses somewhere else, possibly a different local, possibly an un-accounted for area
of the stack (eg, spill slot), so SGCheck reports an error.
There is an important caveat.
Imagine a function such as memcpy, which is used to read and write many different areas of memory over the lifetime
of the program. If we insist that the read and write instructions in its memory copying loop only ever access one
particular stack or global variable, we will be flooded with errors resulting from calls to memcpy.
To avoid this problem, SGCheck instantiates fresh likely-target records for each entry to a function, and discards them
on exit. This allows detection of cases where (e.g.) memcpy overflows its source or destination buffers for any
specific call, but does not carry any restriction from one call to the next. Indeed, multiple threads may make multiple
simultaneous calls to (e.g.) memcpy without mutual interference.
11.5. Limitations
This is an experimental tool, which relies rather too heavily on some not-as-robust-as-I-would-like assumptions on the
behaviour of correct programs. There are a number of limitations which you should be aware of.
False negatives (missed errors): it follows from the description above (How SGCheck Works) that the first access
by a memory referencing instruction to a stack or global array creates an association between that instruction and
the array, which is checked on subsequent accesses by that instruction, until the containing function exits. Hence,
the first access by an instruction to an array (in any given function instantiation) is not checked for overrun, since
SGCheck uses that as the "example" of how subsequent accesses should behave.
False positives (false errors): similarly, and more serious, it is clearly possible to write legitimate pieces of code
which break the basic assumption upon which the checking algorithm depends. For example:
In this case the store sometimes accesses a[] and sometimes b[], but in no cases is the addressed array overrun.
Nevertheless the change in target will cause an error to be reported.
It is hard to see how to get around this problem. The only mitigating factor is that such constructions appear very
rare, at least judging from the results using the tool so far. Such a construction appears only once in the Valgrind
sources (running Valgrind on Valgrind) and perhaps two or three times for a start and exit of Firefox. The best that
can be done is to suppress the errors.
Performance: SGCheck has to read all of the DWARF3 type and variable information on the executable and its
shared objects. This is computationally expensive and makes startup quite slow. You can expect debuginfo
reading time to be in the region of a minute for an OpenOffice sized application, on a 2.4 GHz Core 2 machine.
Reading this information also requires a lot of memory. To make it viable, SGCheck goes to considerable trouble
to compress the in-memory representation of the DWARF3 data, which is why the process of reading it appears
slow.
Performance: SGCheck runs slower than Memcheck. This is partly due to a lack of tuning, but partly due to
algorithmic difficulties. The stack and global checks can sometimes require a number of range checks per memory
access, and these are difficult to short-circuit, despite considerable efforts having been made. A redesign and
reimplementation could potentially make it much faster.
Coverage: Stack and global checking is fragile. If a shared object does not have debug information attached, then
SGCheck will not be able to determine the bounds of any stack or global arrays defined within that shared object,
and so will not be able to check accesses to them. This is true even when those arrays are accessed from some
other shared object which was compiled with debug info.
At the moment SGCheck accepts objects lacking debuginfo without comment. This is dangerous as it causes
SGCheck to silently skip stack and global checking for such objects. It would be better to print a warning in such
circumstances.
Coverage: SGCheck does not check whether the areas read or written by system calls do overrun stack or global
arrays. This would be easy to add.
Platforms: the stack/global checks won't work properly on PowerPC, ARM or S390X platforms, only on X86 and
AMD64 targets. That's because the stack and global checking requires tracking function calls and exits reliably,
and there's no obvious way to do it on ABIs that use a link register for function returns.
Robustness: related to the previous point. Function call/exit tracking for X86 and AMD64 is believed to work
properly even in the presence of longjmps within the same stack (although this has not been tested). However, code
which switches stacks is likely to cause breakage/chaos.
12.1. Overview
A basic block is a linear section of code with one entry point and one exit point. A basic block vector (BBV) is a list
of all basic blocks entered during program execution, and a count of how many times each basic block was run.
BBV is a tool that generates basic block vectors for use with the SimPoint analysis tool. The SimPoint methodology
enables speeding up architectural simulations by only running a small portion of a program and then extrapolating
total behavior from this small portion. Most programs exhibit phase-based behavior, which means that at various
times during execution a program will encounter intervals of time where the code behaves similarly to a previous
interval. If you can detect these intervals and group them together, an approximation of the total program behavior
can be obtained by only simulating a bare minimum number of intervals, and then scaling the results.
In computer architecture research, running a benchmark on a cycle-accurate simulator can cause slowdowns on the
order of 1000 times, making it take days, weeks, or even longer to run full benchmarks. By utilizing SimPoint this
can be reduced significantly, usually by 90-95%, while still retaining reasonable accuracy.
A more complete introduction to how SimPoint works can be found in the paper "Automatically Characterizing Large
Scale Program Behavior" by T. Sherwood, E. Perelman, G. Hamerly, and B. Calder.
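The example command is elided in this copy; a basic BBV run presumably has this form:

```
valgrind --tool=exp-bbv /bin/ls
```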
In this case we are running on /bin/ls, but this can be any program. By default a file called bb.out.PID will
be created, where PID is replaced by the process ID of the running process. This file contains the basic block vector.
For long-running programs this file can be quite large, so it might be wise to compress it with gzip or some other
compression program.
To create actual SimPoint results, you will need the SimPoint utility, available from the SimPoint webpage. Assuming
you have downloaded SimPoint 3.2 and compiled it, create SimPoint results with a command like the following:
./SimPoint.3.2/bin/simpoint -inputVectorsGzipped \
-loadFVFile bb.out.1234.gz \
-k 5 -saveSimpoints results.simpts \
-saveSimpointWeights results.weights
where bb.out.1234.gz is your compressed basic block vector file generated by BBV.
The SimPoint utility does random linear projection using 15-dimensions, then does k-mean clustering to calculate
which intervals are of interest. In this example we specify 5 intervals with the -k 5 option.
The outputs from the SimPoint run are the results.simpts and results.weights files. The first holds the
5 most relevant intervals of the program. The second holds the weight by which to scale each interval when extrapolating
full-program behavior. The intervals and the weights can be used in conjunction with a simulator that supports
fast-forwarding: you fast-forward to the interval of interest, collect statistics for the desired interval length, then use
the statistics gathered in conjunction with the weights to calculate your results.
Each new interval starts with a T. This is followed on the same line by a series of basic block and frequency pairs, one
for each basic block that was entered during the interval. The format for each block/frequency pair is a colon, followed
by a number that uniquely identifies the basic block, another colon, and then the frequency (which is the number of
times the block was entered, multiplied by the number of instructions in the block). The pairs are separated from each
other by a space.
The frequency count is multiplied by the number of instructions that are in the basic block, in order to weigh the count
so that instructions in small basic blocks aren't counted as more important than instructions in large basic blocks.
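As a made-up illustration of this format: an interval in which basic block 4 (2 instructions long) was entered 300 times and basic block 8 (3 instructions long) was entered 10 times would be recorded as:

```
T:4:600 :8:30
```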
The SimPoint program only processes lines that start with a "T". All other lines are ignored. Traditionally comments
are indicated by starting a line with a "#" character. Some other BBV generation tools, such as PinPoints, generate
lines beginning with letters other than "T" to indicate more information about the program being run. We do not
generate these, as the SimPoint utility ignores them.
12.5. Implementation
Valgrind provides all of the information necessary to create BBV files. In the current implementation, all instructions
are instrumented. This is slower (by approximately a factor of two) than a method that instruments at the basic block
level, but there are some complications (especially with rep prefix detection) that make that method more difficult.
Valgrind actually provides instrumentation at a superblock level. A superblock has one entry point but, unlike a basic
block, can have multiple exit points. Once a branch occurs into the middle of a block, it is split into a new basic
block. Because Valgrind cannot produce "true" basic blocks, the generated BBV vectors will be different from those
generated by other tools. In practice this does not seem to affect the accuracy of the SimPoint results. We do internally
force the --vex-guest-chase-thresh=0 option to Valgrind, which forces more basic-block-like behavior.
When a superblock is run for the first time, it is instrumented with our BBV routine. A block info (bbInfo) structure is
allocated which holds the various information and statistics for the block. A unique block ID is assigned to the block,
and then the structure is placed into an ordered set. Then each native instruction in the block is instrumented to call an
instruction counting routine with a pointer to the block info structure as an argument.
At run-time, our instruction counting routines are called once per native instruction. The relevant block info structure
is accessed and the block count and total instruction count is updated. If the total instruction count overflows the
interval size then we walk the ordered set, writing out the statistics for any block that was accessed in the interval, then
resetting the block counters to zero.
On the x86 and amd64 architectures the counting code has extra code to handle rep-prefixed string instructions. This
is because actual hardware counts a rep-prefixed instruction as one instruction, while a naive Valgrind implementation
would count it as many (possibly hundreds, thousands or even millions) of instructions. We handle rep-prefixed
instructions specially, in order to make the results match those obtained with hardware performance counters.
BBV also counts the fldcw instruction. This instruction is used on x86 machines in various ways; it is most commonly
found when converting floating point values into integers. On Pentium 4 systems the retired instruction performance
counter counts this instruction as two instructions (all other known processors only count it as one). This can affect
results when using SimPoint on Pentium 4 systems. We provide the fldcw count so that users can evaluate whether it
will impact their results enough to avoid using Pentium 4 machines for their experiments. It would be possible to add
an option to this tool that mimics the double-counting so that the generated BBV files would be usable for experiments
using hardware performance counters on Pentium 4 systems.
12.7. Validation
BBV has been tested on x86, amd64, and ppc32 platforms. An earlier version of BBV was tested in detail using
hardware performance counters; this work is described in a paper from the HiPEAC'08 conference, "Using Dynamic
Binary Instrumentation to Generate Multi-Platform SimPoints: Methodology and Accuracy" by V.M. Weaver and S.A.
McKee.
12.8. Performance
Using this program slows down execution by roughly a factor of 40 over native execution. This varies depending on
the machine used and the benchmark being run. On the SPEC CPU 2000 benchmarks running on a 3.4GHz Pentium
D processor, the slowdown ranges from 24x (mcf) to 340x (vortex.2).
13.1. Overview
Lackey is a simple Valgrind tool that does various kinds of basic program measurement. It adds quite a lot of simple
instrumentation to the program's code. It is primarily intended to be of use as an example tool, and consequently
emphasises clarity of implementation over performance.
14.1. Overview
Nulgrind is the simplest possible Valgrind tool. It performs no instrumentation or analysis of a program; it just runs it
normally. It is mainly of use to Valgrind's developers for debugging and regression testing.
Nonetheless you can run programs with Nulgrind. They will run roughly 5 times more slowly than normal, for no
useful effect. Note that you need to use the option --tool=none to run Nulgrind (ie. not --tool=nulgrind).
Valgrind FAQ
Release 3.10.0 10 September 2014
Copyright 2000-2014 Valgrind Developers
Email: [email protected]
Table of Contents
Valgrind Frequently Asked Questions
1. Background
1.1. How do you pronounce "Valgrind"?
The "Val" as in the word "value". The "grind" is pronounced with a short i -- ie. "grinned" (rhymes with
"tinned") rather than "grined" (rhymes with "find").
Don't feel bad: almost everyone gets it wrong at first.
1.2. Where does the name "Valgrind" come from?
From Nordic mythology. Originally (before release) the project was named Heimdall, after the watchman of
the Nordic gods. He could "see a hundred miles by day or night, hear the grass growing, see the wool growing
on a sheep's back", etc. This would have been a great name, but it was already taken by a security package
"Heimdal".
Keeping with the Nordic theme, Valgrind was chosen. Valgrind is the name of the main entrance to Valhalla
(the Hall of the Chosen Slain in Asgard). Over this entrance there resides a wolf and over it there is the head
of a boar and on it perches a huge eagle, whose eyes can see to the far regions of the nine worlds. Only those
judged worthy by the guardians are allowed to pass through Valgrind. All others are refused entrance.
It's not short for "value grinder", although that's not a bad guess.
It's probably a bug in make. Some, but not all, instances of version 3.79.1 have this bug; see this. Try
upgrading to a more recent version of make. Alternatively, we have heard that unsetting the CFLAGS
environment variable avoids the problem.
2.2. When building Valgrind, make fails with this:
Valgrind can handle dynamically generated code, so long as none of the generated code is later overwritten
by other generated code. If this happens, though, things will go wrong as Valgrind will continue running
its translations of the old code (this is true on x86 and amd64, on PowerPC there are explicit cache flush
instructions which Valgrind detects and honours). You should try running with --smc-check=all in this
case. Valgrind will run much more slowly, but should detect the use of the out-of-date code.
Alternatively, if you have the source code to the JIT compiler you can insert calls to the
VALGRIND_DISCARD_TRANSLATIONS client request to mark out-of-date code, saving you from
using --smc-check=all.
Apart from this, in theory Valgrind can run any Java program just fine, even those that use JNI and are partially
implemented in other languages like C and C++. In practice, Java implementations tend to do nasty things
that most programs do not, and Valgrind sometimes falls over these corner cases.
If your Java programs do not run under Valgrind, even with --smc-check=all, please file a bug report and
hopefully we'll be able to fix the problem.
4.3. The stack traces given by Memcheck (or another tool) seem to have the wrong function name in them. What's
happening?
Occasionally Valgrind stack traces get the wrong function names. This is caused by glibc using aliases
to effectively give one function two names. Most of the time Valgrind chooses a suitable name, but very
occasionally it gets it wrong. Examples we know of are printing bcmp instead of memcmp, index instead of
strchr, and rindex instead of strrchr.
4.4. My program crashes normally, but doesn't under Valgrind, or vice versa. What's happening?
When a program runs under Valgrind, its environment is slightly different to when it runs natively. For
example, the memory layout is different, and the way that threads are scheduled is different.
Most of the time this doesn't make any difference, but it can, particularly if your program is buggy. For
example, if your program crashes because it erroneously accesses memory that is unaddressable, it's possible
that this memory will not be unaddressable when run under Valgrind. Alternatively, if your program has data
races, these may not manifest under Valgrind.
There isn't anything you can do to change this; it's just the nature of the way Valgrind works that it cannot
exactly replicate a native execution environment. In the case where your program crashes due to a memory
error when run natively but not when run under Valgrind, in most cases Memcheck should identify the bad
memory operation.
4.5. Memcheck doesn't report any errors and I know my program has errors.
There are two possible causes of this.
First, by default, Valgrind only traces the top-level process. So if your program spawns children, they won't
be traced by Valgrind by default. Also, if your program is started by a shell script, Perl script, or something
similar, Valgrind will trace the shell, or the Perl interpreter, or equivalent.
To trace child processes, use the --trace-children=yes option.
If you are tracing large trees of processes, it can be less disruptive to have the output sent over the network.
Give Valgrind the option --log-socket=127.0.0.1:12345 (if you want logging output sent to port
12345 on localhost). You can use the valgrind-listener program to listen on that port:
valgrind-listener 12345
Obviously you have to start the listener process first. See the manual for more details.
Second, if your program is statically linked, most Valgrind tools will only work well if they are able to
replace certain functions, such as malloc, with their own versions. By default, statically linked malloc
functions are not replaced. A key indicator of this is if Memcheck says:

All heap blocks were freed -- no leaks are possible

4.6. Why doesn't Memcheck find the array overruns in this program?

int global[5];

int main(void)
{
  int stack[5];

  global[5] = 0;
  stack [5] = 0;

  return 0;
}
Unfortunately, Memcheck doesn't do bounds checking on global or stack arrays. We'd like to, but it's just not
possible to do in a reasonable way that fits with how Memcheck works. Sorry.
However, the experimental tool SGCheck can detect errors like this. Run Valgrind with the
--tool=exp-sgcheck option to try it, but be aware that it is not as robust as Memcheck.
5. Miscellaneous
5.1. I tried writing a suppression but it didn't work. Can you write my suppression for me?
Yes! Use the --gen-suppressions=yes feature to spit out suppressions automatically for you. You
can then edit them if you like, eg. combining similar automatically generated suppressions using wildcards
like *.
If you really want to write suppressions by hand, read the manual carefully. Note that C++
function names must be mangled (that is, not demangled).
5.2. With Memcheck's memory leak detector, what's the difference between "definitely lost", "indirectly lost",
"possibly lost", "still reachable", and "suppressed"?
The details are in the Memcheck section of the user manual.
In short:
"definitely lost" means your program is leaking memory -- fix those leaks!
"indirectly lost" means your program is leaking memory in a pointer-based structure. (E.g. if the root node
of a binary tree is "definitely lost", all the children will be "indirectly lost".) If you fix the "definitely lost"
leaks, the "indirectly lost" leaks should go away.
"possibly lost" means your program is leaking memory, unless you're doing unusual things with pointers
that could cause them to point into the middle of an allocated block; see the user manual for some possible
causes. Use --show-possibly-lost=no if you don't want to see these reports.
"still reachable" means your program is probably ok -- it didn't free some memory it could have. This is
quite common and often reasonable. Don't use --show-reachable=yes if you don't want to see these
reports.
"suppressed" means that a leak error has been suppressed. There are some suppressions in the default
suppression files. You can ignore suppressed errors.
5.3. Memcheck's uninitialised value errors are hard to track down, because they are often reported some time after
they are caused. Could Memcheck record a trail of operations to better link the cause to the effect? Or maybe
just eagerly report any copies of uninitialised memory values?
Prior to version 3.4.0, the answer was "we don't know how to do it without huge performance penalties". As
of 3.4.0, try using the --track-origins=yes option. It will run slower than usual, but will give you
extra information about the origin of uninitialised values.
Or if you want to do it the old fashioned way, you can use the client request VALGRIND_CHECK_VALUE_IS_DEFINED
to help track these errors down -- work backwards from the point where the uninitialised error occurs, checking
suspect values until you find the cause. This requires editing, compiling and re-running your program multiple
times, which is a pain, but still easier than debugging the problem without Memcheck's help.
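A sketch of that client request in use (this assumes the Memcheck header is installed as valgrind/memcheck.h; outside Valgrind the macro compiles down to a no-op, and suspect_computation is a hypothetical name):

```c
#include <valgrind/memcheck.h>

int suspect_computation(int *data)
{
   int x = data[3];
   /* Under Memcheck this reports an uninitialised-value error here,
      at the point of the check, rather than later at some distant
      conditional jump. */
   VALGRIND_CHECK_VALUE_IS_DEFINED(x);
   return x;
}
```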
As for eager reporting of copies of uninitialised memory values, this has been suggested multiple times.
Unfortunately, almost all programs legitimately copy uninitialised memory values around (because compilers
pad structs to preserve alignment) and eager checking leads to hundreds of false positives. Therefore
Memcheck does not support eager checking at this time.
5.4. Is it possible to attach Valgrind to a program that is already running?
No. The environment that Valgrind provides for running programs is significantly different to that for normal
programs, e.g. due to different layout of memory. Therefore Valgrind has to have full control from the very
start.
It is possible to achieve something like this by running your program without any instrumentation (which
involves a slow-down of about 5x, less than that of most tools), and then adding instrumentation once you get
to a point of interest. Support for this must be provided by the tool, however, and Callgrind is the only tool
that currently has such support. See the instructions on the callgrind_control program for details.
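With Callgrind, that workflow is roughly the following (the option and command names are Callgrind's; ./myprog is a placeholder):

```
valgrind --tool=callgrind --instr-atstart=no ./myprog &
# ... once the program reaches the point of interest:
callgrind_control -i on
```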
Table of Contents
1. The Design and Implementation of Valgrind
2. Writing a New Valgrind Tool
2.1. Introduction
2.2. Basics
2.2.1. How tools work
2.2.2. Getting the code
2.2.3. Getting started
2.2.4. Writing the code
2.2.5. Initialisation
2.2.6. Instrumentation
2.2.7. Finalisation
2.2.8. Other Important Information
2.3. Advanced Topics
2.3.1. Debugging Tips
2.3.2. Suppressions
2.3.3. Documentation
2.3.4. Regression Tests
2.3.5. Profiling
2.3.6. Other Makefile Hackery
2.3.7. The Core/tool Interface
2.4. Final Words
3. Callgrind Format Specification
3.1. Overview
3.1.1. Basic Structure
3.1.2. Simple Example
3.1.3. Associations
3.1.4. Extended Example
3.1.5. Name Compression
3.1.6. Subposition Compression
3.1.7. Miscellaneous
3.2. Reference
3.2.1. Grammar
3.2.2. Description of Header Lines
3.2.3. Description of Body Lines
Nicholas Nethercote.
2.1. Introduction
The key idea behind Valgrind's architecture is the division between its core and tools.
The core provides the common low-level infrastructure to support program instrumentation, including the JIT
compiler, low-level memory manager, signal handling and a thread scheduler. It also provides certain services
that are useful to some but not all tools, such as support for error recording, and support for replacing heap allocation
functions such as malloc.
But the core leaves certain operations undefined, which must be filled in by tools. Most notably, tools define how
program code should be instrumented. They can also call certain functions to indicate to the core that they would like
to use certain services, or be notified when certain interesting events occur. But the core takes care of all the hard
work.
2.2. Basics
2.2.1. How tools work
Tools must define various functions for instrumenting programs that are called by Valgrind's core. They are then
linked against Valgrind's core to define a complete Valgrind tool which will be used when the --tool option is used
to select it.
5. Copy none/nl_main.c into foobar/, renaming it as fb_main.c. Edit it by changing the details lines
in nl_pre_clo_init to something appropriate for the tool. These fields are used in the startup message, except
for bug_reports_to which is used if a tool assertion fails. Also, replace the string "nl_" throughout with
"fb_" again.
6. Edit Makefile.am, adding the new directory foobar to the TOOLS or EXP_TOOLS variables.
7. Edit configure.in, adding foobar/Makefile and foobar/tests/Makefile to the AC_OUTPUT list.
8. Run:
./autogen.sh
./configure --prefix=`pwd`/inst
make
make install
It should automake, configure and compile without errors, putting copies of the tool in foobar/ and
inst/lib/valgrind/.
9. You can test it with a command like:
The names can be different from the above, but these are the usual names. The first one is registered using the macro
VG_DETERMINE_INTERFACE_VERSION. The last three are registered using the VG_(basic_tool_funcs)
function.
In addition, if a tool wants to use some of the optional services provided by the core, it may have to define other
functions and tell the core about them.
2.2.5. Initialisation
Most of the initialisation should be done in pre_clo_init. Only use post_clo_init if a tool provides
command line options and must do some initialisation after option processing takes place ("clo" stands for
"command line options").
First of all, various "details" need to be set for a tool, using the functions VG_(details_*). Some are
compulsory, some aren't. Some are used when constructing the startup message; detail_bug_reports_to is
used if VG_(tool_panic) is ever called, or if a tool assertion fails. Others have other uses.
Second, various "needs" can be set for a tool, using the functions VG_(needs_*). They are mostly booleans, and
can be left untouched (they default to False). They determine whether a tool can do various things such as: record,
report and suppress errors; process command line options; wrap system calls; record extra information about heap
blocks; etc.
For example, if a tool wants the core's help in recording and reporting errors, it must call
VG_(needs_tool_errors) and provide definitions of eight functions for comparing errors, printing out
errors, reading suppressions from a suppressions file, etc. While writing these functions requires some work, it's
much less than doing error handling from scratch because the core is doing most of the work.
Third, the tool can indicate which events in the core it wants to be notified about, using the functions VG_(track_*).
These include things such as heap blocks being allocated, the stack pointer changing, a mutex being locked, etc. If a
tool wants to know about an event, it should provide a pointer to a function, which will be called when that event happens.
For example, if the tool wants to be notified when a new heap block is allocated, it should call
VG_(track_new_mem_heap) with an appropriate function pointer, and the registered function will be called each
time this happens.
More information about "details", "needs" and "trackable events" can be found in include/pub_tool_tooliface.h.
2.2.6. Instrumentation
instrument is the interesting one. It allows you to instrument VEX IR, which is Valgrind's RISC-like intermediate
language. VEX IR is described in the comments of the header file VEX/pub/libvex_ir.h.
The easiest way to instrument VEX IR is to insert calls to C functions when interesting things happen. See the tool
"Lackey" (lackey/lk_main.c) for a simple example of this, or Cachegrind (cachegrind/cg_main.c) for a
more complex example.
2.2.7. Finalisation
This is where you can present the final results, such as a summary of the information collected. Any log files should
be written out at this point.
The files include/pub_tool_*.h contain all the types, macros, functions, etc. that a tool should (hopefully)
need, and are the only .h files a tool should need to #include. They have a reasonable amount of documentation
in them that should hopefully be enough to get you going.
Note that you can't use anything from the C library (there are deep reasons for this, trust us). Valgrind provides an
implementation of a reasonable subset of the C library, details of which are in pub_tool_libc*.h.
When writing a tool, in theory you shouldn't need to look at any of the code in Valgrind's core, but in practice it might
be useful sometimes to help understand something.
The include/pub_tool_basics.h and VEX/pub/libvex_basictypes.h files have some basic types
that are widely used.
Ultimately, the tools distributed (Memcheck, Cachegrind, Lackey, etc.) are probably the best documentation of all, for
the moment.
The VG_ macro is used heavily. This just prepends a longer string in front of names to avoid potential namespace
clashes. It is defined in include/pub_tool_basics.h.
There are some assorted notes about various aspects of the implementation in docs/internals/. Much of it isn't
that relevant to tool-writers, however.
2.3.2. Suppressions
If your tool reports errors and you want to suppress some common ones, you can add suppressions to the suppression
files. The relevant files are *.supp; the final suppression file is aggregated from these files by combining the relevant
.supp files depending on the versions of Linux, X and glibc on a system.
2.3.3. Documentation
If you are feeling conscientious and want to write some documentation for your tool, please use XML as the rest of
Valgrind does. The file docs/README has more details on getting the XML toolchain to work; this can be difficult,
unfortunately.
To write the documentation, follow these steps (using foobar as the example tool name again):
1. The docs go in foobar/docs/, which you will have created when you started writing the tool.
2. Copy the XML documentation file for the tool Nulgrind from none/docs/nl-manual.xml to
foobar/docs/, and rename it to foobar/docs/fb-manual.xml.
Note: there is a tetex bug involving underscores in filenames, so don't use '_'.
3. Write the documentation.
There are some helpful bits and pieces on using XML markup in
docs/xml/xml_help.txt.
4. Include it in the User Manual by adding the relevant entry to docs/xml/manual.xml. Copy and edit an
existing entry.
5. Include it in the man page by adding the relevant entry to docs/xml/valgrind-manpage.xml. Copy and
edit an existing entry.
6. Validate foobar/docs/fb-manual.xml using the following command from within docs/:
make valid
7. You can generate the HTML docs while you are writing the XML, from within docs/:
make html-docs
8. When you have finished, try to generate PDF and PostScript output to check all is well, from within docs/:
make print-docs
2.3.5. Profiling
Lots of profiling tools have trouble running Valgrind. For example, trying to use gprof is hopeless.
Probably the best way to profile a tool is with OProfile on Linux.
You can also use Cachegrind on it. Read README_DEVELOPERS for details on running Valgrind under Valgrind;
it's a bit fragile but can usually be made to work.
3.1. Overview
The profile data format is ASCII based. It is written by Callgrind, and it is upwards compatible with the format used by
Cachegrind (i.e. Cachegrind uses a subset). It can be read by callgrind_annotate and KCachegrind.
This chapter gives an overview of format features and examples. For detailed syntax, look at the format reference.
The above example gives profile information for event types "Cycles", "Instructions", and "Flops". Thus, cost lines
give the number of CPU cycles passed by, number of executed instructions, and number of floating point operations
executed while running code corresponding to some source position. As there is no line specifying the value of
"positions", it defaults to "line", which means that the first number of a cost line is always a line number.
Thus, the first cost line specifies that in line 15 of source file file.f there is code belonging to function main. While
running, 90 CPU cycles passed by, and 2 of the 14 instructions executed were floating point operations. Similarly, the
next line specifies that there were 12 instructions executed in the context of function main which can be related to
line 16 in file file.f, taking 20 CPU cycles. If a cost line specifies fewer event counts than given in the "events" line,
the rest are assumed to be zero; i.e., no floating point instruction was executed relating to line 16.
Note that regular cost lines always give self (also called exclusive) cost of code at a given position. If you specify
multiple cost lines for the same position, these will be summed up. On the other hand, in the example above there is
no specification of how many times function main actually was called: profile data only contains sums.
3.1.3. Associations
The most important extension to the original format of Cachegrind is the ability to specify call relationships among
functions. More generally, you specify associations among positions. For this, the second part of the file can also
contain association specifications. These look similar to position specifications, but consist of two lines. For calls, the
format looks like
calls=(Call Count) (Destination position)
(Source position) (Inclusive cost of call)
The destination only specifies subpositions like line number. Therefore, to be able to specify a call to another
function in another source file, you have to precede the above lines with a "cfn=" specification for the name of the
called function, and optionally a "cfi=" specification if the function is in another source file ("cfl=" is an alternative
specification for "cfi=" because of historical reasons, and both should be supported by format readers). The second line
looks like a regular cost line with the difference that inclusive cost spent inside of the function call has to be specified.
Other associations are for example (conditional) jumps. See the reference below for details.
events: Instructions
fl=file1.c
fn=main
16 20
cfn=func1
calls=1 50
16 400
cfi=file2.c
cfn=func2
calls=3 20
16 400
fn=func1
51 100
cfi=file2.c
cfn=func2
calls=2 20
51 300
fl=file2.c
fn=func2
20 700
One can see that in main only code from line 16 is executed, and it is also where the other functions are called. The
inclusive cost of main is 820, which is the sum of the self cost 20 and the costs spent in the calls: 400 for the single
call to func1 and 400 as the sum for the three calls to func2.
Function func1 is located in file1.c, the same as main. Therefore, a "cfi=" specification for the call to func1
is not needed. The function func1 only consists of code at line 51 of file1.c, where func2 is called.
events: Instructions
fl=(1) file1.c
fn=(1) main
16 20
cfn=(2) func1
calls=1 50
16 400
cfi=(2) file2.c
cfn=(3) func2
calls=3 20
16 400
fn=(2)
51 100
cfi=(2)
cfn=(3)
calls=2 20
51 300
fl=(2)
fn=(3)
20 700
As position specifications carry no information themselves, but only change the meaning of subsequent cost lines or
associations, they can appear anywhere in the file without any negative consequence. In particular, you can define
name compression mappings directly after the header, and before any cost lines. Thus, the above example can also be
written as
events: Instructions
# define file ID mapping
fl=(1) file1.c
fl=(2) file2.c
# define function ID mapping
fn=(1) main
fn=(2) func1
fn=(3) func2
fl=(1)
fn=(1)
16 20
...
is not only allowed for instruction addresses, but also for line numbers; both addresses and line numbers are called
"subpositions".
A relative subposition always is based on the corresponding subposition of the last cost line, and starts with a "+" to
specify a positive difference, a "-" to specify a negative difference, or consists of "*" to specify the same subposition.
Because absolute subpositions always are positive (i.e. never prefixed by "-"), any relative specification is
unambiguous. Additionally, absolute and relative subposition specifications can be mixed freely. Assume the following
example (subpositions can always be specified as hexadecimal numbers, beginning with "0x"):
positions: instr line
events: ticks
fn=func
0x80001234 90 1
0x80001237 90 5
0x80001238 91 6
Remark: For assembler annotation to work, instruction addresses have to be corrected to correspond to addresses
found in the original binary. I.e. for relocatable shared objects, often a load offset has to be subtracted.
3.1.7. Miscellaneous
3.1.7.1. Cost Summary Information
For the visualization to be able to show cost percentage, a sum of the cost of the full run has to be known. Usually, it
is assumed that this is the sum of all cost lines in a file. But sometimes, this is not correct. Thus, you can specify a
"summary:" line in the header giving the full cost for the profile run. An import filter may use this to show a progress
bar while loading a large data file.
In this example, "Dr" itself has no long name associated. The order of "event:" lines and the "events:" line is of no
importance. Additionally, inherited event types can be introduced for which no raw data is available, but which are
calculated from given types. Continuing the last example, you could add
event: Sum = Ir + Dr
to specify an additional event type "Sum", which is calculated by adding the costs for "Ir" and "Dr".
3.2. Reference
3.2.1. Grammar
ProfileDataFile := FormatVersion? Creator? PartData*
InheritedExpr := Name
| Number Space* ("*" Space*)? Name
| InheritedExpr Space* "+" Space* InheritedExpr
AssociationSpecification := CallSpecification
| JumpSpecification
JumpSpecification := ...
version:
number [Callgrind]
This is used to distinguish future profile data formats. A major version of 0 or 1 is supposed to be upwards
compatible with Cachegrind's format. It is optional; if not present, version 1 is assumed. Otherwise, this has to
be the first header line.
pid:
process id [Callgrind]
Optional. This specifies the process ID of the supervised application for which this profile was generated.
cmd:
Optional. This specifies the full command line of the supervised application for which this profile was generated.
part:
number [Callgrind]
Optional. This specifies a sequentially incremented number for each dump generated, starting at 1.
desc:
type:
value [Cachegrind]
This specifies various information for this dump. For some types, the semantic is defined, but any description type
is allowed. Unknown types should be ignored.
There are the types "I1 cache", "D1 cache", "LL cache", which specify parameters used for the cache simulator.
These are the only types originally used by Cachegrind.
Additionally, Callgrind uses the following types:
"Timerange" gives a rough range of the basic block counter, for which the cost of this dump was collected. Type
"Trigger" states the reason of why this trace was generated. E.g. program termination or forced interactive dump.
positions:
For cost lines, this defines the semantic of the first numbers. Any combination of "instr", "bb" and "line" is allowed,
but has to be in this order which corresponds to position numbers at the start of the cost lines later in the file.
If "instr" is specified, the position is the address of an instruction whose execution raised the events given later on
the line. This address is relative to the offset of the binary/shared library file to not have to specify relocation info.
For "line", the position is the line number of a source file, which is responsible for the events raised. Note that the
mapping between "instr" and "line" positions is given by the debugging line information produced by the compiler.
This field is optional. If not specified, "line" is assumed.
events:
A list of short names of the event types logged in this file. The order is the same as in cost lines. The first event
type is the second or third number in a cost line, depending on the value of "positions". Callgrind does not add
additional cost types. Specify exactly once.
Cost types from original Cachegrind are:
Ir: Instruction read access
I1mr: Instruction Level 1 read cache miss
ILmr: Instruction last-level read cache miss
...
summary:
costs [Callgrind]
Optional. This header line specifies a summary cost, which should be equal to or larger than the total over all self
costs. It may be larger, as the cost lines may not represent all the cost of the program run.
totals:
costs [Cachegrind]
Optional. Should appear at the end of the file (although looking like a header line). Must give the total of all cost
lines, to allow for a consistency check.
fi= [Cachegrind]
fe= [Cachegrind]
The source file including the code which is responsible for the cost of next cost lines. "fi="/"fe=" is used when the
source file changes inside of a function, i.e. for inlined code.
fn= [Cachegrind]
The name of the function where the cost of next cost lines happens.
cob= [Callgrind]
The ELF object of the target of the next call cost lines.
cfi= [Callgrind]
The source file including the code of the target of the next call cost lines.
cfl= [Callgrind]
Alternative spelling for cfi= specification (because of historical reasons).
cfn= [Callgrind]
The name of the target function of the next call cost lines.
calls= [Callgrind]
The number of non-recursive calls which are responsible for the cost specified by the next call cost line.
After "calls=" there MUST be a cost line. This is the cost spent in the called function. The first number is the source
line from where the call happened.
jump=count target position [Callgrind]
Unconditional jump, executed count times, to the given target position.
jcnd=exe.count jumpcount target position [Callgrind]
Conditional jump, executed exe.count times with jumpcount jumps to the given target position.
Table of Contents
1. AUTHORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. NEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3. OLDER NEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4. README . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5. README_MISSING_SYSCALL_OR_IOCTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6. README_DEVELOPERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7. README_PACKAGERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8. README.S390 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9. README.android . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
10. README.android_emulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
11. README.mips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1. AUTHORS
Julian Seward was the original founder, designer and author of
Valgrind, created the dynamic translation frameworks, wrote Memcheck,
the 3.X versions of Helgrind, SGCheck, DHAT, and did lots of other
things.
Nicholas Nethercote did the core/tool generalisation, wrote
Cachegrind and Massif, and tons of other stuff.
Tom Hughes did a vast number of bug fixes, helped out with support for
more recent Linux/glibc versions, set up the present build system, and has
helped out with test and build machines.
Jeremy Fitzhardinge wrote Helgrind (in the 2.X line) and totally
overhauled low-level syscall/signal and address space layout stuff,
among many other things.
Josef Weidendorfer wrote and maintains Callgrind and the associated
KCachegrind GUI.
Paul Mackerras did a lot of the initial per-architecture factoring
that forms the basis of the 3.0 line and was also seen in 2.4.0.
He also did UCode-based dynamic translation support for PowerPC, and
created a set of ppc-linux derivatives of the 2.X release line.
Greg Parker wrote the Mac OS X port.
Dirk Mueller contributed the malloc/free mismatch checking
and other bits and pieces, and acts as our KDE liaison.
Robert Walsh added file descriptor leakage checking, new library
interception machinery, support for client allocation pools, and minor
other tweakage.
Bart Van Assche wrote and maintains DRD.
Cerion Armour-Brown worked on PowerPC instruction set support in the
Vex dynamic-translation framework. Maynard Johnson improved the
Power6 support.
Kirill Batuzov and Dmitry Zhurikhin did the NEON instruction set
support for ARM. Donna Robinson did the v6 media instruction support.
Donna Robinson created and maintains the very excellent
http://www.valgrind.org.
Vince Weaver wrote and maintains BBV.
Frederic Gobry helped with autoconf and automake.
Daniel Berlin modified readelf's dwarf2 source line reader, written by Nick
Clifton, for use in Valgrind.
Michael Matz and Simon Hausmann modified the GNU binutils demangler(s) for
use in Valgrind.
David Woodhouse has helped out with test and build machines over the course
of many releases.
Florian Krohm and Christian Borntraeger wrote and maintain the
S390X/Linux port. Florian improved and ruggedised the regression test
system during 2011.
Philippe Waroquiers wrote and maintains the embedded GDB server. He
also made a bunch of performance and memory-reduction fixes across
diverse parts of the system.
Carl Love and Maynard Johnson contributed IBM Power6 and Power7
support, and generally deal with ppc{32,64}-linux issues.
Petar Jovanovic and Dejan Jevtic wrote and maintain the mips32-linux
port.
Dragos Tatulea modified the arm-android port so it also works on
x86-android.
Jakub Jelinek helped out extensively with the AVX and AVX2 support.
Mark Wielaard fixed a bunch of bugs and acts as our Fedora/RHEL
liaison.
Maran Pakkirisamy implemented support for decimal floating point on
s390.
Many, many people sent bug reports, patches, and helpful feedback.
Development of Valgrind was supported in part by the Tri-Lab Partners
(Lawrence Livermore National Laboratory, Los Alamos National
Laboratory, and Sandia National Laboratories) of the U.S. Department
of Energy's Advanced Simulation & Computing (ASC) Program.
2. NEWS
Release 3.10.0 (10 September 2014)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.10.0 is a feature release with many improvements and the usual
collection of bug fixes.
This release supports X86/Linux, AMD64/Linux, ARM32/Linux, ARM64/Linux,
PPC32/Linux, PPC64BE/Linux, PPC64LE/Linux, S390X/Linux, MIPS32/Linux,
MIPS64/Linux, ARM/Android, MIPS32/Android, X86/Android, X86/MacOSX 10.9
and AMD64/MacOSX 10.9. Support for MacOSX 10.8 and 10.9 is
significantly improved relative to the 3.9.0 release.
* ================== PLATFORM CHANGES =================
* Support for the 64-bit ARM Architecture (AArch64 ARMv8). This port
is mostly complete, and is usable, but some SIMD instructions are as
yet unsupported.
* Support for little-endian variant of the 64-bit POWER architecture.
* Support for Android on MIPS32.
* Support for 64-bit FPU on MIPS32 platforms.
* Both 32- and 64-bit executables are supported on MacOSX 10.8 and 10.9.
* Configuration for and running on Android targets has changed.
See README.android in the source tree for details.
* ================== DEPRECATED FEATURES =================
* --db-attach is now deprecated and will be removed in the next
valgrind feature release. The built-in GDB server capabilities are
superior and should be used instead. Learn more here:
http://valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver
* ==================== TOOL CHANGES ====================
* Memcheck:
- Client code can now selectively disable and re-enable reporting of
invalid address errors in specific ranges using the new client
requests VALGRIND_DISABLE_ADDR_ERROR_REPORTING_IN_RANGE and
VALGRIND_ENABLE_ADDR_ERROR_REPORTING_IN_RANGE.
- Leak checker: there is a new leak check heuristic called
"length64". This is used to detect interior pointers pointing 8
bytes inside a block, on the assumption that the first 8 bytes
holds the value "block size - 8". This is used by
as a usage error.
* The semantics of stack start/end boundaries in the valgrind.h
VALGRIND_STACK_REGISTER client request has been clarified and
documented. The convention is that start and end are respectively
the lowest and highest addressable bytes of the stack.
* ==================== FIXED BUGS ====================
The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.
To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.
175819
232510
249435
278972
291310
303536
308729
315199
315952
323178
323179
324050
325110
325124
325477
325538
325628
325714
325751
325816
325856
326026
326436
326444
326462
326469
326623
326724
326816
326921
326983
327212
327223
327238
327284
327639
327837
327916
327943
328100
328205
328454
328455
328711
328878
329612
329694
329956
330228
330257
330319
330459
330469
330594
330622
330939
330941
331057
331254
331255
331257
331305
331337
331380
331476
331829
331830
331839
331847
332037
332055
332263
332265
332276
332658
332765
333072
333145
333228
333230
333248
333428
333501
333666
333788
333817
334049
334384
334585
334705
334727
334788
334834
334836
334936
335034
335155
335262
335263
335441
335496
335554
335564
335735
335736
335848
335902
335903
336055
336062
336139
336189
336435
336619
336772
336957
337094
337285
337528
337740
337762
337766
337871
338023
338024
338106
338115
338160
338205
338300
338445
338499
338615
== 336577
== 292281
Recognize MPX instructions and bnd prefix.
Valgrind does not support the CDROM_DISC_STATUS ioctl (has patch)
Valgrind reports the memory areas written to by the SG_IO
ioctl as untouched
lzcnt fails silently (x86_32)
Valgrind does not have support Little Endian support for
IBM POWER PPC 64
recvmmsg unhandled (+patch) (arm)
sendmsg and recvmsg should guard against bogus msghdr fields.
Build fails with -Werror=format-security
clarify doc about --log-file initial program directory
PPC64 Little Endian support, patch 2
PPC64 Little Endian support, patch 3 testcase fixes
patch to fix false positives on alsa SNDRV_CTL_* ioctls
Unhandled ioctl: HCIGETDEVLIST
vgdb, fix error print statement.
arm64: movi 8bit version is not supported
arm64: dmb instruction is not implemented
unhandled ioctl 0x8905 (SIOCATMARK) when running wine under valgrind
arm64: sbc/abc instructions are not implemented
arm64: unhandled instruction: abs
arm64: unhandled instruction: fcvtpu Xn, Sn
arm64: unhandled instruction: cnt
arm64: unhandled instruction: uaddlv
arm64: unhandled instruction: {s,u}cvtf
arm64: unhandled instruction: sli
arm64: unhandled instruction: umull (vector)
arm64: unhandled instruction: mov (element)
arm64: unhandled instruction: shrn{,2}
mip64: [...] valgrind hangs and spins on a single core [...]
arm64: unhandled Instruction: mvn
Valgrind hangs in pthread_spin_lock consuming 100% CPU
valgrind --read-var-info=yes doesn't handle DW_TAG_restrict_type
Make moans about unknown ioctls more informative
Add a section about the Solaris/illumos port on the webpage
ifunc wrapper is broken on ppc64
fcntl commands F_OFD_SETLK, F_OFD_SETLKW, and F_OFD_GETLK not supported
leak check heuristic for block prefixed by length as 64bit number
Implement additional Xen hypercalls
guest_arm64_toIR.c:4166 (dis_ARM64_load_store): Assertion 0 failed.
arm64-linux: unhandled syscalls mlock (228) and mlockall (230)
deprecate --db-attach
Add support for all V4L2/media ioctls
inlined functions are not shown if DW_AT_ranges is used
Add support for kcmp syscall
DRD: computed conflict set differs from actual after fork
implement display of thread local storage in gdbsrv
configure.ac and check for -Wno-tautological-compare
coredumps are missing one byte of every segment
amd64 vbit-test fails with unknown opcodes used by arm64 VEX
--sim-hints parsing broken due to wrong order in tokens
suppress glibc 2.20 optimized strcmp implementation for ARMv7
* Improved support for MacOSX 10.8 (64-bit only). Memcheck can now
run large GUI apps tolerably well.
* ==================== TOOL CHANGES ====================
* Memcheck:
- Improvements in handling of vectorised code, leading to
significantly fewer false error reports. You need to use the flag
--partial-loads-ok=yes to get the benefits of these changes.
- Better control over the leak checker. It is now possible to
specify which leak kinds (definite/indirect/possible/reachable)
should be displayed, which should be regarded as errors, and which
should be suppressed by a given leak suppression. This is done
using the options --show-leak-kinds=kind1,kind2,..,
--errors-for-leak-kinds=kind1,kind2,.. and an optional
"match-leak-kinds:" line in suppression entries, respectively.
Note that generated leak suppressions contain this new line and
are therefore more specific than in previous releases. To get the
same behaviour as previous releases, remove the "match-leak-kinds:"
line from generated suppressions before using them.
- Reduced "possible leak" reports from the leak checker by the use
of better heuristics. The available heuristics provide detection
of valid interior pointers to std::string, to new[] allocated
arrays with elements having destructors and to interior pointers
pointing to an inner part of a C++ object using multiple
inheritance. They can be selected individually using the
option --leak-check-heuristics=heur1,heur2,...
- Better control of stacktrace acquisition for heap-allocated
blocks. Using the --keep-stacktraces option, it is possible to
control independently whether a stack trace is acquired for each
allocation and deallocation. This can be used to create better
"use after free" errors or to decrease Valgrind's resource
consumption by recording less information.
- Better reporting of leak suppression usage. The list of used
suppressions (shown when the -v option is given) now shows, for
each leak suppression, how many blocks and bytes it suppressed
during the last leak search.
* Helgrind:
- False errors resulting from the use of statically initialised
mutexes and condition variables (PTHREAD_MUTEX_INITIALISER, etc)
have been removed.
- False errors resulting from the use of pthread_cond_wait calls that
time out have been removed.
* ==================== OTHER CHANGES ====================
284540
289578
296311
304832
305431
305728
305948
306035
306054
306098
306587
306783
307038
307082
307101
307103
307106
307113
307141
307155
307285
307290
307463
307465
307557
307729
307828
307955
308089
308135
308321
308333
308341
308427
308495
308573
308626
308627
308644
308711
308717
308718
308886
308930
309229
309323
309425
309427
309430
309600
309823
309921
309922
310169
310424
310792
310931
311100
311318
311407
311690
311880
311922
311933
312171
312571
312620
312913
312980
313267
313348
313354
313811
314099
314269
314718
315345
315441
315534
315545
315689
315738
315959
316144
316145
316145
316181
316503
316535
316696
316761
317091
317186
317318
317444
317461
317463
317506
318050
318203
318643
318773
318929
318932
319235
319395
319494
319505
319858
319932
320057
320063
320083
320116
320131
320211
320661
320895
320998
321065
321148
321363
321364
321466
321467
321468
321619
321620
321621
321692
321693
321694
321696
321697
321703
321704
321730
321738
321814
321891
321960
321969
322254
322294
322368
322563
322807
322851
323035
323036
323116
323175
323177
323432
323437
323713
323803
323893
323905
323912
324047
324149
NEWS
== 301281
Unhandled instruction: 0xF 0x29 0xE5 (MOVAPS)
amd64->IR: 0xF3 0xF 0xBC 0xC0 (TZCNT)
wcslen causes false(?) uninitialised value warnings
valgrind hangs on OS X when the process calls system()
disInstr(arm): unhandled instruction 0xE1023053
implement MOVBE instruction in x86 mode
Assertion 'lo <= hi' failed in vgModuleLocal_find_rx_mapping
amd64: implement 0F 7F encoding of movq between two registers
ARM: implement QDADD and QDSUB
amd64->IR: 0xF 0xD 0xC (prefetchw)
killed by fatal signal: SIGSEGV
- Added even more facilities that can help finding the cause of a data
race, namely the command-line option --ptrace-addr and the macro
DRD_STOP_TRACING_VAR(x). More information can be found in the manual.
- Fixed a subtle bug that could cause false positive data race reports.
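A sketch of how the new tracing facility might be invoked; the address, binary name, and exact option syntax here are assumptions, so check the DRD manual before relying on them.

```shell
# Hypothetical: trace all loads/stores touching the given address.
CMD="valgrind --tool=drd --ptrace-addr=0x6010a0 ./mt_app"
echo "$CMD"
# In source code, tracing of a variable can later be stopped with the
# macro from <valgrind/drd.h>:  DRD_STOP_TRACING_VAR(x);
```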
* ==================== OTHER CHANGES ====================
* The C++ demangler has been updated so as to work well with C++
compiled by up to at least g++ 4.6.
* Tool developers can make replacement/wrapping more flexible thanks
to the new option --soname-synonyms. This was reported above, but
in fact is very general and applies to all function
replacement/wrapping, not just to malloc-family functions.
* Round-robin scheduling of threads can be selected, using the new
option --fair-sched=yes. Prior to this change, the pipe-based
thread serialisation mechanism (which is still the default) could
give very unfair scheduling. --fair-sched=yes improves
responsiveness of interactive multithreaded applications, and
improves repeatability of results from the thread checkers Helgrind
and DRD.
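As a sketch (./mt_app is a hypothetical multithreaded binary):

```shell
# Round-robin scheduling; the 'try' value (also accepted) falls back to
# the default pipe-based scheme where the fair scheduler is unsupported.
CMD="valgrind --tool=helgrind --fair-sched=yes ./mt_app"
echo "$CMD"
```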
* For tool developers: support to run Valgrind on Valgrind has been
improved. We can now routinely run Valgrind on Helgrind or on Memcheck.
* gdbserver now shows the float shadow registers as integer
rather than float values, as the shadow values are mostly
used as bit patterns.
* Increased limit for the --num-callers command line flag to 500.
* Performance improvements for error matching when there are many
suppression records in use.
* Improved support for DWARF4 debugging information (bug 284184).
* Initial support for DWZ compressed Dwarf debug info.
* Improved control over the IR optimiser's handling of the tradeoff
between performance and precision of exceptions. Specifically,
--vex-iropt-precise-memory-exns has been removed and replaced by
--vex-iropt-register-updates, with extended functionality. This
allows the Valgrind gdbserver to always show up to date register
values to GDB.
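A hedged sketch of the replacement option; the value name follows the --help output of recent releases, and ./myprog is hypothetical.

```shell
# Keep all guest registers up to date at each instruction, so the
# embedded gdbserver can always show current values to GDB (slowest,
# most precise setting):
CMD="valgrind --vgdb=yes --vex-iropt-register-updates=allregs-at-each-insn ./myprog"
echo "$CMD"
```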
* Modest performance gains through the use of translation chaining for
JIT-generated code.
* ==================== FIXED BUGS ====================
The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (https://bugs.kde.org/enter_bug.cgi?product=valgrind) rather
than mailing the developers (or mailing lists) directly -- bugs that
are not entered into bugzilla tend to get forgotten about or ignored.
To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.
197914 203877 219156 247386 270006 270777 270796 271438
273114 273475 274078 276993 278313 281482 282230 283413
283671 283961 284124 284864 285219 285662 285725 286261
286270 286374 286384 286497 286596 286917 287175 287260
287301 287307 287858 288298 288995 289470 289656 289699
289823 289839 289939 290006 290655 290719 290974 291253
291568 291865 292300 292430 292493 292626 292627 292628
292841 292993 292995 293088 293751 293754 293755 293808
294047 294048 294055 294185 294190 294191 294260 294523
294617 294736 294812 295089 295221 295427 295428 295590
295617 295799 296229 296318 296422 296457 296792 296983
297078 297147 297329 297497 297701 297911 297976 297991
297992 297993
(3.8.0:
267552 267630 267769 267819 267925 267968 267997 268513
268619 268620 268621 268715 268792 268930 269078 269079
269144 269209 269354 269641 269736 269778 269863 269864
269884 270082 270115 270309 270320 270326 270794 270851
270856 270925 270959 271042 271043 271259 271337 271385
271501 271504 271579 271615 271730 271776 271779 271799
271820 271917 272067 272615 272661 272893 272955 272967
272986 273318 273431 273465 273536 273640 273729 273778
274089 274378 274447 274776 274784 274926 275148 275151
275168 275212 275278 275284 275308 275339 275517 275710
275815 275852 276784 276987 277045 277199 277471 277610
277653 277663 277689 277694 277780 278057 278078 278349
278454 278502 278892 279027 279062 279071 279212 279378
279698 279795
194402 212419 213685 216837 237920 242137 242423 243232
243483 243935 244677
The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than
mailing the developers (or mailing lists) directly -- bugs that are
not entered into bugzilla tend to get forgotten about or ignored.
To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.
135264 142688 153699 180217 190429 197266 197988 202315
203256 205093 205241 206600 210935
0x80483BF: really (a.c:20)
0x80483BF: really (in /foo/a.out)
0x80483BF: really
0x80483BF: (within /foo/a.out)
0x80483BF: ??? (a.c:20)
0x80483BF: ???
The third and fourth of these forms have been made more consistent
with the others. The six possible forms are now:
0x80483BF: really (a.c:20)
0x80483BF: really (in /foo/a.out)
0x80483BF: really (in ???)
0x80483BF: ??? (in /foo/a.out)
0x80483BF: ??? (a.c:20)
0x80483BF: ???
* Helgrind and Ptrcheck now support XML output, so they can be used
from GUI tools. Also, the XML output mechanism has been
overhauled.
- The XML format has been overhauled and generalised, so it is more
suitable for error reporting tools in general. The Memcheck
specific aspects of it have been removed. The new format, which
is an evolution of the old format, is described in
docs/internals/xml-output-protocol4.txt.
- Memcheck has been updated to use the new format.
- Helgrind and Ptrcheck are now able to emit output in this format.
- The XML output mechanism has been overhauled. XML is now output
to its own file descriptor, which means that:
* Valgrind can output text and XML independently.
* The longstanding problem of XML output being corrupted by
unexpected un-tagged text messages is solved.
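A sketch of the separated streams; the descriptor number and program name are illustrative only.

```shell
# XML goes to descriptor 3 (redirected to a file by the shell), while
# ordinary text output stays on stderr:
CMD="valgrind --xml=yes --xml-fd=3 ./myprog 3>memcheck.xml"
echo "$CMD"
```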
* A new experimental tool, BBV, has been added. BBV generates basic
block vectors for use with the SimPoint analysis tool, which allows
a program's overall behaviour to be approximated by running only a
fraction of it. This is useful for computer architecture
researchers. You can run BBV by specifying --tool=exp-bbv (the
"exp-" prefix is short for "experimental"). BBV was written by
Vince Weaver.
* KNOWN LIMITATIONS:
- Memcheck is unusable with the Intel compiler suite version 11.1,
when it generates code for SSE2-and-above capable targets. This
is because of icc's use of highly optimised inlined strlen
implementations. It causes Memcheck to report huge numbers of
false errors even in simple programs. Helgrind and DRD may also
have problems.
Versions 11.0 and earlier may be OK, but this has not been
properly tested.
The following bugs have been fixed or resolved. Note that "n-i-bz"
stands for "not in bugzilla" -- that is, a bug that was reported to us
but never got a bugzilla entry. We encourage you to file bugs in
bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than
mailing the developers (or mailing lists) directly -- bugs that are
not entered into bugzilla tend to get forgotten about or ignored.
To see details of a given bug, visit
https://bugs.kde.org/show_bug.cgi?id=XXXXXX
where XXXXXX is the bug number as listed below.
84303 91633 97452 100628 108528 110126 110128 110770
111102 115673 117564 119404 133679 135847 136154 136230
137073 137904 139076 142228 145347 148441 148742 149878
190429 190820 191095 191182 191189 191192 191271 191761
191992 192634 192954 194429 194474 194671 195069 195169
195268 195838 195860 196528 197227 197456 197512 197591
197793 197794 197898 197901 197929 197930 197933 197966
198395 198624 198649 199338 199977 200029 200760 200827
200990 201016 201169 201323 201384 201585 201708 201757
204377
This is useful for running apps that need a lot of stack space.
* The limitation that you can't use --trace-children=yes together
with --db-attach=yes has been removed.
* The following bugs have been fixed. Note that "n-i-bz" stands for
"not in bugzilla" -- that is, a bug that was reported to us but
never got a bugzilla entry. We encourage you to file bugs in
bugzilla (http://bugs.kde.org/enter_valgrind_bug.cgi) rather than
mailing the developers (or mailing lists) directly.
n-i-bz Make return types for some client requests 64-bit clean
n-i-bz glibc 2.9 support
n-i-bz ignore unsafe .valgrindrc's (CVE-2008-4865)
n-i-bz MPI_Init(0,0) is valid but libmpiwrap.c segfaults
n-i-bz Building in an env without gdb gives bogus gdb attach
92456 Tracing the origin of uninitialised memory
106497 Valgrind does not demangle some C++ template symbols
162222 == 106497
151612 Suppression with "..." (frame-level wildcards in .supp files)
156404 Unable to start oocalc under memcheck on openSUSE 10.3 (64-bit)
159285 unhandled syscall:25 (stime, on x86-linux)
159452 unhandled ioctl 0x8B01 on "valgrind iwconfig"
160954 ppc build of valgrind crashes with illegal instruction (isel)
160956 mallinfo implementation, w/ patch
162092 Valgrind fails to start gnome-system-monitor
162819 malloc_free_fill test doesn't pass on glibc2.8 x86
163794 assertion failure with "--track-origins=yes"
163933 sigcontext.err and .trapno must be set together
163955 remove constraint !(--db-attach=yes && --trace-children=yes)
164476 Missing kernel module loading system calls
164669 SVN regression: mmap() drops posix file locks
166581 Callgrind output corruption when program forks
167288 Patch file for missing system calls on Cell BE
168943 unsupported scas instruction pentium
171645 Unrecognised instruction (MOVSD, non-binutils encoding)
172417 x86->IR: 0x82 ...
172563 amd64->IR: 0xD9 0xF5 - fprem1
173099 .lds linker script generation error
173177 [x86_64] syscalls: 125/126/179 (capget/capset/quotactl)
173751 amd64->IR: 0x48 0xF 0x6F 0x45 (even more redundant prefixes)
174532 == 173751
174908 --log-file value not expanded correctly for core file
175044 Add lookup_dcookie for amd64
175150 x86->IR: 0xF2 0xF 0x11 0xC1 (movss non-binutils encoding)
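Bug 151612 above added frame-level "..." wildcards, which match any number of stack frames in a suppression. A sketch of one such suppression (all names hypothetical):

```shell
# Suppress a Memcheck conditional-jump error anywhere on a path from
# main to the hypothetical function foo_inner:
cat > wildcard.supp <<'EOF'
{
   ignore-cond-in-libfoo
   Memcheck:Cond
   fun:foo_inner
   ...
   fun:main
}
EOF
# Then, for example: valgrind --suppressions=wildcard.supp ./myprog
echo "wrote wildcard.supp"
```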
Developer-visible changes:
* Valgrind's debug-info reading machinery has been majorly overhauled.
It can now correctly establish the addresses for ELF data symbols,
which is something that has never worked properly before now.
Also, Valgrind can now read DWARF3 type and location information for
stack and global variables. This makes it possible to use the
framework to build tools that rely on knowing the type and locations
of stack and global variables, for example exp-Ptrcheck.
Reading of such information is disabled by default, because most
tools don't need it, and because it is expensive in space and time.
However, you can force Valgrind to read it, using the
--read-var-info=yes flag. Memcheck, Helgrind and DRD are able to
make use of such information, if present, to provide source-level
descriptions of data addresses in the error messages they create.
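Enabling it is a single flag; the program must also be compiled with -g so the DWARF3 variable information exists (./myprog is hypothetical):

```shell
# Source-level descriptions of data addresses in error messages:
CMD="valgrind --read-var-info=yes ./myprog"
echo "$CMD"
```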
(3.4.0.RC1: 24 Dec 2008, vex r1878, valgrind r8882).
(3.4.0: 3 Jan 2009, vex r1878, valgrind r8899).
3. OLDER NEWS
Release 3.3.1 (4 June 2008)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.3.1 fixes a bunch of bugs in 3.3.0, adds support for glibc-2.8 based
systems (openSUSE 11, Fedora Core 9), improves the existing glibc-2.7
support, and adds support for the SSSE3 (Core 2) instruction set.
3.3.1 will likely be the last release that supports some very old
systems. In particular, the next major release, 3.4.0, will drop
support for the old LinuxThreads threading library, and for gcc
versions prior to 3.0.
The fixed bugs are as follows. Note that "n-i-bz" stands for "not in
bugzilla" -- that is, a bug that was reported to us but never got a
bugzilla entry. We encourage you to file bugs in bugzilla
(http://bugs.kde.org/enter_valgrind_bug.cgi) rather than mailing the
developers (or mailing lists) directly -- bugs that are not entered
into bugzilla tend to get forgotten about or ignored.
n-i-bz (multiple entries)
161285 161378 160136 161487 162386 161036 162663
some people will find them useful, and because exposure to a wider
user group provides tool authors with more end-user feedback. These
tools have an "exp-" prefix attached to their names to indicate their
experimental nature. Currently there are two experimental tools:
* exp-Omega: an instantaneous leak detector. See
exp-omega/docs/omega_introduction.txt.
* exp-DRD: a data race detector based on the happens-before
relation. See exp-drd/docs/README.txt.
- Scalability improvements for very large programs, particularly those
which have a million or more malloc'd blocks in use at once. These
improvements mostly affect Memcheck. Memcheck is also up to 10%
faster for all programs, with x86-linux seeing the largest
improvement.
- Works well on the latest Linux distros. Has been tested on Fedora
Core 8 (x86, amd64, ppc32, ppc64) and openSUSE 10.3. glibc 2.6 and
2.7 are supported. gcc-4.3 (in its current pre-release state) is
supported. At the same time, 3.3.0 retains support for older
distros.
- The documentation has been modestly reorganised with the aim of
making it easier to find information on common-usage scenarios.
Some advanced material has been moved into a new chapter in the main
manual, so as to unclutter the main flow, and other tidying up has
been done.
- There is experimental support for AIX 5.3, both 32-bit and 64-bit
processes. You need to be running a 64-bit kernel to use Valgrind
on a 64-bit executable.
- There have been some changes to command line options, which may
affect you:
* The --log-file-exactly and --log-file-qualifier options have been
removed. To make up for this the --log-file option has been made
more powerful.
It now accepts a %p format specifier, which is replaced with the
process ID, and a %q{FOO} format specifier, which is replaced with
the contents of the environment variable FOO.
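For example (the log-file name is illustrative):

```shell
# %p expands to the process ID, %q{USER} to the contents of the USER
# environment variable:
CMD="valgrind --log-file=vg.%p.%q{USER}.log ./myprog"
echo "$CMD"
```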
* --child-silent-after-fork=yes|no [no]
Causes Valgrind to not show any debugging or logging output for
the child process resulting from a fork() call. This can make the
output less confusing (although more misleading) when dealing with
processes that create children.
* --cachegrind-out-file, --callgrind-out-file and --massif-out-file
These control the names of the output files produced by
Cachegrind, Callgrind and Massif.
The fixed bugs are as follows. Note that "n-i-bz" stands for "not in
bugzilla" -- that is, a bug that was reported to us but never got a
bugzilla entry. We encourage you to file bugs in bugzilla
(http://bugs.kde.org/enter_valgrind_bug.cgi) rather than mailing the
developers (or mailing lists) directly.
129390 129968 134319 133054 118903 132998 134207 134727
135012 125959 126147 136650 135421 136844 138507 136300
139124 137493 137714 138424 138856 138627 138896 136059
139050 139776 139910 132813 133051 132722 133678 133694
n-i-bz (multiple entries)
The following bugs were not fixed, due primarily to lack of developer
time, and also because bug reporters did not answer requests for
feedback in time for the release:
129390 ppc?->IR: some kind of VMX prefetch (dstt)
129968 amd64->IR: 0xF 0xAE 0x0 (fxsave)
133054 make install fails with syntax errors
n-i-bz Signal race condition (users list, 13 June, Johannes Berg)
n-i-bz Unrecognised instruction at address 0x70198EC2 (users list,
19 July, Bennee)
132998 startup fails when running on UML
The following bug was tentatively fixed on the mainline but the fix
was considered too risky to push into 3.2.X:
133154
126583 126668 126696 126722 126938
inconvenience.
Other user-visible changes:
- The --weird-hacks option has been renamed --sim-hints.
- The --time-stamp option no longer gives an absolute date and time.
It now prints the time elapsed since the program began.
- It should build with gcc-2.96.
- Valgrind can now run itself (see README_DEVELOPERS for how).
This is not much use to you, but it means the developers can now
profile Valgrind using Cachegrind. As a result a couple of
performance bad cases have been fixed.
- The XML output format has changed slightly. See
docs/internals/xml-output.txt.
- Core dumping has been reinstated (it was disabled in 3.0.0 and 3.0.1).
If your program crashes while running under Valgrind, a core file with
the name "vgcore.<pid>" will be created (if your settings allow core
file creation). Note that the floating point information is not all
there. If Valgrind itself crashes, the OS will create a normal core
file.
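Whether vgcore.<pid> actually appears depends on the usual core-file settings; a sketch:

```shell
# Allow core files in this shell; a crash under Valgrind then leaves a
# vgcore.<pid> file in the current directory (the hard limit may still
# forbid it, hence the fallback):
ulimit -c unlimited 2>/dev/null || true
ulimit -c
```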
The following are some user-visible changes that occurred in earlier
versions that may not have been announced, or were announced but not
widely noticed. So we're mentioning them now.
- The --tool flag is optional once again; if you omit it, Memcheck
is run by default.
- The --num-callers flag now has a default value of 12. It was
previously 4.
- The --xml=yes flag causes Valgrind's output to be produced in XML
format. This is designed to make it easy for other programs to
consume Valgrind's output. The format is described in the file
docs/internals/xml-format.txt.
- The --gen-suppressions flag supports an "all" value that causes every
suppression to be printed without asking.
- The --log-file option no longer puts "pid" in the filename, eg. the
old name "foo.pid12345" is now "foo.12345".
- There are several graphical front-ends for Valgrind, such as Valkyrie,
Alleyoop and Valgui. See http://www.valgrind.org/downloads/guis.html
for a list.
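The flags described above combine in the obvious way; a sketch with a hypothetical ./myprog:

```shell
# Deeper stack traces than the default of 12, and print a suppression
# for every error without prompting:
CMD="valgrind --num-callers=24 --gen-suppressions=all ./myprog"
echo "$CMD"
```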
BUGS FIXED:
109861 amd64 hangs at startup
110301  ditto
111554  valgrind crashes with Cannot allocate memory
111809  Memcheck tool doesn't start java
111901  cross-platform run of cachegrind fails on opteron
113468  (vgPlain_mprotect_range): Assertion r != -1 failed.
 92071  Reading debugging info uses too much memory
109744  memcheck loses track of mmap from direct ld-linux.so.2
110183  tail of page with _end
 82301  FV memory layout too rigid
 98278  Infinite recursion possible when allocating memory
108994  Valgrind runs out of memory due to 133x overhead
115643  valgrind cannot allocate memory
105974  vg_hashtable.c static hash table
109323  ppc32: dispatch.S uses Altivec insn, which doesn't work on POWER.
109345  ptrace_setregs not yet implemented for ppc
110831  Would like to be able to run against both 32 and 64 bit
        binaries on AMD64
110829  == 110831
111781  compile of valgrind-3.0.0 fails on my linux (gcc 2.X prob)
112670  Cachegrind: cg_main.c:486 (handleOneStatement ...
112941  vex x86: 0xD9 0xF4 (fxtract)
110201  == 112941
113015  vex amd64->IR: 0xE3 0x14 0x48 0x83 (jrcxz)
113126  Crash with binaries built with -gstabs+/-ggdb
104065  == 113126
115741  == 113126
113403  Partial SSE3 support on x86
113541  vex: Grp5(x86) (alt encoding inc/dec) case 1
113642  valgrind crashes when trying to read debug information
113810  vex x86->IR: 66 0F F6 (66 + PSADBW == SSE PSADBW)
113796  read() and write() do not work if buffer is in shared memory
113851  vex x86->IR: (pmaddwd): 0x66 0xF 0xF5 0xC7
114366  vex amd64 cannot handle __asm__( "fninit" )
114412  vex amd64->IR: 0xF 0xAD 0xC2 0xD3 (128-bit shift, shrdq?)
114455  vex amd64->IR: 0xF 0xAC 0xD0 0x1 (also shrdq)
115590  amd64->IR: 0x67 0xE3 0x9 0xEB (address size override)
115953  valgrind svn r5042 does not build with parallel make (-j3)
116057  maximum instruction size - VG_MAX_INSTR_SZB too small?
116483  shmat fails with invalid argument
102202  valgrind crashes when reallocing until out of memory
109487  == 102202
110536  == 102202
112687  == 102202
111724  vex amd64->IR: 0x41 0xF 0xAB (more BT{,S,R,C} fun n games)
111748  vex amd64->IR: 0xDD 0xE2 (fucom)
111785  make fails if CC contains spaces
111829  vex x86->IR: sbb AL, Ib
111851  vex x86->IR: 0x9F 0x89 (lahf/sahf)
112031  iopl on AMD64 and README_MISSING_SYSCALL_OR_IOCTL update
112152  code generation for Xin_MFence on x86 with SSE0 subarch
112167  == 112152
112789  == 112152
112199  naked ar tool is used in vex makefile
112501  vex x86->IR: movq (0xF 0x7F 0xC1 0xF) (mmx MOVQ)
113583  == 112501
112538  memalign crash
113190  Broken links in docs/html/
113230  Valgrind sys_pipe on x86-64 wrongly thinks file descriptors
        should be 64bit
113996  vex amd64->IR: fucomp (0xDD 0xE9)
114196  vex x86->IR: out %eax,(%dx) (0xEF 0xC9 0xC3 0x90)
114289  Memcheck fails to intercept malloc when used in an uclibc environment
114756  mbind syscall support
114757  Valgrind dies with assertion: Assertion noLargerThan > 0 failed
114563  stack tracking module not informed when valgrind switches threads
114564  clone() and stacks
114565  == 114564
115496  glibc crashes trying to use sysinfo page
116200  enable fsetxattr, fgetxattr, and fremovexattr for amd64
sz == 4 assertion failed
vex amd64->IR: unhandled instruction bytes: 0xA3 0x4C 0x70 0xD7
Add a plausible_stack_size command-line parameter ?
unhandled ioctl TIOCMGET (running hw detection tool discover)
unhandled ioctl BLKSSZGET (running fdisk -l /dev/hda)
vex x86->IR: unhandled instruction: ffreep
AMD64 unhandled syscall: 127 (sigpending)
false positive uninit in strchr from ld-linux.so.2
"stabs" parse failure
amd64: unhandled instruction REP NOP
amd64: unhandled instruction LOOP Jb
AMD64 unhandled instruction bytes
AMD64 unhandled syscall: 24 (sched_yield)
fork() won't work with valgrind-3.0 SVN
amd64 unhandled instruction: ADC Ev, Gv
Bogus memcheck report on amd64
Crash; vg_memory.c:905 (vgPlain_init_shadow_range):
Assertion vgPlain_defined_init_shadow_page() failed.
mincore syscall parameter checked incorrectly
build infrastructure: small update
epoll_ctl event parameter checked on EPOLL_CTL_DEL
Vex dies with unhandled instructions: 0xD9 0x31 0xF 0xAE
auxmap & openGL problems
SDL_Init causes valgrind to exit
setcontext and makecontext not handled correctly
addresses beyond initial client stack allocation
not checked in VALGRIND_DO_LEAK_CHECK
106283 105831 105039 104797 103594 103320 103168 102039
101881 101543 75247
89106 89139 89198 89263 89440 89481 89663 89792
90111 90128 90778 90834 91028 91162 91199 91325
91599 91604 91821 91844 92264 92331 92420 92513
92528 93096 93117 93128 93174 93309 93328 93763
93776 93810 94378 94429 94645 94953 95667 96243
96252 96520 96660 96747 96923 96948 96966 97398
97407 97427 97785 97792 97880 97975
98129 Failed when open and close file 230000 times using stdio
98175 Crashes when using valgrind-2.2.0 with a program using al...
98288 Massif broken
98303 UNIMPLEMENTED FUNCTION pthread_condattr_setpshared
98630 failed--compilation missing warnings.pm, fails to make he...
98756 Cannot valgrind signal-heavy kdrive X server
98966 valgrinding the JVM fails with a sanity check assertion
99035 Valgrind crashes while profiling
99142 loops with message "Signal 11 being dropped from thread 0...
99195 threaded apps crash on thread start (using QThread::start...
99348 Assertion vgPlain_lseek(core_fd, 0, 1) == phdrs[i].p_off...
99568 False negative due to mishandling of mprotect
99738 valgrind memcheck crashes on program that uses sigitimer
99923 0-sized allocations are reported as leaks
99949 program seg faults after exit()
100036 "newSuperblocks request for 1048576 bytes failed"
100116 valgrind: (pthread_cond_init): Assertion sizeof(* cond) ...
100486 memcheck reports "valgrind: the impossible happened: V...
100833 second call to "mremap" fails with EINVAL
101156 (vgPlain_find_map_space): Assertion (addr & ((1 << 12)-1...
101173 Assertion recDepth >= 0 && recDepth < 500 failed
101291 creating threads in a forked process fails
101313 valgrind causes different behavior when resizing a window...
101423 segfault for c++ array of floats
101562 valgrind massif dies on SIGINT even with signal handler r...
* Massif: a new space profiling tool. Try it! It's cool, and it'll
tell you in detail where and when your C/C++ code is allocating heap.
Draws pretty .ps pictures of memory use against time. A potentially
powerful tool for making sense of your program's space use.
* File descriptor leakage checks. When enabled, Valgrind will print out
a list of open file descriptors on exit.
* Improved SSE2/SSE3 support.
* Time-stamped output; use --time-stamp=yes
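Both features are plain command-line switches; --track-fds=yes is the file-descriptor check in current releases, though that name is an assumption for this old version, and ./myprog is hypothetical.

```shell
# List open file descriptors on exit, and prefix output with elapsed time:
CMD="valgrind --track-fds=yes --time-stamp=yes ./myprog"
echo "$CMD"
```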
80716 86987 86696 86730 86641 85947 84978 86254 87089 86407
70587 84937 86317 86989 85811 79138 77369 88115 78765
the last stable release, 2.0.0, might also want to try this release.
The following bugs, and probably many more, have been fixed. These
are listed at http://bugs.kde.org. Reporting a bug for valgrind in
the http://bugs.kde.org is much more likely to get you a fix than
mailing developers directly, so please continue to keep sending bugs
there.
76869
69508
java 1.4.2 client fails with erroneous "stack size too small".
This fix makes more of the pthread stack attribute related
functions work properly. Java still doesn't work though.
71906 81970 78514 77952 80942 78048 73655 83060 69872 82026
70344 81297 82872 83025 83340 79714 77022 82098 83573 82999
83040 83998 82722 78958 85416
The following bugs, and probably many more, have been fixed. These
are listed at http://bugs.kde.org. Reporting a bug for valgrind in
the http://bugs.kde.org is much more likely to get you a fix than
mailing developers directly, so please continue to keep sending bugs
there.
69616 69856 73892 73145 73902 68633 75099 76839 76762 76747
76223 75604 76416 75614 75787 75294 73326 72596 69489 72781
73055 73026 71705 72643 72484 72650 72006 71781 71180 69886
71791 69783 69782 70385 69529 70827 71028
attempt to mend my errant ways :-) Changes in this and future releases
will be documented in the NEWS file in the source distribution.
Major changes in 1.9.5:
- (Critical bug fix): Fix a bug in the FPU simulation. This was
causing some floating point conditional tests not to work right.
Several people reported this. If you had floating point code which
didn't work right on 1.9.1 to 1.9.4, it's worth trying 1.9.5.
- Partial support for Red Hat 9. RH9 uses the new Native Posix
Threads Library (NPTL), instead of the older LinuxThreads.
This potentially causes problems with V which will take some
time to correct. In the meantime we have partially worked around
this, and so 1.9.5 works on RH9. Threaded programs still work,
but they may deadlock, because some system calls (accept, read,
write, etc) which should be nonblocking, in fact do block. This
is a known bug which we are looking into.
If you can, your best bet (unfortunately) is to avoid using
1.9.5 on a Red Hat 9 system, or on any NPTL-based distribution.
If your glibc is 2.3.1 or earlier, you're almost certainly OK.
Minor changes in 1.9.5:
- Added some #errors to valgrind.h to ensure people don't include
it accidentally in their sources. This is a change from 1.0.X
which was never properly documented. The right thing to include
is now memcheck.h. Some people reported problems and strange
behaviour when (incorrectly) including valgrind.h in code with
1.9.1 -- 1.9.4. This is no longer possible.
- Add some __extension__ bits and pieces so that gcc configured
for valgrind-checking compiles even with -Werror. If you
don't understand this, ignore it. Of interest to gcc developers
only.
- Removed a pointless check which caused problems interworking
with Clearcase. V would complain about shared objects whose
names did not end ".so", and refuse to run. This is now fixed.
In fact it was fixed in 1.9.4 but not documented.
- Fixed a bug causing an assertion failure of "waiters == 1"
somewhere in vg_scheduler.c, when running large threaded apps,
notably MySQL.
- Add support for the munlock system call (124).
Some comments about future releases:
1.9.5 is, we hope, the most stable Valgrind so far. It pretty much
supersedes the 1.0.X branch. If you are a valgrind packager, please
consider making 1.9.5 available to your users. You can regard the
1.0.X branch as obsolete: 1.9.5 is stable and vastly superior. There
4. README
Release notes for Valgrind
~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are building a binary package of Valgrind for distribution,
please read README_PACKAGERS. It contains some important information.
If you are developing Valgrind, please read README_DEVELOPERS. It contains
some useful information.
For instructions on how to build/install, see the end of this file.
If you have problems, consult the FAQ to see if there are workarounds.
Executive Summary
~~~~~~~~~~~~~~~~~
Valgrind is a framework for building dynamic analysis tools. There are
Valgrind tools that can automatically detect many memory management
and threading bugs, and profile your programs in detail. You can also
use Valgrind to build new tools.
The Valgrind distribution currently includes six production-quality
tools: a memory error detector, two thread error detectors, a cache
and branch-prediction profiler, a call-graph generating cache and
branch-prediction profiler, and a heap profiler. It also includes
three experimental tools: a heap/stack/global array overrun detector,
a different kind of heap profiler, and a SimPoint basic block vector
generator.
Valgrind is closely tied to details of the CPU, operating system and to
a lesser extent, compiler and basic C libraries. This makes it difficult
to make it portable. Nonetheless, it is available for the following
platforms:
- X86/Linux
- AMD64/Linux
- PPC32/Linux
- PPC64/Linux
- ARM/Linux
- x86/MacOSX
- AMD64/MacOSX
- S390X/Linux
- MIPS32/Linux
- MIPS64/Linux
Note that AMD64 is just another name for x86_64, and Valgrind runs fine
on Intel processors. Also note that the core of MacOSX is called
"Darwin" and this name is used sometimes.
Valgrind is licensed under the GNU General Public License, version 2.
Documentation
~~~~~~~~~~~~~
A comprehensive user guide is supplied. Point your browser at
$PREFIX/share/doc/valgrind/manual.html, where $PREFIX is whatever you
specified with --prefix= when building.
5. README_MISSING_SYSCALL_OR_IOCTL
Dealing with missing system call or ioctl wrappers in Valgrind
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You're probably reading this because Valgrind bombed out whilst
running your program, and advised you to read this file. The good
news is that, in general, it's easy to write the missing syscall or
ioctl wrappers you need, so that you can continue your debugging. If
you send the resulting patches to me, then you'll be doing a favour to
all future Valgrind users too.
Note that an "ioctl" is just a special kind of system call, really; so
there's not a lot of need to distinguish them (at least conceptually)
in the discussion that follows.
All this machinery is in coregrind/m_syswrap.
Writing your own syscall wrappers (see below for ioctl wrappers)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
6. README_DEVELOPERS
Building and not installing it
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To run Valgrind without having to install it, run coregrind/valgrind
with the VALGRIND_LIB environment variable set to <dir>/.in_place, where
<dir> is the root of the source tree (and must be an absolute path). Eg:
VALGRIND_LIB=~/grind/head4/.in_place ~/grind/head4/coregrind/valgrind
This allows you to compile and run with "make" instead of "make install",
saving you time.
Or, you can use the vg-in-place script which does that for you.
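Concretely, the two equivalent invocations look like this (shown as command strings, since they require a built source tree; the checkout path and target program are hypothetical placeholders):

```shell
# Running the uninstalled Valgrind from the build tree. VG_TREE is a
# placeholder for your own checkout location.
VG_TREE="$HOME/grind/head4"

# Either point VALGRIND_LIB at the tree's .in_place directory yourself...
CMD1="VALGRIND_LIB=$VG_TREE/.in_place $VG_TREE/coregrind/valgrind ./myprog"

# ...or let the vg-in-place wrapper set it for you:
CMD2="$VG_TREE/vg-in-place ./myprog"

echo "$CMD1"
echo "$CMD2"
```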
I recommend compiling with "make --quiet" to further reduce the amount of
output spewed out during compilation, letting you actually see any errors,
warnings, etc.
Self-hosting
~~~~~~~~~~~~
This section explains:
(A) How to configure Valgrind to run under Valgrind.
Such a setup is called self-hosting, or an outer/inner setup.
(B) How to run Valgrind regression tests in a self-hosting mode,
e.g. to verify Valgrind has no bugs such as memory leaks.
(C) How to run Valgrind performance tests in a self-hosting mode,
to analyse and optimise the performance of Valgrind and its tools.
(A) How to configure Valgrind to run under Valgrind:
(1) Check out 2 trees, "Inner" and "Outer". Inner runs the app
directly. Outer runs Inner.
(2) Configure Inner with --enable-inner and build/install as usual.
(3) Configure Outer normally and build/install as usual.
(4) Choose a very simple program (such as date) and try
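An outer-running-inner invocation looks roughly like this. This is a sketch only: the install prefixes are placeholders, and exact flags vary by version, but --sim-hints=enable-outer (which tells the Outer tool it is running another Valgrind) and --trace-children=yes (so the Inner's exec is followed) are the essential pieces:

```shell
# Sketch of running Inner under Outer (install paths are hypothetical
# placeholders for your two build trees).
OUTER=/path/to/Outer/Inst
INNER=/path/to/Inner/Inst

# Outer runs memcheck on the Inner, which in turn runs 'date' directly.
CMD="$OUTER/bin/valgrind --sim-hints=enable-outer --trace-children=yes \
  --tool=memcheck $INNER/bin/valgrind --tool=none date"
echo "$CMD"
```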
--outer-tool=callgrind \
--vg=../inner_tchain --vg=../inner_trunk perf/many-loss-records
produces the files
callgrind.out.inner_tchain.no.many-loss-records.18465
callgrind.outer.log.inner_tchain.no.many-loss-records.18465
callgrind.out.inner_tchain.me.many-loss-records.21899
callgrind.outer.log.inner_tchain.me.many-loss-records.21899
callgrind.out.inner_trunk.no.many-loss-records.21224
callgrind.outer.log.inner_trunk.no.many-loss-records.21224
callgrind.out.inner_trunk.me.many-loss-records.22916
callgrind.outer.log.inner_trunk.me.many-loss-records.22916
7. README_PACKAGERS
Greetings, packaging person! This information is aimed at people
building binary distributions of Valgrind.
Thanks for taking the time and effort to make a binary distribution of
Valgrind. The following notes may save you some trouble.
8. README.S390
Requirements
------------
- You need GCC 3.4 or later to compile the s390 port.
- To run Valgrind, a z10 machine or any later model is needed.
  Older machine models down to and including z900 may work but have
  not been tested extensively.
Limitations
-----------
- 31-bit client programs are not supported.
- Hexadecimal floating point is not supported.
- memcheck, cachegrind, drd, helgrind, massif, lackey, and none are
  supported.
- On machine models predating z10, cachegrind will assume a z10 cache
  architecture. Otherwise, cachegrind will query the host's cache system
  and use those parameters.
- callgrind and all experimental tools are currently not supported.
- Some gcc versions use mvc to copy 4/8 byte values. This will affect
  certain debug messages. For example, memcheck will complain about
  4 one-byte reads/writes instead of just a single read/write.
Hardware facilities
-------------------
Valgrind does not require that the host machine have the same hardware
facilities as the machine for which the client program was compiled.
This is convenient. The JIT compiler will translate the client instructions
according to the facilities available on the host.
This means, though, that probing for hardware facilities by issuing
instructions from that facility and observing whether SIGILL is thrown
may not work. As a consequence, programs that attempt to do so may
behave differently. It is believed that this is a rare use case.
Recommendations
---------------
Applications should be compiled with -fno-builtin to avoid
false positives due to builtin string operations when running memcheck.
Reading Material
----------------
(1) Linux for zSeries ELF ABI Supplement
http://refspecs.linuxfoundation.org/ELF/zSeries/index.html
(2) z/Architecture Principles of Operation
http://publibfi.boulder.ibm.com/epubs/pdf/dz9zr009.pdf
(3) z/Architecture Reference Summary
http://publibfi.boulder.ibm.com/epubs/pdf/dz9zs007.pdf
9. README.android
How to cross-compile and run on Android. Please read to the end,
since there are important details further down regarding crash
avoidance and GPU support.
These notes were last updated on 3 Sept 2014, for Valgrind SVN
revision 14439/2941.
These instructions are known to work, or have worked at some time in
the past, for:
arm:
  Android 4.0.3
  Android 4.0.3
  Android 4.0.3
  Android 4.1
  Android 2.3.4
x86:
  Android 4.0.3 running on android x86 emulator.
mips32:
  Android 4.1.2
  Android 4.2.2
  Android 4.3
  Android 4.0.4
# After this point, you don't need to modify anything. Just copy and
# paste the commands below.
#
README.android
# for MIPS32
CPPFLAGS="--sysroot=$NDKROOT/platforms/android-18/arch-mips" \
CFLAGS="--sysroot=$NDKROOT/platforms/android-18/arch-mips" \
./configure --prefix=/data/local/Inst \
--host=mipsel-linux-android --target=mipsel-linux-android \
--with-tmpdir=/sdcard
#
# To run (on the device). There are two things you need to consider:
#
# (1) if you are running on the Android emulator, Valgrind may crash
#     at startup. This is because the emulator (for ARM) may not be
#     simulating a hardware TLS register. To get around this, run
#     Valgrind with:
#        --kernel-variant=android-emulator-no-hw-tls
#
# (2) if you are running a real device, you need to tell Valgrind
#     what GPU it has, so Valgrind knows how to handle custom GPU
#     ioctls. You can choose one of the following:
#        --kernel-variant=android-gpu-sgx5xx    # PowerVR SGX 5XX series
#        --kernel-variant=android-gpu-adreno3xx # Qualcomm Adreno 3XX series
#     If you don't choose one, the program will still run, but Memcheck
#     may report false errors after the program performs GPU-specific ioctls.
#
# Anyway: to run on the device:
#
#    /data/local/Inst/bin/valgrind [kernel variant args] [the usual args etc]
#
10. README.android_emulator
How to install and run an android emulator.
mkdir android # or any other place you prefer
cd android
#
# versions I used:
#   jdk-7u4-linux-i586.tar.gz
#   android-ndk-r8-linux-x86.tar.bz2
#   android-sdk_r18-linux.tgz
# install jdk
tar xzf jdk-7u4-linux-i586.tar.gz
# install sdk
tar xzf android-sdk_r18-linux.tgz
# install ndk
tar xjf android-ndk-r8-linux-x86.tar.bz2
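After unpacking, later build steps refer to the tool locations via environment variables (README.android uses NDKROOT, for example). A sketch of the setup, assuming the directory names the archives above unpack to (SDKROOT and the exact directory names are illustrative; adjust them to your layout):

```shell
# Point the environment at the unpacked tools. Variable names other than
# NDKROOT are illustrative; directories match where the tar commands
# above were run.
TOOLS="$PWD"
export NDKROOT="$TOOLS/android-ndk-r8"
export SDKROOT="$TOOLS/android-sdk-linux"
export PATH="$SDKROOT/tools:$SDKROOT/platform-tools:$PATH"
echo "NDK at: $NDKROOT"
```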
# IMPORTANT: when running Valgrind, you may need to give it the flag
#
#   --kernel-variant=android-emulator-no-hw-tls
#
# since otherwise it may crash at startup.
# See README.android for details.
11. README.mips
Supported platforms
-------------------
- MIPS32 and MIPS64 platforms are currently supported.
- Both little-endian and big-endian cores are supported.
- MIPS DSP ASE on MIPS32 platforms is supported.
Limitations
-----------
- Some gdb tests will fail when gdb (GDB) older than 7.5 is used and gdb is
  not compiled with --with-expat=yes.
- You cannot compile tests for DSP ASE if you are using gcc (GCC) older
  than 4.6.1, due to a bug in the toolchain.
- Older GCC may have issues with some inline assembly blocks. Get a toolchain
based on newer GCC versions, if possible.
GNU Licenses
Table of Contents
1. The GNU General Public License
2. The GNU Free Documentation License
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
Gnomovision (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.
This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense. It
complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does. But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License
principally for works whose purpose is instruction or reference.
the text near the most prominent appearance of the work's title,
preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose
title either is precisely XYZ or contains XYZ in parentheses following
text that translates XYZ in another language. (Here XYZ stands for a
specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".) To "Preserve the Title"
of such a section when you modify the Document means that it remains a
section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty
Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has
no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no other
conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept
compensation in exchange for copies. If you distribute a large enough
number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and
you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document's license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify
you as the publisher of these copies. The front cover must present
the full title with all words of the title equally prominent and
visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated
as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto adjacent
pages.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under
the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy
of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct
from that of the Document, and from those of previous versions
(which should, if there were any, be listed in the History section
of the Document). You may use the same title as a previous version
if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities
responsible for authorship of the modifications in the Modified
Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has fewer than five),
unless they release you from this requirement.
C. State on the Title page the name of the publisher of the
Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice
giving the public permission to use the Modified Version under the
terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections
and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add
to it an item stating at least the title, year, new authors, and
publisher of the Modified Version as given on the Title Page. If
there is no section Entitled "History" in the Document, create one
stating the title, year, authors, and publisher of the Document as
given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for
public access to a Transparent copy of the Document, and likewise
the network locations given in the Document for previous versions
it was based on. These may be placed in the "History" section.
You may omit a network location for a work that was published at
least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications",
Preserve the Title of the section, and preserve in the section all
the substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
unaltered in their text and in their titles. Section numbers
or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section
may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements"
or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant. To do this, add their titles to the
list of Invariant Sections in the Modified Version's license notice.
These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties--for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a
standard.
You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list
of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or
through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of,
you may not add another; but you may replace the old one, on explicit
permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License
give permission to use their names for publicity for or to assert or
imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents
released under this License, and replace the individual copies of this
License in the various documents with a single copy that is included in
the collection, provided that you follow the rules of this License for
verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute
it individually under this License, provided you insert a copy of this
License into the extracted document, and follow this License in all
other respects regarding verbatim copying of that document.
8. TRANSLATION
Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section 4.
Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions
of those notices and disclaimers. In case of a disagreement between
the translation and the original version of this License or a notice
or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve
its Title (section 1) will typically require changing the actual
title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except
as expressly provided for under this License. Any other attempt to
copy, modify, sublicense or distribute the Document is void, and will
automatically terminate your rights under this License. However,
parties who have received copies, or rights, from you under this
License will not have their licenses terminated so long as such
parties remain in full compliance.
the License in the document and put the following copyright and
license notices just after the title page:
Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.
If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License,
to permit their use in free software.