<?xml version="1.0" ?>
<!DOCTYPE book PUBLIC "-//KDE//DTD DocBook XML V4.2-Based Variant V1.1//EN" "dtd/kdex.dtd" [
  <!ENTITY kcachegrind '<application>KCachegrind</application>'>
  <!ENTITY cachegrind "<application>Cachegrind</application>">
  <!ENTITY calltree "<application>Calltree</application>">
  <!ENTITY callgrind "<application>Callgrind</application>">
  <!ENTITY valgrind "<application>Valgrind</application>">
  <!ENTITY oprofile "<application>OProfile</application>">
  <!ENTITY EBS "<acronym>EBS</acronym>">
  <!ENTITY TBS "<acronym>TBS</acronym>">
  <!ENTITY kappname "&kcachegrind;">
  <!ENTITY package "kdesdk">
  <!ENTITY % addindex "IGNORE">
  <!ENTITY % English "INCLUDE">
]>

<book lang="&language;">

<bookinfo>
<title>The &kcachegrind; Handbook</title>

<authorgroup>
<author>
<firstname>Josef</firstname>
<surname>Weidendorfer</surname>
<affiliation>
<address><email>Josef.Weidendorfer@gmx.de</email></address>
</affiliation>
<contrib>Original author of the documentation</contrib>
</author>

<author>
<firstname>Federico</firstname>
<surname>Zenith</surname>
<affiliation>
<address><email>federico.zenith@member.fsf.org</email></address>
</affiliation>
<contrib>Updates and corrections</contrib>
</author>

<!-- TRANS:ROLES_OF_TRANSLATORS -->

</authorgroup>

<copyright>
<year>2002-2004</year>
<holder>&Josef.Weidendorfer;</holder>	
</copyright>
<copyright>
<year>2009</year>
<holder>Federico Zenith</holder>
</copyright>
<legalnotice>&FDLNotice;</legalnotice>

<date>2009-10-07</date>
<releaseinfo>0.5.1</releaseinfo>

<abstract>
<para>
&kcachegrind; is a profile data visualization tool, written using the &kde;
environment.
</para>
</abstract>

<keywordset>
<keyword>KDE</keyword>
<keyword>kdesdk</keyword>
<keyword>Cachegrind</keyword>
<keyword>Callgrind</keyword>
<keyword>Valgrind</keyword>
<keyword>Profiling</keyword>
</keywordset>

</bookinfo>


<chapter id="introduction">
<title>Introduction</title>

<para>
&kappname; is a browser for the data produced by profiling tools.
This chapter explains what profiling is for, how it is done, and
gives some examples of available profiling tools.
</para>

<sect1 id="introduction-profiling">
<title>Profiling</title>

<para>
When developing a program, one of the last steps often involves performance
optimization. As it is a waste of time to optimize rarely used functions, one
needs to know in which parts of a program most of the time is spent.
</para>

<para>
For sequential code, it is usually enough to collect statistical data on the
program's runtime characteristics, such as the time spent in functions and code
lines. This is called Profiling. The program is run under the control of a
profiling tool, which gives a summary of the execution run at the end.
In contrast, for parallel code, performance problems are typically caused by one
processor waiting for data from another. As this waiting time usually cannot
easily be attributed, it is better there to generate timestamped event
traces. &kcachegrind; cannot visualize this kind of data.
</para>

<para>
After analyzing the produced profile data, it should be easy to see the hot
spots and bottlenecks of the code: for example, assumptions about call counts
can be checked, and identified code regions can be optimized.
Afterwards, the success of the optimization should be verified with another
profile run.
</para>
</sect1>

<sect1 id="introduction-methods">
<title>Profiling Methods</title>

<para>To exactly measure the time passed or record the events happening during
the execution of a code region (&eg; a function), additional measurement code
needs to be inserted before and after the given region. This code reads the
time, or a global event count, and calculates differences. Thus, the original
code has to be changed before execution. This is called instrumentation.
Instrumentation can be done by the programmer, the compiler, or the
runtime system. As interesting regions are usually nested, the overhead of
measurement always influences the measurement itself. Thus, instrumentation
should be done selectively, and results have to be interpreted with care. Of
course, this makes performance analysis by exact measurement a very complex
process.</para>
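
<para>
As a minimal, purely illustrative sketch of manual instrumentation (the function
name and the busy loop are invented for this example; a real tool would read
hardware counters or more precise clocks), the time is read before and after a
region and the difference is printed:
</para>

<informalexample><programlisting><![CDATA[
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

/* Hypothetical region whose runtime we want to measure. */
static void region_of_interest(void)
{
    for (volatile long i = 0; i < 10000000; ++i)
        ;
}

int main(void)
{
    struct timespec start, end;

    /* Measurement code inserted before and after the region. */
    clock_gettime(CLOCK_MONOTONIC, &start);
    region_of_interest();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double seconds = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("region took %f seconds\n", seconds);
    return 0;
}
]]></programlisting></informalexample>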

<para>Exact measurement is possible because of the hardware counters (including
counters incremented on a time tick) provided in modern processors, which are
incremented whenever an event happens. As we want to attribute events to
code regions, without the counters we would have to handle every event by
incrementing a counter for the current code region ourselves. Doing this in
software is, of course, not possible; but, on the assumption that the event
distribution over the source code is similar when looking only at every n-th
event instead of every event, a measurement method whose overhead is tunable has
been developed: it is called Sampling. Time Based Sampling (&TBS;) uses a timer
to regularly look at the program counter and create a histogram over the program
code. Event Based Sampling (&EBS;) exploits the hardware counters of modern
processors, and uses a mode where an interrupt handler is called on counter
underflow to generate a histogram of the corresponding event distribution:
in the handler, the event counter is always reinitialized to the
<symbol>n</symbol> of the sampling method. The advantage of sampling is that the
code does not have to be changed, but it is still a compromise: the above
assumption will be more correct if <symbol>n</symbol> is small, but the smaller
the <symbol>n</symbol>, the higher the overhead of the interrupt handler.</para>
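
<para>
For example, assuming &oprofile; is installed, &EBS; with a given sampling
period <symbol>n</symbol> can typically be set up with its
<command>opcontrol</command> tool; the event name below is only an illustration,
as the available events depend on the processor, and here every 100000th event
triggers a sample:
</para>

<informalexample><screen>
<userinput><command>opcontrol</command> <option>--no-vmlinux</option> <option>--event=CPU_CLK_UNHALTED:100000</option></userinput>
<userinput><command>opcontrol</command> <option>--start</option></userinput>
</screen></informalexample>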

<para>Another measurement method is to simulate things happening in the computer
system when executing a given code, &ie; execution-driven simulation. The
simulation is always derived from a more or less accurate machine model;
however, with very detailed machine models giving very close approximations to
reality, the simulation time can be unacceptably high in practice.
The advantage of simulation is that arbitrarily complex measurement/simulation
code can be inserted into a given code without perturbing the results. Doing
this directly before execution (so-called runtime instrumentation), using the
original binary, is very convenient for the user: no re-compilation is
necessary. Simulation becomes usable when simulating only parts of a machine
with a simple model; another advantage is that the results produced by simple
models are often easier to understand: the problem with real hardware is often
that the results include overlapping effects from different parts of the
machine.</para>
</sect1>

<sect1 id="introduction-tools">
<title>Profiling Tools</title>

<para>
The best known is the GCC profiling tool <application>gprof</application>: one
needs to compile the program with the option <option>-pg</option>; running the
program generates a file <filename>gmon.out</filename>, which can be transformed
into human-readable form with <command>gprof</command>.
One disadvantage is the required re-compilation step to prepare the executable,
which also has to be statically linked.
The method used here is compiler-generated instrumentation, which measures the
call arcs occurring between functions and the corresponding call counts, in
conjunction with &TBS;, which gives a histogram of the time distribution over
the code. Using both pieces of information, it is possible to heuristically
calculate the inclusive time of functions, &ie; the time spent in a function
together with all the functions called from it.
</para>
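
<para>
For illustration, a typical <application>gprof</application> session might look
like the following (the program name is a placeholder):
</para>

<informalexample><screen>
<userinput><command>gcc</command> <option>-pg</option> <option>-o</option> <replaceable>myprogram</replaceable> <replaceable>myprogram.c</replaceable></userinput>
<userinput><command>./myprogram</command> <replaceable>myargs</replaceable></userinput>
<userinput><command>gprof</command> <replaceable>myprogram</replaceable> <filename>gmon.out</filename></userinput>
</screen></informalexample>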

<para>For exact measurement of the events happening, there are libraries with
functions able to read out hardware performance counters. The best known here
are the PerfCtr patch for &Linux; and the architecture-independent libraries
PAPI and PCL.
Still, exact measurement needs instrumentation of the code, as stated above.
Either one uses the libraries directly, or one uses automatic instrumentation
systems like ADAPTOR (for FORTRAN source instrumentation) or DynaProf (code
injection via DynInst).</para>
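
<para>
As a hedged sketch only, reading such a counter around a code region with PAPI's
classic high-level API (which later PAPI releases have replaced) might look
roughly as follows; the chosen event is just an example and depends on what the
processor provides:
</para>

<informalexample><programlisting><![CDATA[
#include <stdio.h>
#include <papi.h>

int main(void)
{
    int events[1] = { PAPI_TOT_CYC };   /* example event: total cycles */
    long long values[1];

    /* Start the hardware counter, run the region of interest, stop and read. */
    if (PAPI_start_counters(events, 1) != PAPI_OK)
        return 1;

    /* ... code region to be measured ... */

    if (PAPI_stop_counters(values, 1) != PAPI_OK)
        return 1;

    printf("total cycles: %lld\n", values[0]);
    return 0;
}
]]></programlisting></informalexample>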

<para>
&oprofile; is a system-wide profiling tool for &Linux; using Sampling.
</para>

<para>
In many respects, a convenient way of profiling is to use &cachegrind; or
&callgrind;, which are simulators using the runtime instrumentation framework
&valgrind;. Because there is no need to access hardware counters (often
difficult with today's &Linux; installations), and because the binaries to be
profiled can be left unmodified, they are a good alternative to other profiling
tools. The disadvantage of simulation, the slowdown, can be reduced by running
the simulation on only the interesting program parts, and perhaps only on a few
iterations of a loop. Without measurement/simulation instrumentation, running
under &valgrind; only causes a slowdown by a factor of 3 to 5.
Also, when only the call graph and call counts are of interest, the cache
simulator can be switched off.
</para>
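
<para>
For example, one hedged way to restrict the simulation to the interesting
program phase (option names may vary between &valgrind; versions) is to start
with instrumentation switched off and toggle it from another shell while the
program runs:
</para>

<informalexample><screen>
<userinput><command>valgrind</command> <option>--tool=callgrind</option> <option>--instr-atstart=no</option> <replaceable>myprogram</replaceable></userinput>
<userinput><command>callgrind_control</command> <option>--instr=on</option></userinput>
</screen></informalexample>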

<para>
Cache simulation is the first step towards approximating real times, since on
modern systems runtime is very sensitive to the use of so-called
<emphasis>caches</emphasis>: small, fast buffers which accelerate repeated
accesses to the same main memory cells.
&cachegrind; does cache simulation by catching memory accesses.
The data produced includes the number of instruction/data memory accesses and
first- and second-level cache misses, and relates them to the source lines and
functions of the profiled program.
By combining these miss counts, using miss latencies of typical processors,
an estimate of the time spent can be given.
</para>

<para>
&callgrind; is an extension of &cachegrind; that builds up the call graph of a
program on the fly, &ie; how the functions call each other and how many events
happen while a function is running. Also, the profile data to be collected can
be separated by threads and call chain contexts. It can provide profile data at
the instruction level to allow for the annotation of disassembled code.
</para>
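
<para>
For example, the following invocation (a sketch; these option names are taken
from recent &callgrind; versions) separates the collected data per thread,
distinguishes call chains up to two callers deep, and stores costs per
instruction so that disassembled code can be annotated:
</para>

<blockquote><para><userinput>
<command>valgrind</command> <option>--tool=callgrind</option>
<option>--separate-threads=yes</option> <option>--separate-callers=2</option>
<option>--dump-instr=yes</option> <replaceable>myprogram</replaceable>
</userinput></para></blockquote>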
</sect1>

<sect1 id="introduction-visualization">
<title>Visualization</title>

<para>
Profiling tools typically produce a large amount of data. The wish to easily
browse up and down the call graph, together with fast switching of the sorting
mode of functions and of the displayed event type, motivates a &GUI;
application to accomplish this task.
</para>

<para>
&kappname; is a visualization tool for profile data that fulfills these wishes.
Although it was programmed first with browsing the data from &cachegrind; and
&calltree; in mind, converters are available that make it possible to display
profile data produced by other tools. In the appendix, a description of the
&cachegrind;/&callgrind; file format is given.
</para>
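
<para>
To give a flavor of that format, the following hand-written fragment is an
illustrative sketch only (the file and function names are invented; see the
appendix for the authoritative description). It attributes 500 instruction
fetches to line 10 of <function>main</function> and records one call from there
to <function>compute</function> with an inclusive cost of 3000:
</para>

<informalexample><programlisting>
events: Ir
fl=myfile.c
fn=main
10 500
cfn=compute
calls=1 20
10 3000
fn=compute
20 3000
</programlisting></informalexample>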

<para>
Besides a list of functions sorted according to exclusive or inclusive cost
metrics, and optionally grouped by source file, shared library or C++ class,
&kappname; features various views for a selected function, namely:
<itemizedlist>
<listitem><para>a call-graph view, which shows a section of the call graph
around the selected function,</para>
</listitem>
<listitem><para>a tree-map view, which allows nested-call relations to be
visualized, together with inclusive cost metric for fast visual detection of
problematic functions,</para>
</listitem>
<listitem><para>source code and disassembler annotation views, which show the
details of the cost related to source lines and assembler instructions.</para>
</listitem>
</itemizedlist>

</para>
</sect1>
</chapter>

<chapter id="using-kcachegrind">
<title>Using &kcachegrind;</title>

<sect1 id="using-profile">
<title>Generate Data to Visualize</title>

<para>First, one wants to generate performance data by measuring aspects of the
runtime characteristics of an application, using a profiling tool. &kcachegrind;
itself does not include any profiling tool, but works well together with
&callgrind;; by using a converter, it can also be used to visualize data
produced with &oprofile;. Although it is beyond the scope of this manual to
document profiling with these tools, the next sections provide short quick-start
tutorials to get you started.
</para>

<sect2>
<title>&callgrind;</title>

<para>
&callgrind; is a part of <ulink url="http://valgrind.org">&valgrind;</ulink>.
Note that it previously was called &calltree;, but that name was misleading.
</para>

<para>
The most common use is to prefix the command line to start your application with
<userinput><command>valgrind</command> <option>--tool=callgrind</option>
</userinput>, as in:

<blockquote><para><userinput>
<command>valgrind</command> <option>--tool=callgrind</option>
<replaceable>myprogram</replaceable> <replaceable>myargs</replaceable>
</userinput></para></blockquote>

At program termination, a file
<filename>callgrind.out.<replaceable>pid</replaceable></filename> will be
generated, which can be loaded into &kcachegrind;.
</para>
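
<para>
The file can also be opened directly from the command line, for example:
</para>

<blockquote><para><userinput>
<command>kcachegrind</command> <filename>callgrind.out.<replaceable>pid</replaceable></filename>
</userinput></para></blockquote>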

<para>
More advanced use is to dump out profile data whenever a given function of your
application is called. For example, for &konqueror;, to see profile data only
for the rendering of a Web page, you could decide to dump the data whenever you
select the menu item <menuchoice><guimenu>View</guimenu><guimenuitem>Reload
</guimenuitem></menuchoice>. This corresponds to a call to
<methodname>KonqMainWindow::slotReload</methodname>. Use:

<blockquote><para><userinput>
<command>valgrind</command> <option>--tool=callgrind</option>
<option>--dump-before=KonqMainWindow::slotReload</option>
<replaceable>konqueror</replaceable>
</userinput></para></blockquote>

This will produce multiple profile data files with an additional sequential
number at the end of the filename. A file without such a number at the end
(ending only in the process PID) will also be produced; by loading this file
into &kcachegrind;, all the others are loaded too, and can be seen in the
<guilabel>Parts Overview</guilabel> and <guilabel>Parts</guilabel> list.
</para>
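
<para>
A dump can also be requested at an arbitrary moment from another shell while the
program is still running, using the control program that comes with &callgrind;
(a hedged example; see its documentation for the exact options):
</para>

<blockquote><para><userinput>
<command>callgrind_control</command> <option>--dump</option>
</userinput></para></blockquote>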

</sect2>

<sect2>
<title>&oprofile;</title>

<para>
&oprofile; is available from <ulink url="http://oprofile.sf.net">its home
page</ulink>. Follow the installation instructions on the Web site; but, before
you do, check whether your distribution already provides it as a package (as
&SuSE; does).
</para>

<para>
System-wide profiling is only permitted to the root user, as all actions on the
system can be observed; therefore, the following has to be done as root.
First, configure the profiling process, using the &GUI;
<command>oprof_start</command> or the command-line tool
<command>opcontrol</command>. The standard configuration should be timer mode
(&TBS;, see the introduction).
To start the measurement, run <userinput><command>opcontrol</command>
<option>-s</option></userinput>.
Then run the application you are interested in and, afterwards, run
<userinput><command>opcontrol</command> <option>-d</option></userinput>. This
will write out the measurement results into files under the folder <filename
class="directory">/var/lib/oprofile/samples/</filename>.
To be able to visualize the data in &kcachegrind;, run the following in an empty
directory:

<blockquote><para><userinput>
<command>opreport</command> <option>-gdf</option> |
<command>op2callgrind</command>
</userinput></para></blockquote>

This will produce a lot of files, one for every program which was running
on the system. Each one can be loaded into &kcachegrind; on its own.
</para>

</sect2>
</sect1>

<sect1 id="using-basics">
<title>User Interface Basics</title>

<para>
When starting &kcachegrind; with a profile data file as an argument, or after
loading one with <menuchoice><guimenu>File</guimenu>
<guimenuitem>Open</guimenuitem></menuchoice>, you will see a navigation panel
containing the function list on the left and, on the right, the main part: an
area with views for a selected function. This view area can be arbitrarily
configured to show multiple views at once.
</para>

<para>
At first start, this area will be
divided into a top and a bottom part, each with different tab-selectable views.
To move views, use the tabs' context menu, and adjust the splitters between
views. To switch quickly between different viewing layouts, use
<menuchoice><shortcut><keycombo action="simul">&Ctrl;<keycap>→</keycap>
</keycombo></shortcut> <guimenu>View</guimenu><guisubmenu>Layout</guisubmenu>
<guimenuitem>Go to Next</guimenuitem></menuchoice> and
<menuchoice><shortcut><keycombo action="simul">&Ctrl;<keycap>←</keycap>
</keycombo></shortcut> <guimenu>View</guimenu><guisubmenu>Layout</guisubmenu>
<guimenuitem>Go to Previous</guimenuitem></menuchoice>.
</para>

<para>
The active event type is important for the visualization: for &callgrind;, this
is, for example, cache misses or cycle estimation; for &oprofile;, this is
<quote>Timer</quote> in the simplest case. You can change the event type via a
combo box in the toolbar or in the <guilabel>Event Type</guilabel> view.
A first overview of the runtime characteristics is given when you select the
function <function>main</function> in the left list and then look at the call
graph view. There, you see the calls occurring in your program. Note that the
call graph view only shows functions with a high event count.
By double-clicking a function in the graph, the view changes to show the called
functions around the newly selected one.
</para>

<para>
To explore the &GUI; further, in addition to this manual, have a look at
the documentation section <ulink url="http://kcachegrind.sf.net">on the Web
site</ulink>.
Additionally, every widget in &kcachegrind; has <quote>What's This?</quote>
help.
</para>
</sect1>

</chapter>


<chapter id="kcachegrind-concepts">
<title>Basic Concepts</title>

<para>This chapter explains some concepts of &kcachegrind;, and introduces
terms used in the interface.
</para>

<sect1 id="concepts-model">
<title>The Data Model for Profile Data</title>

<sect2>
<title>Cost Entities</title>

<para>
Cost counts of event types (like L2 Misses) are attributed to cost entities,
which are items related to the source code or data structures of a given
program. Cost entities can be not only simple code or data positions, but also
position tuples. For example, a call has a source and a target, and a data
address can have a data type and a code position where its allocation happened.
</para>

<para>
The cost entities known to &kcachegrind; are given in the following.
Simple Positions:
<variablelist>
<varlistentry>
<term>Instruction</term>
<listitem><para>
An assembler instruction at a specified address.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Source Line of a Function</term>
<listitem><para>
All instructions that the compiler (via debug information) maps to a given
source line specified by source file name and line number, and which are
executed in the context of some function. The latter is needed because a source
line inside of an inlined function can appear in the context of multiple
functions. Instructions without any mapping to an actual source line are mapped
to line number 0 in file <filename>???</filename>.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Function</term>
<listitem><para>
All the source lines of a given function make up the function itself. A function
is specified by its name and its location in some binary object, if available.
The latter is needed because the binary objects of a single program can each
hold functions with the same name (these can be accessed &eg; with
<function>dlopen</function> or <function>dlsym</function>; the runtime linker
resolves functions in a given search order of the binary objects used). If a
profiling tool cannot detect the symbol name of a function, &eg; because debug
information is not available, typically either the address of the first executed
instruction is used, or <function>???</function>.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Binary Object</term>
<listitem><para>
All functions whose code is inside the range of a given binary object, either
the main executable or a shared library.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Source File</term>
<listitem><para>
All functions whose first instruction is mapped to a line of the given source
file.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Class</term>
<listitem><para>
Symbol names of functions are typically ordered hierarchically in name spaces,
&eg; C++ namespaces, or classes of object-oriented languages; thus, a class can
contain functions of the class as well as embedded classes.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Profile Part</term>
<listitem><para>
Some time section of a profile run, with a given thread ID, process ID, and
command line executed.
</para></listitem>
</varlistentry>
</variablelist>
As can be seen from the list, a set of cost entities often defines another cost
entity; thus, there is an inclusion hierarchy of cost entities.
</para>

<para>
Position tuples:
<itemizedlist>
<listitem><para>
Call from instruction address to target function.
</para></listitem>
<listitem><para>
Call from source line to target function.
</para></listitem>
<listitem><para>
Call from source function to target function.
</para></listitem>
<listitem><para>
(Un)conditional jump from source to target instruction.
</para></listitem>
<listitem><para>
(Un)conditional jump from source to target line.
</para></listitem>
</itemizedlist>
Jumps between functions are not allowed, as this makes no sense in a call graph;
thus, constructs like exception handling and long jumps in C have to be
translated to popping the call stack as needed.
</para>

</sect2>


<sect2>
<title>Event Types</title>

<para>
Arbitrary event types can be specified in the profile data by giving them a
name. Their cost related to a cost entity is a 64-bit integer.
</para>
<para>
Event types whose costs are specified in a profile data file are called real
events. Additionally, one can specify formulas for event types calculated from
real events, which are called inherited events.
</para>
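<informalexample><para>
For example, &kcachegrind;'s <quote>Cycle Estimation</quote> is such an
inherited event: as a rough sketch, it is computed from real events using
assumed miss latencies, along the lines of <literal>CEst = Ir + 10 L1m + 100
L2m</literal> (the exact factors given here are an assumption and may differ).
</para></informalexample>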
</sect2>

</sect1>

<sect1 id="concepts-state">
<title>Visualization State</title>

<para>
The visualization state of a &kcachegrind; window includes:
<itemizedlist>
<listitem><para>
the primary and secondary event type chosen for display,
</para></listitem>
<listitem><para>
the function grouping (used in the <guilabel>Function Profile</guilabel> list
and entity coloring),
</para></listitem>
<listitem><para>
the profile parts whose costs are to be included in visualization,
</para></listitem>
<listitem><para>
an active cost entity (&eg; a function selected from the function profile
sidedock),
</para></listitem>
<listitem><para>
a selected cost entity.
</para></listitem>
</itemizedlist>
This state influences the views.
</para>

<para>
Views are always shown for one cost entity, the active one. When a given view
is inappropriate for a cost entity, it is disabled: when selecting &eg; an &ELF;
object in the group list, source annotation makes no sense.
</para>

<para>
For example, for an active function, the callee list shows all the functions
called from the active one: one can select one of these functions without making
it active. Also, if the call graph is shown beside, it will automatically select
the same function.
</para>

</sect1>

<sect1 id="concepts-guiparts">
<title>Parts of the &GUI;</title>

<sect2>
<title>Sidedocks</title>
<para>
Sidedocks are side windows which can be placed at any border of a &kcachegrind;
window. They always contain a list of cost entities sorted in some way.
<itemizedlist>
<listitem><para>
The <guilabel>Function Profile</guilabel> is a list of functions showing
inclusive and exclusive cost, call count, name and position of functions.
</para></listitem>
<listitem><para>
<guilabel>Parts Overview</guilabel>
</para></listitem>
<listitem><para>
<guilabel>Call Stack</guilabel>
</para></listitem>
</itemizedlist>
</para>
</sect2>

<sect2>
<title>View Area</title>
<para>
The view area, typically the right part of a &kcachegrind; main window, is made
up of one (default) or more tabs, lined up either horizontally or vertically.
Each tab holds different views of only one cost entity at a time.
The name of this entity is given at the top of the tab. If there are multiple
tabs, only one is active. The entity name in the active tab is shown in bold,
and determines the active cost entity of the &kcachegrind; window.
</para>
</sect2>

<sect2>
<title>Areas of a Tab</title>
<para>
Each tab can hold up to four view areas, namely Top, Right, Left, and Bottom.
Each area can hold multiple stacked views. The visible part of an area is
selected by a tab bar. The tab bars of the top and right area are at the top;
the tab bars of the left and bottom area are at the bottom. You can specify
which kind of view should go into which area by using the tabs' context menus.
</para>
</sect2>

<sect2>
<title>Synchronized View with Selected Entity in a Tab</title>
<para>
Besides an active entity, each tab has a selected entity. As most view types
show multiple entities with the active one somehow centered, you can change
the selected item by navigating inside a view (by clicking with the mouse
or using the keyboard). Typically, selected items are shown in a highlighted
state. By changing the selected entity in one of the views of a tab, all other
views highlight the new selected entity accordingly.
</para>
</sect2>

<sect2>
<title>Synchronization between Tabs</title>
<para>
If there are multiple tabs, a selection change in one tab leads to an activation
change in the next tab, be it to the right of the former or below it. This kind
of linkage should, for example, allow for fast browsing in call graphs.
</para>
</sect2>

<sect2>
<title>Layouts</title>
<para>
The layout of all the tabs of a window can be saved (<menuchoice><guimenu>View
</guimenu><guisubmenu>Layout</guisubmenu></menuchoice>). After duplicating the
current layout (<menuchoice><shortcut><keycombo action="simul">&Ctrl;
<keycap>+</keycap></keycombo></shortcut> <guimenu>View</guimenu>
<guisubmenu>Layout</guisubmenu><guimenuitem>Duplicate</guimenuitem>
</menuchoice>)
and changing some sizes or moving a view to another area of a tab, you can
quickly switch between the old and the new layout via <keycombo action="simul">
&Ctrl;<keycap>←</keycap></keycombo> and <keycombo action="simul">&Ctrl;
<keycap>→</keycap></keycombo>. The set of layouts will be stored between
&kcachegrind; sessions of the same profiled command. You can make the current
set of layouts the default one for new &kcachegrind; sessions, or restore the
default layout set.
</para>
</sect2>
</sect1>

<sect1 id="concepts-sidedocks">
<title>Sidedocks</title>

<sect2>
<title>Flat Profile</title>
<para>
The <guilabel>Flat Profile</guilabel> contains a group list and a function list.
The group list contains all the groups in which cost is spent, depending on the
chosen group type. The group list is hidden when grouping is switched off.
</para>
<para>
The function list contains the functions of the selected group (or all functions
if grouping is switched off), ordered by some column, &eg; inclusive or self
costs spent therein. There is a maximum number of functions shown in the list,
configurable in <menuchoice><guimenu>Settings</guimenu><guimenuitem>Configure
KCachegrind</guimenuitem></menuchoice>.
</para>
</sect2>

<sect2>
<title>Parts Overview</title>
<para>
In a profile run, multiple profile data files can be produced, which can be
loaded together into &kcachegrind;. The <guilabel>Parts Overview</guilabel>
sidedock shows these, ordered horizontally according to creation time; the
rectangle sizes are proportional to the cost spent in each part. You can select
one or several parts to constrain the costs shown in the other &kcachegrind;
views to these parts only.
</para>
<para>
The parts are further subdivided, according to either a partitioning mode or an
inclusive cost split mode:
<variablelist>
<varlistentry>
<term><guilabel>Partitioning Mode</guilabel></term>
<listitem><para>
The partitioning is shown in groups for a profile data part, according to the
group type selected. For example, if &ELF; object groups are selected, you see
colored rectangles for each used &ELF; object (shared library or executable),
sized according to the cost spent therein.
</para></listitem>
</varlistentry>
<varlistentry>
<term><guilabel>Diagram Mode</guilabel></term>
<listitem><para>
A rectangle showing the inclusive cost of the current active function in the
part is shown. This, again, is split up to show the inclusive costs of its
callees.
</para></listitem>
</varlistentry>
</variablelist>
</para>
</sect2>

<sect2>
<title>Call Stack</title>
<para>
This is a purely fictional <quote>most probable</quote> call stack. It is built
up by starting with the current active function, and adding the callers and
callees with the highest cost above and below it, respectively.
</para>
<para>
The <guilabel>Cost</guilabel> and <guilabel>Calls</guilabel> columns show the
cost used for all calls from the function in the line above.
</para>
</sect2>
</sect1>

<sect1 id="concepts-views">
<title>Views</title>

<sect2>
<title>Event Type</title>
<para>
The <guilabel>Event Type</guilabel> list shows all cost types available and the
corresponding self and inclusive cost of the current active function for that
event type.
</para>
<para>
By choosing an event type from the list, you change the type of costs shown all
over &kcachegrind; to the selected one.
</para>
</sect2>

<sect2>
<title>Call Lists</title>
<para>
These lists show the calls to and from the current active function.
<guilabel>All Callers</guilabel> and <guilabel>All Callees</guilabel> mean the
functions reachable in the caller or callee direction, even when other
functions are in between.
</para>

<para>
Call list views include:
<itemizedlist>
<listitem><para>Direct <guilabel>Callers</guilabel></para></listitem>
<listitem><para>Direct <guilabel>Callees</guilabel></para></listitem>
<listitem><para><guilabel>All Callers</guilabel></para></listitem>
<listitem><para><guilabel>All Callees</guilabel></para></listitem>
</itemizedlist>
</para>
</sect2>

<sect2>
<title>Maps</title>
<para>
A treemap view of the primary event type, up or down the call
hierarchy. Each colored rectangle represents a function; its size is
approximately proportional to the cost spent therein while the active function
is running (however, there are drawing constraints).
</para>
<para>
For the <guilabel>Caller Map</guilabel>, the graph shows the nested hierarchy of
all callers of the currently activated function; for the <guilabel>Callee
Map</guilabel>, it shows that of all callees.
</para>
<para>
Appearance options can be found in the context menu. To get exact size
proportions, choose <guimenuitem>Skip Incorrect Borders</guimenuitem>. As this
mode can be very time-consuming, you may want to limit the maximum drawn
nesting level beforehand. <guilabel>Best</guilabel> determines the split
direction for children from the aspect ratio of the parent. <guilabel>Always
Best</guilabel> decides on the remaining space for each sibling.
<guilabel>Ignore Proportions</guilabel> reserves space for drawing the function
name before drawing children; note that size proportions can then become heavily
distorted.
</para>
<para>
Keyboard navigation is available with the left and right arrow keys for
traversing siblings, and up and down arrow keys to go a nesting level up and
down. &Enter; activates the current item.
</para>
</sect2>

<sect2>
<title>Call Graph</title>
<para>
This view shows the call graph around the active function. The cost shown is
only the cost spent while the active function was actually running, &ie; the
cost shown for <function>main()</function> (if it's visible) should be the same
as the cost of the active function, as that is the part of inclusive cost of
<function>main()</function> spent while the active function was running.
</para>
<para>
For cycles, blue call arrows indicate that this is an artificial call, which
never actually happened, added for correct drawing.
</para>
<para>
If the graph is larger than the drawing area, a bird's-eye view is shown at one
side. The view options are similar to those of the call maps; the selected
function is highlighted.
</para>
</sect2>

<sect2>
<title>Annotations</title>
<para>
The annotated source or assembler lists show the source lines or disassembled
instructions of the current active function together with the (self) cost spent
executing the code of a source line or instruction. If there was a call, lines
with details on the call are inserted into the source: the (inclusive) cost
spent inside of the call, the number of calls happening, and the call
destination.
</para>
<para>
Select such a call information line to activate the call destination.
</para>
</sect2>
</sect1>

</chapter>


<chapter id="commands">
<title>Command Reference</title>

<sect1 id="kcachegrind-mainwindow">
<title>The main &kcachegrind; window</title>

<sect2>
<title>The <guimenu>File</guimenu> Menu</title>
<para>
<variablelist>

<varlistentry>
<term><menuchoice>
<shortcut>
<keycombo>&Ctrl;<keycap>N</keycap></keycombo>
</shortcut>
<guimenu>File</guimenu><guimenuitem>New</guimenuitem>
</menuchoice></term>
<listitem><para>
<action>Opens an empty top-level window</action> in which you can
load profile data. This action is not really necessary, as <menuchoice>
<guimenu>File</guimenu><guimenuitem>Open</guimenuitem></menuchoice> gives you a
new top-level window if the current one already shows some data.
</para></listitem>
</varlistentry>

<varlistentry>
<term><menuchoice>
<shortcut>
<keycombo>&Ctrl;<keycap>O</keycap></keycombo>
</shortcut>
<guimenu>File</guimenu><guimenuitem>Open</guimenuitem>
</menuchoice></term>
<listitem>
<para>
<action>Pops up the &kde; file selector</action> to choose a
profile data file to be loaded. If there is some data already shown in the
current top-level window, this will open a new window; if you want to open
additional profile data in the current window, use <menuchoice>
<guimenu>File</guimenu><guimenuitem>Add</guimenuitem></menuchoice>.
</para>
<para>
The name of profile data files usually ends in <literal role="extension"
>.<replaceable>pid</replaceable>.<replaceable>part</replaceable>-<replaceable
>threadID</replaceable></literal>, where <replaceable>part</replaceable> and
<replaceable>threadID</replaceable> are optional. <replaceable>pid</replaceable>
and <replaceable>part</replaceable> are used for multiple profile data files
belonging to one application run.
By loading a file ending only in <literal role="extension"><replaceable
>pid</replaceable></literal>, any existing data files for this run with
additional endings are loaded as well.
</para>
<informalexample><para>
If there exist profile data files <filename>cachegrind.out.123</filename> and
<filename>cachegrind.out.123.1</filename>, by loading the first, the second will
be automatically loaded too.
</para></informalexample></listitem>
</varlistentry>

<varlistentry>
<term><menuchoice>
<guimenu>File</guimenu><guimenuitem>Add</guimenuitem>
</menuchoice></term>
<listitem><para>
<action>Adds a profile data file</action> to the current window.
Using this, you can force multiple data files to be loaded into the same
top-level window even if they are not from the same run, as given by the profile
data file naming convention. For example, this can be used for side-by-side
comparison.
</para></listitem>
</varlistentry>

<varlistentry>
<term><menuchoice>
<shortcut>
<keycombo><keycap>F5</keycap></keycombo>
</shortcut>
<guimenu>File</guimenu><guimenuitem>Reload</guimenuitem>
</menuchoice></term>
<listitem><para>
<action>Reloads the profile data</action>. This is useful when another profile
data file has been generated for an already loaded application run.
</para></listitem>
</varlistentry>

<varlistentry>
<term><menuchoice>
<shortcut>
<keycombo>&Ctrl;<keycap>Q</keycap></keycombo>
</shortcut>
<guimenu>File</guimenu><guimenuitem>Quit</guimenuitem>
</menuchoice></term>
<listitem><para><action>Quits</action> &kappname;.</para></listitem>
</varlistentry>
</variablelist>
</para>

</sect2>

</sect1>
</chapter>

<chapter id="faq">
<title>Questions and Answers</title>

&reporting.bugs;
&updating.documentation;

<qandaset id="faqlist">


<qandaentry>
<question>
<para>
What is &kcachegrind; for? I have no idea.
</para>
</question>
<answer>
<para>
&kcachegrind; is helpful at a late stage in software development, called
profiling. If you don't develop applications, you don't need &kcachegrind;.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
What is the difference between <guilabel>Incl.</guilabel> and
<guilabel>Self</guilabel>?
</para>
</question>
<answer>
<para>These are cost attributes for functions regarding some event type. As
functions can call each other, it makes sense to distinguish the cost of the
function itself (<quote>Self Cost</quote>) and the cost including all called
functions (<quote>Inclusive Cost</quote>). <quote>Self</quote> cost is sometimes
also referred to as <quote>Exclusive</quote> cost.
</para>
<para>
So, for example, for <function>main()</function>, you will always have an
inclusive cost of almost 100%, whereas the self cost is negligible when the real
work is done in another function.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>The toolbar and menubar of my &kcachegrind; look spartan. Is this
normal?</para>
</question>
<answer>
<para>
&kcachegrind; has probably been installed incorrectly on your system. It is
recommended to compile it with the installation prefix set to your system-wide
&kde; base folder, like <userinput><command>configure
<option>--prefix=<replaceable>/opt/kde4</replaceable></option></command>;
<command>make install</command></userinput>.
If you choose another folder, like <filename
class="directory">$<envar>HOME</envar>/kde</filename>, you should set the
environment variable <envar>KDEDIR</envar> to this folder before running
&kcachegrind;.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
If I double-click on a function down in the <guilabel>Call Graph</guilabel>
view, it shows for function <function>main()</function> the same cost as the
selected function. Isn't this supposed to be constant at 100%?
</para>
</question>
<answer>
<para>
You have activated a function below <function>main()</function>, which obviously
costs less than <function>main()</function> itself. For every function, only the
part of its cost spent while the <emphasis>activated</emphasis> function is
running is shown; that is, the cost shown for any function can never be higher
than the cost of the activated function.
</para>
</answer>
</qandaentry>


</qandaset>
</chapter>


<glossary>

<glossentry id="costentity">
<glossterm>Cost Entity</glossterm>
<glossdef><para>An abstract item related to source code to which event counts
can be attributed. Dimensions for cost entities are code location (&eg; source
line, function), data location (&eg; accessed data type, data object), execution
location (&eg; thread, process), and tuples or triples of the aforementioned
positions (&eg; calls, object access from statement, evicted data from
cache).</para></glossdef>
</glossentry>

<glossentry id="eventcosts">
<glossterm>Event Costs</glossterm>
<glossdef><para>Sum of events of some event type occurring while the execution
is related to some cost entity. The cost is attributed to the
entity.</para></glossdef>
</glossentry>

<glossentry id="eventtype">
<glossterm>Event Type</glossterm>
<glossdef><para>The kind of event of which costs can be attributed to a cost
entity. There are real event types and inherited event types.</para></glossdef>
</glossentry>

<glossentry id="inheritedeventtype">
<glossterm>Inherited Event Type</glossterm>
<glossdef><para>A virtual event type only visible in the view, defined by a
formula to be calculated from real event types.</para></glossdef>
</glossentry>

<glossentry id="profiledatafile">
<glossterm>Profile Data File</glossterm>
<glossdef><para>A file containing data measured in a profile experiment, or part
of one, or produced by post-processing a trace. Its size is typically linear
with the code size of the program.</para></glossdef>
</glossentry>

<glossentry id="profiledatapart">
<glossterm>Profile Data Part</glossterm>
<glossdef><para>Data from a profile data file.</para></glossdef>
</glossentry>

<glossentry id="profileexperiment">
<glossterm>Profile Experiment</glossterm>
<glossdef><para>A program run supervised by a profiling tool, producing possibly
multiple profile data files from parts or threads of the run.</para></glossdef>
</glossentry>

<glossentry id="profileproject">
<glossterm>Profile Project</glossterm>
<glossdef><para>A configuration for profile experiments used for one program to
be profiled, perhaps in multiple versions. Comparisons of profile data typically
only make sense between profile data produced in experiments of one profile
project.</para></glossdef>
</glossentry>

<glossentry id="profiling">
<glossterm>Profiling</glossterm>
<glossdef><para>The process of collecting statistical information about runtime
characteristics of program runs.</para></glossdef>
</glossentry>

<glossentry id="realeventtype">
<glossterm>Real Event Type</glossterm>
<glossdef><para>An event type that can be measured by a tool. This requires the
existence of a sensor for the given event type.</para></glossdef>
</glossentry>

<glossentry id="trace">
<glossterm>Trace</glossterm>
<glossdef><para>A sequence of timestamped events that occurred while tracing a
program run. Its size is typically linear with the execution time of the program
run.</para></glossdef>
</glossentry>

<glossentry id="tracepart">
<glossterm>Trace Part</glossterm>
<glosssee otherterm="profiledatapart"/>
</glossentry>

<glossentry id="tracing">
<glossterm>Tracing</glossterm>
<glossdef><para>The process of supervising a program run and storing its events,
sorted by a timestamp, in an output file, the trace.</para></glossdef>
</glossentry>

</glossary>

<chapter id="credits">

<title>Credits and License</title>

<para>
Thanks to Julian Seward for his excellent &valgrind;, and to Nicholas Nethercote
for the &cachegrind; addition. Without these programs, &kcachegrind; would not
exist. Some ideas for this &GUI; also came from them.
</para>
<para>
Thanks for all the bug reports and suggestions from different users.
</para>

<!-- TRANS:CREDIT_FOR_TRANSLATORS -->
&underFDL;        	 <!-- FDL License -->

</chapter>

<appendix id="installation">
<title>Installation</title>

<sect1 id="getting-kcachegrind">
<title>How to obtain &kcachegrind;</title>

<para>
&kcachegrind; is part of the &package; package of &kde;. For less supported
interim releases, &callgrind; and further documentation, see
<ulink url="http://kcachegrind.sf.net">the Web page</ulink>. Look there for
further installation and compile instructions.
</para>
</sect1>

<sect1 id="requirements">
<title>Requirements</title>

<para>
In order to use &kcachegrind;, you need &kde; 4.x. To generate profile data,
&cachegrind; or &calltree;/&callgrind; is recommended.
</para>
</sect1>

<sect1 id="compilation">
<title>Compilation and Installation</title>

&install.compile.documentation;

</sect1>

<sect1 id="configuration">
<title>Configuration</title>

<para>All configuration options are either in the configuration dialog
or in the context menus of the views.</para>

</sect1>

</appendix>

&documentation.index;
</book>