  <div class="section" id="basic-patterns-and-examples">
<h1>Basic patterns and examples<a class="headerlink" href="#basic-patterns-and-examples" title="Permalink to this headline">¶</a></h1>
<div class="section" id="pass-different-values-to-a-test-function-depending-on-command-line-options">
<h2>Pass different values to a test function, depending on command line options<a class="headerlink" href="#pass-different-values-to-a-test-function-depending-on-command-line-options" title="Permalink to this headline">¶</a></h2>
<p>Suppose we want to write a test that depends on a command line option.
Here is a basic pattern for achieving this:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_sample.py</span>
<span class="k">def</span> <span class="nf">test_answer</span><span class="p">(</span><span class="n">cmdopt</span><span class="p">):</span>
    <span class="k">if</span> <span class="n">cmdopt</span> <span class="o">==</span> <span class="s">&quot;type1&quot;</span><span class="p">:</span>
        <span class="k">print</span> <span class="p">(</span><span class="s">&quot;first&quot;</span><span class="p">)</span>
    <span class="k">elif</span> <span class="n">cmdopt</span> <span class="o">==</span> <span class="s">&quot;type2&quot;</span><span class="p">:</span>
        <span class="k">print</span> <span class="p">(</span><span class="s">&quot;second&quot;</span><span class="p">)</span>
    <span class="k">assert</span> <span class="mi">0</span> <span class="c"># to see what was printed</span>
</pre></div>
</div>
<p>For this to work we need to add a command line option and
provide the <tt class="docutils literal"><span class="pre">cmdopt</span></tt> through a <a class="reference internal" href="../fixture.html#fixture-function"><em>fixture function</em></a>:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>
<span class="kn">import</span> <span class="nn">pytest</span>

<span class="k">def</span> <span class="nf">pytest_addoption</span><span class="p">(</span><span class="n">parser</span><span class="p">):</span>
    <span class="n">parser</span><span class="o">.</span><span class="n">addoption</span><span class="p">(</span><span class="s">&quot;--cmdopt&quot;</span><span class="p">,</span> <span class="n">action</span><span class="o">=</span><span class="s">&quot;store&quot;</span><span class="p">,</span> <span class="n">default</span><span class="o">=</span><span class="s">&quot;type1&quot;</span><span class="p">,</span>
        <span class="n">help</span><span class="o">=</span><span class="s">&quot;my option: type1 or type2&quot;</span><span class="p">)</span>

<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">cmdopt</span><span class="p">(</span><span class="n">request</span><span class="p">):</span>
    <span class="k">return</span> <span class="n">request</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">getoption</span><span class="p">(</span><span class="s">&quot;--cmdopt&quot;</span><span class="p">)</span>
</pre></div>
</div>
<p>Let&#8217;s run this without supplying our new option:</p>
<div class="highlight-python"><pre>$ py.test -q test_sample.py
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
&gt;       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
first</pre>
</div>
<p>And now run it while supplying the command line option:</p>
<div class="highlight-python"><pre>$ py.test -q --cmdopt=type2
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
&gt;       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
----------------------------- Captured stdout ------------------------------
second</pre>
</div>
<p>You can see that the command line option arrived in our test.  This
completes the basic pattern.  However, you will often want to process
command line options outside of the test itself and pass in different or
more complex objects instead.</p>
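<p>For example, the fixture can translate the raw option string into a
richer object before the test ever sees it.  This is only a minimal sketch
reusing the <tt class="docutils literal"><span class="pre">--cmdopt</span></tt> option added above; the <tt class="docutils literal"><span class="pre">Type1Setup</span></tt>/<tt class="docutils literal"><span class="pre">Type2Setup</span></tt>
classes and the <tt class="docutils literal"><span class="pre">cmdopt_setup</span></tt> fixture name are made up for illustration:</p>
<div class="highlight-python"><pre># content of conftest.py (hypothetical addition)
import pytest

class Type1Setup:           # placeholder "complex" objects
    name = "type1"

class Type2Setup:
    name = "type2"

@pytest.fixture
def cmdopt_setup(request):
    # map the plain option string to a prepared object
    opt = request.config.getoption("--cmdopt")
    return Type2Setup() if opt == "type2" else Type1Setup()</pre>
</div>
<p>A test then accepts <tt class="docutils literal"><span class="pre">cmdopt_setup</span></tt> as an argument and works with the
prepared object instead of interpreting the option itself.</p>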
</div>
<div class="section" id="dynamically-adding-command-line-options">
<h2>Dynamically adding command line options<a class="headerlink" href="#dynamically-adding-command-line-options" title="Permalink to this headline">¶</a></h2>
<p>Through <a class="reference internal" href="../customize.html#confval-addopts"><tt class="xref std std-confval docutils literal"><span class="pre">addopts</span></tt></a> you can statically add command line
options for your project.  You can also dynamically modify
the command line arguments before they get processed:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>
<span class="kn">import</span> <span class="nn">sys</span>
<span class="k">def</span> <span class="nf">pytest_cmdline_preparse</span><span class="p">(</span><span class="n">args</span><span class="p">):</span>
    <span class="k">if</span> <span class="s">&#39;xdist&#39;</span> <span class="ow">in</span> <span class="n">sys</span><span class="o">.</span><span class="n">modules</span><span class="p">:</span> <span class="c"># pytest-xdist plugin</span>
        <span class="kn">import</span> <span class="nn">multiprocessing</span>
        <span class="n">num</span> <span class="o">=</span> <span class="nb">max</span><span class="p">(</span><span class="n">multiprocessing</span><span class="o">.</span><span class="n">cpu_count</span><span class="p">()</span> <span class="o">/</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
        <span class="n">args</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[</span><span class="s">&quot;-n&quot;</span><span class="p">,</span> <span class="nb">str</span><span class="p">(</span><span class="n">num</span><span class="p">)]</span> <span class="o">+</span> <span class="n">args</span>
</pre></div>
</div>
<p>If you have the <a class="reference internal" href="../xdist.html#xdist"><em>xdist plugin</em></a> installed
you will now always perform test runs using a number
of subprocesses close to your number of CPU cores. Running in an empty
directory with the above conftest.py:</p>
<div class="highlight-python"><pre>$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 0 items

=============================  in 0.00 seconds =============================</pre>
</div>
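<p>For comparison, the static route mentioned above goes through the
<tt class="docutils literal"><span class="pre">addopts</span></tt> ini value instead of a hook.  A minimal sketch, assuming a
<tt class="docutils literal"><span class="pre">pytest.ini</span></tt> at your project root and the pytest-xdist plugin installed:</p>
<div class="highlight-python"><pre># content of pytest.ini
[pytest]
addopts = -n 4</pre>
</div>
<p>Unlike the hook above, this always passes the option, whether or not
<tt class="docutils literal"><span class="pre">xdist</span></tt> is actually importable.</p>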
</div>
<div class="section" id="control-skipping-of-tests-according-to-command-line-option">
<span id="excontrolskip"></span><h2>Control skipping of tests according to command line option<a class="headerlink" href="#control-skipping-of-tests-according-to-command-line-option" title="Permalink to this headline">¶</a></h2>
<p>Here is a <tt class="docutils literal"><span class="pre">conftest.py</span></tt> file adding a <tt class="docutils literal"><span class="pre">--runslow</span></tt> command
line option to control skipping of <tt class="docutils literal"><span class="pre">slow</span></tt> marked tests:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>
<span class="k">def</span> <span class="nf">pytest_addoption</span><span class="p">(</span><span class="n">parser</span><span class="p">):</span>
    <span class="n">parser</span><span class="o">.</span><span class="n">addoption</span><span class="p">(</span><span class="s">&quot;--runslow&quot;</span><span class="p">,</span> <span class="n">action</span><span class="o">=</span><span class="s">&quot;store_true&quot;</span><span class="p">,</span>
        <span class="n">help</span><span class="o">=</span><span class="s">&quot;run slow tests&quot;</span><span class="p">)</span>

<span class="k">def</span> <span class="nf">pytest_runtest_setup</span><span class="p">(</span><span class="n">item</span><span class="p">):</span>
    <span class="k">if</span> <span class="s">&#39;slow&#39;</span> <span class="ow">in</span> <span class="n">item</span><span class="o">.</span><span class="n">keywords</span> <span class="ow">and</span> <span class="ow">not</span> <span class="n">item</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">getoption</span><span class="p">(</span><span class="s">&quot;--runslow&quot;</span><span class="p">):</span>
        <span class="n">pytest</span><span class="o">.</span><span class="n">skip</span><span class="p">(</span><span class="s">&quot;need --runslow option to run&quot;</span><span class="p">)</span>
</pre></div>
</div>
<p>We can now write a test module like this:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_module.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>
<span class="n">slow</span> <span class="o">=</span> <span class="n">pytest</span><span class="o">.</span><span class="n">mark</span><span class="o">.</span><span class="n">slow</span>

<span class="k">def</span> <span class="nf">test_func_fast</span><span class="p">():</span>
    <span class="k">pass</span>

<span class="nd">@slow</span>
<span class="k">def</span> <span class="nf">test_func_slow</span><span class="p">():</span>
    <span class="k">pass</span>
</pre></div>
</div>
<p>and when running it you will see a skipped &#8220;slow&#8221; test:</p>
<div class="highlight-python"><pre>$ py.test -rs    # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items

test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-278/conftest.py:9: need --runslow option to run

=================== 1 passed, 1 skipped in 0.01 seconds ====================</pre>
</div>
<p>Or run it including the <tt class="docutils literal"><span class="pre">slow</span></tt> marked test:</p>
<div class="highlight-python"><pre>$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items

test_module.py ..

========================= 2 passed in 0.01 seconds =========================</pre>
</div>
</div>
<div class="section" id="writing-well-integrated-assertion-helpers">
<h2>Writing well integrated assertion helpers<a class="headerlink" href="#writing-well-integrated-assertion-helpers" title="Permalink to this headline">¶</a></h2>
<p>If you have a test helper function called from a test you can
use <tt class="docutils literal"><span class="pre">pytest.fail</span></tt> to fail a test with a certain message.
The test support function will not show up in the traceback if you
set the <tt class="docutils literal"><span class="pre">__tracebackhide__</span></tt> option somewhere in the helper function.
Example:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_checkconfig.py</span>
<span class="kn">import</span> <span class="nn">pytest</span>
<span class="k">def</span> <span class="nf">checkconfig</span><span class="p">(</span><span class="n">x</span><span class="p">):</span>
    <span class="n">__tracebackhide__</span> <span class="o">=</span> <span class="bp">True</span>
    <span class="k">if</span> <span class="ow">not</span> <span class="nb">hasattr</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="s">&quot;config&quot;</span><span class="p">):</span>
        <span class="n">pytest</span><span class="o">.</span><span class="n">fail</span><span class="p">(</span><span class="s">&quot;not configured: </span><span class="si">%s</span><span class="s">&quot;</span> <span class="o">%</span><span class="p">(</span><span class="n">x</span><span class="p">,))</span>

<span class="k">def</span> <span class="nf">test_something</span><span class="p">():</span>
    <span class="n">checkconfig</span><span class="p">(</span><span class="mi">42</span><span class="p">)</span>
</pre></div>
</div>
<p>The <tt class="docutils literal"><span class="pre">__tracebackhide__</span></tt> setting influences how py.test displays
tracebacks: the <tt class="docutils literal"><span class="pre">checkconfig</span></tt> function will not be shown
unless the <tt class="docutils literal"><span class="pre">--fulltrace</span></tt> command line option is specified.
Let&#8217;s run our little test module:</p>
<div class="highlight-python"><pre>$ py.test -q test_checkconfig.py
F
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
&gt;       checkconfig(42)
E       Failed: not configured: 42

test_checkconfig.py:8: Failed</pre>
</div>
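<p>If several test modules need the same helper, one option (a sketch only,
not part of the example above) is to expose it from <tt class="docutils literal"><span class="pre">conftest.py</span></tt>
through a fixture:</p>
<div class="highlight-python"><pre># content of conftest.py (hypothetical)
import pytest

@pytest.fixture
def checkconfig():
    def check(x):
        __tracebackhide__ = True
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))
    return check</pre>
</div>
<p>A test then requests the <tt class="docutils literal"><span class="pre">checkconfig</span></tt> fixture and calls the returned
function, keeping the traceback-hiding behaviour intact.</p>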
</div>
<div class="section" id="detect-if-running-from-within-a-py-test-run">
<h2>Detect if running from within a py.test run<a class="headerlink" href="#detect-if-running-from-within-a-py-test-run" title="Permalink to this headline">¶</a></h2>
<p>Usually it is a bad idea to make application code
behave differently if called from a test.  But if you
absolutely must find out if your application code is
running from a test you can do something like this:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="k">def</span> <span class="nf">pytest_configure</span><span class="p">(</span><span class="n">config</span><span class="p">):</span>
    <span class="kn">import</span> <span class="nn">sys</span>
    <span class="n">sys</span><span class="o">.</span><span class="n">_called_from_test</span> <span class="o">=</span> <span class="bp">True</span>

<span class="k">def</span> <span class="nf">pytest_unconfigure</span><span class="p">(</span><span class="n">config</span><span class="p">):</span>
    <span class="k">del</span> <span class="n">sys</span><span class="o">.</span><span class="n">_called_from_test</span>
</pre></div>
</div>
<p>and then check for the <tt class="docutils literal"><span class="pre">sys._called_from_test</span></tt> flag:</p>
<div class="highlight-python"><pre>if hasattr(sys, '_called_from_test'):
    # called from within a test run
else:
    # called "normally"</pre>
</div>
<p>accordingly in your application.  It&#8217;s also a good idea
to use your own application module rather than <tt class="docutils literal"><span class="pre">sys</span></tt>
to hold the flag.</p>
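<p>A minimal sketch of that variant, assuming your project ships an importable
module called <tt class="docutils literal"><span class="pre">mylib</span></tt> (the name is made up for illustration):</p>
<div class="highlight-python"><pre># content of conftest.py (hypothetical)
import mylib  # your own application package

def pytest_configure(config):
    mylib._called_from_test = True

def pytest_unconfigure(config):
    del mylib._called_from_test</pre>
</div>
<p>Application code then checks <tt class="docutils literal"><span class="pre">getattr(mylib, '_called_from_test', False)</span></tt>
instead of poking at <tt class="docutils literal"><span class="pre">sys</span></tt>.</p>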
</div>
<div class="section" id="adding-info-to-test-report-header">
<h2>Adding info to test report header<a class="headerlink" href="#adding-info-to-test-report-header" title="Permalink to this headline">¶</a></h2>
<p>It&#8217;s easy to present extra information in a py.test run:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="k">def</span> <span class="nf">pytest_report_header</span><span class="p">(</span><span class="n">config</span><span class="p">):</span>
    <span class="k">return</span> <span class="s">&quot;project deps: mylib-1.1&quot;</span>
</pre></div>
</div>
<p>which will add the string to the test header accordingly:</p>
<div class="highlight-python"><pre>$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
project deps: mylib-1.1
collected 0 items

=============================  in 0.00 seconds =============================</pre>
</div>
<p>You can also return a list of strings which will be considered as several
lines of information.  You can of course also make the amount of reported
information depend on e.g. the value of <tt class="docutils literal"><span class="pre">config.option.verbose</span></tt> so that
you present more information appropriately:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="k">def</span> <span class="nf">pytest_report_header</span><span class="p">(</span><span class="n">config</span><span class="p">):</span>
    <span class="k">if</span> <span class="n">config</span><span class="o">.</span><span class="n">option</span><span class="o">.</span><span class="n">verbose</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="p">:</span>
        <span class="k">return</span> <span class="p">[</span><span class="s">&quot;info1: did you know that ...&quot;</span><span class="p">,</span> <span class="s">&quot;did you?&quot;</span><span class="p">]</span>
</pre></div>
</div>
<p>which will add info only when run with &#8220;-v&#8221;:</p>
<div class="highlight-python"><pre>$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items

=============================  in 0.00 seconds =============================</pre>
</div>
<p>and nothing when run plainly:</p>
<div class="highlight-python"><pre>$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 0 items

=============================  in 0.00 seconds =============================</pre>
</div>
</div>
<div class="section" id="profiling-test-duration">
<h2>profiling test duration<a class="headerlink" href="#profiling-test-duration" title="Permalink to this headline">¶</a></h2>
<p>If you have a large, slow-running test suite you might want to find
out which tests are the slowest. Let&#8217;s make an artificial test suite:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_some_are_slow.py</span>

<span class="kn">import</span> <span class="nn">time</span>

<span class="k">def</span> <span class="nf">test_funcfast</span><span class="p">():</span>
    <span class="k">pass</span>

<span class="k">def</span> <span class="nf">test_funcslow1</span><span class="p">():</span>
    <span class="n">time</span><span class="o">.</span><span class="n">sleep</span><span class="p">(</span><span class="mf">0.1</span><span class="p">)</span>

<span class="k">def</span> <span class="nf">test_funcslow2</span><span class="p">():</span>
    <span class="n">time</span><span class="o">.</span><span class="n">sleep</span><span class="p">(</span><span class="mf">0.2</span><span class="p">)</span>
</pre></div>
</div>
<p>Now we can profile which test functions execute the slowest:</p>
<div class="highlight-python"><pre>$ py.test --durations=3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items

test_some_are_slow.py ...

========================= slowest 3 test durations =========================
0.20s call     test_some_are_slow.py::test_funcslow2
0.10s call     test_some_are_slow.py::test_funcslow1
0.00s setup    test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.31 seconds =========================</pre>
</div>
</div>
<div class="section" id="incremental-testing-test-steps">
<h2>incremental testing - test steps<a class="headerlink" href="#incremental-testing-test-steps" title="Permalink to this headline">¶</a></h2>
<p>Sometimes you may have a testing situation which consists of a series
of test steps.  If one step fails it makes no sense to execute further
steps as they are all expected to fail anyway and their tracebacks
add no insight.  Here is a simple <tt class="docutils literal"><span class="pre">conftest.py</span></tt> file which introduces
an <tt class="docutils literal"><span class="pre">incremental</span></tt> marker which is to be used on classes:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>

<span class="k">def</span> <span class="nf">pytest_runtest_makereport</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="n">call</span><span class="p">):</span>
    <span class="k">if</span> <span class="s">&quot;incremental&quot;</span> <span class="ow">in</span> <span class="n">item</span><span class="o">.</span><span class="n">keywords</span><span class="p">:</span>
        <span class="k">if</span> <span class="n">call</span><span class="o">.</span><span class="n">excinfo</span> <span class="ow">is</span> <span class="ow">not</span> <span class="bp">None</span><span class="p">:</span>
            <span class="n">parent</span> <span class="o">=</span> <span class="n">item</span><span class="o">.</span><span class="n">parent</span>
            <span class="n">parent</span><span class="o">.</span><span class="n">_previousfailed</span> <span class="o">=</span> <span class="n">item</span>

<span class="k">def</span> <span class="nf">pytest_runtest_setup</span><span class="p">(</span><span class="n">item</span><span class="p">):</span>
    <span class="k">if</span> <span class="s">&quot;incremental&quot;</span> <span class="ow">in</span> <span class="n">item</span><span class="o">.</span><span class="n">keywords</span><span class="p">:</span>
        <span class="n">previousfailed</span> <span class="o">=</span> <span class="nb">getattr</span><span class="p">(</span><span class="n">item</span><span class="o">.</span><span class="n">parent</span><span class="p">,</span> <span class="s">&quot;_previousfailed&quot;</span><span class="p">,</span> <span class="bp">None</span><span class="p">)</span>
        <span class="k">if</span> <span class="n">previousfailed</span> <span class="ow">is</span> <span class="ow">not</span> <span class="bp">None</span><span class="p">:</span>
            <span class="n">pytest</span><span class="o">.</span><span class="n">xfail</span><span class="p">(</span><span class="s">&quot;previous test failed (</span><span class="si">%s</span><span class="s">)&quot;</span> <span class="o">%</span><span class="n">previousfailed</span><span class="o">.</span><span class="n">name</span><span class="p">)</span>
</pre></div>
</div>
<p>These two hook implementations work together to abort incremental-marked
tests in a class.  Here is a test module example:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_step.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>

<span class="nd">@pytest.mark.incremental</span>
<span class="k">class</span> <span class="nc">TestUserHandling</span><span class="p">:</span>
    <span class="k">def</span> <span class="nf">test_login</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="k">pass</span>
    <span class="k">def</span> <span class="nf">test_modification</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="k">assert</span> <span class="mi">0</span>
    <span class="k">def</span> <span class="nf">test_deletion</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="k">pass</span>

<span class="k">def</span> <span class="nf">test_normal</span><span class="p">():</span>
    <span class="k">pass</span>
</pre></div>
</div>
<p>If we run this:</p>
<div class="highlight-python"><pre>$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items

test_step.py .Fx.

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = &lt;test_step.TestUserHandling instance at 0x282b8c0&gt;

    def test_modification(self):
&gt;       assert 0
E       assert 0

test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
  reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============</pre>
</div>
<p>We&#8217;ll see that <tt class="docutils literal"><span class="pre">test_deletion</span></tt> was not executed because <tt class="docutils literal"><span class="pre">test_modification</span></tt>
failed.  It is reported as an &#8220;expected failure&#8221;.</p>
</div>
<div class="section" id="package-directory-level-fixtures-setups">
<h2>Package/Directory-level fixtures (setups)<a class="headerlink" href="#package-directory-level-fixtures-setups" title="Permalink to this headline">¶</a></h2>
<p>If you have nested test directories, you can have per-directory fixture scopes
by placing fixture functions in a <tt class="docutils literal"><span class="pre">conftest.py</span></tt> file in that directory.
You can use all types of fixtures including <a class="reference internal" href="../fixture.html#autouse-fixtures"><em>autouse fixtures</em></a> which are the equivalent of xUnit&#8217;s setup/teardown
concept.  It&#8217;s however recommended to have explicit fixture references in your
tests or test classes rather than relying on implicitly executing
setup/teardown functions, especially if they are far away from the actual tests.</p>
<p>Here is an example for making a <tt class="docutils literal"><span class="pre">db</span></tt> fixture available in a directory:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of a/conftest.py</span>
<span class="kn">import</span> <span class="nn">pytest</span>

<span class="k">class</span> <span class="nc">DB</span><span class="p">:</span>
    <span class="k">pass</span>

<span class="nd">@pytest.fixture</span><span class="p">(</span><span class="n">scope</span><span class="o">=</span><span class="s">&quot;session&quot;</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">db</span><span class="p">():</span>
    <span class="k">return</span> <span class="n">DB</span><span class="p">()</span>
</pre></div>
</div>
<p>and then a test module in that directory:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of a/test_db.py</span>
<span class="k">def</span> <span class="nf">test_a1</span><span class="p">(</span><span class="n">db</span><span class="p">):</span>
    <span class="k">assert</span> <span class="mi">0</span><span class="p">,</span> <span class="n">db</span>  <span class="c"># to show value</span>
</pre></div>
</div>
<p>another test module:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of a/test_db2.py</span>
<span class="k">def</span> <span class="nf">test_a2</span><span class="p">(</span><span class="n">db</span><span class="p">):</span>
    <span class="k">assert</span> <span class="mi">0</span><span class="p">,</span> <span class="n">db</span>  <span class="c"># to show value</span>
</pre></div>
</div>
<p>and then a module in a sister directory which will not see
the <tt class="docutils literal"><span class="pre">db</span></tt> fixture:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of b/test_error.py</span>
<span class="k">def</span> <span class="nf">test_root</span><span class="p">(</span><span class="n">db</span><span class="p">):</span>  <span class="c"># no db here, will error out</span>
    <span class="k">pass</span>
</pre></div>
</div>
<p>We can run this:</p>
<div class="highlight-python"><pre>$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 7 items

test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /tmp/doc-exec-278/b/test_error.py, line 1
  def test_root(db):  # no db here, will error out
        fixture 'db' not found
        available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
        use 'py.test --fixtures [testpath]' for help on them.

/tmp/doc-exec-278/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = &lt;test_step.TestUserHandling instance at 0x26145f0&gt;

    def test_modification(self):
&gt;       assert 0
E       assert 0

test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________

db = &lt;conftest.DB instance at 0x26211b8&gt;

    def test_a1(db):
&gt;       assert 0, db  # to show value
E       AssertionError: &lt;conftest.DB instance at 0x26211b8&gt;

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = &lt;conftest.DB instance at 0x26211b8&gt;

    def test_a2(db):
&gt;       assert 0, db  # to show value
E       AssertionError: &lt;conftest.DB instance at 0x26211b8&gt;

a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========</pre>
</div>
<p>The two test modules in the <tt class="docutils literal"><span class="pre">a</span></tt> directory see the same <tt class="docutils literal"><span class="pre">db</span></tt> fixture instance
while the one test in the sister directory <tt class="docutils literal"><span class="pre">b</span></tt> doesn&#8217;t see it.  We could of course
also define a <tt class="docutils literal"><span class="pre">db</span></tt> fixture in that sister directory&#8217;s <tt class="docutils literal"><span class="pre">conftest.py</span></tt> file, as sketched below.
Note that each fixture is only instantiated if there is a test actually needing
it (unless you use &#8220;autouse&#8221; fixtures, which are always executed ahead of the first
test that runs).</p>
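<p>A minimal sketch of that variant; with its own <tt class="docutils literal"><span class="pre">conftest.py</span></tt> the
<tt class="docutils literal"><span class="pre">b</span></tt> directory gets a separate <tt class="docutils literal"><span class="pre">DB</span></tt> instance rather than the one from <tt class="docutils literal"><span class="pre">a</span></tt>:</p>
<div class="highlight-python"><pre># content of b/conftest.py (hypothetical)
import pytest

class DB:
    pass

@pytest.fixture(scope="session")
def db():
    return DB()</pre>
</div>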
</div>
<div class="section" id="post-process-test-reports-failures">
<h2>post-process test reports / failures<a class="headerlink" href="#post-process-test-reports-failures" title="Permalink to this headline">¶</a></h2>
<p>If you want to postprocess test reports and need access to the executing
environment you can implement a hook that gets called when the test
&#8220;report&#8221; object is about to be created.  Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing.  In our
case we just write some information out to a <tt class="docutils literal"><span class="pre">failures</span></tt> file:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>
<span class="kn">import</span> <span class="nn">os.path</span>

<span class="nd">@pytest.mark.tryfirst</span>
<span class="k">def</span> <span class="nf">pytest_runtest_makereport</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="n">call</span><span class="p">,</span> <span class="n">__multicall__</span><span class="p">):</span>
    <span class="c"># execute all other hooks to obtain the report object</span>
    <span class="n">rep</span> <span class="o">=</span> <span class="n">__multicall__</span><span class="o">.</span><span class="n">execute</span><span class="p">()</span>

    <span class="c"># we only look at actual failing test calls, not setup/teardown</span>
    <span class="k">if</span> <span class="n">rep</span><span class="o">.</span><span class="n">when</span> <span class="o">==</span> <span class="s">&quot;call&quot;</span> <span class="ow">and</span> <span class="n">rep</span><span class="o">.</span><span class="n">failed</span><span class="p">:</span>
        <span class="n">mode</span> <span class="o">=</span> <span class="s">&quot;a&quot;</span> <span class="k">if</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">exists</span><span class="p">(</span><span class="s">&quot;failures&quot;</span><span class="p">)</span> <span class="k">else</span> <span class="s">&quot;w&quot;</span>
        <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">&quot;failures&quot;</span><span class="p">,</span> <span class="n">mode</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
            <span class="c"># let&#39;s also access a fixture for the fun of it</span>
            <span class="k">if</span> <span class="s">&quot;tmpdir&quot;</span> <span class="ow">in</span> <span class="n">item</span><span class="o">.</span><span class="n">funcargs</span><span class="p">:</span>
                <span class="n">extra</span> <span class="o">=</span> <span class="s">&quot; (</span><span class="si">%s</span><span class="s">)&quot;</span> <span class="o">%</span> <span class="n">item</span><span class="o">.</span><span class="n">funcargs</span><span class="p">[</span><span class="s">&quot;tmpdir&quot;</span><span class="p">]</span>
            <span class="k">else</span><span class="p">:</span>
                <span class="n">extra</span> <span class="o">=</span> <span class="s">&quot;&quot;</span>

            <span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">rep</span><span class="o">.</span><span class="n">nodeid</span> <span class="o">+</span> <span class="n">extra</span> <span class="o">+</span> <span class="s">&quot;</span><span class="se">\n</span><span class="s">&quot;</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">rep</span>
</pre></div>
</div>
<p>if you then have failing tests:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_module.py</span>
<span class="k">def</span> <span class="nf">test_fail1</span><span class="p">(</span><span class="n">tmpdir</span><span class="p">):</span>
    <span class="k">assert</span> <span class="mi">0</span>
<span class="k">def</span> <span class="nf">test_fail2</span><span class="p">():</span>
    <span class="k">assert</span> <span class="mi">0</span>
</pre></div>
</div>
<p>and run them:</p>
<div class="highlight-python"><pre>$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items

test_module.py FF

================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('/tmp/pytest-326/test_fail10')

    def test_fail1(tmpdir):
&gt;       assert 0
E       assert 0

test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
&gt;       assert 0
E       assert 0

test_module.py:4: AssertionError
========================= 2 failed in 0.02 seconds =========================</pre>
</div>
<p>you will have a &#8220;failures&#8221; file which contains the failing test ids:</p>
<div class="highlight-python"><pre>$ cat failures
test_module.py::test_fail1 (/tmp/pytest-326/test_fail10)
test_module.py::test_fail2</pre>
</div>
</div>
<div class="section" id="making-test-result-information-available-in-fixtures">
<h2>Making test result information available in fixtures<a class="headerlink" href="#making-test-result-information-available-in-fixtures" title="Permalink to this headline">¶</a></h2>
<p>If you want to make test result reports available in fixture finalizers,
here is a little example implemented via a local plugin:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of conftest.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>

<span class="nd">@pytest.mark.tryfirst</span>
<span class="k">def</span> <span class="nf">pytest_runtest_makereport</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="n">call</span><span class="p">,</span> <span class="n">__multicall__</span><span class="p">):</span>
    <span class="c"># execute all other hooks to obtain the report object</span>
    <span class="n">rep</span> <span class="o">=</span> <span class="n">__multicall__</span><span class="o">.</span><span class="n">execute</span><span class="p">()</span>

    <span class="c"># set an report attribute for each phase of a call, which can</span>
    <span class="c"># be &quot;setup&quot;, &quot;call&quot;, &quot;teardown&quot;</span>

    <span class="nb">setattr</span><span class="p">(</span><span class="n">item</span><span class="p">,</span> <span class="s">&quot;rep_&quot;</span> <span class="o">+</span> <span class="n">rep</span><span class="o">.</span><span class="n">when</span><span class="p">,</span> <span class="n">rep</span><span class="p">)</span>
    <span class="k">return</span> <span class="n">rep</span>


<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">something</span><span class="p">(</span><span class="n">request</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">fin</span><span class="p">():</span>
        <span class="c"># request.node is an &quot;item&quot; because we use the default</span>
        <span class="c"># &quot;function&quot; scope</span>
        <span class="k">if</span> <span class="n">request</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">rep_setup</span><span class="o">.</span><span class="n">failed</span><span class="p">:</span>
            <span class="k">print</span> <span class="s">&quot;setting up a test failed!&quot;</span><span class="p">,</span> <span class="n">request</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">nodeid</span>
        <span class="k">elif</span> <span class="n">request</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">rep_setup</span><span class="o">.</span><span class="n">passed</span><span class="p">:</span>
            <span class="k">if</span> <span class="n">request</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">rep_call</span><span class="o">.</span><span class="n">failed</span><span class="p">:</span>
                <span class="k">print</span> <span class="s">&quot;executing test failed&quot;</span><span class="p">,</span> <span class="n">request</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">nodeid</span>
    <span class="n">request</span><span class="o">.</span><span class="n">addfinalizer</span><span class="p">(</span><span class="n">fin</span><span class="p">)</span>
</pre></div>
</div>
<p>if you then have failing tests:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="c"># content of test_module.py</span>

<span class="kn">import</span> <span class="nn">pytest</span>

<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">other</span><span class="p">():</span>
    <span class="k">assert</span> <span class="mi">0</span>

<span class="k">def</span> <span class="nf">test_setup_fails</span><span class="p">(</span><span class="n">something</span><span class="p">,</span> <span class="n">other</span><span class="p">):</span>
    <span class="k">pass</span>

<span class="k">def</span> <span class="nf">test_call_fails</span><span class="p">(</span><span class="n">something</span><span class="p">):</span>
    <span class="k">assert</span> <span class="mi">0</span>

<span class="k">def</span> <span class="nf">test_fail2</span><span class="p">():</span>
    <span class="k">assert</span> <span class="mi">0</span>
</pre></div>
</div>
<p>and run it:</p>
<div class="highlight-python"><pre>$ py.test -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items

test_module.py EFF

================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

    @pytest.fixture
    def other():
&gt;       assert 0
E       assert 0

test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

    def test_call_fails(something):
&gt;       assert 0
E       assert 0

test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
&gt;       assert 0
E       assert 0

test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================
setting up a test failed! test_module.py::test_setup_fails
executing test failed test_module.py::test_call_fails</pre>
</div>
<p>You&#8217;ll see that the fixture finalizers could use the precise reporting
information.</p>
</div>
</div>

