
.. _paramexamples:

Parametrizing tests
=================================================

.. currentmodule:: _pytest.python

py.test makes it easy to parametrize test functions.
For basic docs, see :ref:`parametrize-basics`.

Below we provide some examples using
the built-in mechanisms.
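
For orientation, here is a minimal sketch of the decorator-based
mechanism described in the basic docs (the test and argument names are
illustrative only); the examples below use the hook-based equivalents::

    # a minimal illustrative sketch of @pytest.mark.parametrize
    import pytest

    @pytest.mark.parametrize("n,expected", [(1, 2), (3, 4)])
    def test_increment(n, expected):
        assert n + 1 == expected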

Generating parameter combinations depending on the command line
----------------------------------------------------------------------------

.. regendoc:wipe

Let's say we want to execute a test with different computation
parameters, where the parameter range is determined by a command
line argument.  Let's first write a simple (do-nothing) computation test::

    # content of test_compute.py

    def test_compute(param1):
        assert param1 < 4

Now we add a test configuration like this::

    # content of conftest.py

    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
            help="run all combinations")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.fixturenames:
            if metafunc.config.option.all:
                end = 5
            else:
                end = 2
            metafunc.parametrize("param1", range(end))

This means that we only run 2 tests if we do not pass ``--all``::

    $ py.test -q test_compute.py
    ..

We ran only two computations, so we see two dots.
Let's run the full monty::

    $ py.test -q --all
    ....F
    ================================= FAILURES =================================
    _____________________________ test_compute[4] ______________________________
    
    param1 = 4
    
        def test_compute(param1):
    >       assert param1 < 4
    E       assert 4 < 4
    
    test_compute.py:3: AssertionError

As expected, when running the full range of ``param1`` values
we get an error on the last one.
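
The same idea works with a non-boolean option; here is a hedged variant
(the ``--end`` option name is made up for illustration)::

    # content of conftest.py -- illustrative variant, not the example above
    def pytest_addoption(parser):
        parser.addoption("--end", action="store", default="2",
            help="upper bound for the param1 range")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.fixturenames:
            # the option value arrives as a string, so convert it ourselves
            end = int(metafunc.config.option.end)
            metafunc.parametrize("param1", range(end))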

A quick port of "testscenarios"
------------------------------------

.. _`test scenarios`: http://pypi.python.org/pypi/testscenarios/

Here is a quick port to run tests configured with `test scenarios`_,
an add-on from Robert Collins for the standard unittest framework. We
only have to work a bit to construct the correct arguments for pytest's
:py:func:`Metafunc.parametrize`::

    # content of test_scenarios.py

    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            # sort the items so argnames/argvalues stay aligned across scenarios
            items = sorted(scenario[1].items())
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

    scenario1 = ('basic', {'attribute': 'value'})
    scenario2 = ('advanced', {'attribute': 'value2'})

    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)

This is a fully self-contained example which you can run with::

    $ py.test test_scenarios.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    collected 4 items
    
    test_scenarios.py ....
    
    ========================= 4 passed in 0.01 seconds =========================

If you only collect tests you'll also see 'advanced' and 'basic' appear as variants of the test functions::

    $ py.test --collectonly test_scenarios.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    collected 4 items
    <Module 'test_scenarios.py'>
      <Class 'TestSampleWithScenarios'>
        <Instance '()'>
          <Function 'test_demo1[basic]'>
          <Function 'test_demo2[basic]'>
          <Function 'test_demo1[advanced]'>
          <Function 'test_demo2[advanced]'>
    
    =============================  in 0.01 seconds =============================

Note that we told ``metafunc.parametrize()`` that our scenario values
should be considered class-scoped.  With pytest-2.3 this leads to a
resource-based ordering: tests using the same scenario run grouped together.
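
For the two scenarios above, the hook effectively ends up issuing a call
like this (written out by hand here for illustration)::

    metafunc.parametrize(
        ["attribute"],              # argnames, taken from the scenario dicts
        [["value"], ["value2"]],    # one argvalue list per scenario
        ids=["basic", "advanced"],  # scenario names become test ids
        scope="class")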

Deferring the setup of parametrized resources
---------------------------------------------------

.. regendoc:wipe

The parametrization of test functions happens at collection
time.  It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that.  First,
the actual test requiring a ``db`` object::

    # content of test_backends.py

    import pytest
    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")

We can now add a test configuration that generates two invocations of
the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations::

    # content of conftest.py
    import pytest

    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.fixturenames:
            metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

    class DB1:
        "one database object"
    class DB2:
        "alternative database object"

    @pytest.fixture
    def db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")

Let's first see what it looks like at collection time::

    $ py.test test_backends.py --collectonly
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    collected 2 items
    <Module 'test_backends.py'>
      <Function 'test_db_initialized[d1]'>
      <Function 'test_db_initialized[d2]'>
    
    =============================  in 0.00 seconds =============================

And then when we run the test::

    $ py.test -q test_backends.py
    .F
    ================================= FAILURES =================================
    _________________________ test_db_initialized[d2] __________________________
    
    db = <conftest.DB2 instance at 0x2038f80>
    
        def test_db_initialized(db):
            # a dummy test
            if db.__class__.__name__ == "DB2":
    >           pytest.fail("deliberately failing for demo purposes")
    E           Failed: deliberately failing for demo purposes
    
    test_backends.py:6: Failed

The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed.  Our ``db`` fixture function instantiated each of the DB objects during the setup phase, while ``pytest_generate_tests`` generated the two corresponding calls to ``test_db_initialized`` during the collection phase.
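
If the database objects required teardown, the same fixture could register
a finalizer; a hedged sketch (the ``close()`` method is hypothetical, the
``DB1``/``DB2`` classes above don't define one)::

    @pytest.fixture
    def db(request):
        if request.param == "d1":
            db = DB1()
        elif request.param == "d2":
            db = DB2()
        else:
            raise ValueError("invalid internal test config")
        # hypothetical teardown hook, run after the test finishes
        request.addfinalizer(lambda: db.close())
        return db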

.. regendoc:wipe

Parametrizing test methods through per-class configuration
--------------------------------------------------------------

.. _`unittest parameterizer`: http://code.google.com/p/unittest-ext/source/browse/trunk/params.py


Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parameterizer`_ but in a lot less code::

    # content of ./test_parametrize.py
    import pytest

    def pytest_generate_tests(metafunc):
        # called once for each test function
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = list(funcarglist[0])
        metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
                for funcargs in funcarglist])

    class TestClass:
        # a map specifying multiple argument sets for a test method
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
            'test_zerodivision': [dict(a=1, b=0), ],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")

Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function.  Let's run it::

    $ py.test -q
    F..
    ================================= FAILURES =================================
    ________________________ TestClass.test_equals[1-2] ________________________
    
    self = <test_parametrize.TestClass instance at 0x1338f80>, a = 1, b = 2
    
        def test_equals(self, a, b):
    >       assert a == b
    E       assert 1 == 2
    
    test_parametrize.py:18: AssertionError
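
If the default numeric ids (``[1-2]`` above) are too terse, you can derive
readable ids from the argument dicts yourself; a hedged variant of the hook::

    # illustrative variant: build ids like "a=1-b=2" from each argument set
    def pytest_generate_tests(metafunc):
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(
            argnames,
            [[funcargs[name] for name in argnames] for funcargs in funcarglist],
            ids=["-".join("%s=%s" % (name, funcargs[name]) for name in argnames)
                 for funcargs in funcarglist])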

Indirect parametrization with multiple fixtures
--------------------------------------------------------------

Here is a stripped-down real-life example of using parametrized
testing to test serialization of objects between different Python
interpreters.  We define a ``test_basic_objects`` function which
is to be run with different sets of values for its three arguments:

* ``python1``: first python interpreter, run to pickle-dump an object to a file
* ``python2``: second interpreter, run to pickle-load an object from a file
* ``obj``: object to be dumped/loaded

.. literalinclude:: multipython.py
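
In outline, the file parametrizes several fixture names with
``indirect=True``; a reduced, hedged sketch (the names and values are
illustrative, not the actual ``multipython.py`` contents)::

    # illustrative sketch of the pattern used in multipython.py
    pythonlist = ['python2.5', 'python2.6', 'python2.7']

    def pytest_generate_tests(metafunc):
        # parametrize each interpreter fixture independently; pytest then
        # builds the cross product of python1 x python2 x obj combinations
        if 'python1' in metafunc.fixturenames:
            metafunc.parametrize('python1', pythonlist, indirect=True)
        if 'python2' in metafunc.fixturenames:
            metafunc.parametrize('python2', pythonlist, indirect=True)
        if 'obj' in metafunc.fixturenames:
            metafunc.parametrize('obj', [42, {}, {1: 3}])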

Running it results in some skips if we don't have all the Python interpreters installed, and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::

   $ py.test -rs -q multipython.py
   ............sss............sss............sss............ssssssssssssssssss
   ========================= short test summary info ==========================
   SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------

If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported
implementations and that get skipped if an implementation is not
importable/available.  Let's say we have a "base" implementation and that
other (possibly optimized) implementations need to provide similar results::

    # content of conftest.py

    import pytest

    @pytest.fixture(scope="session")
    def basemod(request):
        return pytest.importorskip("base")

    @pytest.fixture(scope="session", params=["opt1", "opt2"])
    def optmod(request):
        return pytest.importorskip(request.param)
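
``pytest.importorskip`` can also enforce a minimum version; a hedged sketch
(this assumes the ``base`` module exposes a ``__version__`` attribute)::

    @pytest.fixture(scope="session")
    def basemod(request):
        # skip all dependent tests if "base" is missing or older than 1.0
        return pytest.importorskip("base", minversion="1.0")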

And then a base implementation of a simple function::

    # content of base.py
    def func1():
        return 1

And an optimized version::

    # content of opt1.py
    def func1():
        return 1.0001

And finally a little test module::

    # content of test_module.py

    def test_func1(basemod, optmod):
        assert round(basemod.func1(), 3) == round(optmod.func1(), 3)


If you run this with reporting for skips enabled::

    $ py.test -rs test_module.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.5
    collected 2 items
    
    test_module.py .s
    ========================= short test summary info ==========================
    SKIP [1] /tmp/doc-exec-275/conftest.py:10: could not import 'opt2'
    
    =================== 1 passed, 1 skipped in 0.01 seconds ====================

You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped.  A few notes:

- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
  don't need to import the modules more than once

- if you have multiple test functions and a skipped import, you will see
  the ``[1]`` count increasing in the report

- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
  parametrization on the test functions to parametrize input/output
  values as well (see the sketch below).
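
As a hedged illustration of that last point, here is a decorator-parametrized
variant of the test above (the ``ndigits`` values are made up)::

    # content of test_module.py -- illustrative variant
    import pytest

    @pytest.mark.parametrize("ndigits", [2, 3])
    def test_func1(basemod, optmod, ndigits):
        # compare the implementations at several rounding precisions
        assert round(basemod.func1(), ndigits) == round(optmod.func1(), ndigits)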