Unit tests¶
Django comes with its own test suite, in the tests directory of the codebase. It's our policy to make sure all tests pass at all times.
We appreciate any and all contributions to the test suite!
Django's tests all use the testing infrastructure that ships with Django for testing applications. See Writing and running tests for an explanation of how to write new tests.
Running the unit tests¶
Quickstart¶
First, fork Django on GitHub.
Second, create and activate a virtual environment. If you're not familiar with how to do that, read our contributing tutorial.
Next, get a local clone of your fork, install some requirements, and run the test suite:
$ git clone [email protected]:YourGitHubName/django.git django-repo
$ cd django-repo/tests
$ pip install -e ..
$ pip install -r requirements/py3.txt
$ ./runtests.py
Installing the requirements will likely require you to install some operating system packages that your computer doesn't have. You can usually figure out which package to install by doing a Web search for the last line or so of the error message. Try adding your operating system to the search query if needed.
If you have trouble installing the requirements, you can skip that step. See Running all the tests for details on installing the optional test dependencies. If an optional dependency isn't installed, the tests that require it will be skipped.
Running the tests requires a Django settings module that defines the databases to use. To make it easy to get started, Django provides and uses a sample settings module that uses the SQLite database. See Using another settings module to learn how to use a different settings module to run the tests with a different database.
Windows users
We recommend something like Git Bash to run the tests using the above approach.
Having problems? See Troubleshooting for some common issues.
Running tests using tox¶
Tox is a tool for running tests in different
virtual environments. Django includes a basic tox.ini
that automates some
checks that our build server performs on pull requests. To run the unit tests
and other checks (such as import sorting, the
documentation spelling checker, and
code formatting), install and run the tox
command from any place in the Django source tree:
$ pip install tox
$ tox
By default, tox runs the test suite with the bundled test settings file for SQLite, flake8, isort, and the documentation spelling checker. In addition to the system dependencies noted elsewhere in this documentation, the command python3 must be on your path and linked to the appropriate version of Python. A list of default environments can be seen as follows:
$ tox -l
py3
flake8
docs
isort
Testing other Python versions and database backends¶
In addition to the default environments, tox supports running unit tests for other versions of Python and other database backends. Since Django's test suite doesn't bundle a settings file for database backends other than SQLite, however, you must create and provide your own test settings. For example, to run the tests on Python 3.5 using PostgreSQL:
$ tox -e py35-postgres -- --settings=my_postgres_settings
This command sets up a Python 3.5 virtual environment, installs Django's test suite dependencies (including those for PostgreSQL), and calls runtests.py with the supplied arguments (in this case, --settings=my_postgres_settings).
The remainder of this documentation shows commands for running tests without tox; however, any option passed to runtests.py can also be passed to tox by prefixing the argument list with --, as above.
Tox also respects the DJANGO_SETTINGS_MODULE environment variable, if set. For example, the following is equivalent to the command above:
$ DJANGO_SETTINGS_MODULE=my_postgres_settings tox -e py35-postgres
Running the JavaScript tests¶
Django includes a set of JavaScript unit tests for functions in certain contrib apps. The JavaScript tests aren't run by default using tox because they require Node.js to be installed and aren't necessary for the majority of patches. To run the JavaScript tests using tox:
$ tox -e javascript
This command runs npm install to ensure test requirements are up to date and then runs npm test.
Using another settings module¶
The included settings module (tests/test_sqlite.py) allows you to run the test suite using SQLite. If you want to run the tests using a different database, you'll have to define your own settings file. Some tests, such as those for contrib.postgres, are specific to a particular database backend and will be skipped when run with a different backend.
To run the tests with different settings, ensure that the module is on your PYTHONPATH and pass the module with --settings.
The DATABASES setting in any test settings module needs to define two databases:
- A default database. This database should use the backend that you want to use for primary testing.
- A database with the alias other. This database is used to test that queries can be directed to different databases. It should use the same backend as the default database, and it must have a different name.
If you're using a backend that isn't SQLite, you will need to provide other details for each database:
- The USER option needs to specify an existing user account for the database. This user needs the permission to execute CREATE DATABASE so that the test database can be created.
- The PASSWORD option needs to provide the password for the USER that has been specified.
Test databases get their names by prepending test_ to the value of the NAME setting for the databases defined in DATABASES. These test databases are deleted when the tests are finished.
You will also need to ensure that your database uses UTF-8 as the default character set. If your database server doesn't use UTF-8 as a default charset, you will need to include a value for CHARSET in the test settings dictionary for the applicable database.
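Putting these requirements together, a minimal custom test settings module for PostgreSQL might look like the following sketch. The module name, database names, user, and password are all placeholders for illustration, not anything Django ships:

```python
# my_postgres_settings.py -- a hypothetical custom test settings module.
# All database names and credentials below are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'django',            # the test run creates "test_django"
        'USER': 'djangouser',        # needs CREATE DATABASE permission
        'PASSWORD': 'secret',
    },
    'other': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'django_other',      # same backend, different name
        'USER': 'djangouser',
        'PASSWORD': 'secret',
    },
}

# A SECRET_KEY is required by most Django machinery; the bundled
# test_sqlite.py defines one as well.
SECRET_KEY = 'django_tests_secret_key'
```

With this module on your PYTHONPATH, you would run the suite as ./runtests.py --settings=my_postgres_settings.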
Running only some of the tests¶
Django’s entire test suite takes a while to run, and running every single test
could be redundant if, say, you just added a test to Django that you want to
run quickly without running everything else. You can run a subset of the unit
tests by appending the names of the test modules to runtests.py on the command line.
For example, if you’d like to run tests only for generic relations and internationalization, type:
$ ./runtests.py --settings=path.to.settings generic_relations i18n
How do you find out the names of individual tests? Look in tests/; each directory name there is the name of a test.
If you just want to run a particular class of tests, you can specify a list of paths to individual test classes. For example, to run the TranslationTests of the i18n module, type:
$ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests
Going beyond that, you can specify an individual test method like this:
$ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests.test_lazy_objects
Running the Selenium tests¶
Some tests require Selenium and a Web browser. To run these tests, you must
install the selenium package and run the tests with the
--selenium=<BROWSERS>
option. For example, if you have Firefox and Google
Chrome installed:
$ ./runtests.py --selenium=firefox,chrome
See the selenium.webdriver package for the list of available browsers.
Specifying --selenium automatically sets --tags=selenium to run only the tests that require Selenium.
Running all the tests¶
If you want to run the full suite of tests, you’ll need to install a number of dependencies:
- argon2-cffi 16.1.0+
- bcrypt
- docutils
- geoip2
- jinja2 2.7+
- numpy
- Pillow
- PyYAML
- pytz (required)
- setuptools
- memcached, plus a supported Python binding
- gettext (gettext on Windows)
- selenium
- sqlparse
You can find these dependencies in pip requirements files inside the tests/requirements directory of the Django source tree and install them like so:
$ pip install -r tests/requirements/py3.txt
If you encounter an error during installation, your system probably lacks a dependency for one or more of the Python packages. Consult the failing package's documentation or search the Web with the error message that you encounter.
You can also install the database adapter(s) of your choice using oracle.txt, mysql.txt, or postgres.txt.
If you want to test the memcached cache backend, you'll also need to define a CACHES setting that points at your memcached instance.
To run the GeoDjango tests, you will need to set up a spatial database and install the Geospatial libraries.
Each of these dependencies is optional. If you’re missing any of them, the associated tests will be skipped.
Code coverage¶
Contributors are encouraged to run coverage on the test suite to identify areas that need additional tests. The coverage tool installation and use is described in testing code coverage.
Coverage should be run in a single process to obtain accurate statistics. To run coverage on the Django test suite using the standard test settings:
$ coverage run ./runtests.py --settings=test_sqlite --parallel=1
After running coverage, generate the html report by running:
$ coverage html
When running coverage for the Django tests, the included .coveragerc settings file defines coverage_html as the output directory for the report and also excludes several directories not relevant to the results (test code or external code included in Django).
Contrib apps¶
Tests for contrib apps can be found in the tests/ directory, typically under <app_name>_tests. For example, tests for contrib.auth are located in tests/auth_tests.
Troubleshooting¶
Many test failures with UnicodeEncodeError¶
If the locales package is not installed, some tests will fail with a UnicodeEncodeError.
You can resolve this on Debian-based systems, for example, by running:
$ apt-get install locales
$ dpkg-reconfigure locales
You can resolve this on macOS systems by configuring your shell's locale:
$ export LANG="en_US.UTF-8"
$ export LC_ALL="en_US.UTF-8"
Run the locale command to confirm the change. Optionally, add those export commands to your shell's startup file (e.g. ~/.bashrc for Bash) to avoid having to retype them.
Tests that only fail in combination¶
In case a test passes when run in isolation but fails within the whole suite, we have some tools to help analyze the problem.
The --bisect option of runtests.py will run the failing test while halving the test set it is run together with on each iteration, often making it possible to identify a small number of tests that may be related to the failure.
For example, suppose that the failing test that works on its own is ModelTest.test_eq; then using:
$ ./runtests.py --bisect basic.tests.ModelTest.test_eq
will try to determine a test that interferes with the given one. First, the test is run with the first half of the test suite. If a failure occurs, the first half of the test suite is split in two groups and each group is then run with the specified test. If there is no failure with the first half of the test suite, the second half of the test suite is run with the specified test and split appropriately as described earlier. The process repeats until the set of failing tests is minimized.
The --pair option runs the given test alongside every other test from the suite, letting you check if another test has side effects that cause the failure. So:
$ ./runtests.py --pair basic.tests.ModelTest.test_eq
will pair test_eq with every test label.
With both --bisect and --pair, if you already suspect which cases might be responsible for the failure, you may limit the tests to be cross-analyzed by specifying further test labels after the first one:
$ ./runtests.py --pair basic.tests.ModelTest.test_eq queries transactions
You can also try running any set of tests in reverse using the --reverse
option in order to verify that executing tests in a different order does not
cause any trouble:
$ ./runtests.py basic --reverse
Seeing the SQL queries run during a test¶
If you wish to examine the SQL being run in failing tests, you can turn on
SQL logging using the --debug-sql
option. If you
combine this with --verbosity=2
, all SQL queries will be output:
$ ./runtests.py basic --debug-sql
Seeing the full traceback of a test failure¶
By default, tests are run in parallel with one process per core. When the tests are run in parallel, however, you'll only see a truncated traceback for any test failures. You can adjust this behavior with the --parallel option:
$ ./runtests.py basic --parallel=1
You can also use the DJANGO_TEST_PROCESSES
environment variable for this
purpose.
Tips for writing tests¶
Isolating model registration¶
To avoid polluting the global apps registry and prevent unnecessary table creation, models defined in a test method should be bound to a temporary Apps instance:
from django.apps.registry import Apps
from django.db import models
from django.test import SimpleTestCase


class TestModelDefinition(SimpleTestCase):
    def test_model_definition(self):
        test_apps = Apps(['app_label'])

        class TestModel(models.Model):
            class Meta:
                apps = test_apps
        ...
django.test.utils.isolate_apps(*app_labels, attr_name=None, kwarg_name=None)¶
Since this pattern involves a lot of boilerplate, Django provides the isolate_apps() decorator. It's used like this:
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


class TestModelDefinition(SimpleTestCase):
    @isolate_apps('app_label')
    def test_model_definition(self):
        class TestModel(models.Model):
            pass
        ...
Setting app_label
Models defined in a test method with no explicit app_label are automatically assigned the label of the app in which their test class is located.
In order to make sure the models defined within the context of isolate_apps() instances are correctly installed, you should pass the set of targeted app_label as arguments:
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


class TestModelDefinition(SimpleTestCase):
    @isolate_apps('app_label', 'other_app_label')
    def test_model_definition(self):
        # This model automatically receives app_label='app_label'
        class TestModel(models.Model):
            pass

        class OtherAppModel(models.Model):
            class Meta:
                app_label = 'other_app_label'
        ...
The decorator can also be applied to classes:
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


@isolate_apps('app_label')
class TestModelDefinition(SimpleTestCase):
    def test_model_definition(self):
        class TestModel(models.Model):
            pass
        ...
The temporary Apps instance used to isolate model registration can be retrieved as an attribute when used as a class decorator by using the attr_name parameter:
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


@isolate_apps('app_label', attr_name='apps')
class TestModelDefinition(SimpleTestCase):
    def test_model_definition(self):
        class TestModel(models.Model):
            pass
        self.assertIs(self.apps.get_model('app_label', 'TestModel'), TestModel)
Or as an argument on the test method when used as a method decorator by using the kwarg_name parameter:
from django.db import models
from django.test import SimpleTestCase
from django.test.utils import isolate_apps


class TestModelDefinition(SimpleTestCase):
    @isolate_apps('app_label', kwarg_name='apps')
    def test_model_definition(self, apps):
        class TestModel(models.Model):
            pass
        self.assertIs(apps.get_model('app_label', 'TestModel'), TestModel)