|
From: Michael D. <md...@st...> - 2009-10-09 15:52:53
|
I went to create a new image comparison test related to the hatching bug reported this morning. I added my test to the bottom of test_simplification.py and ran all the tests as follows:

python -c "import matplotlib; matplotlib.test()"

Unfortunately, it doesn't seem to be running the new test at all. If I put "assert False" at the top of the test, even that doesn't fail. If I remove the "image_comparison" decorator, however, the test will fail. Maybe this is because the baseline image doesn't exist yet? In the past, I've just run the tests, gotten a mismatch (because no baseline existed), and copied the current image to the baseline image and checked that in.

Am I using the wrong workflow, or is this a bug?

Cheers,
Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
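For context, a minimal sketch of the kind of test in question, assuming the matplotlib.testing.decorators API discussed in this thread; the test name and drawing are illustrative only:

    import matplotlib.pyplot as plt
    from matplotlib.testing.decorators import image_comparison

    @image_comparison(baseline_images=['hatch_simplify'])
    def test_hatch_simplify():
        # The decorator saves each figure the test creates and compares
        # the output against the checked-in baseline image for every
        # format being tested.
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.bar([0], [1], hatch='/')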
|
From: Jouni K. S. <jk...@ik...> - 2009-10-09 16:04:49
|
Michael Droettboom <md...@st...> writes:

> Unfortunately, it doesn't seem to be running the new test at all. If I
> put "assert False" at the top of the test, even that doesn't fail.
> If I remove the "image_comparison" decorator, however, the test will
> fail. Maybe this is because the baseline image doesn't exist yet?

Oh, right. My fault: when I implemented the pdf comparison, I made it run the test for only those formats for which a baseline image exists, to avoid causing spurious failures when a test is not even meant to be run for all backends. I guess this should be done in some other way, and perhaps the usual case is to test all backends for which the output can be compared.

--
Jouni K. Seppänen
http://www.iki.fi/jks
|
From: Jouni K. S. <jk...@ik...> - 2009-10-09 16:17:36
|
Jouni K. Seppänen <jk...@ik...> writes:

> Oh, right. My fault: when I implemented the pdf comparison, I made it
> run the test for only those formats for which a baseline image exists,

I committed a change to make it run both png and pdf tests all the time. When we add new formats (comparing postscript files could easily be done using the same ghostscript command as used for pdf files, and some svg renderer could also be added) and new tests, we'll have to think about whether we want to run all tests on all backends, since the amount of data in the repository will start growing pretty fast.

--
Jouni K. Seppänen
http://www.iki.fi/jks
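In outline, the behavior being replaced looked roughly like this (helper name hypothetical, not the actual matplotlib source): only formats whose baseline file already exists were exercised, which is why a brand-new test with no baseline ran nothing at all:

    import os

    def formats_to_test(baseline_dir, baseline_name, extensions=('png', 'pdf')):
        # Skip any format whose baseline file is missing -- convenient for
        # backend-specific tests, but it silently skips brand-new tests
        # that have no baseline image checked in yet.
        return [ext for ext in extensions
                if os.path.exists(os.path.join(baseline_dir,
                                               '%s.%s' % (baseline_name, ext)))]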
|
From: Andrew S. <str...@as...> - 2009-10-09 16:27:22
|
Jouni K. Seppänen wrote:

>> Oh, right. My fault: when I implemented the pdf comparison, I made it
>> run the test for only those formats for which a baseline image exists,
>
> I committed a change to make it run both png and pdf tests all the time.

Thanks for the fix, Jouni. (My svn commit was rejected because you did exactly the same thing as me.)

> When we add new formats (comparing postscript files could easily be done
> using the same ghostscript command as used for pdf files, and some svg
> renderer could also be added)

"inkscape input.svg --export-png=output.png" works very well as an svg renderer.

> and new tests, we'll have to think about whether we want to run all tests
> on all backends, since the amount of data in the repository will start
> growing pretty fast.

As far as the test data goes -- I agree this is an issue. One point in favor of the status quo is that it's really nice to have the test data included with the source code, so there are no configuration hassles. I'm not sure how well the buildbot infrastructure would cope with anything else. For example, to my knowledge, there is no Buildbot precedent for automatically pulling from two branches to execute a single test run. But in general I think this does bear thinking about.

-Andrew
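A sketch of how that command could be wrapped for the comparison machinery, with a hypothetical helper name; the --export-png flag is the one quoted above:

    import subprocess

    def svg_to_png(svg_path, png_path):
        # Rasterize the SVG with Inkscape so it can be compared pixel-wise,
        # the same way pdf output is rasterized with ghostscript.
        subprocess.check_call(['inkscape', svg_path,
                               '--export-png=%s' % png_path])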
|
From: Michael D. <md...@st...> - 2009-10-09 16:36:21
|
Andrew Straw wrote:

> Jouni K. Seppänen wrote:
>
>> When we add new formats (comparing postscript files could easily be done
>> using the same ghostscript command as used for pdf files, and some svg
>> renderer could also be added)
>
> "inkscape input.svg --export-png=output.png" works very well as an svg
> renderer.

I'd also like to run SVG through xmllint against the SVG schema as another sanity check. I may get to this if I can find the time.

>> and new tests, we'll have to think about whether we want to run all tests
>> on all backends, since the amount of data in the repository will start
>> growing pretty fast.
>
> As far as the test data goes -- I agree this is an issue. One point in favor
> of the status quo is that it's really nice to have the test data included
> with the source code, so there are no configuration hassles. I'm not sure
> how well the buildbot infrastructure would cope with anything else. For
> example, to my knowledge, there is no Buildbot precedent for automatically
> pulling from two branches to execute a single test run. But in general I
> think this does bear thinking about.

An easy improvement may be having an extra kwarg on the image_comparison decorator to select a subset of backends. For example, many of the tests in test_simplification.py only apply to the Agg backend.

While I'm sharing my wish list out loud, I think it would also be highly cool to get the native Mac OS backend into the buildbot tests, as that's one I can't test easily myself.

Cheers,
Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
|
From: Andrew S. <str...@as...> - 2009-10-09 17:46:07
Attachments:
0001-don-t-test-simplification.patch
|
Michael Droettboom wrote:

>> "inkscape input.svg --export-png=output.png" works very well as an svg
>> renderer.
>
> I'd also like to run SVG through xmllint against the SVG schema as
> another sanity check. I may get to this if I can find the time.

That'd be great. I just installed inkscape and xmllint on the non-bare buildslave machine.

>> As far as the test data goes -- I agree this is an issue. One point in
>> favor of the status quo is that it's really nice to have the test data
>> included with the source code, so there are no configuration hassles. I'm
>> not sure how well the buildbot infrastructure would cope with anything
>> else. For example, to my knowledge, there is no Buildbot precedent for
>> automatically pulling from two branches to execute a single test run. But
>> in general I think this does bear thinking about.
>
> An easy improvement may be having an extra kwarg on the image_comparison
> decorator to select a subset of backends. For example, many of the tests
> in test_simplification.py only apply to the Agg backend.

Done in r7863. To make use of it, do something like the following patch (and don't forget to delete the baseline .pdf files from the repository):

-@image_comparison(baseline_images=['simplify_curve'])
+@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

> While I'm sharing my wish list out loud, I think it would also be highly
> cool to get the native Mac OS backend into the buildbot tests, as that's
> one I can't test easily myself.

That would require the Mac OS X buildslave to start working again too, as I assume the backend actually requires the OS. And that would require building on Snow Leopard to work, as I understand it.

-Andrew
|
From: Andrew S. <str...@as...> - 2009-10-12 15:29:04
|
Michael Droettboom wrote:

> I've committed support for comparing SVG files using Inkscape and
> verifying them against the official SVG DTD using xmllint.

Man, are we standards compliant around here or what? :) Cool.

> Michael Droettboom wrote:
>> Andrew Straw wrote:
>>
>>> Done in r7863. To make use of it, do something like the following patch
>>> (and don't forget to delete the baseline .pdf files from the repository):
>>>
>>> -@image_comparison(baseline_images=['simplify_curve'])
>>> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])
>>
>> Great!
>
> This is a nice feature. However, in hindsight, I may not use it right
> away -- I actually found a bug in the SVG backend using one of the tests
> I assumed would only affect the Agg backend. :)

I think it's good not to use the feature very much. I've already found it handy when developing against a test -- you only need to generate that test's image once.

> A couple more comments about the test framework -- which has already
> paid for itself ten times over. In Numpy (and a number of local Python
> projects), I can 'cd' to the tests directory and do something like:
>
> nosetests test_simplification.py:test_hatch_simplify
>
> and run one particular test, or a single file of tests. It's a huge time
> saver when trying to fix a bug. However, with matplotlib I get:
>
> > nosetests test_simplification.py
> E
> ======================================================================
> ERROR: Failure: ImportError (cannot import name cbook)
> <snip>
> I suspect this is something peculiar to how matplotlib gets imported.

Yes, it would be very nice, I absolutely agree. I'm not sure what's going on either, but I agree that it would be nice to fix. See below for an idea.

> Also, I have a quad-core machine, so I put this in my .noserc, which
> will run tests in parallel:
>
> [nosetests]
> processes=4
>
> Unfortunately, due to how matplotlib delegates to nose, this doesn't
> seem to get picked up.
>
> I don't know if I'll get a chance to look at these things right away,
> but thought I'd share in case the solutions are obvious to anyone else
> (which I know isn't good form, but hey... ;)

My guess is that this may actually be related to the first issue. On this second issue, though, I have a specific idea -- in order for MPL to pick up the nose plugin, I had to do the song and dance in test() of matplotlib/__init__.py in which I create a nose.config.Config instance. I suspect this is why your processes argument isn't getting through -- we're completely bypassing any local nose config and creating ours programmatically. I'm definitely not in favor of the big song and dance, so if you can rip it out and still get the plugin to load, that would be super.

Once that is figured out, presumably the direct call to "nosetests test_simplification.py:test_hatch_simplify" will also load the nose plugins and thus not exhibit weird behavior when a known failure is encountered.

I almost certainly won't get a chance to look at these right away, so if anyone wants to go spelunking in the nose/mpl interaction, feel free.

-Andrew
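For reference, the "song and dance" has roughly this shape -- a hand-built nose Config that never consults the user's .noserc (a sketch from the general nose API, not the actual matplotlib source):

    import nose
    import nose.config
    import nose.plugins.manager

    def test():
        # Building the Config programmatically bypasses .noserc entirely,
        # which is why options like processes=4 are silently ignored.
        manager = nose.plugins.manager.PluginManager()
        config = nose.config.Config(plugins=manager)
        return nose.run(defaultTest=['matplotlib.tests'], config=config)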
|
From: Michael D. <md...@st...> - 2009-10-14 17:44:00
|
Andrew Straw wrote:

> Michael Droettboom wrote:
>
>> A couple more comments about the test framework -- which has already
>> paid for itself ten times over. In Numpy (and a number of local Python
>> projects), I can 'cd' to the tests directory and do something like:
>>
>> nosetests test_simplification.py:test_hatch_simplify
>>
>> and run one particular test, or a single file of tests. It's a huge
>> time saver when trying to fix a bug. However, with matplotlib I get:
>>
>> > nosetests test_simplification.py
>> E
>> ======================================================================
>> ERROR: Failure: ImportError (cannot import name cbook)
>> <snip>
>> I suspect this is something peculiar to how matplotlib gets imported.
>
> Yes, it would be very nice, I absolutely agree. I'm not sure what's
> going on either, but I agree that it would be nice to fix. See below
> for an idea.
>
>> Also, I have a quad-core machine, so I put this in my .noserc, which
>> will run tests in parallel:
>>
>> [nosetests]
>> processes=4
>>
>> Unfortunately, due to how matplotlib delegates to nose, this doesn't
>> seem to get picked up.
>
> My guess is that this may actually be related to the first issue. On
> this second issue, though, I have a specific idea -- in order for MPL
> to pick up the nose plugin, I had to do the song and dance in test() of
> matplotlib/__init__.py in which I create a nose.config.Config instance.
> I suspect this is why your processes argument isn't getting through --
> we're completely bypassing any local nose config and creating ours
> programmatically. I'm definitely not in favor of the big song and dance,
> so if you can rip it out and still get the plugin to load, that would
> be super.
>
> Once that is figured out, presumably the direct call to "nosetests
> test_simplification.py:test_hatch_simplify" will also load the nose
> plugins and thus not exhibit weird behavior when a known failure is
> encountered.
>
> I almost certainly won't get a chance to look at these right away, so
> if anyone wants to go spelunking in the nose/mpl interaction, feel free.

I have a partial solution to these problems. You can now do (from any directory)

nosetests matplotlib.tests

and this automatically picks up nose parameters, so you can do multiprocessing, pdb, coverage and other nose niceties.

Strangely, I still can't make running "nosetests" from the lib/matplotlib/tests directory work. The imports seem to get all screwed up in that case, presumably because nose is messing with sys.path. Unfortunately, nosetests -P (which is supposed to leave sys.path untouched) doesn't seem to help.

I made sure that "import matplotlib; matplotlib.test()" still works, so the buildbots should be unaffected.

As a side note, I disabled the xmllint testing, since curl (which xmllint uses to fetch the SVG DTD) has some problems with caching in a multiprocess situation. It gives random "Operation in progress" errors.

Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
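Assuming nose's builtin pdb support and its multiprocess plugin are available, that means invocations like these should now work:

    nosetests matplotlib.tests
    nosetests --processes=4 matplotlib.tests
    nosetests --pdb matplotlib.tests.test_simplification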
|
From: Michael D. <md...@st...> - 2009-10-09 18:35:04
|
Andrew Straw wrote:

>>> As far as the test data goes -- I agree this is an issue. One point in
>>> favor of the status quo is that it's really nice to have the test data
>>> included with the source code, so there are no configuration hassles.
>>> I'm not sure how well the buildbot infrastructure would cope with
>>> anything else. For example, to my knowledge, there is no Buildbot
>>> precedent for automatically pulling from two branches to execute a
>>> single test run. But in general I think this does bear thinking about.
>>
>> An easy improvement may be having an extra kwarg on the image_comparison
>> decorator to select a subset of backends. For example, many of the tests
>> in test_simplification.py only apply to the Agg backend.
>
> Done in r7863. To make use of it, do something like the following patch
> (and don't forget to delete the baseline .pdf files from the repository):
>
> -@image_comparison(baseline_images=['simplify_curve'])
> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

Great!

>> While I'm sharing my wish list out loud, I think it would also be highly
>> cool to get the native Mac OS backend into the buildbot tests, as that's
>> one I can't test easily myself.
>
> That would require the Mac OS X buildslave to start working again too,
> as I assume the backend actually requires the OS. And that would require
> building on Snow Leopard to work, as I understand it.

Oh yeah. Forgot that detail. Well -- something to think about when the other pieces fall into place.

Mike
|
From: Michael D. <md...@st...> - 2009-10-12 13:43:01
|
I've committed support for comparing SVG files using Inkscape and
verifying them against the official SVG DTD using xmllint.
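A sketch of the xmllint side of that check, with a hypothetical wrapper name; --valid asks xmllint to fetch the DTD named in the file's DOCTYPE and validate against it (the exact flags used in the commit may differ):

    import subprocess

    def validate_svg(svg_path):
        # --noout suppresses echoing the document; a non-zero exit status
        # (raised here as CalledProcessError) signals a validation failure.
        subprocess.check_call(['xmllint', '--valid', '--noout', svg_path])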
Michael Droettboom wrote:
> Andrew Straw wrote:
>
>> Done in r7863. To make use of it, do something like the following patch
>> (and don't forget to delete the baseline .pdf files from the repository):
>>
>> -@image_comparison(baseline_images=['simplify_curve'])
>> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])
>>
>>
> Great!
>
This is a nice feature. However, in hindsight, I may not use it right
away -- I actually found a bug in the SVG backend using one of the tests
I assumed would only affect the Agg backend. :)
A couple more comments about the test framework -- which has already
paid for itself ten times over. In Numpy (and a number of local Python
projects), I can 'cd' to the tests directory and do something like:
nosetests test_simplification.py:test_hatch_simplify
and run one particular test, or a single file of tests. It's a huge time
saver when trying to fix a bug. However, with matplotlib I get:
> nosetests test_simplification.py
E
======================================================================
ERROR: Failure: ImportError (cannot import name cbook)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName
    addr.filename, addr.module)
  File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/tests/test_simplification.py", line 4, in <module>
    import matplotlib.pyplot as plt
  File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/pyplot.py", line 6, in <module>
    from matplotlib import docstring
  File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/docstring.py", line 1, in <module>
    from matplotlib import cbook
ImportError: cannot import name cbook
----------------------------------------------------------------------
Ran 1 test in 0.009s
FAILED (errors=1)
I suspect this is something peculiar to how matplotlib gets imported.
Also, I have a quad-core machine, so I put this in my .noserc, which
will run tests in parallel:
[nosetests]
processes=4
Unfortunately, due to how matplotlib delegates to nose, this
doesn't seem to get picked up.
I don't know if I'll get a chance to look at these things right away,
but thought I'd share in case the solutions are obvious to anyone else
(which I know isn't good form, but hey... ;)
Cheers,
Mike
--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
|
|
From: Andrew S. <str...@as...> - 2009-10-12 16:12:54
|
Michael Droettboom wrote:

> I've committed support for comparing SVG files using Inkscape and
> verifying them against the official SVG DTD using xmllint.

I just noticed the test_axes/hexbin_extent.svg baseline image is more than 5 MB. I wonder if we can somehow simplify the generated svg to reduce the file size, or if we should perhaps just not do the svg extension on this test?

-Andrew
|
From: Michael D. <md...@st...> - 2009-10-12 16:31:09
|
I suspect for that one we can just do without it. It isn't really testing anything SVG-specific.

Mike

Andrew Straw wrote:

> Michael Droettboom wrote:
>> I've committed support for comparing SVG files using Inkscape and
>> verifying them against the official SVG DTD using xmllint.
>
> I just noticed the test_axes/hexbin_extent.svg baseline image is more
> than 5 MB. I wonder if we can somehow simplify the generated svg to
> reduce the file size, or if we should perhaps just not do the svg
> extension on this test?
>
> -Andrew

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
|
From: Jouni K. S. <jk...@ik...> - 2009-10-12 17:35:29
|
Andrew Straw <str...@as...> writes:

> I just noticed the test_axes/hexbin_extent.svg baseline image is more
> than 5 MB. I wonder if we can somehow simplify the generated svg to
> reduce the file size, or if we should perhaps just not do the svg
> extension on this test?

How about keeping them gzipped? If you gzip an svg file and name it something.svgz, inkscape seems to open it fine.

--
Jouni K. Seppänen
http://www.iki.fi/jks
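For example, with standard gzip tooling (file name illustrative):

    gzip -c hexbin_extent.svg > hexbin_extent.svgz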