# GPU Testing

This set of pages documents the setup and operation of the GPU bots and try
servers, which verify the correctness of Chrome's graphically accelerated
rendering pipeline.

[TOC]

## Overview

The GPU bots run a different set of tests than the majority of the Chromium
test machines. The GPU bots specifically focus on tests which exercise the
graphics processor, and whose results are likely to vary between graphics card
vendors.

Most of the tests on the GPU bots are run via the [Telemetry framework].
Telemetry was originally conceived as a performance testing framework, but has
proven valuable for correctness testing as well. Telemetry directs the browser
to perform various operations, like page navigation and test execution, from
external scripts written in Python. The GPU bots launch the full Chromium
browser via Telemetry for the majority of the tests. Using the full browser to
execute tests, rather than smaller test harnesses, has yielded several
advantages: testing what is shipped, improved reliability, and improved
performance.

[Telemetry framework]: https://github.com/catapult-project/catapult/tree/master/telemetry

A subset of the tests, called "pixel tests", grab screen snapshots of the web
page in order to validate Chromium's rendering architecture end-to-end. Where
necessary, GPU-specific results are maintained for these tests. Some of these
tests verify just a few pixels, using handwritten code, in order to use the
same validation for all brands of GPUs.

The GPU bots use the Chrome infrastructure team's [recipe framework], and
specifically the [`chromium`][recipes/chromium] and
[`chromium_trybot`][recipes/chromium_trybot] recipes, to describe what tests to
execute. Compared to the legacy master-side buildbot scripts, recipes make it
easy to add new steps to the bots, change the bots' configuration, and run the
tests locally in the same way that they are run on the bots. Additionally, the
`chromium` and `chromium_trybot` recipes make it possible to send try jobs which
add new steps to the bots. This single capability is a huge step forward from
the previous configuration where new steps were added blindly, and could cause
failures on the tryservers. For more details about the configuration of the
bots, see the [GPU bot details].

[recipe framework]: https://chromium.googlesource.com/external/github.com/luci/recipes-py/+/master/doc/user_guide.md
[recipes/chromium]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes/chromium.py
[recipes/chromium_trybot]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes/chromium_trybot.py
[GPU bot details]: gpu_testing_bot_details.md

The physical hardware for the GPU bots lives in the Swarming pool\*. The
Swarming infrastructure ([new docs][new-testing-infra], [older but currently
more complete docs][isolated-testing-infra]) provides many benefits:

* Increased parallelism for the tests; all steps for a given tryjob or
  waterfall build run in parallel.
* Simpler scaling: just add more hardware in order to get more capacity. No
  manual configuration or distribution of hardware needed.
* Easier to run certain tests only on certain operating systems or types of
  GPUs.
* Easier to add new operating systems or types of GPUs.
* Clearer description of the binary and data dependencies of the tests. If
  they run successfully locally, they'll run successfully on the bots.

(\* All but a few one-off GPU bots are in the swarming pool. The exceptions to
the rule are described in the [GPU bot details].)

The bots on the [chromium.gpu.fyi] waterfall are configured to always test
top-of-tree ANGLE. This setup is done with a few lines of code in the
[tools/build workspace]; search the code for "angle".

These aspects of the bots are described in more detail below, and in linked
pages. There is a [presentation][bots-presentation] which gives a brief
overview of this documentation and links back to various portions.

<!-- XXX: broken link -->
[new-testing-infra]: https://github.com/luci/luci-py/wiki
[isolated-testing-infra]: https://www.chromium.org/developers/testing/isolated-testing/infrastructure
[chromium.gpu]: https://ci.chromium.org/p/chromium/g/chromium.gpu/console
[chromium.gpu.fyi]: https://ci.chromium.org/p/chromium/g/chromium.gpu.fyi/console
[tools/build workspace]: https://code.google.com/p/chromium/codesearch#chromium/build/scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py
[bots-presentation]: https://docs.google.com/presentation/d/1BC6T7pndSqPFnituR7ceG7fMY7WaGqYHhx5i9ECa8EI/edit?usp=sharing

## Fleet Status

Please see the [GPU Pixel Wrangling instructions] for links to dashboards
showing the status of various bots in the GPU fleet.

[GPU Pixel Wrangling instructions]: pixel_wrangling.md#Fleet-Status

## Using the GPU Bots

Most Chromium developers interact with the GPU bots in two ways:

1. Observing the bots on the waterfalls.
2. Sending try jobs to them.

The GPU bots are grouped on the [chromium.gpu] and [chromium.gpu.fyi]
waterfalls. Their current status can be easily observed there.

To send try jobs, you must first upload your CL to the codereview server. Then
either click the "CQ dry run" link or run from the command line:

```sh
git cl try
```

This sends your job to the default set of try servers.

The GPU tests are part of the default set for Chromium CLs, and are run as part
of the following tryservers' jobs:

* [linux-rel], formerly on the `tryserver.chromium.linux` waterfall
* [mac-rel], formerly on the `tryserver.chromium.mac` waterfall
* [win7-rel], formerly on the `tryserver.chromium.win` waterfall

[linux-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux-rel?limit=100
[mac-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac-rel?limit=100
[win7-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win7-rel?limit=100

Scan down through the steps looking for the text "GPU"; that identifies the
tests run on the GPU bots. For each test, the "trigger" step can be ignored;
the step further down for the test of the same name contains the results.

It's usually not necessary to explicitly send try jobs just to verify the GPU
tests. If you want to, invoke `git cl try` separately for each tryserver you
want to use, for example:

```sh
git cl try -b linux-rel
git cl try -b mac-rel
git cl try -b win7-rel
```

Alternatively, the Gerrit UI can be used to send a patch set to these try
servers.

Three optional tryservers are also available which run additional tests. As of
this writing, they run longer tests that can't be run against all Chromium CLs
due to lack of hardware capacity. They are automatically added to the set of
tryservers for code changes to certain sub-directories.

* [linux_optional_gpu_tests_rel] on the [luci.chromium.try] waterfall
* [mac_optional_gpu_tests_rel] on the [luci.chromium.try] waterfall
* [win_optional_gpu_tests_rel] on the [luci.chromium.try] waterfall

[linux_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_optional_gpu_tests_rel
[mac_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac_optional_gpu_tests_rel
[win_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win_optional_gpu_tests_rel
[luci.chromium.try]: https://ci.chromium.org/p/chromium/g/luci.chromium.try/builders

Tryservers for the [ANGLE project] are also present on the
[tryserver.chromium.angle] waterfall. These are invoked from the Gerrit user
interface. They are configured similarly to the tryservers for regular Chromium
patches, and run the same tests that are run on the [chromium.gpu.fyi]
waterfall, in the same way (e.g., against ToT ANGLE).

If you find it necessary to try patches against sub-repositories other than
Chromium (`src/`) and ANGLE (`src/third_party/angle/`), please
[file a bug](http://crbug.com/new) with component Internals\>GPU\>Testing.

[ANGLE project]: https://chromium.googlesource.com/angle/angle/+/master/README.md
[tryserver.chromium.angle]: https://build.chromium.org/p/tryserver.chromium.angle/waterfall
[file a bug]: http://crbug.com/new

## Running the GPU Tests Locally

All of the GPU tests running on the bots can be run locally from a Chromium
build. Many of the tests are simple executables:

* `angle_unittests`
* `gl_tests`
* `gl_unittests`
* `tab_capture_end2end_tests`

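These GTest-based binaries can be launched straight from the build output
directory. A minimal sketch, assuming a `Release` build; the `--gtest_filter`
value is purely illustrative:

```sh
# Run a GTest-based GPU test binary directly from the build directory.
# The filter below is just an example; omit it to run the whole suite.
out/Release/gl_tests --gtest_filter=GLTest.*
```
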
Some run only on the chromium.gpu.fyi waterfall, either because there isn't
enough machine capacity at the moment, or because they're closed-source tests
which aren't allowed to run on the regular Chromium waterfalls:

* `angle_deqp_gles2_tests`
* `angle_deqp_gles3_tests`
* `angle_end2end_tests`
* `audio_unittests`

The remaining GPU tests are run via Telemetry. In order to run them, just
build the `chrome` target and then
invoke `src/content/test/gpu/run_gpu_integration_test.py` with the appropriate
argument. The tests this script can invoke are
in `src/content/test/gpu/gpu_tests/`. For example:

* `run_gpu_integration_test.py context_lost --browser=release`
* `run_gpu_integration_test.py webgl_conformance --browser=release --webgl-conformance-version=1.0.2`
* `run_gpu_integration_test.py maps --browser=release`
* `run_gpu_integration_test.py screenshot_sync --browser=release`
* `run_gpu_integration_test.py trace_test --browser=release`

The pixel tests are a bit special. See
[the section on running them locally](#Running-the-pixel-tests-locally) for
details.

If you're testing on Android and have built and deployed
`ChromePublic.apk` to the device, use `--browser=android-chromium` to
invoke it.

**Note:** If you are on Linux and see this test harness exit immediately with
`**Non zero exit code**`, it's probably because of some incompatible Python
packages being installed. Please uninstall the `python-egenix-mxdatetime` and
`python-logilab-common` packages in this case; see [Issue
716241](http://crbug.com/716241). This should not be happening any more since
the GPU tests were switched to use the infra team's `vpython` harness.

You can run a subset of tests with this harness:

* `run_gpu_integration_test.py webgl_conformance --browser=release
  --test-filter=conformance_attribs`

Figuring out the exact command line that was used to invoke the test on the
bots can be a little tricky. The bots all run their tests via Swarming and
isolates, meaning that the invocation of a step like `[trigger]
webgl_conformance_tests on NVIDIA GPU...` will look like:

* `python -u
  'E:\b\build\slave\Win7_Release__NVIDIA_\build\src\tools\swarming_client\swarming.py'
  trigger --swarming https://chromium-swarm.appspot.com
  --isolate-server https://isolateserver.appspot.com
  --priority 25 --shards 1 --task-name 'webgl_conformance_tests on NVIDIA GPU...'`

You can figure out the additional command line arguments that were passed to
each test on the bots by examining the trigger step and searching for the
argument separator (<code> -- </code>). For a recent invocation of
`webgl_conformance_tests`, this looked like:

* `webgl_conformance --show-stdout '--browser=release' -v
  '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc'
  '--isolated-script-test-output=${ISOLATED_OUTDIR}/output.json'`

You can leave off the `--isolated-script-test-output` argument, because it is
used only by wrapper scripts, which leaves a full command line of:

* `run_gpu_integration_test.py
  webgl_conformance --show-stdout '--browser=release' -v
  '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc'`

The Maps test requires you to authenticate to cloud storage in order to access
the Web Page Replay archive containing the test. See [Cloud Storage Credentials]
for documentation on setting this up.

[Cloud Storage Credentials]: gpu_testing_bot_details.md#Cloud-storage-credentials

### Running the pixel tests locally

The pixel tests are a special case because they use an external Skia service
called Gold to handle image approval and storage. See
[GPU Pixel Testing With Gold] for specifics.

[GPU Pixel Testing With Gold]: gpu_pixel_testing_with_gold.md

The TL;DR is that the pixel tests use a binary called `goldctl` to download and
upload data when they run.

Normally, `goldctl` uploads images and image metadata to the Gold server when
used. This is not desirable when running locally, for a couple of reasons:

1. Uploading requires the user to be whitelisted on the server, and whitelisting
everyone who wants to run the tests locally is not a viable solution.
2. Images produced during local runs are usually slightly different from those
that are produced on the bots due to hardware/software differences. Thus, most
images uploaded to Gold from local runs would likely only ever actually be used
by tests run on the machine that initially generated those images, which just
adds noise to the list of approved images.

Additionally, the tests normally rely on the Gold server for viewing images
produced by a test run. This does not work if the data is not actually uploaded.

In order to get around both of these issues, simply pass the `--local-run` flag
to the tests. This will disable uploading, but otherwise go through the same
steps as a test normally would. Each test will also print out a `file://` URL to
the image it produces and a link to all approved images for that test in Gold.

Because the image produced by the test locally is likely slightly different from
any of the approved images in Gold, local test runs are likely to fail during
the comparison step. In order to cut down on the amount of noise, you can also
pass the `--no-skia-gold-failure` flag to not fail the test on a failed image
comparison. When using `--no-skia-gold-failure`, you'll also need to pass the
`--passthrough` flag in order to actually see the link output.

Example usage:
`run_gpu_integration_test.py pixel --no-skia-gold-failure --local-run
--passthrough --build-revision aabbccdd`

Note that `aabbccdd` must be replaced with an actual Chromium src revision
(typically whatever revision origin/master is currently synced to) in order for
the tests to work. This can be done automatically using:
``run_gpu_integration_test.py pixel --no-skia-gold-failure --local-run
--passthrough --build-revision `git rev-parse origin/master` ``

## Running Binaries from the Bots Locally

Any binary run remotely on a bot can also be run locally, assuming the local
machine loosely matches the architecture and OS of the bot.

The easiest way to do this is to find the ID of the swarming task and use
`swarming.py reproduce` to re-run it:

* `./src/tools/swarming_client/swarming.py reproduce -S https://chromium-swarm.appspot.com [task ID]`

The task ID can be found in the stdio for the "trigger" step for the test. For
example, look at a recent build from the [Mac Release (Intel)] bot, and
look at the `gl_unittests` step. You will see something like:

[Mac Release (Intel)]: https://ci.chromium.org/p/chromium/builders/luci.chromium.ci/Mac%20Release%20%28Intel%29/

```
Triggered task: gl_unittests on Intel GPU on Mac/Mac-10.12.6/[TRUNCATED_ISOLATE_HASH]/Mac Release (Intel)/83664
To collect results, use:
  swarming.py collect -S https://chromium-swarm.appspot.com --json /var/folders/[PATH_TO_TEMP_FILE].json
Or visit:
  https://chromium-swarm.appspot.com/user/task/[TASK_ID]
```

There is a difference between the isolate's hash and Swarming's task ID. Make
sure you use the task ID and not the isolate's hash.

As of this writing, there seems to be a
[bug](https://github.com/luci/luci-py/issues/250)
when attempting to re-run the Telemetry based GPU tests in this way. For the
time being, this can be worked around by instead downloading the contents of
the isolate. To do so, look more deeply into the trigger step's log:

* <code>python -u
  /b/build/slave/Mac_10_10_Release__Intel_/build/src/tools/swarming_client/swarming.py
  trigger [...more args...] --tag data:[ISOLATE_HASH] [...more args...]
  [ISOLATE_HASH] -- **[...TEST_ARGS...]**</code>

As of this writing, the isolate hash appears twice in the command line. To
download the isolate's contents into directory `foo` (note, this is in the
"Help" section associated with the page for the isolate's task, but I'm not
sure whether that's accessible only to Google employees or all members of the
chromium.org organization):

* `python isolateserver.py download -I https://isolateserver.appspot.com
  --namespace default-gzip -s [ISOLATE_HASH] --target foo`

`isolateserver.py` will tell you the approximate command line to use. You
should concatenate the `TEST_ARGS` highlighted in bold above with
`isolateserver.py`'s recommendation. The `ISOLATED_OUTDIR` variable can be
safely replaced with `/tmp`.
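
Putting that together, a local invocation from inside the downloaded isolate
might look roughly like the following sketch; the paths and arguments are
illustrative, based on the `webgl_conformance` example above, not exact:

```sh
# Run the test from the directory the isolate was downloaded into, using the
# TEST_ARGS recovered from the trigger step; ISOLATED_OUTDIR becomes /tmp.
cd foo
python content/test/gpu/run_gpu_integration_test.py webgl_conformance \
  --show-stdout --browser=release -v \
  --extra-browser-args='--enable-logging=stderr --js-flags=--expose-gc' \
  --isolated-script-test-output=/tmp/output.json
```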

Note that `isolateserver.py` downloads a large number of files (everything
needed to run the test) and may take a while. There is a way to use
`run_isolated.py` to achieve the same result, but as of this writing, there
were problems doing so, so this procedure is not documented at this time.

Before attempting to download an isolate, you must ensure you have permission
to access the isolate server. Full instructions can be [found
here][isolate-server-credentials]. For most cases, you can simply run:

* `./src/tools/swarming_client/auth.py login
  --service=https://isolateserver.appspot.com`

The above link requires that you log in with your @google.com credentials. It's
not known at the present time whether this works with @chromium.org accounts.
Email kbr@ if you try this and find it doesn't work.

[isolate-server-credentials]: gpu_testing_bot_details.md#Isolate-server-credentials

## Running Locally Built Binaries on the GPU Bots

See the [Swarming documentation] for instructions on how to upload your binaries to the isolate server and trigger execution on Swarming.

Be sure to use the correct Swarming dimensions for your desired GPU, e.g.
"1002:6613" rather than "AMD Radeon R7 240 (1002:6613)", which is how it
appears on the Swarming task page. You can query the bots in the Chrome-GPU
pool to find the correct dimensions:

* `python tools\swarming_client\swarming.py bots -S chromium-swarm.appspot.com -d pool Chrome-GPU`

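Once you have the dimensions, a trigger invocation might look roughly like the
following sketch; the dimension values and placeholders are illustrative, and
the [Swarming documentation] below remains the authoritative reference:

```sh
# Trigger a locally built, isolated test on a Chrome-GPU pool bot with a
# specific GPU. [ISOLATE_HASH] and the dimension values are placeholders.
python tools/swarming_client/swarming.py trigger \
  --swarming https://chromium-swarm.appspot.com \
  --isolate-server https://isolateserver.appspot.com \
  -d pool Chrome-GPU -d gpu 1002:6613 -d os Windows-10 \
  [ISOLATE_HASH] -- [TEST_ARGUMENTS]
```
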
[Swarming documentation]: https://www.chromium.org/developers/testing/isolated-testing/for-swes#TOC-Run-a-test-built-locally-on-Swarming

## Moving Test Binaries from Machine to Machine

To create a zip archive of your personal Chromium build plus all of
the Telemetry-based GPU tests' dependencies, which you can then move
to another machine for testing:

1. Build Chrome (into `out/Release` in this example).
1. `python tools/mb/mb.py zip out/Release/ telemetry_gpu_integration_test out/telemetry_gpu_integration_test.zip`

Then copy `telemetry_gpu_integration_test.zip` to another machine. Unzip
it, and cd into the resulting directory. Invoke
`content/test/gpu/run_gpu_integration_test.py` as above.
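
Concretely, the steps on the target machine might look like the following
sketch (the unpacked directory name is an assumption, not verified):

```sh
# Unpack the archive and run one of the Telemetry-based tests from inside it.
unzip telemetry_gpu_integration_test.zip
cd telemetry_gpu_integration_test  # directory name may differ
python content/test/gpu/run_gpu_integration_test.py pixel --browser=release
```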

This workflow has been tested successfully on Windows with a
statically-linked Release build of Chrome.

Note: on one macOS machine, this command failed because of a broken
`strip-json-comments` symlink in
`src/third_party/catapult/common/node_runner/node_runner/node_modules/.bin`. Deleting
that symlink allowed it to proceed.

Note also: on the same macOS machine, with a component build, this
command failed to zip up a working Chromium binary. The browser failed
to start with the following error:

`[0626/180440.571670:FATAL:chrome_main_delegate.cc(1057)] Check failed: service_manifest_data_pack_.`

In a pinch, this command could be used to bundle up everything, but
the "out" directory could be deleted from the resulting zip archive,
and the Chromium binaries moved over to the target machine. Then the
command line arguments `--browser=exact --browser-executable=[path]`
can be used to launch that specific browser.
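
For instance, a hypothetical invocation against a manually copied browser
binary might look like this (the executable path is illustrative):

```sh
# Point the harness at a specific browser executable; adjust the path to
# wherever the copied binaries live on the target machine.
python content/test/gpu/run_gpu_integration_test.py pixel \
  --browser=exact --browser-executable=/path/to/chrome
```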

See the [user guide for mb](../../tools/mb/docs/user_guide.md#mb-zip), the
meta-build system, for more details.

## Adding New Tests to the GPU Bots

The goal of the GPU bots is to avoid regressions in Chrome's rendering stack.
To that end, let's add as many tests as possible that will help catch
regressions in the product. If you see a crazy bug in Chrome's rendering which
would be easy to catch with a pixel test running in Chrome and hard to catch in
any of the other test harnesses, please, invest the time to add a test!

There are a couple of different ways to add new tests to the bots:

1. Adding a new test to one of the existing harnesses.
2. Adding an entire new test step to the bots.

### Adding a new test to one of the existing test harnesses

Adding new tests to the GTest-based harnesses is straightforward and
essentially requires no explanation.

As of this writing it isn't as easy as desired to add a new test to one of the
Telemetry based harnesses. See [Issue 352807](http://crbug.com/352807). Let's
collectively work to address that issue. It would be great to reduce the number
of steps on the GPU bots, or at least to avoid significantly increasing the
number of steps on the bots. The WebGL conformance tests should probably remain
a separate step, but some of the smaller Telemetry based tests
(`context_lost_tests`, `memory_test`, etc.) should probably be combined into a
single step.

If you are adding a new test to one of the existing tests (e.g., `pixel_test`),
all you need to do is make sure that your new test runs correctly via isolates.
See the documentation from the GPU bot details on [adding new isolated
tests][new-isolates] for the gn args and authentication needed to upload
isolates to the isolate server. Most likely the new test will be Telemetry
based, and included in the `telemetry_gpu_test_run` isolate. You can then
invoke it via:

* `./src/tools/swarming_client/run_isolated.py -s [HASH]
  -I https://isolateserver.appspot.com -- [TEST_NAME] [TEST_ARGUMENTS]`

[new-isolates]: gpu_testing_bot_details.md#Adding-a-new-isolated-test-to-the-bots

### Adding new steps to the GPU Bots

The tests that are run by the GPU bots are described by a couple of JSON files
in the Chromium workspace:

* [`chromium.gpu.json`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.json)
* [`chromium.gpu.fyi.json`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.fyi.json)

These files are autogenerated by the following script:

* [`generate_buildbot_json.py`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/generate_buildbot_json.py)

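If you modify the `.pyl` inputs that feed this script (such as
`test_suites.pyl`, mentioned below), regenerate the checked-in JSON files by
running the script with no arguments; a minimal sketch:

```sh
# Run from the root of the Chromium checkout to regenerate the waterfall
# JSON files from their .pyl sources.
python testing/buildbot/generate_buildbot_json.py
```
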
This script is documented in
[`testing/buildbot/README.md`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/README.md). The
JSON files are parsed by the chromium and chromium_trybot recipes, and describe
two basic types of tests:

* GTests: those which use the Googletest and Chromium's `base/test/launcher/`
  frameworks.
* Isolated scripts: tests whose initial entry point is a Python script which
  follows a simple convention of command line argument parsing.

The majority of the GPU tests, however, are:

* Telemetry based tests: an isolated script test which is built on the
  Telemetry framework and which launches the entire browser.

A prerequisite of adding a new test to the bots is that the test [runs via
isolates][new-isolates]. Once that is done, modify `test_suites.pyl` to add the
test to the appropriate set of bots. Be careful when adding large new test steps
to all of the bots, because the GPU bots are a limited resource and do not
currently have the capacity to absorb large new test suites. It is safer to get
new tests running on the chromium.gpu.fyi waterfall first, and expand from there
to the chromium.gpu waterfall; note that this will also make them run against
every Chromium CL, because the `linux-rel`, `mac-rel`, `win7-rel` and
`android-marshmallow-arm64-rel` tryservers mirror the bots on that waterfall,
so be careful!

Tryjobs which add new test steps to the chromium.gpu.json file will run those
new steps during the tryjob, which helps ensure that the new test won't break
once it starts running on the waterfall.

Tryjobs which modify chromium.gpu.fyi.json can be sent to the
`win_optional_gpu_tests_rel`, `mac_optional_gpu_tests_rel` and
`linux_optional_gpu_tests_rel` tryservers to help ensure that they won't
break the FYI bots.
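
For example, after uploading the CL, those tryservers can be triggered from the
command line in the same way as the other tryservers described above:

```sh
# Send the CL to the optional GPU tryservers, which mirror the FYI bots.
git cl try -b linux_optional_gpu_tests_rel
git cl try -b mac_optional_gpu_tests_rel
git cl try -b win_optional_gpu_tests_rel
```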

## Debugging Pixel Test Failures on the GPU Bots

If pixel tests fail on the bots, the build step will contain either one or more
links titled `gold_triage_link for <test name>` or a single link titled
`Too many artifacts produced to link individually, click for links`, which
itself contains links. In either case, these links lead to Gold pages showing
the image produced by the test and the approved image that most closely
matches it.

Note that for the tests which programmatically check colors in certain regions
of the image (tests with `expected_colors` fields in [pixel_test_pages]), there
likely won't be a closest approved image, since those tests only upload data to
Gold in the event of a failure.

[pixel_test_pages]: https://cs.chromium.org/chromium/src/content/test/gpu/gpu_tests/pixel_test_pages.py

## Updating and Adding New Pixel Tests to the GPU Bots

If your CL adds a new pixel test or modifies existing ones, it's likely that
you will have to approve new images. Simply run your CL through the CQ and
follow the steps outlined [here][pixel wrangling triage] under the "Check if any
pixel test failures are actual failures or need to be rebaselined" step.

[pixel wrangling triage]: pixel_wrangling.md#How-to-Keep-the-Bots-Green

Once your CL passes the CQ, you should be mostly good to go, although you should
keep an eye on the waterfall bots for a short period after your CL lands in case
any configurations not covered by the CQ need to have images approved, as well.

## Stamping out Flakiness

It's critically important to aggressively investigate and eliminate the root
cause of any flakiness seen on the GPU bots. The bots have been known to run
reliably for days at a time, and any flaky failures that are tolerated on the
bots translate directly into instability of the browser experienced by
customers. Critical bugs in subsystems like WebGL, affecting high-profile
products like Google Maps, have escaped notice in the past because the bots
were unreliable. After much re-work, the GPU bots are now among the most
reliable automated test machines in the Chromium project. Let's keep them that
way.

Flakiness affecting the GPU tests can come in from highly unexpected sources.
Here are some examples:

* Intermittent pixel_test failures on Linux where the captured pixels were
  black, caused by the Display Power Management System (DPMS) kicking in.
  Disabled the X server's built-in screen saver on the GPU bots in response.
* GNOME dbus-related deadlocks causing intermittent timeouts ([Issue
  309093](http://crbug.com/309093) and related bugs).
* Windows Audio system changes causing intermittent assertion failures in the
  browser ([Issue 310838](http://crbug.com/310838)).
* Enabling assertion failures in the C++ standard library on Linux causing
  random assertion failures ([Issue 328249](http://crbug.com/328249)).
* V8 bugs causing random crashes of the Maps pixel test (V8 issues
  [3022](https://code.google.com/p/v8/issues/detail?id=3022),
  [3174](https://code.google.com/p/v8/issues/detail?id=3174)).
* TLS changes causing random browser process crashes ([Issue
  264406](http://crbug.com/264406)).
* Isolated test execution flakiness caused by failures to reliably clean up
  temporary directories ([Issue 340415](http://crbug.com/340415)).
* The Telemetry-based WebGL conformance suite caught a bug in the memory
  allocator on Android not caught by any other bot ([Issue
  347919](http://crbug.com/347919)).
* context_lost test failures caused by the compositor's retry logic ([Issue
  356453](http://crbug.com/356453)).
* Multiple bugs in Chromium's support for lost contexts causing flakiness of
  the context_lost tests ([Issue 365904](http://crbug.com/365904)).
* Maps test timeouts caused by Content Security Policy changes in Blink
  ([Issue 395914](http://crbug.com/395914)).
* Weak pointer assertion failures in various webgl\_conformance\_tests caused
  by changes to the media pipeline ([Issue 399417](http://crbug.com/399417)).
* A change to a default WebSocket timeout in Telemetry causing intermittent
  failures to run all WebGL conformance tests on the Mac bots ([Issue
  403981](http://crbug.com/403981)).
* Chrome leaking suspended sub-processes on Windows, apparently a preexisting
  race condition that suddenly showed up ([Issue
  424024](http://crbug.com/424024)).
* Changes to Chrome's cross-context synchronization primitives causing the
  wrong tiles to be rendered ([Issue 584381](http://crbug.com/584381)).
* A bug in V8's handling of array literals causing flaky failures of
  texture-related WebGL 2.0 tests ([Issue 606021](http://crbug.com/606021)).
* Assertion failures in sync point management related to lost contexts that
  exposed a real correctness bug ([Issue 606112](http://crbug.com/606112)).
* A bug in glibc's `sem_post`/`sem_wait` primitives breaking V8's parallel
  garbage collection ([Issue 609249](http://crbug.com/609249)).
* A change to Blink's memory purging primitive which caused intermittent
  timeouts of WebGL conformance tests on all platforms ([Issue
  840988](http://crbug.com/840988)).

If you notice flaky test failures either on the GPU waterfalls or try servers,
please file bugs right away with the component Internals>GPU>Testing and
include links to the failing builds and copies of the logs, since the logs
expire after a few days. [GPU pixel wranglers] should give the highest priority
to eliminating flakiness on the tree.

[GPU pixel wranglers]: pixel_wrangling.md