Writing tests for the Web Audio API

April 30, 2014

Introduction

In this post I want to give a brief overview of how you can help the adoption of the Web Audio API by writing tests for the W3C’s official test suite. Writing tests helps the adoption of the API in three ways:

  1. It makes it easier for a wider range of browser vendors to support the API
  2. It makes it easier for an individual vendor to implement the standard correctly
  3. It makes it easier for new additions to the specification to be included and approved

The test suite is still very incomplete, and there’s a lot of room to improve both the testing process and the test coverage. But if you know a little JavaScript, it’s easy to help, so let’s get started!

Getting started

The W3C’s test suite for the “web platform” (the suite of technologies, Web Audio included, that make up the modern web) is on GitHub. So go ahead and clone it:

git clone https://github.com/w3c/web-platform-tests

Or, if you think you might contribute, you may find it easier to fork the repository into your own GitHub account, and clone your fork.

The repository requires git submodules, which you can update within your checkout:

git submodule update --init --recursive

The repo has the latest instructions for getting started.

Running a Web Audio test locally

You need to arrange for the contents of the repository to be served up by a local webserver. There are some instructions for doing that using the included serve.py Python script and some simple edits to your /etc/hosts file - or you may have your own preferred way. Once you have the server up and running, try to run the test for the Web Audio API GainNode; on my machine it was at http://web-platform.test:55521/webaudio/the-audio-api/the-gainnode-interface/test.html.
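The hosts setup boils down to something like the following sketch (the exact hostnames the server expects are an assumption here - check the repository README for the current list):

```shell
# Illustrative /etc/hosts entry for the test server (hostnames are
# an assumption; the repo README lists the exact set required).
hosts_line="127.0.0.1  web-platform.test www.web-platform.test"

# To apply it you would need root, e.g. uncomment the next line:
# sudo sh -c "echo \"$hosts_line\" >> /etc/hosts"
echo "$hosts_line"
```

After that, `python serve.py` from the repository root starts the server.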

The test suite is also mirrored to w3c-test.org, including the GainNode test and the rest of the web audio tests.

Understanding the Web Audio API tests

The Web Audio API test suite is very minimal at the moment, but that’s where you can help.

The Web Audio API is under development, so the latest version of the editor’s draft of the specification is what we should be writing our tests against. Go and take a look at the specification if you’re not familiar with it.

Notice that the specification is grouped into sections (for example §4.7 The GainNode interface). The directory structure of the tests repo reflects this structure, so we have /webaudio/the-audio-api/the-gainnode-interface.

Tests come in two different flavours:

  1. Functional tests, which check that the audio processing behaves as the specification says it should.
  2. IDL tests, which check the interfaces against their Web IDL definitions in the specification.

We’ll look at both of these types of tests in turn.

Writing functional tests

Functional tests assert that an audio processing node performs its processing correctly. The process for writing a test is as follows:

  1. Find an area of the specification that doesn’t have tests.
  2. Read the specification and see if it could be tested as written. If you feel a test cannot be written against the current version of the spec, for example if there’s not enough information in the spec to determine precisely what the output should be, that’s great! You can help to improve the spec.
  3. Write the test

Let’s look at step 3 in more detail. Tests are written using the W3C’s testharness.js framework. Take a look at that documentation to familiarise yourself.
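For instance, a minimal test page checking GainNode’s default attribute values might look like the following sketch (the test body and assertions here are illustrative, not taken from the actual suite; the /resources/ paths are where the repository serves testharness.js from):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>GainNode default attribute values</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
  </head>
  <body>
    <div id="log"></div>
    <script>
      test(function () {
        var context = new OfflineAudioContext(1, 128, 44100);
        var gainNode = context.createGain();
        assert_equals(gainNode.gain.value, 1.0, "gain defaults to 1");
        assert_equals(gainNode.numberOfInputs, 1, "one input");
        assert_equals(gainNode.numberOfOutputs, 1, "one output");
      }, "GainNode attribute defaults");
    </script>
  </body>
</html>
```

The `test()` function runs a synchronous test; `async_test()` is the equivalent for tests that finish in a callback, which most audio rendering tests will need.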

Don’t reinvent the wheel. If you’re considering writing a functional test for a node, both the Mozilla and WebKit source trees already contain a number of tests that you can port over or use for inspiration.

As an example, consider the GainNode test in the W3C test suite. You’ll find it at /webaudio/the-audio-api/the-gainnode-interface/test.html, or here on w3c-test.org.

This test works as follows:

  1. Create an AudioBuffer with a series of sine wave ‘notes’ of gradually decreasing amplitude. This is the expected output.
  2. Recreate this using a GainNode with gradually decreasing gain value.
  3. Record the output of the audio graph created in step 2 in an OfflineAudioContext.
  4. Assert that the recorded output matches the expected output.
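Step 1 is plain JavaScript, since the expected output is just a generated buffer. A sketch of the idea (the function name and all parameters here are illustrative, not those of the actual test):

```javascript
// Build the expected output: consecutive sine-wave "notes" at a fixed
// frequency, each note scaled by a successively smaller gain value.
function makeExpectedBuffer(sampleRate, noteLength, gains, frequency) {
  var samples = new Float32Array(noteLength * gains.length);
  for (var i = 0; i < gains.length; i++) {
    for (var j = 0; j < noteLength; j++) {
      // Gain for this note times a sine wave at the given frequency.
      samples[i * noteLength + j] =
        gains[i] * Math.sin(2 * Math.PI * frequency * j / sampleRate);
    }
  }
  return samples;
}

// Three 100 ms notes at 440 Hz, stepping the amplitude down each time.
var expected = makeExpectedBuffer(44100, 4410, [1.0, 0.5, 0.25], 440);
```

The test then builds the same signal with a full-amplitude source routed through a GainNode, and compares the rendered result against `expected`.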

This test was based on the corresponding test of the GainNode in the WebKit test suite, but uses a generated buffer rather than a WAVE file as the expected output. The reason for this is to allow the tests to run faster than real time. If we were to create a node graph in a regular AudioContext, and then capture the output in a buffer using a ScriptProcessorNode, for example, the test would take at least as long to run as the audio generated. Using an OfflineAudioContext allows the implementation to generate the output as fast as it can.

In some cases it will be impossible to use an OfflineAudioContext, such as when writing tests for the various streaming sources.
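The offline rendering side of such a test can be sketched as follows (browser-only, since OfflineAudioContext is not available outside the browser, so the function is only defined here, not invoked; names, the gain graph details, and the tolerance are all illustrative):

```javascript
// Render a GainNode graph offline and compare against an expected
// buffer. `onResult` receives true if the rendered output matches.
function renderAndCompare(expected, sampleRate, onResult) {
  // One mono channel, exactly as many frames as we expect back.
  var context = new OfflineAudioContext(1, expected.length, sampleRate);

  // Full-amplitude source; the amplitude steps are applied by the
  // GainNode (scheduling of the gain changes is elided here).
  var source = context.createBufferSource();
  source.buffer = context.createBuffer(1, expected.length, sampleRate);

  var gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(context.destination);
  source.start(0);

  context.oncomplete = function (event) {
    var rendered = event.renderedBuffer.getChannelData(0);
    var matches = rendered.length === expected.length;
    for (var i = 0; matches && i < expected.length; i++) {
      // Allow a small tolerance for floating-point differences.
      matches = Math.abs(rendered[i] - expected[i]) < 1e-4;
    }
    onResult(matches);
  };
  context.startRendering();
}
```

Inside a testharness.js `async_test`, `onResult` would assert the match and call the test’s `done()`.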

Writing IDL tests

The IDL tests are generated automatically from the Web IDL descriptions of the interfaces provided by the Web Audio API, as they appear in the specification.

In the W3C test suite we have a Ruby script (at /webaudio/refresh_idl.rb) which extracts the IDL descriptions from the specification, and updates the corresponding tests. It’s still quite a manual process at the moment, and I would appreciate any improvements you can suggest.
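The extracted IDL is fed to idlharness.js, the companion to testharness.js, which generates one test per IDL member. A sketch of the setup (the IDL string here is abbreviated and illustrative, and the function is only defined, not run, since IdlArray comes from idlharness.js loaded in the test page):

```javascript
// Abbreviated, illustrative IDL fragment for GainNode.
var gainNodeIDL = "interface GainNode : AudioNode {" +
                  "  readonly attribute AudioParam gain;" +
                  "};";

function runIdlTests() {
  var idlArray = new IdlArray();
  // Parent interfaces we rely on but don't test here.
  idlArray.add_untested_idls("interface AudioNode {};");
  idlArray.add_idls(gainNodeIDL);
  idlArray.test(); // generates one testharness test per IDL member
}
```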

Contributing your test

The W3C test suite accepts contributions in the form of GitHub pull requests. Each pull request has to be reviewed by a peer. At the moment, I am the test coordinator for the Web Audio tests, so it is likely to be me that does the review and merge, but anyone who would like to help will be very welcome.

If you need any help, please get in touch with me in the comments below, on the public audio mailing list, or by raising an issue with the webaudio label on GitHub.

Improve the specification

When starting to write a test for a part of the specification you may encounter a situation where there’s not enough information in the spec to determine precisely what the output should be. In these cases you can help to improve the specification, by raising an issue on GitHub or posting to the public audio mailing list.
