test 0.12.29+1

test provides a standard way of writing and running tests in Dart.

Writing Tests

Tests are specified using the top-level test() function, and test assertions are made using expect():

import "package:test/test.dart";

void main() {
  test("String.split() splits the string on the delimiter", () {
    var string = "foo,bar,baz";
    expect(string.split(","), equals(["foo", "bar", "baz"]));
  });

  test("String.trim() removes surrounding whitespace", () {
    var string = "  foo ";
    expect(string.trim(), equals("foo"));
  });
}

Tests can be grouped together using the group() function. Each group's description is added to the beginning of its tests' descriptions.

import "package:test/test.dart";

void main() {
  group("String", () {
    test(".split() splits the string on the delimiter", () {
      var string = "foo,bar,baz";
      expect(string.split(","), equals(["foo", "bar", "baz"]));
    });

    test(".trim() removes surrounding whitespace", () {
      var string = "  foo ";
      expect(string.trim(), equals("foo"));
    });
  });

  group("int", () {
    test(".remainder() returns the remainder of division", () {
      expect(11.remainder(3), equals(2));
    });

    test(".toRadixString() returns a hex string", () {
      expect(11.toRadixString(16), equals("b"));
    });
  });
}

Any matchers from the matcher package can be used with expect() to do complex validations:

import "package:test/test.dart";

void main() {
  test(".split() splits the string on the delimiter", () {
    expect("foo,bar,baz", allOf([
      contains("foo"),
      startsWith("foo"),
      endsWith("baz")
    ]));
  });
}

You can use the setUp() and tearDown() functions to share code between tests. The setUp() callback will run before every test in a group or test suite, and tearDown() will run after. tearDown() will run even if a test fails, to ensure that it has a chance to clean up after itself.

import "dart:io";

import "package:test/test.dart";

void main() {
  var server;
  var url;

  setUp(() async {
    server = await HttpServer.bind('localhost', 0);
    url = Uri.parse("http://${server.address.host}:${server.port}");
  });

  tearDown(() async {
    await server.close(force: true);
    server = null;
    url = null;
  });

  // ...
}

Running Tests

A single test file can be run just using pub run test path/to/test.dart.

Many tests can be run at a time using pub run test path/to/dir.

It's also possible to run a test on the Dart VM only by invoking it using dart path/to/test.dart, but this doesn't load the full test runner and will be missing some features.

The test runner considers any file that ends with _test.dart to be a test file. If you don't pass any paths, it will run all the test files in your test/ directory, making it easy to test your entire application at once.

You can select specific test cases to run by name using pub run test -n "test name". The string is interpreted as a regular expression, and only tests whose descriptions (including any group descriptions) match that regular expression will be run. You can also use the -N flag to run tests whose names contain a plain-text string.

By default, tests are run in the Dart VM, but you can run them in the browser as well by passing pub run test -p chrome path/to/test.dart. test will take care of starting the browser and loading the tests, and all the results will be reported on the command line just like for VM tests. In fact, you can even run tests on both platforms with a single command: pub run test -p "chrome,vm" path/to/test.dart.

Restricting Tests to Certain Platforms

Some test files only make sense to run on particular platforms. They may use dart:html or dart:io, they might test Windows' particular filesystem behavior, or they might use a feature that's only available in Chrome. The @TestOn annotation makes it easy to declare exactly which platforms a test file should run on. Just put it at the top of your file, before any library or import declarations:


@TestOn("vm")

import "dart:io";

import "package:test/test.dart";

void main() {
  // ...
}

The string you pass to @TestOn is what's called a "platform selector", and it specifies exactly which platforms a test can run on. It can be as simple as the name of a platform, or a more complex Dart-like boolean expression involving these platform names.

You can also declare that your entire package only works on certain platforms by adding a test_on field to your package config file.

Platform Selectors

Platform selectors use the boolean selector syntax defined in the boolean_selector package, which is a subset of Dart's expression syntax that only supports boolean operations. The following identifiers are defined:

  • vm: Whether the test is running on the command-line Dart VM.

  • dartium: Whether the test is running on Dartium.

  • content-shell: Whether the test is running on the headless Dartium content shell.

  • chrome: Whether the test is running on Google Chrome.

  • phantomjs: Whether the test is running on PhantomJS.

  • firefox: Whether the test is running on Mozilla Firefox.

  • safari: Whether the test is running on Apple Safari.

  • ie: Whether the test is running on Microsoft Internet Explorer.

  • node: Whether the test is running on Node.js.

  • dart-vm: Whether the test is running on the Dart VM in any context, including Dartium. It's identical to !js.

  • browser: Whether the test is running in any browser.

  • js: Whether the test has been compiled to JS. This is identical to !dart-vm.

  • blink: Whether the test is running in a browser that uses the Blink rendering engine.

  • windows: Whether the test is running on Windows. If vm is false, this will be false as well.

  • mac-os: Whether the test is running on Mac OS. If vm is false, this will be false as well.

  • linux: Whether the test is running on Linux. If vm is false, this will be false as well.

  • android: Whether the test is running on Android. If vm is false, this will be false as well, which means that this won't be true if the test is running on an Android browser.

  • ios: Whether the test is running on iOS. If vm is false, this will be false as well, which means that this won't be true if the test is running on an iOS browser.

  • posix: Whether the test is running on a POSIX operating system. This is equivalent to !windows.

For example, if you wanted to run a test on every browser but Chrome, you would write @TestOn("browser && !chrome").

Running Tests on Dartium

Tests can be run on Dartium by passing the -p dartium flag. If you're using Mac OS, you can install Dartium using Homebrew. Otherwise, make sure there's an executable called dartium (on Mac OS or Linux) or dartium.exe (on Windows) on your system path.

Similarly, tests can be run on the headless Dartium content shell by passing -p content-shell. The content shell is installed along with Dartium when using Homebrew. Otherwise, you can download it manually from this page; if you do, make sure the executable named content_shell (on Mac OS or Linux) or content_shell.exe (on Windows) is on your system path. Note that content_shell on Linux requires the font packages ttf-kochi-mincho and ttf-kochi-gothic.

In the future, there will be a more explicit way to configure the location of both the Dartium and content shell executables.

Running Tests on Node.js

The test runner also supports compiling tests to JavaScript and running them on Node.js by passing --platform node. Note that Node has access to neither dart:html nor dart:io, so any platform-specific APIs will have to be invoked using the js package. However, it may be useful when testing APIs that are meant to be used by JavaScript code.

The test runner looks for an executable named node (on Mac OS or Linux) or node.exe (on Windows) on your system path. When compiling Node.js tests, it passes -Dnode=true, so tests can determine whether they're running on Node using const bool.fromEnvironment("node").
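For instance, a test can branch on that define to exercise Node-specific behavior. This is a minimal sketch; the constant name and the branch bodies are illustrative, not part of the test API:

```dart
import "package:test/test.dart";

// -Dnode=true is passed by the test runner when compiling for Node.js, so
// this constant is true only on that platform.
const bool runningOnNode = const bool.fromEnvironment("node");

void main() {
  test("branches on the Node.js define", () {
    if (runningOnNode) {
      // Exercise a JS-interop code path (e.g. via package:js) here.
      expect(runningOnNode, isTrue);
    } else {
      // Fall back to the VM or browser implementation.
      expect(runningOnNode, isFalse);
    }
  });
}
```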

If a top-level node_modules directory exists, tests running on Node.js can import modules from it.

Asynchronous Tests

Tests written with async/await will work automatically. The test runner won't consider the test finished until the returned Future completes.

import "dart:async";

import "package:test/test.dart";

void main() {
  test("new Future.value() returns the value", () async {
    var value = await new Future.value(10);
    expect(value, equals(10));
  });
}

There are also a number of useful functions and matchers for more advanced asynchrony. The completion() matcher can be used to test Futures; it ensures that the test doesn't finish until the Future completes, and runs a matcher against that Future's value.

import "dart:async";

import "package:test/test.dart";

void main() {
  test("new Future.value() returns the value", () {
    expect(new Future.value(10), completion(equals(10)));
  });
}

The throwsA() matcher and the various throwsExceptionType matchers work with both synchronous callbacks and asynchronous Futures. They ensure that a particular type of exception is thrown:

import "dart:async";

import "package:test/test.dart";

void main() {
  test("new Future.error() throws the error", () {
    expect(new Future.error("oh no"), throwsA(equals("oh no")));
    expect(new Future.error(new StateError("bad state")), throwsStateError);
  });
}

The expectAsync1() function (one of the expectAsync0() through expectAsync6() family, one for each number of callback arguments) wraps another function and has two jobs. First, it asserts that the wrapped function is called a certain number of times, and will cause the test to fail if it's called too often; second, it keeps the test from finishing until the function is called the requisite number of times.

import "dart:async";

import "package:test/test.dart";

void main() {
  test("Stream.fromIterable() emits the values in the iterable", () {
    var stream = new Stream.fromIterable([1, 2, 3]);

    stream.listen(expectAsync1((number) {
      expect(number, inInclusiveRange(1, 3));
    }, count: 3));
  });
}

Stream Matchers

The test package provides a suite of powerful matchers for dealing with asynchronous streams. They're expressive and composable, and make it easy to write complex expectations about the values emitted by a stream. For example:

import "dart:async";

import "package:test/test.dart";

void main() {
  test("process emits status messages", () {
    // Dummy data to mimic something that might be emitted by a process.
    var stdoutLines = new Stream.fromIterable([
      "Ready.",
      "Loading took 150ms.",
      "Succeeded!"
    ]);

    expect(stdoutLines, emitsInOrder([
      // Values match individual events.
      "Ready.",

      // Matchers also run against individual events.
      startsWith("Loading took"),

      // Stream matchers can be nested. This asserts that one of two events are
      // emitted after the "Loading took" line.
      emitsAnyOf(["Succeeded!", "Failed!"]),

      // By default, more events are allowed after the matcher finishes
      // matching. This asserts instead that the stream emits a done event and
      // nothing else.
      emitsDone
    ]));
  });
}
A stream matcher can also match the async package's StreamQueue class, which allows events to be requested from a stream rather than pushed to the consumer. The matcher will consume the matched events, but leave the rest of the queue alone so that it can still be used by the test, unlike a normal Stream which can only have one subscriber. For example:

import "dart:async";

import "package:async/async.dart";
import "package:test/test.dart";

void main() {
  test("process emits a WebSocket URL", () async {
    // Wrap the Stream in a StreamQueue so that we can request events.
    var stdout = new StreamQueue(new Stream.fromIterable([
      "WebSocket URL:",
      "ws://localhost:1234/",
      "Waiting for connection..."
    ]));

    // Ignore lines from the process until it's about to emit the URL.
    await expect(stdout, emitsThrough("WebSocket URL:"));

    // Parse the next line as a URL.
    var url = Uri.parse(await stdout.next);
    expect(url.host, equals('localhost'));

    // You can match against the same StreamQueue multiple times.
    await expect(stdout, emits("Waiting for connection..."));
  });
}

The following built-in stream matchers are available:

  • emits() matches a single data event.
  • emitsError() matches a single error event.
  • emitsDone matches a single done event.
  • mayEmit() consumes events if they match an inner matcher, without requiring them to match.
  • mayEmitMultiple() works like mayEmit(), but it matches events against the matcher as many times as possible.
  • emitsAnyOf() consumes events matching one (or more) of several possible matchers.
  • emitsInOrder() consumes events matching multiple matchers in a row.
  • emitsInAnyOrder() works like emitsInOrder(), but it allows the matchers to match in any order.
  • neverEmits() matches a stream that finishes without matching an inner matcher.

You can also define your own custom stream matchers by calling new StreamMatcher().
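To show how these matchers compose, here's a hedged sketch that nests several of them inside emitsInOrder() (the event strings are invented for illustration):

```dart
import "dart:async";

import "package:test/test.dart";

void main() {
  test("composes several built-in stream matchers", () {
    var stream = new Stream.fromIterable(
        ["connecting", "connected", "closing", "closed"]);

    expect(stream, emitsInOrder([
      // mayEmit() consumes "connecting" if it's present, but wouldn't fail
      // if the stream started with "connected" instead.
      mayEmit("connecting"),
      "connected",

      // emitsThrough() skips events until its matcher matches.
      emitsThrough("closed"),
      emitsDone
    ]));
  });
}
```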

Running Tests With Custom HTML

By default, the test runner will generate its own empty HTML file for browser tests. However, tests that need custom HTML can create their own files. These files have three requirements:

  • They must have the same name as the test, with .dart replaced by .html.

  • They must contain a link tag with rel="x-dart-test" and an href attribute pointing to the test script.

  • They must contain <script src="packages/test/dart.js"></script>.

For example, if you had a test called custom_html_test.dart, you might write the following HTML file:

<!doctype html>
<!-- custom_html_test.html -->
<html>
  <head>
    <title>Custom HTML Test</title>
    <link rel="x-dart-test" href="custom_html_test.dart">
    <script src="packages/test/dart.js"></script>
  </head>
  <body>
    // ...
  </body>
</html>

Configuring Tests

Skipping Tests

If a test, group, or entire suite isn't working yet and you just want it to stop complaining, you can mark it as "skipped". The test or tests won't be run, and, if you supply a reason, that reason will be printed. In general, skipping a test indicates that it should run but is temporarily not working. If a test is fundamentally incompatible with a platform, @TestOn/testOn should be used instead.

To skip a test suite, put a @Skip annotation at the top of the file:

@Skip("currently failing (see issue 1234)")

import "package:test/test.dart";

void main() {
  // ...
}

The string you pass should describe why the test is skipped. You don't have to include it, but it's a good idea to document why the test isn't running.

Groups and individual tests can be skipped by passing the skip parameter. This can be either true or a String describing why the test is skipped. For example:

import "package:test/test.dart";

void main() {
  group("complicated algorithm tests", () {
    // ...
  }, skip: "the algorithm isn't quite right");

  test("error-checking test", () {
    // ...
  }, skip: "TODO: add error-checking.");
}

Timeouts

By default, tests will time out after 30 seconds of inactivity. However, this can be configured on a per-test, -group, or -suite basis. To change the timeout for a test suite, put a @Timeout annotation at the top of the file:

@Timeout(const Duration(seconds: 45))

import "package:test/test.dart";

void main() {
  // ...
}

In addition to setting an absolute timeout, you can set the timeout relative to the default using @Timeout.factor. For example, @Timeout.factor(1.5) will set the timeout to one and a half times as long as the default—45 seconds.

Timeouts can be set for tests and groups using the timeout parameter. This parameter takes a Timeout object just like the annotation. For example:

import "package:test/test.dart";

void main() {
  group("slow tests", () {
    // ...

    test("even slower test", () {
      // ...
    }, timeout: new Timeout.factor(2));
  }, timeout: new Timeout(new Duration(minutes: 1)));
}

Nested timeouts apply in order from outermost to innermost. That means that "even slower test" will take two minutes to time out, since it multiplies the group's timeout by 2.

Platform-Specific Configuration

Sometimes a test may need to be configured differently for different platforms. Windows might run your code slower than other platforms, or your DOM manipulation might not work right on Safari yet. For these cases, you can use the @OnPlatform annotation and the onPlatform named parameter to test() and group(). For example:

@OnPlatform(const {
  // Give Windows some extra wiggle-room before timing out.
  "windows": const Timeout.factor(2)
})

import "package:test/test.dart";

void main() {
  test("do a thing", () {
    // ...
  }, onPlatform: {
    "safari": new Skip("Safari is currently broken (see #1234)")
  });
}

Both the annotation and the parameter take a map. The map's keys are platform selectors which describe the platforms for which the specialized configuration applies. Its values are instances of some of the same annotation classes that can be used for a suite: Skip and Timeout. A value can also be a list of these values.

If multiple platforms match, the configuration is applied in order from first to last, just as they would in nested groups. This means that for configuration like duration-based timeouts, the last matching value wins.

You can also set up global platform-specific configuration using the package configuration file.

Tagging Tests

Tags are short strings that you can associate with tests, groups, and suites. They don't have any built-in meaning, but they're very useful nonetheless: you can associate your own custom configuration with them, or you can use them to easily filter tests so you only run the ones you need to.

Tags are defined using the @Tags annotation for suites and the tags named parameter to test() and group(). For example:

@Tags(const ["browser"])

import "package:test/test.dart";

void main() {
  test("successfully launches Chrome", () {
    // ...
  }, tags: "chrome");

  test("launches two browsers at once", () {
    // ...
  }, tags: ["chrome", "firefox"]);
}

If the test runner encounters a tag that wasn't declared in the package configuration file, it'll print a warning, so be sure to include all your tags there. You can also use the file to provide default configuration for tags, like giving all browser tests twice as much time before they time out.
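For example, the per-tag defaults described above might look like this in dart_test.yaml (the tag names and settings are illustrative):

```yaml
tags:
  # Give all browser tests twice as much time before they time out.
  browser:
    timeout: 2x

  # Tests tagged "chrome" also count as browser tests.
  chrome:
    add_tags: [browser]
```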

Tests can be filtered based on their tags by passing command line flags. The --tags or -t flag will cause the test runner to only run tests with the given tags, and the --exclude-tags or -x flag will cause it to only run tests without the given tags. These flags also support boolean selector syntax. For example, you can pass --tags "(chrome || firefox) && !slow" to select quick Chrome or Firefox tests.

Note that tags must be valid Dart identifiers, although they may also contain hyphens.

Whole-Package Configuration

For configuration that applies across multiple files, or even the entire package, test supports a configuration file called dart_test.yaml. At its simplest, this file can contain the same sort of configuration that can be passed as command-line arguments:

# This package's tests are very slow. Double the default timeout.
timeout: 2x

# This is a browser-only package, so test on content shell by default.
platforms: [content-shell]

The configuration file sets new defaults. These defaults can still be overridden by command-line arguments, just like the built-in defaults. In the example above, you could pass --platform chrome to run on Chrome instead of the Dartium content shell.

A configuration file can do much more than just set global defaults. See the full documentation for more details.


Debugging

Tests can be debugged interactively using browsers' built-in development tools, including Observatory when you're using Dartium. Currently there's no support for interactively debugging command-line VM tests, but it will be added in the future.

The first step when debugging is to pass the --pause-after-load flag to the test runner. This pauses the browser after each test suite has loaded, so that you have time to open the development tools and set breakpoints. For Dartium, the test runner will print the Observatory URL for you. For PhantomJS, it will print the remote debugger URL. For content shell, it'll print both!

Once you've set breakpoints, either click the big arrow in the middle of the web page or press Enter in your terminal to start the tests running. When you hit a breakpoint, the runner will open its own debugging console in the terminal that controls how tests are run. You can type "restart" there to re-run your test as many times as you need to figure out what's going on.

Normally, browser tests are run in hidden iframes. However, when debugging, the iframe for the current test suite is expanded to fill the browser window so you can see and interact with any HTML it renders. Note that the Dart animation may still be visible behind the iframe; to hide it, just add a background-color to the page's HTML.

Browser/VM Hybrid Tests

Code that's written for the browser often needs to talk to some kind of server. Maybe you're testing the HTML served by your app, or maybe you're writing a library that communicates over WebSockets. We call tests that run code on both the browser and the VM hybrid tests.

Hybrid tests use one of two functions: spawnHybridCode() and spawnHybridUri(). Both of these spawn Dart VM isolates that can import dart:io and other VM-only libraries. The only difference is where the code from the isolate comes from: spawnHybridCode() takes a chunk of actual Dart code, whereas spawnHybridUri() takes a URL. They both return a StreamChannel that communicates with the hybrid isolate. For example:

// ## test/web_socket_server.dart

// The library loaded by spawnHybridUri() can import any packages that your
// package depends on, including those that only work on the VM.
import "package:shelf/shelf_io.dart" as io;
import "package:shelf_web_socket/shelf_web_socket.dart";
import "package:stream_channel/stream_channel.dart";

// Once the hybrid isolate starts, it will call the special function
// hybridMain() with a StreamChannel that's connected to the channel
// returned by spawnHybridUri().
hybridMain(StreamChannel channel) async {
  // Start a WebSocket server that just sends "hello!" to its clients.
  var server = await io.serve(webSocketHandler((webSocket) {
    webSocket.sink.add("hello!");
  }), 'localhost', 0);

  // Send the port number of the WebSocket server to the browser test, so
  // it knows what to connect to.
  channel.sink.add(server.port);
}

// ## test/web_socket_test.dart


@TestOn("browser")

import "dart:html";

import "package:test/test.dart";

void main() {
  test("connects to a server-side WebSocket", () async {
    // Each spawnHybrid function returns a StreamChannel that communicates with
    // the hybrid isolate. You can close this channel to kill the isolate.
    var channel = spawnHybridUri("web_socket_server.dart");

    // Get the port for the WebSocket server from the hybrid isolate.
    var port = await channel.stream.first;

    var socket = new WebSocket('ws://localhost:$port');
    var message = await socket.onMessage.first;
    expect(message.data, equals("hello!"));
  });
}

A diagram showing a test in a browser communicating with a Dart VM isolate outside the browser.

Note: If you write hybrid tests, be sure to add a dependency on the stream_channel package, since you're using its API!
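For comparison with the spawnHybridUri() example above, here's a minimal sketch of spawnHybridCode(), which takes the VM-side library as an inline string (the message sent over the channel is invented for illustration):

```dart
import "package:test/test.dart";

void main() {
  test("spawnHybridCode() runs inline code on the VM", () async {
    // The string is loaded as a Dart library in a VM isolate, and must
    // define hybridMain(), just like a library passed to spawnHybridUri().
    var channel = spawnHybridCode('''
      import "package:stream_channel/stream_channel.dart";

      void hybridMain(StreamChannel channel) {
        channel.sink.add("hello from the VM");
      }
    ''');

    expect(await channel.stream.first, equals("hello from the VM"));
  });
}
```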

Support for Other Packages


term_glyph

The term_glyph package provides getters for Unicode glyphs with ASCII alternatives. test ensures that it's configured to produce ASCII when the user is running on Windows, where Unicode isn't supported. This ensures that testing libraries can use Unicode on POSIX operating systems without breaking Windows users.


barback

Packages using the barback transformer system may need to test code that's created or modified using transformers. The test runner handles this using the --pub-serve option, which tells it to load the test code from a pub serve instance rather than from the filesystem.

Before using the --pub-serve option, add the test/pub_serve transformer to your pubspec.yaml. This transformer adds the necessary bootstrapping code that allows the test runner to load your tests properly:

transformers:
- test/pub_serve:
    $include: test/**_test{.*,}.dart

Note that if you're using the test runner along with polymer, you have to make sure that the test/pub_serve transformer comes after the polymer transformer:

transformers:
- polymer
- test/pub_serve:
    $include: test/**_test{.*,}.dart

Then, start up pub serve. Make sure to pay attention to which port it's using to serve your test/ directory:

$ pub serve
Loading source assets...
Loading test/pub_serve transformers...
Serving my_app web on http://localhost:8080
Serving my_app test on http://localhost:8081
Build completed successfully

In this case, the port is 8081. In another terminal, pass this port to --pub-serve and otherwise invoke pub run test as normal:

$ pub run test --pub-serve=8081 -p chrome
"pub serve" is compiling test/my_app_test.dart...
"pub serve" is compiling test/utils_test.dart...
00:00 +42: All tests passed!

Further Reading

Check out the API docs for detailed information about all the functions available to tests.

The test runner also supports a machine-readable JSON-based reporter. This reporter allows the test runner to be wrapped and its progress presented in custom ways (for example, in an IDE). See the protocol documentation for more details.


  • Fix strong mode runtime cast failures.


  • Node.js tests can now import modules from a top-level node_modules directory, if one exists.

  • Raw console.log() calls no longer crash Node.js tests.

  • When a browser crashes, include its standard output in the error message.


  • Add a pumpEventQueue() function to make it easy to wait until all asynchronous tasks are complete.

  • Add a neverCalled getter that returns a function that causes the test to fail if it's ever called.


  • Increase the timeout for loading tests to 12 minutes.


  • When addTearDown() is called within a call to setUpAll(), it runs its callback after all tests instead of running it after the setUpAll() callback.

  • When running in an interactive terminal, the test runner now prints status lines as wide as the terminal and no wider.


  • Fix lower bound on package stack_trace. Now 1.6.0.
  • Manually close browser process streams to prevent test hangs.


  • The spawnHybridUri() function now allows root-relative URLs, which are interpreted as relative to the root of the package.


  • Add a override_platforms configuration field which allows test platforms' settings (such as browsers' executables) to be overridden by the user.

  • Add a define_platforms configuration field which makes it possible to define new platforms that use the same logic as existing ones but have different settings.


  • spawnHybridUri() now interprets relative URIs correctly in browser tests.


  • Declare support for async 2.0.0.


  • Small refactoring to make the package compatible with strong-mode compliant Zone API. No user-visible change.


  • Expose a way for tests to forward a loadException to the server.


  • Drain browser process stdout and stdin. This resolves test flakiness, especially in Travis with the Precise image.


  • Extend deserializeTimeout.


  • Only force exit if FORCE_TEST_EXIT is set in the environment.


  • Widen version constraint on analyzer.


  • Add a node platform for compiling tests to JavaScript and running them on Node.js.


  • Remove unused imports.


  • Add a fold_stack_frames field for dart_test.yaml. This will allow users to customize which packages' frames are folded.


  • Properly allocate ports when debugging Chrome and Dartium in an IPv6-only environment.


  • Support args 1.0.0.

  • Run tear-down callbacks in the same error zone as the test function. This makes it possible to safely share Futures and Streams between tests and their tear-downs.


  • Add a retry option to test() and group() functions, as well as a @Retry() annotation for test files and a retry configuration field for dart_test.yaml. A test with retries enabled will be re-run if it fails for a reason other than a TestFailure.

  • Add a --no-retry runner flag that disables retries of failing tests.

  • Fix a "concurrent modification during iteration" error when calling addTearDown() from within a tear down.


  • Add a doesNotComplete matcher that asserts that a Future never completes.

  • throwsA() and all related matchers will now match functions that return Futures that emit exceptions.

  • Respect onPlatform for groups.

  • Only print browser load errors once per browser.

  • Gracefully time out when attempting to deserialize a test suite.


  • Upgrade to package:matcher 0.12.1


  • Now support v0.30.0 of pkg/analyzer

  • The test executable now does a "hard exit" when complete to ensure lingering isolates or async code don't block completion. This may affect users trying to use the Dart service protocol or observatory.


  • Refactor bootstrapping to simplify the test/pub_serve transformer.


  • Refactor for internal tools.


  • Introduce new flag --chain-stack-traces to conditionally chain stack traces.


  • Fixed more blockers for compiling with dev_compiler.

  • Dartfmt the entire repo.

  • Note: 0.12.20+5-0.12.20+7 were tagged but not officially published.


  • Fixed strong-mode errors and other blockers for compiling with dev_compiler.


  • --pause-after-load no longer deadlocks with recent versions of Chrome.

  • Fix Dartified stack traces for JS-compiled tests run through pub serve.


  • Print "[E]" after test failures to make them easier to identify visually and via automated search.


  • Tighten the dependency on stream_channel to reflect the APIs being used.

  • Use a 1024 x 768 iframe for browser tests.


  • Breaking change: The expect() method no longer returns a Future, since this broke backwards-compatibility in cases where a void function was returning an expect() (such as void foo() => expect(...)). Instead, a new expectLater() function has been added that returns a Future that completes when the matcher has finished running.

  • The verbose parameter to expect() and the formatFailure() function are deprecated.


  • Make sure asynchronous matchers that can fail synchronously, such as throws*() and prints(), can be used with synchronous matcher operators like isNot().


  • Added the StreamMatcher class, as well as several built-in stream matchers: emits(), emitsError(), emitsDone, mayEmit(), mayEmitMultiple(), emitsAnyOf(), emitsInOrder(), emitsInAnyOrder(), and neverEmits().

  • expect() now returns a Future for the asynchronous matchers completes, completion(), throws*(), and prints().

  • Add a printOnFailure() method for providing debugging information that's only printed when a test fails.

  • Automatically configure the term_glyph package to use ASCII glyphs when the test runner is running on Windows.

  • Deprecate the throws matcher in favor of throwsA().

  • Deprecate the Throws class. These matchers should only be constructed via throwsA().


  • Fix the deprecated expectAsync() function. The deprecation caused it to fail to support functions that take arguments.


  • Add an addTearDown() function, which allows tests to register additional tear-down callbacks as they're running.

  • Add the spawnHybridUri() and spawnHybridCode() functions, which allow browser tests to run code on the VM.

  • Fix the new expectAsync functions so that they don't produce analysis errors when passed callbacks with optional arguments.


  • Internal changes only.


  • Fix Dartium debugging on Windows.


  • Fix a bug where tags couldn't be marked as skipped.


  • Deprecate expectAsync and expectAsyncUntil, since they currently can't be made to work cleanly in strong mode. They are replaced with separate methods for each number of callback arguments:
    • expectAsync0, expectAsync1, ... expectAsync6, and
    • expectAsyncUntil0, expectAsyncUntil1, ... expectAsyncUntil6.


  • Allow tools to interact with browser debuggers using the JSON reporter.


  • Fix a race condition that could cause the runner to stall for up to three seconds after completing.


  • Make test iframes visible when debugging.


  • Throw a better error if a group body is asynchronous.


  • Widen version constraint on analyzer.


  • Make test suites with thousands of tests load much faster on the VM (and possibly other platforms).


  • Fix a bug where tags would be dropped when on_platform was defined in a config file.


  • Fix a broken link in the --help documentation.


  • Internal-only change.


  • Widen version constraint on analyzer.


  • Move nestingMiddleware to lib/src/util/path_handler.dart to enable a cleaner separation between test-runner files and test writing files.


  • Support running without a packages/ directory.


  • Declare support for version 1.19 of the Dart SDK.


  • Add a skip parameter to expect(). Marking a single expect as skipped will cause the test itself to be marked as skipped.

  • Add a --run-skipped parameter and run_skipped configuration field that cause tests to be run even if they're marked as skipped.
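    A sketch of the skip parameter in use (the expectation itself is illustrative):

    ```dart
    import "package:test/test.dart";

    void main() {
      test("pending behavior", () {
        expect(1 + 1, equals(2));
        // Marking a single expectation as skipped causes the whole test
        // to be reported as skipped.
        expect("new feature", equals("implemented"),
            skip: "not yet implemented");
      });
    }
    ```

    Skipped tests like this can still be forced to run by passing --run-skipped to the test runner.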


  • Narrow the constraint on yaml.


  • Add test and group location information to the JSON reporter.


  • Declare support for version 1.18 of the Dart SDK.

  • Use the latest collection package.


  • Compatibility with an upcoming release of the collection package.


  • Internal changes only.


  • Fix all strong-mode errors and warnings.


  • Declare support for version 1.17 of the Dart SDK.


  • Add support for a global configuration file. On Windows, this file defaults to %LOCALAPPDATA%\DartTest.yaml. On Unix, it defaults to ~/.dart_test.yaml. It can also be explicitly set using the DART_TEST_CONFIG environment variable. See the configuration documentation for details.

  • The --name and --plain-name arguments may be passed more than once, and may be passed together. A test must match all name constraints in order to be run.

  • Add names and plain_names fields to the package configuration file. These allow presets to control which tests are run based on their names.

  • Add include_tags and exclude_tags fields to the package configuration file. These allow presets to control which tests are run based on their tags.

  • Add a pause_after_load field to the package configuration file. This allows presets to enable debugging mode.


  • Add support for test presets. These are defined using the presets field in the package configuration file. They can be selected by passing --preset or -P, or by using the add_presets field in the package configuration file.

  • Add an on_os field to the package configuration file that allows users to select different configuration for different operating systems.

  • Add an on_platform field to the package configuration file that allows users to configure all tests differently depending on which platform they run on.

  • Add an ios platform selector variable. This variable will only be true when the test executable itself is running on iOS, not when it's running browser tests on an iOS browser.
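    A dart_test.yaml sketch combining these fields (the preset name, tag, and values are illustrative, not taken from the changelog):

    ```yaml
    presets:
      # Select with `pub run test -P slow`.
      slow:
        timeout: 2x

    on_os:
      windows:
        exclude_tags: posix

    on_platform:
      chrome:
        timeout: 4x
    ```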


  • Update to shelf_web_socket 0.2.0.


  • Purely internal change.


  • Add a tags field to the package configuration file that allows users to provide configuration for specific tags.

  • The --tags and --exclude-tags command-line flags now allow boolean selector syntax. For example, you can now pass --tags "(chrome || firefox) && !slow" to select quick Chrome or Firefox tests.
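    A sketch of the tags field in dart_test.yaml (the tag names and values are illustrative):

    ```yaml
    tags:
      # Tests tagged "slow" get a longer timeout.
      slow:
        timeout: 4x
      browser:
        skip: "Browser tests are temporarily disabled"
    ```

    Tests matching a boolean selector can then be chosen on the command line, for example pub run test --tags "slow && !browser".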


  • Re-add help output separators.

  • Tighten the constraint on args.


  • Temporarily remove separators from the help output. Version 0.12.8 was erroneously released without an appropriate args constraint for the features it used; this version will help ensure that users who can't use args 0.13.1 will get a working version of test.


  • Add support for a package-level configuration file called dart_test.yaml.


  • Add SuiteEvent to the JSON reporter, which reports data about the suites in which tests are run.

  • Add AllSuitesEvent to the JSON reporter, which reports the total number of suites that will be run.

  • Add Group.testCount to the JSON reporter, which reports the total number of tests in each group.


  • Organize the --help output into sections.

  • Add a --timeout flag.


  • Add the ability to re-run tests while debugging. When the browser is paused at a breakpoint, the test runner will open an interactive console on the command line that can be used to restart the test.

  • Add support for passing any object as a description to test() and group(). These objects will be converted to strings.

  • Add the ability to tag tests. Tests with specific tags may be run by passing the --tags command-line argument, or excluded by passing the --exclude-tags parameter.

    This feature is not yet complete. For now, tags are only intended to be added temporarily to enable use-cases like focusing on a specific test or group. Further development can be followed on the issue tracker.

  • Wait for a test's tear-down logic to run, even if it times out.


  • Declare compatibility with http_parser 2.0.0.


  • Declare compatibility with http_multi_server 2.0.0.


  • Add a machine-readable JSON reporter. For details, see the protocol documentation.

  • Skipped groups now properly print skip messages.


  • Declare compatibility with Dart 1.14 and 1.15.


  • Fixed a deadlock bug when using setUpAll() and tearDownAll().


  • Add setUpAll() and tearDownAll() methods that run callbacks before and after all tests in a group or suite. Note that these methods are for special cases and should be avoided—they make it very easy to accidentally introduce dependencies between tests. Use setUp() and tearDown() instead if possible.

  • Allow setUp() and tearDown() to be called multiple times within the same group.

  • When a tearDown() callback runs after a signal has been caught, it can now schedule out-of-band asynchronous callbacks normally rather than having them throw exceptions.

  • Don't show package warnings when compiling tests with dart2js. This was accidentally enabled in 0.12.2, but was never intended.
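    A minimal sketch of setUpAll() and tearDownAll() (the shared resource here is a stand-in, not a real server):

    ```dart
    import "package:test/test.dart";

    void main() {
      var resource; // hypothetical expensive shared resource

      setUpAll(() {
        // Runs once, before the first test in this suite.
        resource = "started";
      });

      tearDownAll(() {
        // Runs once, after the last test, even if tests fail.
        resource = null;
      });

      test("uses the shared resource", () {
        expect(resource, equals("started"));
      });
    }
    ```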


  • If a tearDown() callback throws an error, outer tearDown() callbacks are still executed.


  • Don't compile tests to JavaScript when running via pub serve on Dartium or content shell.


  • Support http_parser 1.0.0.


  • Fix a broken link in the README.


  • Internal changes only.


  • Widen the Dart SDK constraint to include 1.13.0.


  • Make source maps work properly in the browser when not using --pub-serve.


  • Fix a memory leak when running many browser tests where old test suites failed to be unloaded when they were supposed to.


  • Require Dart SDK >= 1.11.0 and shelf >= 0.6.0, allowing test to remove various hacks and workarounds.


  • Add a --pause-after-load flag that pauses the test runner after each suite is loaded so that breakpoints and other debugging annotations can be added. Currently this is only supported on browsers.

  • Add a Timeout.none value indicating that a test should never time out.

  • The dart-vm platform selector variable is now true for Dartium and content shell.

  • The compact reporter no longer prints status lines that only update the clock if they would get in the way of messages or errors from a test.

  • The expanded reporter no longer double-prints the descriptions of skipped tests.
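    A sketch of Timeout.none on a single test (the body is illustrative):

    ```dart
    import "dart:async";
    import "package:test/test.dart";

    void main() {
      test("long-running integration test", () async {
        // Timeout.none disables the timeout for this test entirely.
        await new Future.delayed(new Duration(milliseconds: 10));
        expect(true, isTrue);
      }, timeout: Timeout.none);
    }
    ```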


  • Widen the constraint on analyzer to include 0.26.0.


  • Fix an uncaught error that could crop up when killing the test runner process at the wrong time.


  • Add a missing dependency on the collection package.


This version was unpublished due to issue 287.

  • Properly report load errors caused by failing to start browsers.

  • Substantially increase browser timeouts. These timeouts are the cause of a lot of flakiness, and now that they don't block test running there's less harm in making them longer.


This version was unpublished due to issue 287.

  • Fix a crash when skipping tests because their platforms don't match.


This version was unpublished due to issue 287.

  • The compact reporter will update the timer every second, rather than only updating it occasionally.

  • The compact reporter will now print the full, untruncated test name before any errors or prints emitted by a test.

  • The expanded reporter will now always print the full, untruncated test name.


This version was unpublished due to issue 287.

  • Limit the number of test suites loaded at once. This helps ensure that the test runner won't run out of memory when running many test suites that each load a large amount of code.


This version was unpublished due to issue 287.

  • Improve the display of syntax errors in VM tests.

  • Work around a Firefox bug. Computed styles now work in tests on Firefox.

  • Fix a bug where VM tests would be loaded from the wrong URLs on Windows (or in special circumstances on other operating systems).


  • Fix a bug that caused the test runner to crash on Windows because symlink resolution failed.


  • If a future matched against the completes or completion() matcher throws an error, that error is printed directly rather than being wrapped in a string. This allows such errors to be captured using the Zone API and improves formatting.

  • Improve support for Polymer tests. This fixes a flaky time-out error and adds support for Dartifying JavaScript stack traces when running Polymer tests via pub serve.

  • In order to be more extensible, all exception handling within tests now uses the Zone API.

  • Add a heartbeat to reset a test's timeout whenever the test interacts with the test infrastructure.

  • expect(), expectAsync(), and expectAsyncUntil() throw more useful errors if called outside a test body.


  • Convert JavaScript stack traces into Dart stack traces using source maps. This can be disabled with the new --js-trace flag.

  • Improve the browser test suite timeout logic to avoid timeouts when running many browser suites at once.


  • Add a --verbose-trace flag to include core library frames in stack traces.


Test Runner

0.12.0 adds support for a test runner, which can be run via pub run test:test (or pub run test in Dart 1.10). By default it runs all files recursively in the test/ directory that end in _test.dart and aren't in a packages/ directory.

The test runner supports running tests on the Dart VM and many different browsers. Test files can use the @TestOn annotation to declare which platforms they support. For more information on this and many more new features, see the README.
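    A minimal @TestOn sketch, restricting a suite to the Dart VM:

    ```dart
    // The annotation applies to the whole test suite; "vm" is one of the
    // supported platform selectors.
    @TestOn("vm")

    import "package:test/test.dart";

    void main() {
      test("only runs on the VM", () {
        expect(12, equals(12));
      });
    }
    ```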

Removed and Changed APIs

As part of moving to a runner-based model, most test configuration is moving out of the test file and into the runner. As such, many ancillary APIs have been removed. These APIs include skip_ and solo_ functions, Configuration and all its subclasses, TestCase, TestFunction, testConfiguration, formatStacks, filterStacks, groupSep, logMessage, testCases, BREATH_INTERVAL, currentTestCase, PASS, FAIL, ERROR, filterTests, runTests, ensureInitialized, setSoloTest, enableTest, disableTest, and withTestEnvironment.

FailureHandler, DefaultFailureHandler, configureExpectFailureHandler, and getOrCreateExpectFailureHandler, which used to be exported from the matcher package, have also been removed. They existed to enable integration between test and matcher that has since been streamlined.

A number of APIs have been moved from matcher into test, including: completes, completion, ErrorFormatter, expect, fail, prints, TestFailure, Throws, and all of the throws methods. Some of these have changed slightly:

  • expect no longer has a named failureHandler argument.

  • expect added an optional formatter argument.

  • The completion argument id was renamed to description.


  • Fix some strong mode warnings we missed in the vm_config.dart and html_config.dart libraries.


  • Fix a bug introduced in 0.11.6+2 in which operator matchers broke when taking lists of matchers.


  • Fix all strong mode warnings.


  • Give tests more time to start running.


  • Merge in the last 0.11.x release of matcher to allow projects to use both test and unittest without conflicts.

  • Fix running individual tests with HtmlIndividualConfiguration when the test name contains URI-escaped values and is provided with the group query parameter.


  • Internal code cleanups and documentation improvements.


  • Bumped the version constraint for matcher.


  • Bump the version constraint for matcher.


  • Narrow the constraint on matcher to ensure that new features are reflected in unittest's version.


  • Prints a warning instead of throwing an error when setting the test configuration after it has already been set. The first configuration is always used.


  • Fix a bug in withTestEnvironment where test cases were not reinitialized if it was called multiple times.


  • Add a reason named argument to expectAsync and expectAsyncUntil, which has the same definition as expect's reason argument.
  • Added support for private test environments.


  • Refactored package tests.


  • Release test functions after each test is run.



  • Updated maximum matcher version.


  • Removed unused files from tests and standardized remaining test file names.


  • Widen the version constraint for stack_trace.


  • Deprecated methods have been removed:
    • expectAsync0, expectAsync1, and expectAsync2 - use expectAsync instead
    • expectAsyncUntil0, expectAsyncUntil1, and expectAsyncUntil2 - use expectAsyncUntil instead
    • guardAsync - no longer needed
    • protectAsync0, protectAsync1, and protectAsync2 - no longer needed
  • matcher.dart and mirror_matchers.dart have been removed. They are now in the matcher package.
  • mock.dart has been removed. It is now in the mock package.


  • Fixed deprecation message for mock.


  • Moved to triple-slash for all doc comments.


  • matcher.dart and mirror_matchers.dart are now in the matcher package.
  • mock.dart is now in the mock package.
  • equals now allows a nested matcher as an expected list element or map value when doing deep matching.
  • expectAsync and expectAsyncUntil now support up to 6 positional arguments and correctly handle functions with optional positional arguments with default values.


  • Each test is run in a separate Zone. This ensures that any exceptions that occur in async operations are reported back to the source test case.
  • DEPRECATED guardAsync, protectAsync0, protectAsync1, and protectAsync2
    • Running each test in a Zone addresses the need for these methods.
  • NEW! expectAsync replaces the now deprecated expectAsync0, expectAsync1 and expectAsync2
  • NEW! expectAsyncUntil replaces the now deprecated expectAsyncUntil0, expectAsyncUntil1 and expectAsyncUntil2
  • TestCase:
    • Removed properties: setUp, tearDown, testFunction
    • enabled is now get-only
    • Removed methods: pass, fail, error
  • interactive_html_config.dart has been removed.
  • runTests, tearDown, setUp, test, group, solo_test, and solo_group now throw a StateError if called while tests are running.
  • rerunTests has been removed.

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  test: ^0.12.29+1

2. Install it

You can install packages from the command line:

with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:

import 'package:test/test.dart';
Version  Uploaded
1.5.1    Nov 5, 2018
1.5.0    Nov 3, 2018
1.4.0    Oct 30, 2018
1.3.4    Oct 2, 2018
1.3.3    Sep 17, 2018
1.3.2    Sep 13, 2018
1.3.1    Sep 11, 2018
1.3.0    Jul 16, 2018
1.2.0    Jun 27, 2018
1.0.0    Jun 18, 2018

All 142 versions...



Direct dependencies:
  Dart SDK: >=1.23.0 <2.0.0