This section describes how to set up a fully functioning instance of Treeherder. If you only want to hack on the UI, you can instead set up a standalone Node.js web server that accesses the server backend, which is much simpler. See the UI installation section.


  • If you are new to Mozilla or the A-Team, read the A-Team Bootcamp.
  • Install Git, VirtualBox and Vagrant (latest versions recommended).
  • Clone the treeherder repo from GitHub.
  • Linux only: An NFS server is required. On Ubuntu you can install one by running sudo apt-get install nfs-common nfs-kernel-server
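On a fresh Ubuntu host, the prerequisites above amount to something like the following (a sketch only: the package names are Ubuntu's, other distributions will differ, and the distro-packaged VirtualBox/Vagrant may lag behind the recommended latest releases):

```shell
# Install the host-side tools (Ubuntu package names; versions may lag upstream)
sudo apt-get install git virtualbox vagrant nfs-common nfs-kernel-server

# Clone the Treeherder repository and enter it
git clone https://github.com/mozilla/treeherder.git
cd treeherder
```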

Setting up Vagrant

  • Open a shell, cd into the root of the Treeherder repository, and type:

    > vagrant up --provision

    It will typically take 5 to 30 minutes for the Vagrant provision to complete, depending on your network performance. If you experience any errors, see the troubleshooting page. It is very important that the provisioning process completes successfully before you try to interact with your test instance of Treeherder: some things might superficially seem to work on a partially configured machine, but it is almost guaranteed that something will break in hard-to-diagnose ways if vagrant provision is not run to completion.

  • Once the virtual machine is set up, connect to it using:

    > vagrant ssh

    A python virtual environment will be activated on login, and the working directory will be the treeherder source directory shared from the host machine.

  • For the full list of available Vagrant commands (for example, suspending the VM when you are finished for the day), see Vagrant's command-line documentation.

  • If you just wish to run the tests, you can stop now without performing the remaining steps.
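The day-to-day Vagrant workflow described above can be sketched as follows (all commands are standard Vagrant subcommands, run from the repository root on the host):

```shell
vagrant up --provision   # create and provision the VM (slow the first time)
vagrant ssh              # open a shell inside the VM
# ... work inside the VM, then exit ...
vagrant suspend          # pause the VM, preserving its state
vagrant halt             # or shut it down completely
```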

Starting a local Treeherder instance

  • Start a gunicorn instance inside the Vagrant VM, to serve the static UI and API requests:

    vagrant ~/treeherder$ ./bin/run_gunicorn

    Or for development you can use the Django runserver instead of gunicorn:

    vagrant ~/treeherder$ ./manage.py runserver

    This is more convenient because it automatically reloads every time there is a change in the code.

  • You must also start the UI dev server. Open a new terminal window and vagrant ssh to the VM again, then run the following:

    vagrant ~/treeherder$ yarn start:local

    This will build the UI code in the dist/ folder and keep watching for new changes (See the UI installation section for more ways to work with the UI code).

  • Visit http://localhost:5000 in your browser (NB: port has changed). Note: There will be no data to display until the ingestion tasks are run.
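As a quick sanity check from the host (not part of the official setup; this assumes curl is installed and the default port forwarding is in place), you can confirm the server is answering before opening a browser:

```shell
# Should print an HTTP status line once the servers above are running
curl -sI http://localhost:5000 | head -n 1
```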

Running the ingestion tasks

Ingestion tasks populate the database with version control push logs, queued/running/completed buildbot jobs, and output from log parsing, and they maintain a cache of intermittent failure bugs. To run these:

  • Start up a celery worker to process async tasks:

    vagrant ~/treeherder$ celery -A treeherder worker -B --concurrency 5

    The “-B” option tells the celery worker to start up a beat service, so that periodic tasks can be executed. You only need one worker with the beat service enabled; multiple beat services will result in periodic tasks being executed multiple times.
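If you would rather scale workers independently of the scheduler, Celery also supports running beat as its own process instead of using “-B”; a sketch:

```shell
# Exactly one beat scheduler:
vagrant ~/treeherder$ celery -A treeherder beat

# ...and, in another vagrant ssh session, one or more workers without -B:
vagrant ~/treeherder$ celery -A treeherder worker --concurrency 5
```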

Ingesting a single push (at a time)

Alternatively, instead of running a full ingestion task, you can process just the jobs associated with any single push generated in the last 4 hours (builds-4h), in a synchronous manner. This is ideal for testing. For example:

vagrant ~/treeherder$ ./manage.py ingest_push mozilla-inbound 63f8a47cfdf5

If running this locally, replace 63f8a47cfdf5 with a recent revision (i.e. one pushed within the last four hours) on mozilla-inbound.

You can further restrict the amount of data to a specific type of job with the “--filter-job-group” parameter. For example, to process only Talos jobs for a particular push, try:

vagrant ~/treeherder$ ./manage.py ingest_push --filter-job-group T mozilla-inbound 63f8a47cfdf

Ingesting a range of pushes

It is also possible to ingest the last N pushes for a repository:

vagrant ~/treeherder$ ./manage.py ingest_push mozilla-central --last-n-pushes 100

In this mode, only the pushlog data will be ingested; the job results associated with the pushes will not. This mode is useful for seeding pushes so they are visible on the web interface, making it easy to copy and paste changesets from the web interface into subsequent ingest_push commands.
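A typical testing loop combines the two modes (the changeset placeholder below must be replaced with a real SHA copied from the web interface; the push count is arbitrary):

```shell
# Seed the 20 most recent pushes so they appear in the UI (pushlog data only)
vagrant ~/treeherder$ ./manage.py ingest_push mozilla-central --last-n-pushes 20

# Then fully ingest the jobs for one of them (<changeset-sha> is a placeholder)
vagrant ~/treeherder$ ./manage.py ingest_push mozilla-central <changeset-sha>
```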