
Performance Dashboard

What you need

You need Docker and Docker Compose. If you haven't used Docker before, start with the Docker getting started guide, and read up on Docker Compose to get going faster.

Up and running in (almost) 5 minutes

  1. Download our Docker compose file: curl -O
  2. Run: docker-compose up -d (make sure you run the latest Docker compose version)
  3. Run sitespeed to get some metrics: docker-compose run
  4. Access the dashboard:
  5. When you are done, you can shut down and remove all the Docker containers by running docker-compose stop && docker-compose rm. Data stored in the volumes is kept.
  6. To start from scratch, also remove the Graphite and Grafana data volumes by running docker volume rm performancedashboard_graphite performancedashboard_grafana.

If you want to play with the dashboards, the default login is sitespeedio and password is …well check out the docker-compose.yml file.

When you run this in production, make sure to check out our production guidelines.

Docker compose file #

We have prepared a Docker Compose file that downloads and sets up Graphite/Grafana with a couple of example dashboards. It works perfectly when you want to try it out on localhost, but if you want to run it in production, you should modify it by making sure that the metrics are stored outside of your containers/volumes. If you prefer InfluxDB over Graphite, you can use that too, but right now we only have one ready-made dashboard for InfluxDB (thank you Olivier Jan for contributing to that dashboard!).

Pre-made dashboards #

We insert ready-made dashboards with a Docker container using curl, making it easy for you to get started. You can check out the container with the dashboards here:

Example dashboards

The example dashboards are generic dashboards that will work with all data/metrics you collect using sitespeed.io. We worked hard to make them, and the great thing is that you can use them as base dashboards and then create additional dashboards if you like.

The dashboards have a couple of templates (the dropdowns at the top of the page) that make the dashboard interactive and dynamic. A dashboard that shows metrics for a specific page has the following templates:

Page templates

The path is the first path after the namespace. Using the default values, the namespace looks like this: sitespeed_io.default.

When you choose one of the values in a template, the rest will be populated. You can choose from checking metrics for a specific page, browser, and connectivity.

The default namespace is sitespeed_io.default. The example dashboards are built on a constant template variable called $base that holds the first part of the namespace (the default is sitespeed_io, but feel free to change that, and then also change the constant).
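As a sketch, this is how a full Graphite metric key is assembled from the namespace parts. The trailing metric path (pagesummary, the page id and firstVisualChange) is a made-up example, not taken from this page:

```shell
# How a full Graphite metric key is assembled. The trailing metric path
# (pagesummary.example_com.chrome.cable.firstVisualChange) is a made-up example.
BASE="sitespeed_io"                 # the $base constant in the dashboards
NAMESPACE="$BASE.default"           # the default namespace
METRIC_KEY="$NAMESPACE.pagesummary.example_com.chrome.cable.firstVisualChange"
echo "$METRIC_KEY"
```

If you change the namespace with --graphite.namespace, remember to update the $base constant in the dashboards to match.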

Page summary #

The page summary shows metrics for a specific URL/page.

The page timings summary #

The page timings summary focuses on Visual Metrics and is the number one dashboard to use when you look for visual regressions.

Site summary #

The site summary shows metrics for a site (a summary of all URLs tested for that domain).

3rd vs. 1st party #

How much does 3rd party code impact your page? To get this up and running, you should only need to configure the --firstParty parameter/regex when you run sitespeed.io.
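For example (the regex, image name and test URL below are made-up examples; adapt them to your own domain):

```shell
# Classify requests matching the regex as first party, everything else
# as third party. The regex and URL are examples for a made-up domain.
FIRST_PARTY_REGEX='.*example\.com.*'
CMD="docker run --rm sitespeedio/sitespeed.io --firstParty $FIRST_PARTY_REGEX https://www.example.com"
echo "$CMD"
```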

You can see the 3rd vs. 1st party dashboard here.

WebPageTest dashboards #

We have two optional dashboards for WebPageTest to show how you can build them if you use WebPageTest through sitespeed.io.

WebPageTest page summary #

Have we told you that we love WebPageTest? Yes, we have and here is an example of a default WebPageTest page summary where you can look at results for individual URLs.

WebPageTest site summary #

And then there is also a dashboard for all tested pages of a site.

Whatever you want #

Do you need anything else? Since we store all the data in Graphite and use Grafana you can create your own dashboards, which is super simple!

Configuration setup

You have the dashboard and you need to collect metrics. Using the crontab works fine, or you can just run an infinite loop.

Using the crontab (on a standalone server), run crontab -e to edit the crontab. Make sure your cron user can run Docker, and change the Graphite host to your own. When you run this on a standalone server, the Graphite host will be the public IP address of your server. The default port for sending metrics to Graphite is 2003, so you don't have to include that.
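A hypothetical crontab entry could look like this (the image name, test URL and host IP are examples, not values from this guide):

```shell
# Example crontab entry that runs sitespeed.io every hour at minute 0.
# The image name, URL and host IP are examples; port 2003 is the
# Graphite default, so it does not need to be included.
GRAPHITE_HOST="192.0.2.10"   # public IP of your Graphite server
CRON_LINE="0 * * * * docker run --rm sitespeedio/sitespeed.io --graphite.host $GRAPHITE_HOST https://www.example.com"
echo "$CRON_LINE"
```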

For our own dashboard we have the following setup:

We have a small shell script that runs the tests. It is triggered from the cron and uses a configuration file (default.json) where we have the default configuration used for all tests (we then override some config values directly when we start the test). We also have a bash file that sets up the network.

Our file (we read which URLs we want to test from files):

Shell script #

# Specify the exact version of sitespeed.io. When you upgrade to the next version, pull it down and then change the tag

# Set up the network and the default ones we wanna use
sudo /home/ubuntu/
THREEG="--network 3g"
CABLE="--network cable"

# Simplify some configurations
CONFIG="--config /"
DOCKER_SETUP="--shm-size=1g --rm -v /home/ubuntu/config:/ -v /result:/result -v /etc/localtime:/etc/localtime:ro --name sitespeed"

# Start running the tests
# We run more tests on our test server but this gives you an idea of how you can configure it
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER -n 11 --browsertime.viewPort 1920x1080 --browsertime.cacheClearRaw / $CONFIG
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER -n 11 --browsertime.viewPort 1920x1080 / -b firefox $CONFIG
docker run $THREEG $DOCKER_SETUP $DOCKER_CONTAINER --graphite.namespace sitespeed_io.emulatedMobile / -c 3g --mobile true $CONFIG

# We remove all docker stuff to get a clean next run
docker system prune --all --volumes -f

# Get the container so we have it the next time we wanna use it

Crontab #

We trigger the script from the crontab, running it every hour.

0 * * * * /root/ >> /tmp/ 2>&1

Infinite loop #

Another way is to run the script in an infinite loop, with a control file that you remove (so the run stops) when you want to update your instance. This example script runs on Ubuntu.

# LOGFILE and CONTROL_FILE are set earlier in the script (paths omitted here)
exec > $LOGFILE 2>&1

if [ -f "$CONTROL_FILE" ]
then
  echo "$CONTROL_FILE exist, do you have running tests?"
  exit 1
fi

function cleanup() {
  docker system prune --all --volumes -f
}

function control() {
  if [ -f "$CONTROL_FILE" ]
  then
    echo "$CONTROL_FILE found. Make another run ..."
  else
    echo "$CONTROL_FILE not found - stopping after cleaning up ..."
    cleanup
    echo "Exit"
    exit 0
  fi
}

while true
do
  DOCKER_SETUP="--shm-size=1g --rm -v /home/ubuntu/config:/ -v /result:/result -v /etc/localtime:/etc/localtime:ro "
  THREEG="--network 3g"
  THREEGEM="--network 3gem"
  CABLE="--network cable"
  CONFIG="--config /"
  echo 'Start a new loop '
  echo "Start the networks ..."
  sudo /home/ubuntu/
  docker network ls

  docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER -n 7 --browsertime.viewPort 1920x1080 --browsertime.cacheClearRaw true / $CONFIG
  docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER -n 7 --browsertime.viewPort 1920x1080 / -b firefox $CONFIG

  control
  cleanup
done

And make sure the script starts on server restart. Edit the crontab with crontab -e and add the following (adjust the paths to match your loop script file):

@reboot rm /home/ubuntu/;/home/ubuntu/

And start it like this:

nohup /home/ubuntu/ &

default.json #

And our default configuration is in default.json:

{
  "browsertime": {
    "connectivity": {
      "engine": "external",
      "profile": "cable"
    },
    "iterations": 5,
    "browser": "chrome",
    "speedIndex": true
  },
  "graphite": {
    "host": "GRAPHITE_HOST",
    "namespace": "sitespeed_io.desktop",
    "auth": "GRAPHITE_AUTH"
  },
  "slack": {
    "hookUrl": ""
  },
  "resultBaseURL": "",
  "video": true,
  "gzipHAR": true,
  "html": {
    "fetchHARFiles": true
  },
  "s3": {
    "key": "AWS_KEY",
    "secret": "AWS_SECRET",
    "bucketname": "",
    "removeLocalResult": true
  }
}

Docker networks #

And we set up the following Docker networks:

echo 'Starting Docker networks'
docker network create --driver bridge --subnet= --gateway= --opt "com.docker.network.bridge.name"="docker1" 3g
tc qdisc add dev docker1 root handle 1: htb default 12
tc class add dev docker1 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker1 parent 1:12 netem delay 150ms

docker network create --driver bridge --subnet= --gateway= --opt "com.docker.network.bridge.name"="docker2" cable
tc qdisc add dev docker2 root handle 1: htb default 12
tc class add dev docker2 parent 1:1 classid 1:12 htb rate 5mbit ceil 5mbit
tc qdisc add dev docker2 parent 1:12 netem delay 14ms

docker network create --driver bridge --subnet= --gateway= --opt "com.docker.network.bridge.name"="docker3" 3gfast
tc qdisc add dev docker3 root handle 1: htb default 12
tc class add dev docker3 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker3 parent 1:12 netem delay 75ms

docker network create --driver bridge --subnet= --gateway= --opt "com.docker.network.bridge.name"="docker4" 3gslow
tc qdisc add dev docker4 root handle 1: htb default 12
tc class add dev docker4 parent 1:1 classid 1:12 htb rate 0.4mbit ceil 0.4mbit
tc qdisc add dev docker4 parent 1:12 netem delay 200ms

Configure Graphite

We provide an example Graphite Docker container, and when you put that into production, you need to change the configuration. Check out our Graphite documentation.

Using S3 for HTML and video

You can store the HTML result on the local agent that runs sitespeed.io, or you can dump the data to S3 or GCS and serve it from there. To use S3, you first need to set up an S3 bucket; for Google Cloud Storage (GCS), set up a GCS bucket.

Then you configure sitespeed.io to send the data to S3 by configuring the bucket name (and AWS key/secret if that's not available on your server). For GCS you need to provide the name of the bucket, the service account key and the project id.

You now have the result on S3 or GCS and you're almost done. You should also configure sitespeed.io to send annotations to Graphite for each run.


Annotations #

You can send annotations to Graphite to mark when a run happens, so you can go from the dashboard to any HTML results page.

You do that by configuring the URL that will serve the HTML with the CLI param resultBaseURL (the base URL for your S3 or GCS bucket) and configure the HTTP Basic auth username/password used by Graphite. You can do that by setting --graphite.auth LOGIN:PASSWORD.

You can also modify the annotation, appending your own text/HTML and adding your own tags. Append a message to the annotation with --graphite.annotationMessage. That way you can add links to a specific branch or whatever you feel can help you. If needed, set a custom title with --graphite.annotationTitle instead of the default title that displays the number of runs of the test.

You can add extra tags with --graphite.annotationTag. For multiple tags, add the parameter multiple times. Just make sure that the tags don't collide with our internal tags.
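Putting those flags together, an annotated run could look like this. The auth value, result URL, message, title and tags below are all made-up examples:

```shell
# Sketch: combine the annotation-related Graphite flags described above.
# The auth value, result URL, message, title and tags are examples.
AUTH="LOGIN:PASSWORD"
RESULT_BASE_URL="https://results.example.com"
CMD="docker run --rm sitespeedio/sitespeed.io \
  --graphite.auth $AUTH \
  --resultBaseURL $RESULT_BASE_URL \
  --graphite.annotationMessage 'Deployed build 123' \
  --graphite.annotationTitle 'Hourly run' \
  --graphite.annotationTag myTeam \
  --graphite.annotationTag release \
  https://www.example.com"
echo "$CMD"
```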

Production Guidelines

Here are a couple of things you should check before you set up sitespeed.io for production.

Setup (important!) #

To run this in a production environment, you should consider/make some modifications:

  1. Always run sitespeed.io on a stand-alone instance
    • This avoids discrepancies in results due to things like competing resources or network traffic. Then you just run sitespeed.io with docker run … (use Docker Compose only for Graphite/Grafana).
    • Run Grafana/Graphite on another server instance.
  2. Change the default user and password for Grafana.
  3. Change the default user and password for Graphite.
  4. Make sure you have configured storage-aggregation.conf in Graphite to fit your needs.
  5. Configure in your storage-schemas.conf how long you wanna store your metrics.
  6. MAX_CREATES_PER_MINUTE in carbon.conf is usually quite low. That means you will not get all the metrics created on the first run, so you may want to increase it.
  7. Map the Graphite volume to a physical directory outside of Docker for better control (both the Whisper files and graphite.db). Map them like this on your physical server (make sure to copy over the empty graphite.db file first):
    • /path/on/server/whisper:/opt/graphite/storage/whisper
    • /path/on/server/graphite.db:/opt/graphite/storage/graphite.db
  8. Remove sitespeedio/grafana-bootstrap from the Docker compose file; you only need it for the first run.
  9. Optional: disable anonymous user access.
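For step 6 above, raising the create limit in carbon.conf could look like this (the value 1000 is just an example; tune it to roughly the number of new metric keys your first run creates):

```ini
[cache]
# Allow more new Whisper files to be created per minute so the
# first run gets all of its metrics; 1000 is an example value.
MAX_CREATES_PER_MINUTE = 1000
```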

Memory & CPU #

How large do your instances need to be? You need enough memory for Chrome/Firefox (yep, they can really use a lot of memory on some sites). We used to run on an $80 instance at Digital Ocean (8 GB memory, 4 core processors) but switched to an AWS c5.large. The reason is that the metrics are much more stable on AWS than on Digital Ocean; we have tried out most cloud providers and AWS gave us the most stable metrics.

If you test a lot of pages (100+) in the same run, your NodeJS process can run out of memory (the default memory for NodeJS is 1.76 GB). You can increase it by setting MAX_OLD_SPACE_SIZE like this in your compose file:

        - MAX_OLD_SPACE_SIZE=3072

Cost #

sitespeed.io is Open Source and totally free. But what does it cost to have an instance up and running?

Setting up an AWS c5.large instance has an upfront price of $515 for a year (it is much cheaper to pay upfront). Or you can use an Optimized Droplet for $40 a month at Digital Ocean (they have served us well in our testing).

You also need to pay for S3 (to store the videos and HTML). For our instance we pay $10-15 per month (depending on how long you want to store the data).

Does your organisation already use Graphite/InfluxDB and Grafana? Then use what you have. Otherwise you need a server hosting Graphite/Grafana; we pay $20 per month at Digital Ocean for that. Depending on how many metrics you collect and for how long you wanna store them, you may need an extra disk. And you should also always back up your data.

How many runs can you do per month? With many of the paid services you pay per run or have a maximum number of runs. With our one instance at AWS we do 11 runs for 9 different URLs, then 5 runs for 4 other URLs. That is 119 runs per hour, 2856 per day and 85680 per month. We test Wikipedia on our instance; if your site is slower, you will not be able to make the same number of runs per month.
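The arithmetic above as a quick sanity check (a 30-day month is assumed):

```shell
# Sanity-check the runs-per-month numbers from the text.
PER_HOUR=$((11 * 9 + 5 * 4))   # 99 + 20 = 119 runs per hour
PER_DAY=$((PER_HOUR * 24))     # 2856 runs per day
PER_MONTH=$((PER_DAY * 30))    # 85680 runs in a 30-day month
echo "$PER_HOUR $PER_DAY $PER_MONTH"
```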

Total cost:

  • $515 per year for an AWS agent, or $480 per year at Digital Ocean (80000+ tests per month per agent)
  • S3 $10-15 with data
  • Server for Graphite/Grafana

You also need to think of the time it takes to set everything up and to upgrade to new Docker containers when there are new browser versions and new versions of sitespeed.io. Updating to a new Docker container on one server usually takes less than 2 minutes :)

Keeping your instance updated #

We constantly do new Docker releases: bug fixes, new functionality and new versions of the browsers. To keep your instance updated, follow this workflow.

Log into your instance and pull the latest version of the sitespeed.io container:

docker pull sitespeedio/sitespeed.io:8.15.0

Then update your script so it uses the new version (8.15.0 in this case). The next time the script runs, it will use the new version.

Go into the Grafana dashboard and create a new annotation, telling your team mates that you updated to the new version. This is really important so you can keep track of browser updates and other changes that can affect your metrics.