Performance Dashboard

We spent a lot of time in 4.x making it easier to install and run your own performance dashboard, with pre-made dashboards and one Docker Compose file to rule them all. You can see the beauty here. In 4.4 and later you can also send the result (HTML/video) to S3 and add links in Grafana to go from your dashboard to the result pages.

What you need

You need Docker and Docker Compose. If you haven't used Docker before, you can read Getting started, and you can also read up on Docker Compose to get a better start.

Up and running in 5 minutes

  1. Download our new Docker Compose file: curl -O
  2. Run: docker-compose up -d (make sure you run the latest Docker Compose version)
  3. Run sitespeed to get some metrics: docker-compose run
  4. Access the dashboard:
  5. When you are done, you can shut down and remove all the Docker containers by running docker-compose stop && docker-compose rm

If you want to play with the dashboards, the default login is sitespeedio and the password is … well, check out the docker-compose.yml file.

Docker compose file

We have prepared a Docker Compose file that downloads and sets up Graphite/Grafana plus a couple of example dashboards. It works perfectly when you want to try things out locally, but if you want to run it in production you should modify it so that the metrics are stored outside of your containers/volumes.

Pre made dashboards

We insert the pre-made dashboards with a Docker container using curl, which makes it easy for you to get started. You can check out the container with the dashboards here:

Example dashboards

The example dashboards are generic dashboards that will work with all the data/metrics you collect. We worked hard to make them as good as possible, and the great thing about them is that you can use them as base dashboards and then create the extra dashboards you like.

The dashboards have a couple of templates (the drop-downs at the top of the page) that make the dashboard interactive and dynamic. A dashboard that shows metrics for a specific page has the following templates:

Page templates

The path is the first path segment after the namespace. With default values the namespace is sitespeed_io.default.

When you choose one of the values in a template, the rest will be populated. You can choose to check metrics for a specific page, browser, and connectivity.

The namespace

The default namespace is sitespeed_io.default, and the example dashboards are built upon a constant template variable called $base that is the first part of the namespace (by default sitespeed_io; feel free to change that, and then change the constant).
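To make the key layout concrete, here is a small sketch of how a Graphite key splits into the parts the dashboards use. The first segment is what the $base constant holds; the segments after the namespace are an assumption for illustration, since the exact layout depends on your configuration:

```python
# Split a Graphite key the way the dashboard templates do.
# The segments after the namespace are hypothetical example values.
key = "sitespeed_io.default.pagexx.chrome.cable.firstPaint"

base, group, *rest = key.split(".")
print(base)   # sitespeed_io -> the $base constant; change both together
print(group)  # default      -> second part of the namespace
print(rest)   # remaining segments (page, browser, connectivity, metric)
```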

Page summary

The page summary shows metrics for a specific URL/page.

Page summary in Grafana

Site summary

The site summary shows metrics for a site (a summary of all URLs tested for that domain).

Site summary in Grafana

3rd vs. 1st party

How much impact does 3rd-party code have on your page? To get this up and running you need to configure the --firstParty parameter/regex when you run.

3rd vs 1st

WebPageTest page summary

Have we told you that we love WebPageTest? Yes we have, and here is a default WebPageTest page summary where you can look at results for individual URLs.

WebPageTest page summary

WebPageTest site summary

And then also for all tested pages of a site.

WebPageTest site summary

Whatever you want

Do you need anything else? Since we store all the data in Graphite and use Grafana, you can create your own dashboards. It's super simple!

Get the metrics

You have the dashboard, and now you need to collect metrics. Using the crontab works fine, or whatever kind of scheduler you are using (Jenkins per build, or whatever suits you best).

Using the crontab (on a standalone server) you do it like this: run crontab -e to edit the crontab. Make sure your cron user can run Docker, and change the Graphite host to your Graphite host. When you run this on a standalone server, the Graphite host will be the public IP address of your server. The default port when sending metrics to Graphite is 2003, so you don't have to include that.

If you run the container and the cron job locally you cannot use localhost, since each Docker container has its own localhost. On a Mac or Linux machine you can use ifconfig to retrieve your IP address. This will output a list of all connected interfaces and let you see which one is currently being used. The one listed with an “inet” address that is not “” is usually the interface that you’re connected through.
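If you want to script this instead of eyeballing the ifconfig output, one common trick is to ask the OS which local address it would pick for outbound traffic. A sketch (connecting a UDP socket sends no packets; the target address here is a reserved TEST-NET address used only so the OS picks a route):

```python
# Find the host's outbound IP address without parsing ifconfig output.
import socket

def host_ip() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))  # TEST-NET-1; UDP connect sends nothing
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"           # no route at all: fall back to loopback
    finally:
        s.close()

print(host_ip())
```

The printed address is what you would pass as the Graphite host to containers running on the same machine.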

We have the following setup:

We have a small shell script with the tests that we run. It is triggered from cron and uses a configuration file (default.json) with the default configuration for all tests we run (and then we override some config values directly when we start a test).

Our script (we read the URLs we want to test from files):

~~~
DOCKER_SETUP="--privileged --shm-size=1g --rm -v /root/config:/ -v /result:/result"
THREEG="--network 3g"
CABLE="--network cable"
CONFIG="--config /"
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER --browsertime.firefox.includeResponseBodies / -b firefox $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / -n 3 --firstParty "" $CONFIG >> /tmp/s.log 2>&1
docker run $THREEG $DOCKER_SETUP $DOCKER_CONTAINER / --graphite.namespace sitespeed_io.emulatedMobile -c 3g --mobile true $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / -n 3 --webpagetest.key 09d4f36ebc3a4bdfb05c9e8402b38524 $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / -b firefox $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / $CONFIG >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / $CONFIG --graphite.namespace sitespeed_io.desktopSecond --preURL >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / $CONFIG --graphite.namespace sitespeed_io.desktopSecond -b firefox --preURL >> /tmp/s.log 2>&1
docker run $THREEG $DOCKER_SETUP $DOCKER_CONTAINER / $CONFIG --graphite.namespace sitespeed_io.emulatedMobileSecond -c 3g --mobile true --preURL >> /tmp/s.log 2>&1
docker run $CABLE $DOCKER_SETUP $DOCKER_CONTAINER / --preScript / -login.txt $CONFIG --graphite.namespace sitespeed_io.desktopLoggedIn >> /tmp/s.log 2>&1
~~~

It is then triggered from the crontab:

~~~
0 * * * * /root/
~~~

And our default configuration is in default.json:

~~~
{
  "browsertime": {
    "connectivity": {
      "engine": "external",
      "profile": "cable"
    },
    "iterations": 5,
    "browser": "chrome",
    "speedIndex": true
  },
  "graphite": {
    "host": "",
    "namespace": "sitespeed_io.desktop",
    "auth": "LOGIN:PASSWORD"
  },
  "slack": {
    "hookUrl": ""
  },
  "resultBaseURL": "",
  "video": true,
  "s3": {
     "key": "AWS_KEY",
     "secret": "AWS_SECRET",
     "bucketname": "",
     "removeLocalResult": true
  }
}
~~~

And we set up the following Docker networks:

~~~
#!/bin/bash
echo 'Starting Docker networks'
docker network create --driver bridge --subnet= --gateway= --opt ""="docker1" 3g
tc qdisc del dev docker1 root
tc qdisc add dev docker1 root handle 1: htb default 12
tc class add dev docker1 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker1 parent 1:12 netem delay 300ms

docker network create --driver bridge --subnet= --gateway= --opt ""="docker2" cable
tc qdisc del dev docker2 root
tc qdisc add dev docker2 root handle 1: htb default 12
tc class add dev docker2 parent 1:1 classid 1:12 htb rate 5mbit ceil 5mbit
tc qdisc add dev docker2 parent 1:12 netem delay 28ms

docker network create --driver bridge --subnet= --gateway= --opt ""="docker3" 3gfast
tc qdisc del dev docker3 root
tc qdisc add dev docker3 root handle 1: htb default 12
tc class add dev docker3 parent 1:1 classid 1:12 htb rate 1.6mbit ceil 1.6mbit
tc qdisc add dev docker3 parent 1:12 netem delay 150ms

docker network create --driver bridge --subnet= --gateway= --opt ""="docker4" 3gem
tc qdisc del dev docker4 root
tc qdisc add dev docker4 root handle 1: htb default 12
tc class add dev docker4 parent 1:1 classid 1:12 htb rate 0.4mbit ceil 0.4mbit
tc qdisc add dev docker4 parent 1:12 netem delay 400ms
~~~
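To get a feel for what the tc shaping above does to a test run, here is a small sketch computing the minimum time to move a single 500 kB response over each profile. The rates and delays are copied from the script; real page loads add TCP slow start and multiple round trips, so treat these as lower bounds:

```python
# Minimum transfer time per connectivity profile (rate in mbit/s, one-way delay in ms),
# matching the htb rate and netem delay values in the network script.
PROFILES = {
    "3g":     (1.6, 300),
    "cable":  (5.0, 28),
    "3gfast": (1.6, 150),
    "3gem":   (0.4, 400),
}

def transfer_seconds(size_bytes, profile):
    rate_mbit, delay_ms = PROFILES[profile]
    # transmission time at the shaped rate + one full round trip
    return size_bytes * 8 / (rate_mbit * 1e6) + 2 * delay_ms / 1000

for name in PROFILES:
    print(f"{name:7s} {transfer_seconds(500_000, name):.2f}s")
```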

Configure Graphite

We provide an example Graphite Docker container, and you should probably change the configuration depending on how often you want to run your tests, how long you want to keep the results, and how much disk space you want to use.

With 4.x we try to send a moderate number of metrics per URL, but you can change that yourself.

When you store metrics for a URL in Graphite, you decide up front how long you want to store the data and at what resolution, in storage-schemas.conf. In our example Graphite setup, every key under sitespeed_io is caught by the configuration in storage-schemas.conf that looks like this:

~~~
pattern = ^sitespeed_io\.
retentions = 10m:60d,30m:90d
~~~

For every metric sent to Graphite that matches the pattern (a namespace starting with sitespeed_io), Graphite stores one data point per ten minutes for the first 60 days. After that, Graphite uses the configuration in storage-aggregation.conf to aggregate/downsample the metrics to one data point per 30 minutes for the next 90 days.
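The pattern is an ordinary regular expression matched against each metric key. A quick check, using Python's re module as a stand-in for Graphite's matcher (the example keys are hypothetical):

```python
# Which keys does the storage-schemas.conf pattern catch?
import re

pattern = re.compile(r"^sitespeed_io\.")

print(bool(pattern.match("sitespeed_io.default.pagexx.firstPaint")))        # True
print(bool(pattern.match("sitespeed_io.emulatedMobile.pagexx.firstPaint"))) # True
print(bool(pattern.match("collectd.cpu-0.cpu-idle")))                       # False
```

Metrics from other tools fall through to whatever later sections of storage-schemas.conf match them.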

Depending on how often you run your analysis, you may want to change storage-schemas.conf. With the current config, if you analyze the same URL twice within 10 minutes, one of the runs will be discarded. But if you know you only run once an hour, you could increase the interval. Etsy has good documentation on how to configure Graphite.

One thing to know if you change your Graphite configuration: “Any existing metrics created will not automatically adopt the new schema. You must use to modify the metrics to the new schema. The other option is to delete existing whisper files (/opt/graphite/storage/whisper) and restart for the files to get recreated again.”

Crawling and Graphite

If you crawl a site that is not static, you will pick up new pages on each run or each day, and that will make the Graphite database grow. Either make sure you have a massive amount of storage, or change storage-schemas.conf so that you don't keep the metrics for so long. You could do that by setting up another namespace (start of the key) and catching metrics that you only store for a short time.

The Graphite DB size is determined by the number of unique data points and their frequency within the configured time periods, meaning you can optimise how much space you need. If the majority of the URLs you test are static and are tested often, there is a maximum DB size that depends on your storage-schemas.conf settings.
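You can estimate that maximum from the retention policy. Whisper allocates a fixed file per metric (roughly 12 bytes per data point, plus a small header we ignore here), so the size per metric is fixed as soon as the metric is first written, no matter how often you test. A back-of-the-envelope sketch:

```python
# Estimate per-metric whisper file size for a retention policy like 10m:60d,30m:90d.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "y": 31536000}

def seconds(spec):
    return int(spec[:-1]) * UNITS[spec[-1]]

def points(retentions):
    return sum(seconds(span) // seconds(res)
               for res, span in (a.split(":") for a in retentions.split(",")))

n = points("10m:60d,30m:90d")
print(n)       # 12960 data points per metric
print(n * 12)  # ~155 kB per metric on disk
```

So at, say, a few hundred metrics per URL (the count depends on your config), every newly crawled URL costs tens of MB of whisper files, which is why unbounded crawling needs a shorter-retention namespace.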

Using S3 for HTML and video

You can store the HTML result on the local agent that runs the tests, or you can push the data to S3 and serve it from there. To use S3, you first need to set up an S3 bucket.

Then you just configure the bucket name (and key/secret if they are not available on your server) to send the data to S3.

You now have the result on S3 and you are almost done. You should also send annotations to Graphite for each run.


You can send annotations to Graphite to mark when a run happens so that you can go from the dashboard to a result HTML page.

You do that by configuring the URL that will serve the HTML with the CLI param resultBaseURL (the base URL for your S3 bucket), and configuring the HTTP Basic auth username/password used by Graphite. Do that by setting --graphite.auth LOGIN:PASSWORD.


To run this in a production environment you should consider/make some modifications.

  1. Always run on a standalone instance
    • This avoids causing discrepancies in results due to things like competing resources or network traffic.
  2. Change the default user and password for Grafana.
  3. Change the default user and password for Graphite.
  4. Make sure you have configured storage-aggregation.conf in Graphite to fit your needs.
  5. Map the Graphite volume to a physical directory outside of Docker to have better control (both Whisper and graphite.db)
  6. Remove sitespeedio/grafana-bootstrap from the Docker Compose file; you only need it for the first run.
  7. Optional: disable anonymous user access.

Memory & CPU

How large an instance do you need? We use an $80 instance on DigitalOcean (8 GB memory, 4 cores). We use that large an instance because Chrome and Firefox need a lot of memory and CPU. It also depends on how complex the site is; if you have a lot of JavaScript/CSS, the browser will need more memory.

If you test a lot of pages (100+) in the same run, your NodeJS process can run out of memory (the default heap for NodeJS is 1.76 GB). You can increase it by setting MAX_OLD_SPACE_SIZE like this in your compose file:

        - MAX_OLD_SPACE_SIZE=3072
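For context, that line goes under the service's environment key in the compose file. A minimal sketch of where it sits (the service name and image tag here are assumptions; keep your own):

```yaml
services:
  sitespeed:                       # hypothetical service name
    image: sitespeedio/sitespeed.io
    environment:
      - MAX_OLD_SPACE_SIZE=3072    # NodeJS heap in MB, up from the ~1.76 GB default
```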