Performance Dashboard
In 4.x we spent a lot of time making it easier to install and run your own performance dashboard, with pre-made dashboards and one Docker Compose file to rule them all. You can see the beauty here.
curl -O https://raw.githubusercontent.com/sitespeedio/sitespeed.io/master/docker/docker-compose.yml
docker-compose up -d (make sure you run the latest Docker Compose version)
docker-compose run sitespeed.io https://www.sitespeed.io/ --graphite.host=graphite
docker-compose stop && docker-compose rm
If you want to play with the dashboards, the default login is sitespeedio and the password is … well, check out the docker-compose.yml file.
We have prepared a Docker Compose file that downloads and sets up Graphite/Grafana and sitespeed.io, plus a couple of example dashboards. It works perfectly when you want to try things out locally, but if you want to run it in production you should modify it so that the metrics are stored outside of your container/volumes.
We insert the pre-made dashboards with a Docker container using curl, which makes it easy for you to get started. You can check out the container with the dashboards here: https://github.com/sitespeedio/grafana-bootstrap-docker
The example dashboards are generic dashboards that work with all data/metrics you collect using sitespeed.io. We worked hard to make them as good as possible, and the great thing is that you can use them as base dashboards and then create the extra dashboards you like.
The dashboards have a couple of templates (the drop-downs at the top of the page) that make the dashboards interactive and dynamic. A dashboard that shows metrics for a specific page has the following templates:
The path is the first path after the namespace. Using default values the namespace is sitespeed_io.default.
When you choose one of the values in a template, the rest will be populated. You can choose to check metrics for a specific page, browser and connectivity.
The default namespace is sitespeed_io.default and the example dashboards are built upon a constant template variable called $base that is the first part of the namespace (the default is sitespeed_io, but feel free to change that and then change the constant).
The page summary shows metrics for a specific URL/page.
The site summary shows metrics for a site (a summary of all URLs tested for that domain).
How much impact does third-party code have on your page? To get this up and running you need to configure the --firstParty parameter/regex when you run.
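Before passing a regex to --firstParty, it can help to sanity-check it against a couple of URLs. A minimal sketch, where the regex and the example.com/ads-network.net URLs are made-up placeholders, not values from the shipped config:

```shell
# Hypothetical first-party regex: matches anything on example.com subdomains.
FIRST_PARTY='\.example\.com'

# A URL on your own domain should match (first party) ...
echo "https://static.example.com/app.js" | grep -Eq "$FIRST_PARTY" && echo "first party"

# ... while a third-party CDN URL should not.
echo "https://cdn.ads-network.net/lib.js" | grep -Eq "$FIRST_PARTY" || echo "third party"
```

Once the regex behaves as expected, you would pass it on the command line as --firstParty "$FIRST_PARTY".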
Have we told you that we love WebPageTest? Yes we have, and here is a default WebPageTest page summary where you can look at results for individual URLs.
And then also for all tested pages of a site.
Do you need anything else? Since we store all the data in Graphite and use Grafana, you can create your own dashboards. It's super simple!
You have the dashboard and you need to collect metrics. The crontab works fine, or whatever kind of scheduler you are using (or Jenkins per build, or … whatever suits you best).
Using the crontab (on a standalone server) you do it like this:
Run crontab -e to edit the crontab. Make sure your cron user can run Docker, and change my.graphite.host to your Graphite host. When you run this on a standalone server, my.graphite.host will be the public IP address of your server. The default port when sending metrics to Graphite is 2003, so you don't have to include that.
If you run the container and the cron job locally you cannot use localhost, since each Docker container has its own localhost. On a Mac or Linux machine you can run
$ ifconfig to retrieve your IP address. This will output a list of all connected interfaces and let you see which one is currently being used. The one listed with an "inet" address that is not "127.0.0.1" is usually the interface you're connected through.
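Picking that address out by eye works, but you can also filter it with awk. A small sketch, where the printf lines are sample ifconfig-style data (made-up addresses) so the filter can be shown working; in practice you would pipe real `ifconfig` output through the awk part:

```shell
# Print every "inet" address except the loopback one. With real ifconfig
# output, the surviving addresses are candidates for --graphite.host.
printf 'inet 127.0.0.1 netmask 0xff000000\ninet 192.168.1.42 netmask 0xffffff00\n' |
  awk '/inet / && $2 != "127.0.0.1" {print $2}'
# -> 192.168.1.42
```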
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0,30 * * * * docker run --privileged --rm sitespeedio/sitespeed.io:4.0 -n 5 --graphite.host my.graphite.host -c cable -b chrome https://en.wikipedia.org/wiki/Sweden >> /tmp/sitespeed-output.txt 2>&1
We provide an example Graphite Docker container, and you probably need to change the configuration depending on how often you want to run your tests, how long you want to keep the results and how much disk space you want to use.
With 4.x we try to send a moderate number of metrics per URL, but you can change that yourself.
When you store metrics for a URL in Graphite, you decide from the beginning how long you want to store the data and how often, in storage-schemas.conf. In our example Graphite setup every key under sitespeed_io is caught by the configuration in storage-schemas.conf, which looks like:
[sitespeed]
pattern = ^sitespeed_io\.
retentions = 10m:60d,30m:90d
For every metric sent to Graphite that matches the pattern (a namespace starting with sitespeed_io), Graphite stores one data point per ten minutes for the first 60 days; after that, Graphite uses the configuration in storage-aggregation.conf to aggregate/downsample the metrics for the next 90 days.
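To get a feel for how much data that retention allocates, you can do the arithmetic in the shell. A back-of-the-envelope sketch (the point counts follow directly from the 10m:60d,30m:90d retention above):

```shell
# Data points per metric key allocated by each retention level:
echo $(( 60 * 24 * 60 / 10 ))   # 10m:60d -> one point per 10 min for 60 days = 8640 points
echo $(( 90 * 24 * 60 / 30 ))   # 30m:90d -> one point per 30 min for 90 days = 4320 points
```

Multiply those counts by the number of metrics per URL and the number of URLs you test to estimate total disk usage.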
Depending on how often you run your analysis, you will want to change storage-schemas.conf. With the current config, if you analyze the same URL twice within 10 minutes, one of the runs will be discarded. But if you know you only run once an hour, you could increase the setting. Etsy has good documentation on how to configure Graphite.
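For example, if you only run once an hour, a retention like the following (the values here are an illustration, not the shipped config) would store one data point per hour for 90 days and then one per day for a year:

```
[sitespeed]
pattern = ^sitespeed_io\.
retentions = 1h:90d,1d:1y
```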
One thing to know if you change your Graphite configuration: “Any existing metrics created will not automatically adopt the new schema. You must use whisper-resize.py to modify the metrics to the new schema. The other option is to delete existing whisper files (/opt/graphite/storage/whisper) and restart carbon-cache.py for the files to get recreated again.”
If you crawl a site that is not static you will pick up new pages each run or each day and that will make the Graphite database grow each day. Either you make sure you have a massive amount of storage or you change the storage-schemas.conf so that you don’t keep the metrics for so long. You could do that by setting up another namespace (start of the key) and catch metrics that you only store for a short time.
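As a sketch of that short-retention setup, assuming you send crawled pages under a separate namespace (the namespace name sitespeed_io.crawl and the 14-day retention are made-up examples): run those tests with a different --graphite.namespace and add an entry like this before the catch-all [sitespeed] section, since Graphite uses the first pattern that matches a key:

```
# Illustrative: keep crawled, ever-changing URLs for only two weeks.
[sitespeed_crawl]
pattern = ^sitespeed_io\.crawl\.
retentions = 30m:14d
```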
To run this in production you should make a couple of modifications.