
Measuring frontend performance

As a frontend developer, I deal with performance issues every single day. Let's see how to get an initial picture of frontend performance.

Measuring frontend performance can be very difficult, as there are many factors involved. This article does not claim to offer the ultimate solution to such a hard problem; please consider it a starting point for your own studies and insights.

Every time I write a new feature that involves JavaScript or a lot of CSS, I need to check how much that piece of code impacts performance.

While searching for a tool that could retrieve information about page load timings, I found the great phantomas. This tool can collect a huge amount of data about a specific URL. Bingo!
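phantomas is distributed via npm, so getting a first JSON report is quick. The invocation below mirrors the one used by the script later in this article; the URL, run count and output file name are only examples:

npm install -g phantomas
phantomas http://www.example.com --reporter=json --runs 3 > metrics.json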

The workflow

So, here is how I take advantage of git (you are using git, aren't you?) and phantomas to get an idea of how badly my code behaves in terms of performance:

  • I always start from a clean repository
  • I collect backend and frontend timing metrics
  • I write my super fancy feature involving JavaScript or a lot of CSS
  • With git I can easily see exactly which modifications I made
  • I collect backend and frontend timing metrics again and watch the difference (see the sketch after this list)
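Expressed as shell commands, a minimal sketch of that loop looks like this. It assumes the script from the next section is saved as frontend_perf.sh; the URL and run count are only examples:

git status                                          # 1. start from a clean repository
./frontend_perf.sh -u http://www.example.com -n 10  # 2. collect baseline metrics
# 3. ...write the super fancy feature...
git diff --stat                                     # 4. see exactly what changed
./frontend_perf.sh -u http://www.example.com -n 10  # 5. collect again and compare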

Do you think it is too simple? Oh, I love simple things and practices, and you should love them too.

The script

The data phantomas collects can be hard to read, so I wrote a very basic bash script (tested on both Linux and OS X) that graphs backend and frontend loading times.

#!/bin/bash

##
# DEFAULTS
##
URL=http://www.google.com
DEFAULT_OPTS="--no-externals --reporter=json --silent --timeout=600"
RUNS=2
URLSET=0

##
# ARGUMENT PARSE
##
while getopts ":n:u:" opt; do
    case $opt in
        u)
            URL=$OPTARG
            URLSET=1
        ;;
        n)
            RUNS=$OPTARG
        ;;
        \?)
            echo "Invalid option: -$OPTARG" >&2
            exit 1
        ;;
    esac
done

[[ $URLSET -ne 1 ]] && echo "[WARNING] no URL set. Crawling google.com"

##
# MAIN
##
# Crawl the URL with phantomas and build an HTML report around the results
function collect {

    OPTS="--runs $RUNS"
    # one JSON document holding the metrics of every run
    phantomas "$URL" $DEFAULT_OPTS $OPTS > desktop.json

    # Static head of the report: jQuery and Highcharts loaded from their CDNs
    cat > "$1" << \EOL
<!DOCTYPE html>
<html>
<head>
    <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
    <script src="http://code.highcharts.com/highcharts.js"></script>
    <script>
EOL

    # Expose the phantomas JSON to the page as a global javascript variable
    sed -e 's/^{/var bigjson={/' desktop.json >> "$1"
    cat >> "$1" << \EOL
;
$(function () {
    var timeFrontend = [];
    var timeBackend = [];
    var serie = [];
    var categories = [];
    // Extract per-run timings from the phantomas JSON
    for (var i = 0; i < bigjson.runs.length; i++) {
        timeFrontend[i] = bigjson.runs[i].metrics['timeFrontend'];
        timeBackend[i] = bigjson.runs[i].metrics['timeBackend'];
        categories[i] = 'Run' + i;
    }
    serie[0] = { name: 'Time to get frontend (ms)', data: timeFrontend };
    serie[1] = { name: 'Time to get backend (ms)', data: timeBackend };

    $('#container').highcharts({
        chart: { type: 'line' },
        title: { text: bigjson.runs[0].url },
        xAxis: { categories: categories },
        series: serie
    });
});
    </script>
</head>
<body>
    <div id="container" style="width:100%; height:400px;">
    </div>
</body>
</html>
EOL

    # the raw JSON is no longer needed once it is embedded in the report
    rm desktop.json
    echo "[DONE] Report: $( realpath "$1" )"
}

collect report.html

Run the script, specifying the URL to crawl and the number of runs. For example:

./frontend_perf.sh -u www.nephila.it -n 50
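The report always lands in report.html in the working directory, so a second measurement overwrites the first; rename the file between runs if you want to keep a baseline around for comparison:

mv report.html report_baseline.html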

Conclusions

You should consider this a starting point for measuring frontend performance.

Including phantomas in a continuous monitoring system could be a great thing, but it is time-consuming and requires a lot of sysadmin knowledge.
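In the meantime, even a plain cron entry gives you a rough nightly trend; the schedule, script path and URL below are only examples:

# crawl every night at 3:00 and leave report.html in the home directory
0 3 * * * /home/me/bin/frontend_perf.sh -u http://www.example.com -n 20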

My script may be useful for quick access to performance data while you are trying to convince your sysadmin to set up a continuous monitoring server :)