When testing performance with several tools, we often encounter vastly different results. The discrepancy between measurements is confusing and doesn’t inspire confidence, making it difficult to establish which tool to trust.
This issue surfaces when comparing Calibre to other services, in this case PageSpeed Insights. There are significant differences between the two tools that make direct comparison impossible. In this guide, we explain those disparities so you can better understand your performance results.
It’s important to note that each monitoring service operates under notably different conditions:
The location tests originate from is a significant factor in performance testing. Latency for every request differs from location to location: your site may be closer to, or further from, wherever the test is run (the sketch after the table below illustrates this).
| Calibre | PageSpeed Insights |
| --- | --- |
| Tests originate from a pre-selected test location and run on Amazon Web Services (AWS). | Tests originate from 1 of 4 global locations, chosen by round-robin selection and latency to your browser, and run on Google infrastructure. |
PageSpeed is hosted in 4 global locations.
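To see how much location alone matters, here is a minimal sketch, assuming Node 18+ (which provides `fetch` and `performance` as globals), that times how long the first byte of a response takes to arrive. Running it against the same URL from machines in different regions will produce noticeably different numbers.

```ts
// A minimal sketch: measure time to first byte for a URL.
// Assumes Node 18+ with built-in fetch; the URL is a placeholder.
async function timeToFirstByte(url: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(url); // resolves once headers arrive
  const reader = response.body!.getReader();
  await reader.read(); // wait for the first chunk of the body
  const elapsed = performance.now() - start;
  await reader.cancel(); // stop downloading the rest of the page
  return elapsed;
}

timeToFirstByte('https://example.com').then((ms) =>
  console.log(`Time to first byte: ${ms.toFixed(0)}ms`)
);
```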
Network conditions change based on locale, but another important factor is the network speed applied during the test.
| Calibre | PageSpeed Insights |
| --- | --- |
| Test Profiles let you choose from a list of pre-configured network throttling settings, selected for their relevance to observed global averages. | PageSpeed tests under two network speeds: desktop (described as “Dense 4G 25th percentile”) and mobile (described as “Mobile Slow 4G”). |
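To make the idea of a pre-configured network profile concrete, here is a minimal sketch, assuming Puppeteer, that applies a throttling profile before loading a page. The bandwidth and latency values roughly approximate a slow 4G connection; they are illustrative, not either tool’s exact settings.

```ts
import puppeteer from 'puppeteer';

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Throughput is in bytes per second, latency in milliseconds.
  // Illustrative "slow 4G"-style values, not a real test profile.
  await page.emulateNetworkConditions({
    download: (1.6 * 1024 * 1024) / 8, // ~1.6 Mbps
    upload: (750 * 1024) / 8,          // ~750 Kbps
    latency: 150,                      // added round-trip latency
  });

  await page.goto('https://example.com');
  await browser.close();
}

main();
```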
The underlying hardware running performance tests affects the accuracy of simulated devices. Calibre and PageSpeed run on different types of machines.
To return results quickly, PageSpeed loads the page at full network speed, then applies a simulation to calculate what the performance metrics would have been under throttled conditions.
Calibre applies bandwidth and device throttling as it conducts the testing: metrics are recorded in real time.
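This difference is visible in Lighthouse, the engine behind PageSpeed. Below is a minimal sketch, assuming the Lighthouse Node API and a Chrome instance already listening on a debugging port: `throttlingMethod: 'simulate'` loads the page at full speed and derives throttled metrics mathematically, while `'devtools'` applies throttling while the page loads. The port and throttling numbers are illustrative.

```ts
import lighthouse from 'lighthouse';

async function run(url: string) {
  const result = await lighthouse(
    url,
    { port: 9222 }, // hypothetical debugging port of a running Chrome
    {
      extends: 'lighthouse:default',
      settings: {
        throttlingMethod: 'simulate', // or 'devtools' for applied throttling
        throttling: {
          rttMs: 150,               // target round-trip time
          throughputKbps: 1638.4,   // target bandwidth
          cpuSlowdownMultiplier: 4, // target CPU slowdown
        },
      },
    }
  );
  console.log(result?.lhr.audits['largest-contentful-paint']?.displayValue);
}

run('https://example.com');
```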
Based on the factors described above, it’s impossible to make an informed comparison between the two tools’ results. When observing performance data, it’s important to establish a baseline and compare new results relative to historic ones. Only when recording in a stable environment, under the same conditions, can we draw confident conclusions about our site’s performance.
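As a sketch of what comparing against your own history looks like in practice, the snippet below builds a baseline from the median of previous runs and reports the latest result relative to it. The metric name and values are hypothetical.

```ts
// A minimal sketch: compare the latest result against a baseline
// built from the same tool's historic runs under the same conditions.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const historicLcpMs = [2400, 2550, 2380, 2620, 2450]; // hypothetical past runs
const latestLcpMs = 2900;                              // hypothetical new run

const baseline = median(historicLcpMs);
const deltaPercent = ((latestLcpMs - baseline) / baseline) * 100;

console.log(
  `LCP changed ${deltaPercent.toFixed(1)}% vs baseline (${baseline}ms)`
);
```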