December 10, 2019
In this article, we will explain critical web performance concepts and strategies. Understanding them will empower you to begin optimising the speed of your sites and applications.
At a high level, web performance is the speed at which requested content is downloaded, rendered and made ready to interact with. That time is quantified in several ways through various performance metrics (which we will talk about in detail later on).
What makes performance so complex is that what we deem fast by observing metrics alone might not accurately reflect the experience of our users. That is why we distinguish between objective performance (measurable by metrics) and subjective performance (perceived by people).
Web performance is continually balancing between improving key metrics and testing our assumptions with people using our software.
Performance doesn’t have to be specific to development. You can go as far as to define performance as one of your product principles—how easy and fast is the user able to achieve their goals?
Poor web performance has a far-reaching impact not only on user experience and accessibility, but also on business growth and the ability to advertise and market your products.
When a 1-second delay can cost up to 20% of conversions, performance becomes too costly to ignore.
The user experience impact of bad performance is quantifiable. 53% of mobile users abandon pages that fail to load within a mere 3 seconds. Our perception is also limited; after 1 second, our flow of thought is interrupted, and after 10 seconds we are distracted from the task we were trying to complete.
Google’s search ranking algorithm penalises slow sites, as results are ordered based on mobile speed. Additionally, slow landing pages can make your ads more expensive to run. In the future, Google may visibly mark slow sites in its browser interfaces, effectively signalling to viewers that they’re about to be subjected to a poor experience.
Slow speed is a design vector that directly reflects on your brand—its trustworthiness and quality.
Dozens of performance metrics portray different stages in the process of requesting, rendering and interacting with a page. Choosing which ones are relevant to you and also accurately represent the user experience can be difficult. Understanding how they sit within the spectrum of page delivery and execution is crucial.
Here’s how we categorise performance metrics:
| Category | Description | Examples |
| --- | --- | --- |
| Paint | Metrics describing the visual changes that happen during page load. | First Paint, First Contentful Paint, First Meaningful Paint, Largest Contentful Paint |
| Byte Size | Metrics portraying the uncompressed size of assets once they have been delivered to users. | Total Page Size in Bytes, Total Image Size in Bytes, Total CSS Size in Bytes |
| Lighthouse | Aggregated audit scores provided by Lighthouse, based on other performance metrics and custom conditions. | Performance Score, Accessibility Score, SEO Score, Best Practices Score |
When beginning to track performance, a useful set of metrics to start with would be Time to Interactive, Largest Contentful Paint, Total Blocking Time, First Contentful Paint, Time To First Byte and Lighthouse Performance Score.
This selection provides a comprehensive collection of metrics portraying not only overall performance but also user-experience from the standpoint of visuals and interactivity.
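As a minimal sketch of how two of these metrics are derived, the helpers below reduce raw browser performance entries into metric values. In a real page the entries would come from the `PerformanceObserver` API; here they are plain objects so the logic is easy to follow, and the function names are illustrative.

```javascript
// Largest Contentful Paint reports a series of candidates as progressively
// bigger elements render; the metric's value is the startTime of the last
// candidate observed.
function largestContentfulPaint(lcpEntries) {
  if (lcpEntries.length === 0) return null;
  return lcpEntries[lcpEntries.length - 1].startTime;
}

// Time To First Byte is the delay between starting the navigation and
// receiving the first byte of the response.
function timeToFirstByte(navigationEntry) {
  return navigationEntry.responseStart - navigationEntry.startTime;
}
```

In a browser, the LCP entries would be collected with `new PerformanceObserver(callback).observe({ type: "largest-contentful-paint", buffered: true })`, and the navigation entry with `performance.getEntriesByType("navigation")`.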
Significant problems will inevitably depend on your context, but there are types of assets and specific areas that cause most performance headaches.
What often gets missed when analysing scripts is that their uncompressed size is often 2-3× larger than the optimised package sent over the network, usually as a single file that has to be parsed in full. With size comes the hefty price of execution time, which worsens on slower networks and devices.
When talking about script, it’s important to mention the impact of third-party resources. External scripts are harder to control and often go unoptimised, as developers defer responsibility to service providers. When the majority of sites serve approximately 40 external scripts, their impact cannot be ignored.
We can be effective at managing external script by adopting progressive loading strategies. Download and execute only what’s relevant for a given customer context (a good example is our fake live chat implementation that made Time to Interactive 30% faster).
Appropriate management and prioritisation of critical requests is another area that can make or break the performance of your sites. With the power to direct how browsers fetch assets through priority hints, we can ensure essential assets are available in time to provide a speedy rendering experience.
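In markup, priority hints take the form of the `fetchpriority` attribute, often combined with `rel="preload"` for early discovery. The file names below are placeholders, not real assets:

```html
<!-- Fetch the hero image eagerly; it is likely the Largest Contentful Paint element. -->
<img src="hero.jpg" fetchpriority="high" alt="Hero" />

<!-- Preload the critical stylesheet so the browser discovers it immediately. -->
<link rel="preload" href="critical.css" as="style" />

<!-- Deprioritise an analytics script that is not needed for first paint. -->
<script src="analytics.js" fetchpriority="low" defer></script>
```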
Knowing these common areas where bottlenecks arise will ensure you know where to look for the most significant improvements.
Choosing the right monitoring tool that suits your needs and scale will have a significant impact on how successful you are. There are several factors to look out for, no matter what your specific requirements might be.
The first of them is an overarching principle guiding your performance work.
Understanding your customers is critical. It’s necessary to identify the spectrum of your audience.
Only with visibility into where people are, which devices they use and what network conditions they are under will you be able to adjust your performance strategy accordingly. Just as we analyse behaviours, conditions and personas while trying to build successful products, performance work calls for similarly intentional thinking.
Often we see organisations using a range of different tools to measure speed. What they often don’t realise is that using several tools in parallel is doomed to fail. Each provider varies in testing infrastructure—the location of testing machines, network conditions, devices with differing CPUs. Trying to draw comparisons between platforms running on various stacks and algorithms (WebPageTest, Lighthouse and PageSpeed, or a custom solution) is impossible, and will introduce unnecessary confusion about the trustworthiness of the metrics themselves. Choose one platform and commit to long-term monitoring to be able to draw meaningful conclusions.
How reliable the results are is another essential factor to look for. Adequate test verification will prevent suspicious spikes from unnecessarily drawing attention to false positives. Seek out tools that have systems in place to ensure measurements can be trusted at all times.
Tooling that makes the state of user experience transparent beyond the development team will help you foster a performance culture and get stakeholder buy-in. Platforms offering features such as Slack alerts, email reporting and GitHub Pull Request comparisons will make performance an unmissable part of your development process.
Whatever solution you end up choosing, it should be reliable and trustworthy, support the new generation of metrics, be extensible through robust APIs, and make reports available beyond your development team.
Hopefully, you have now learned enough about web performance to confidently start monitoring and improving the user experience and adoption of your products.
If you’re interested in keeping up with performance and web platform news, sign up for our fortnightly mailing—Performance Newsletter.
Be notified about new product features, releases and performance research.