Karolina Szczur
November 22, 2023
Happiness score, experience score, or performance score—many have tried to distil people’s experience into a singular number (which, in the case of performance, has serious downsides).
But is it even possible to boil down human perception to a single number? What’s the gap between measured and perceived performance, and which one should we use as our goal?
Perceived performance refers to how fast or responsive a website or app feels, compared to how fast it is as reported by metrics. Where metric-driven performance relies mainly on development improvements, managing perceived performance is a combination of design choices and leveraging the psychology of how people perceive time and speed.
Because perceived performance is subjective and dependent on external factors (what setting is someone browsing a site in? What device are they using? What’s their network like? What’s their emotional state?), it’s notoriously challenging to quantify.
Despite this complexity, there are a handful of guiding questions we can ask to help determine whether the perception of speed will be positive:
While performance measurements are indispensable, how responsive people perceive our user interfaces to be should be the north star for prioritising speed efforts.
Before we can get into actionable advice on how to work with perceived speed, we need to learn the basics of how people understand time and what affects its perception.
Time might be a scaffolding for our lives, but it’s a subjective and ever-changing concept. According to numerous studies, our perception of time can be altered by our emotional state, age, culture, and whether we’re bored or craving something. Ever heard “time flies when you’re having fun” or stared down a pot of water waiting for it to boil? Then you’ve already experienced how we feel time expanding and contracting.
Since time is highly subjective, we need principles to guide us in knowing what “fast” means, what it depends on and how we can deliver it for everyone. One commonly invoked principle in the field of psychology is the Weber-Fechner law, which defines the Just Noticeable Difference (JND) in response to stimulation.
Research indicates that we can only notice a change of more than 20%.
In the web performance context, this means aiming for improvements well above the 20% threshold. This number doesn’t mean that smaller, cumulative improvements don’t count, but it gives us invaluable information about how human perception works.
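To make that concrete, here’s a minimal sketch of the rule applied to load times (the function and constant names are illustrative, not from any library):

```ts
// The Just Noticeable Difference rule applied to load times.
// JND_RATIO reflects the ~20% threshold cited above; names are illustrative.
const JND_RATIO = 0.2;

function isNoticeable(currentMs: number, improvedMs: number): boolean {
  // People tend to notice a change only once it exceeds ~20% of the baseline.
  return (currentMs - improvedMs) / currentMs >= JND_RATIO;
}

// A page loading in 3,000 ms needs to reach ~2,400 ms before visitors feel it:
console.log(isNoticeable(3000, 2700)); // false (a 10% gain goes unnoticed)
console.log(isNoticeable(3000, 2300)); // true (a 23% gain is perceptible)
```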
We’re also better at detecting small differences in time over shorter intervals than longer ones. We’re especially affected by delays when performing routine, quick tasks. That points us to optimising the actions people perform most frequently within our sites and apps, such as adding to cart, checking out, or create, read, update and delete (CRUD) actions.
Moreover, our expectations of timeliness are tightly tied to how complex we expect an interaction to be. The more complex the task we’re performing, the more forgiving we might be of delays:
Our expectations are also tightly tied to different modes of input (mouse, keyboard, speech, etc.) and interface feedback (sound, visuals, or haptics). We’ll have different expectations depending on how we use the web and what feedback we receive.
How we perceive speed also connects to the idea of flow. In the 1960s, psychology professor Mihaly Csikszentmihalyi established the definition of flow:
[Flow] is being fully absorbed in activity—it’s a natural state of being productively engaged with a task without being aware of underlying technology.
When website or app performance enhances the feeling of flow, we’re more likely to have a positive sentiment about its performance.
When talking about staying focused on a task, how do we quantify the timeframes for maintaining flow? Many cite Jakob Nielsen, who specifies three limits of human attention (0.1 second, 1 second and 10 seconds). While this simplification might be useful to some, there’s a difference between “instant” and “immediate”, and between “attentive” and “non-attentive” spans (which you might call active and passive).
Bucketing timeframes by attention span (based on Steve Souders’ definitions and research by Rina A. Doherty and Paul Sorenson) proves more useful when assessing performance, depending on the length of the operation and its perceived complexity:
| Attention | Category | Response time | Description |
|---|---|---|---|
| Attentive | Instantaneous | below 300 ms | Perceived as a closed-loop system, where people are in direct control. |
| Attentive | Immediate | 300 ms–1 sec | Perceived as easy to perform. |
| Attentive | Transient | 1–5 sec | Perceived as requiring some simple processing, but people still feel like they are making continuous progress. People are unlikely to disengage from the task flow. |
| Attentive | Attention span | 5–10 sec | Perceived as requiring more wait time. People need informative feedback to stay engaged. |
| Non-attentive | Non-attentive | 10 sec–5 min | Perceived as requiring more complex processing. People are likely to disengage and multi-task. |
| Non-attentive | Walk-away | above 5 min | Perceived as requiring intensive processing. People won’t stay engaged with the task. |
While the non-attentive timeframes are far more generous, this doesn’t mean we should stop aiming for thresholds as low as possible. What they give us is a more granular distinction between attentive and non-attentive spans. We can then use this information to design flows based on the duration of an action, provide appropriate, timely feedback, and prioritise performance work.
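As a rough illustration, mapping a measured duration to these buckets could look like the sketch below (boundaries come from the table; the type and function names are hypothetical):

```ts
// Classifying a measured operation duration into the attention buckets above.
// Boundaries mirror the table; type and function names are hypothetical.
type AttentionCategory =
  | "instantaneous"
  | "immediate"
  | "transient"
  | "attention span"
  | "non-attentive"
  | "walk-away";

function categoriseDuration(ms: number): AttentionCategory {
  if (ms < 300) return "instantaneous";
  if (ms < 1_000) return "immediate";
  if (ms < 5_000) return "transient";
  if (ms < 10_000) return "attention span";
  if (ms < 300_000) return "non-attentive"; // 10 sec to 5 min
  return "walk-away";
}
```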
There are dozens of other heuristics, laws and principles on time perception, which we aren’t going to cover. But what the above information gives you is an understanding of:
We know that providing feedback can reduce people’s perceived wait time and the stress that comes with it. One of the most common cases where feedback is necessary is after an interaction, especially when the response takes time. The more timely the status and progress updates we give, the more control and certainty people have while browsing.
Managing these updates often involves loading indicators. To make every interaction feel snappy, we should choose them based on the length of the response and the predicted wait time for an action to complete:
| Load time | Wait time | Strategy |
|---|---|---|
| below 1 sec | – | No loader needed |
| 1–2 sec | – | Skeleton screen or localised spinner |
| 2–10 sec | Fixed | Time indicator |
| 2–10 sec | Open-ended | Progress bar or step indicator |
| above 10 sec | Fixed | Percentage indicator or background process indicator |
| above 10 sec | Open-ended | Notify people when the task is complete |
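A minimal sketch of this escalation in the browser (showSkeleton, showProgress and hideIndicators are hypothetical UI hooks; the thresholds come from the table above):

```ts
// Escalate feedback while awaiting a response, per the table above.
// showSkeleton/showProgress/hideIndicators are hypothetical UI hooks.
async function withLoadingFeedback<T>(task: Promise<T>): Promise<T> {
  // Under 1 sec: show nothing, so fast responses feel instant.
  const skeletonTimer = setTimeout(showSkeleton, 1_000);
  // Past 2 sec: switch to a progress or step indicator.
  const progressTimer = setTimeout(showProgress, 2_000);
  try {
    return await task;
  } finally {
    clearTimeout(skeletonTimer);
    clearTimeout(progressTimer);
    hideIndicators();
  }
}

declare function showSkeleton(): void;
declare function showProgress(): void;
declare function hideIndicators(): void;
```

Delaying the first indicator by a second avoids a flash of spinner for responses that come back quickly enough to need no feedback at all.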
There are also additional considerations for loading indicators, besides choosing the right type for wait time:
Appropriate communication of status and progress helps manage expectations and softens the effect of delays we can’t optimise away. You can use paint metrics (such as Largest Contentful Paint and First Contentful Paint) and runtime metrics (such as Interaction to Next Paint) to assess loading, but they won’t replace step-by-step visual testing.
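For example, collecting these metrics from real visitors takes only a few lines, assuming the open-source web-vitals package (any real user monitoring library exposing the same metrics works equally well):

```ts
// Field measurement with the `web-vitals` package (an assumption; any
// real-user-monitoring setup that reports these metrics works too).
import { onFCP, onLCP, onINP } from "web-vitals";

function report(metric: { name: string; value: number }): void {
  // metric.value is in milliseconds; send it to your analytics backend.
  console.log(`${metric.name}: ${Math.round(metric.value)} ms`);
}

onFCP(report);
onLCP(report);
onINP(report);
```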
Visitors are never more aware of how long a page takes to load than when they have nothing to do. Luckily, there are a few strategies you can use to keep people’s attention occupied so that it doesn’t drop off:
Implementing these improvements means being aware of when visitors will likely be waiting. Make sure to track load times for all your pages and analyse the most common behaviours and actions, so you can implement optimisations where they’re most needed. Similarly to the first strategy, this approach helps us meaningfully manage delays.
Sudden page movements are the bane of online existence. It’s frustrating when you go to click a menu item and it suddenly shifts up or down by a substantial amount, so you land somewhere you didn’t intend. These shifts also destroy any illusion that your site is loading quickly, revealing that the page is still very much a work in progress.
Shifts happen for several reasons. To optimise how pages and their building blocks render:
Luckily, unexpected movement is something we can test for and address, thanks to the Cumulative Layout Shift (CLS) metric. You can find actionable strategies for avoiding sudden page movements in our Cumulative Layout Shift guide. You can also quickly check CLS for free with the Core Web Vitals Test. Avoiding sudden page movements helps us communicate the reliability and readiness of a site—both critical to perceived performance.
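As a sketch, the browser’s Layout Instability API (the same data that feeds CLS) lets you log shifts as they happen:

```ts
// Log layout shifts via the Layout Instability API, which powers CLS.
let cumulativeShift = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // The spec's LayoutShift entry carries `value` and `hadRecentInput`.
    const shift = entry as PerformanceEntry & {
      value: number;
      hadRecentInput: boolean;
    };
    // CLS ignores shifts caused by recent user input.
    if (!shift.hadRecentInput) {
      cumulativeShift += shift.value;
      console.log(`Shift: ${shift.value.toFixed(4)}, total: ${cumulativeShift.toFixed(4)}`);
    }
  }
}).observe({ type: "layout-shift", buffered: true });
```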
Spinning fans, a hot battery, and a blank or unresponsive page—all hallmarks of a browser struggling to render a requested page. When things go this badly wrong, the perception of the service (or brand), as well as people’s trust in it, is already negatively impacted. Fortunately, we have well-tested strategies to prevent these situations:
Reducing and optimising script resources prevents these signs of intensive processing and reduces the likelihood of delays. We can use Interaction to Next Paint and Total Blocking Time to quantify and track these efforts.
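One common tactic is to break long-running scripts into chunks that yield back to the main thread; a minimal sketch (processItem stands in for your own per-item work):

```ts
// Break a long task into chunks that yield to the main thread, so the
// browser can handle input and paint between chunks (lowering TBT and INP).
async function processInChunks<T>(
  items: T[],
  processItem: (item: T) => void, // stand-in for your own per-item work
  budgetMs = 50 // tasks over 50 ms count as "long tasks"
): Promise<void> {
  let deadline = performance.now() + budgetMs;
  for (const item of items) {
    processItem(item);
    if (performance.now() >= deadline) {
      // Yield so pending input and rendering can run.
      await new Promise((resolve) => setTimeout(resolve, 0));
      deadline = performance.now() + budgetMs;
    }
  }
}
```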
When people mention web performance, they usually focus on development-first aspects like JavaScript frameworks and performance metrics. These are all important for creating a better experience for your visitors—but smart design, rooted in behavioural psychology, also has its role in creating better UX and performance, perceived or otherwise.
To learn more about how you can design your site for better performance, subscribe to our Performance Newsletter.
This post is also available in Hebrew, thanks to Alik Zem: read the post in Hebrew.