April 14, 2020
The web performance space is defined and quantified by a range of metrics. Those metrics fall into different categories, such as paint, runtime, request, byte size, or custom Google Lighthouse scores. There are dozens of metrics available, not to mention the possibility of creating custom measurements.
With such a vast landscape of metrics, often identified by acronyms known only to people within the performance space, it can be difficult to choose relevant measurements and to talk about performance effectively as a team.
Keeping up with the new generation of metrics, or learning that old ones are being deprecated, can also be challenging, as no single standard or entity provides that information.
Being effective in performance work means understanding standards, browser protocols, rendering, metrics, and knowing where to get reliable information about new developments.
It’s a lot to take in.
While metrics rarely disappear from browsers and reporting tools overnight, we sometimes learn that a metric lacks the properties necessary to make it reliable and implementable. The most recent example of such a metric is First Meaningful Paint.
First Meaningful Paint (FMP) is the time at which the largest area of above-the-fold content was painted to the screen. FMP was one of the metrics contributing to the Lighthouse Performance Score calculation, but as of Lighthouse 6, it has been removed. FMP has often been used in tandem with First Contentful Paint (FCP), a metric that returns a similar landmark timing but is calculated using different methods.
If you are familiar with the new generation of performance metrics, the FMP definition sounds similar to that of Largest Contentful Paint (LCP), which is now recommended as its replacement. Both metrics focus on when the largest (usually imagery) element becomes visible in the user’s viewport. We will discuss LCP in more detail later.
There are several reasons to rethink using First Meaningful Paint:
Each performance metric should have several properties that make it a good top-level metric. First Meaningful Paint has been reported to be overly sensitive to small changes in page load, which goes against the stability property that’s vital to any metric.
Calculating FMP also relies on very specific Blink (the rendering engine used by Chromium) events, which makes it impossible to implement in browsers other than Chrome.
While performance metrics might become available in developer tools of different browsers at varying times, a metric that can only be used by a single browser prevents the standardisation process that’s essential to the development of the web.
FMP previously contributed to Lighthouse Performance Score calculations, but it has been removed and replaced with a set of more modern performance metrics. As of Lighthouse 6, it no longer affects your Performance Score.
All of those factors make First Meaningful Paint a less effective and potentially misleading measurement. Unless you’ve been tracking and getting benefits from FMP, we suggest focusing on replacement metrics described below.
If you are going to stop tracking First Meaningful Paint, there are two replacement metrics to focus on.
The closest replacement (considering the rendering timeline and order) will be First Contentful Paint: the time at which the browser rendered any text, image, non-white canvas or SVG content. FCP remains a good indicator of the initial contentful load and an essential marker within the loading timeline. FCP will be useful to track in any context.
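In the browser, FCP is exposed through the Paint Timing API as a `paint` entry named `first-contentful-paint`. As a minimal sketch (the helper function and mock entry objects below are illustrative, not part of any library), you can look the timing up from the entries reported by `performance.getEntriesByType('paint')` or a `PerformanceObserver`:

```javascript
// Sketch: reading First Contentful Paint from Paint Timing API entries.
// In a browser you would obtain real entries via
//   performance.getEntriesByType('paint')
// or a PerformanceObserver subscribed to the 'paint' entry type.
function firstContentfulPaint(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-contentful-paint');
  return entry ? entry.startTime : null; // milliseconds since navigation start
}

// Mock entries shaped like PerformancePaintTiming, for illustration only:
const entries = [
  { name: 'first-paint', startTime: 812.4 },
  { name: 'first-contentful-paint', startTime: 845.9 },
];
console.log(firstContentfulPaint(entries)); // → 845.9
```

Returning `null` when the entry is absent matters in practice: background tabs and some browsers may never report a paint entry, so field-data collection code should treat FCP as optional.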
The second metric is Largest Contentful Paint, which reports the render time of the largest element in the viewport that falls in the category of images, videos or certain block elements. LCP is likely to be reported at a much later time than FMP. It’s a relatively new metric that will be especially helpful in the context of news, editorial and e-commerce sites.
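Unlike FCP, LCP is reported as a stream of candidates: the browser emits a new `largest-contentful-paint` entry each time a larger element renders, and the final candidate is the reported LCP. A minimal sketch (the helper and mock entries are illustrative assumptions, not a library API):

```javascript
// Sketch: the last 'largest-contentful-paint' candidate is the final LCP.
// In a browser, candidates arrive via:
//   new PerformanceObserver((list) => {
//     const entries = list.getEntries();
//     console.log('LCP so far:', entries[entries.length - 1].startTime);
//   }).observe({ type: 'largest-contentful-paint', buffered: true });
function largestContentfulPaint(lcpEntries) {
  if (lcpEntries.length === 0) return null;
  return lcpEntries[lcpEntries.length - 1].startTime;
}

// Mock candidates for illustration: a heading paints first, then a larger
// hero image replaces it as the largest element in the viewport.
const candidates = [
  { element: 'h1', startTime: 980.0 },
  { element: 'img', startTime: 2180.5 },
];
console.log(largestContentfulPaint(candidates)); // → 2180.5
```

Because candidates keep arriving until the user interacts with the page, real-world collection code typically finalises LCP on the first input or when the page is hidden.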
Both of these metrics are used to calculate the new Lighthouse Performance Score and are trackable in Calibre. Whether or not you decide to replace FMP tracking, they are good candidates to add to your monitoring set.
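To make the score calculation concrete, here is a rough sketch of how per-metric scores roll up into a single Performance Score. The weights below are the published Lighthouse 6.0 weightings; the 0–1 metric scores would normally come from Lighthouse’s log-normal scoring curves, which are omitted here, and the weights themselves are tuned between Lighthouse versions.

```javascript
// Illustration only: Lighthouse 6.0 metric weights (FMP no longer appears).
const LIGHTHOUSE_6_WEIGHTS = {
  'first-contentful-paint': 0.15,
  'speed-index': 0.15,
  'largest-contentful-paint': 0.25,
  'interactive': 0.15,
  'total-blocking-time': 0.25,
  'cumulative-layout-shift': 0.05,
};

// metricScores maps metric id → a 0–1 score (from Lighthouse's scoring
// curves, not shown here). Missing metrics are treated as scoring 0.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(LIGHTHOUSE_6_WEIGHTS)) {
    total += weight * (metricScores[metric] ?? 0);
  }
  return Math.round(total * 100); // the familiar 0–100 score
}
```

With LCP at a quarter of the total weight, a slow largest-element render now affects the score far more directly than FMP ever did.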
Some performance metrics are more useful than others depending on your context. To be successful, weigh the characteristics of each metric and what it tracks against your business and user experience goals.