Lighthouse Changes How Performance Score is Calculated

Karolina Szczur

March 27, 2020

This article explains the changes coming in Lighthouse 6.0.0, particularly to its scoring algorithm.


Lighthouse 6 is the first major release of the popular auditing tool since May 2019, introducing critical changes that will impact everyone relying on the scoring. Remember, the PageSpeed Score (which contributes to your SEO ranking) is the same as the Lighthouse Performance Score.

Changes to the scoring algorithm

The most significant change coming with Lighthouse 6 is a complete overhaul of its scoring algorithm: that is, how Lighthouse calculates the famous Performance Score.

In the previous major release, Performance Score was calculated based on 5 metrics. In Lighthouse 6, the score is measured with 6 metrics, removing First Meaningful Paint and First CPU Idle and replacing them with Largest Contentful Paint, Total Blocking Time and Cumulative Layout Shift (more on those new metrics below).

Additionally, the weighting for each one of the metrics contributing to the overall score has changed, putting more emphasis on the new generation of measurements focusing on user experience:

| Lighthouse 5 scoring metrics and their weight | Lighthouse 6 scoring metrics and their weight |
| --------------------------------------------- | --------------------------------------------- |
| First Contentful Paint (20%)                  | First Contentful Paint (15% ⬇️)               |
| Time to Interactive (33.3%)                   | Time to Interactive (15% ⬇️)                  |
| Speed Index (26.7%)                           | Speed Index (15% ⬇️)                          |
| First Meaningful Paint (6.7%)                 | Largest Contentful Paint (25% ⬆️)             |
| First CPU Idle (13.3%)                        | Total Blocking Time (25% ⬆️)                  |
| —                                             | Cumulative Layout Shift (5% ⬆️)               |
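To make the weighting concrete: the overall Performance Score is a weighted average of the individual metric scores. (In reality, Lighthouse first converts each raw metric value into a 0–100 metric score using log-normal curves; the sketch below assumes those per-metric scores are already available, and the sample values are purely hypothetical.)

```javascript
// Lighthouse 6 metric weights, as listed in the table above.
const weights = {
  firstContentfulPaint: 0.15,
  speedIndex: 0.15,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.15,
  totalBlockingTime: 0.25,
  cumulativeLayoutShift: 0.05,
};

// Combine per-metric scores (each 0–100) into an overall Performance Score.
function performanceScore(metricScores) {
  return Math.round(
    Object.entries(weights).reduce(
      (total, [metric, weight]) => total + weight * metricScores[metric],
      0
    )
  );
}

// Hypothetical per-metric scores, purely for illustration.
const example = performanceScore({
  firstContentfulPaint: 90,
  speedIndex: 80,
  largestContentfulPaint: 70,
  timeToInteractive: 85,
  totalBlockingTime: 60,
  cumulativeLayoutShift: 95,
}); // → 76
```

Because Largest Contentful Paint and Total Blocking Time each carry 25% of the weight, a weak result on either of them now drags the overall score down far more than it did under the Lighthouse 5 weighting.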

No matter how you use Lighthouse, whether for one-off tests or through a continuous performance testing tool, your Performance Score will change.


It’s essential to understand the changes in metrics and scoring and prepare for possible differences you will see in your reporting. Paul Irish from the Lighthouse team developed a handy calculator that can help you grasp the difference between your current and future scoring.

New performance metrics

A few months ago, we wrote about the new generation of performance metrics that focus on accurately portraying what users experience. Total Blocking Time and Largest Contentful Paint have been present in Lighthouse since last year but not visible in the test result viewer. At Calibre, we’ve been reporting those measurements since the initial release.


Total Blocking Time and Largest Contentful Paint, along with the newly added Cumulative Layout Shift, become available in Lighthouse 6 and serve as the basis for Performance Score calculations.


If you haven’t been tracking those three new metrics, now is the time to add them to your list.
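To give a feel for one of the new metrics: Cumulative Layout Shift sums the impact of unexpected layout movement during the page's lifetime. In the browser, this is observed via `PerformanceObserver` watching layout-shift entries; the sketch below models that accumulation with plain objects whose shape mirrors the Layout Instability API (the sample values are hypothetical). Shifts that follow recent user input (`hadRecentInput`) are excluded, since movement the user caused is expected.

```javascript
// Accumulate Cumulative Layout Shift from layout-shift entries.
// In the browser these entries come from PerformanceObserver('layout-shift');
// here they are plain objects with the same fields, for illustration.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput) // ignore input-driven shifts
    .reduce((total, entry) => total + entry.value, 0);
}

// Hypothetical entries: two unexpected shifts and one input-driven shift.
const cls = cumulativeLayoutShift([
  { value: 0.08, hadRecentInput: false },
  { value: 0.02, hadRecentInput: false },
  { value: 0.3, hadRecentInput: true }, // excluded: followed user input
]);
// cls is the total of the two non-input shifts, ≈ 0.1
```

Unlike the timing metrics, CLS is a unitless score rather than a millisecond value, which is why it can keep accumulating long after the page appears loaded.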

New device defaults

Up until now, Lighthouse was conducting its mobile tests using a Nexus 5X—an Android-based smartphone. In Lighthouse 6, the Motorola Moto G4 becomes the mobile benchmark. The two devices have similar CPU capabilities, but the size of the viewport will change. While this change shouldn’t cause much variance, with changes in reporting infrastructure and setups, it’s always possible to see minor differences.

If you’re a Calibre user, we’ve been automatically setting up all newly created Sites with a mobile test profile running on a Motorola Moto G4 for a few years.

New audits focused on JavaScript

JavaScript size and execution time are among the most significant sources of performance problems. There are a couple of new audits focused on modernizing and removing unnecessary script:

  • Legacy JavaScript audit: finds calls to old methods that now have better, modern alternatives.
  • Duplicated Script audit: alerts when the same module is included multiple times on a page. This audit requires source maps.

These audits are expected to ship in Lighthouse 6 or a later release.
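As an illustration of the territory the Legacy JavaScript audit covers, compare code written (or transpiled and polyfilled) for very old browsers with its modern equivalent. The function names below are hypothetical; the audit itself works by detecting known polyfill and transform signatures in shipped bundles.

```javascript
// Legacy-style membership check, typical of code targeting very old browsers.
function hasAdminLegacy(roles) {
  return roles.indexOf('admin') !== -1;
}

// Modern equivalent: Array.prototype.includes (ES2016). Shipping a polyfill
// for features like this to current browsers is what the audit flags.
function hasAdminModern(roles) {
  return roles.includes('admin');
}

hasAdminLegacy(['admin', 'editor']); // → true
hasAdminModern(['viewer']);          // → false
```

Both functions behave identically; the audit's point is that the polyfills and transpiler helpers backing the legacy style are dead weight for users on evergreen browsers.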

Browser extension to use PageSpeed Insights API

Chrome and Firefox Lighthouse extension tests will now run through the PageSpeed Insights API rather than local Lighthouse tests. The official recommendation is to use Lighthouse built directly into Chrome Developer Tools instead, located in the Audits tab.

For one-off tests, use PageSpeed Insights. Keep in mind that PageSpeed Insights, Chrome stable and Chrome Canary are on different release schedules (some short, some several months long). This means you might see disparities in results if tests are run on different Lighthouse versions.

Discrepancies in what infrastructure is deployed to run monitoring and which version of Lighthouse is used are common sources of confusion surrounding performance results. Another alternative is to use the Lighthouse command-line client. At the time of writing, npm install -g lighthouse@next will download the 6.0 beta release.

For stability and reliability of quality audits, we strongly recommend using continuous testing tools that run on the latest and most stable version of Lighthouse. At Calibre, we constantly review Lighthouse betas and update to stable versions as they become available.


There are dozens of changes coming to Lighthouse 6. Check the project changelog for a full list.

Karolina Szczur

Karolina is the Product Design Lead at Calibre. With years of design experience under her belt, she’s responsible for the comprehensive user experience spanning over product, visuals and content. Find her on Twitter or LinkedIn.
