June 17, 2020
Release Notes is a monthly summary of new features, product improvements and upcoming releases.
On May 26, we updated our Test Agents to the latest version of Lighthouse. Lighthouse 6 comes with a completely revamped Performance Score algorithm, new metrics, audits, and more. All tests from May 26 onwards run using Lighthouse 6, which is why you are likely to observe changes to your Performance Score.
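Lighthouse 6's new Performance Score combines six per-metric scores with new weightings (First Contentful Paint 15%, Speed Index 15%, Largest Contentful Paint 25%, Time to Interactive 15%, Total Blocking Time 25%, Cumulative Layout Shift 5%). As a rough sketch of the top-level combination only (the real algorithm first maps each raw metric value onto a 0–1 score using log-normal curves, which this skips):

```python
# Sketch of how Lighthouse 6 combines per-metric scores into a single
# Performance Score. The real implementation first converts each raw
# metric value to a 0-1 score via a log-normal curve; here we assume
# those per-metric scores are already computed.

WEIGHTS = {
    "first-contentful-paint": 0.15,
    "speed-index": 0.15,
    "largest-contentful-paint": 0.25,
    "time-to-interactive": 0.15,
    "total-blocking-time": 0.25,
    "cumulative-layout-shift": 0.05,
}

def performance_score(metric_scores: dict) -> int:
    """Weighted average of per-metric scores (each 0-1), as a 0-100 score."""
    total = sum(metric_scores[metric] * weight for metric, weight in WEIGHTS.items())
    return round(total * 100)
```

Because Total Blocking Time and Largest Contentful Paint together carry half the weight, a regression in either moves the overall score far more than it did under Lighthouse 5.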
With Lighthouse 6, we are also reporting a new metric: Cumulative Layout Shift. Alongside Total Blocking Time and Largest Contentful Paint (present in Calibre since September and October 2019), CLS is part of a new generation of performance metrics that aim to portray user experience more accurately.
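Cumulative Layout Shift, as reported by Lighthouse 6, is the sum of all unexpected layout-shift scores over the page load; each individual shift's score is its impact fraction multiplied by its distance fraction, and shifts that occur right after user input are excluded. A minimal sketch of that aggregation:

```python
# Sketch of Cumulative Layout Shift aggregation (the original Lighthouse 6
# definition: the sum of every unexpected layout-shift score over the page
# load). A shift's score is impact_fraction * distance_fraction; shifts
# triggered shortly after user input are excluded.

def cumulative_layout_shift(shifts: list) -> float:
    """shifts: dicts with impact_fraction, distance_fraction, had_recent_input."""
    return sum(
        s["impact_fraction"] * s["distance_fraction"]
        for s in shifts
        if not s["had_recent_input"]
    )
```

For example, a shift that moves half the viewport's content (impact fraction 0.5) by 20% of the viewport height (distance fraction 0.2) contributes 0.1 to CLS.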
This also means that you can track all Core Web Vitals in Calibre: Largest Contentful Paint, Cumulative Layout Shift and Total Blocking Time (the lab-based equivalent of First Input Delay). We are committed to keeping our platform on par with the newest developments in the performance space.
The Main Thread Execution Timeline showcases all long task warnings and blocking times across the lifespan of the main thread. This means you can inspect not only the tasks that contribute to Total Blocking Time (which only counts long tasks between First Contentful Paint and Time to Interactive) but also long tasks on the main thread that fall outside that window.
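The definition above can be sketched directly: Total Blocking Time sums the blocking portion (everything beyond the 50 ms long-task threshold) of each main-thread task, after clipping the task to the First Contentful Paint–Time to Interactive window. A minimal sketch, assuming tasks are given as (start, end) timestamps in milliseconds:

```python
BLOCKING_THRESHOLD_MS = 50  # tasks longer than 50 ms count as "long tasks"

def total_blocking_time(tasks, fcp, tti):
    """Sum the blocking portion of long tasks between FCP and TTI.

    tasks: iterable of (start_ms, end_ms) pairs
    fcp, tti: window boundaries in milliseconds
    """
    tbt = 0.0
    for start, end in tasks:
        # Clip the task to the FCP-TTI window before measuring it.
        start, end = max(start, fcp), min(end, tti)
        duration = end - start
        if duration > BLOCKING_THRESHOLD_MS:
            tbt += duration - BLOCKING_THRESHOLD_MS
    return tbt
```

A 200 ms task inside the window contributes 150 ms; a 40 ms task contributes nothing, which is why the timeline also surfaces long tasks that TBT alone would hide.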
Along with the latest improvements to the Snapshot page showing Snapshot status, we have enhanced transparency around test failures. When a Snapshot fails, you will see a message stating the reason on the Snapshot page.
On June 4, we uncovered an issue in which Performance Scores for all mobile Test Profiles were miscalculated between May 26 and June 4, coinciding with our Lighthouse 6 release. This issue affected all customers with mobile Test Profiles. No other metrics or scores were affected.
An unforeseen change in how Lighthouse 6 handles mobile simulation caused Calibre to report mobile Performance Scores that were much lower than they should have been. On June 4, after consulting the Lighthouse Core Team, we re-calculated the Performance Score for all mobile Test Profiles within the affected period. The corrected scores are now reflected in the Calibre interface; they are not back-populated into data already exported via the API or Webhooks. No action is necessary on your part.
Providing stable, reliable performance reporting you can trust is our core focus. We understand how misreported metrics breach that trust and hinder your team’s ability to improve speed. This is not the standard we set for ourselves at Calibre, and we recognise that on this occasion we failed to provide accurate reporting for the given period. We are sorry for that. We are working towards even closer collaboration with the Lighthouse Team to prevent such regressions in the future.
Test Verification plays a significant role in our performance platform. We have documented how Calibre verifies each test it conducts to make sure the metrics we report are stable, reliable and trustworthy.
A few months ago, we shared an early preview of the new Performance Budgeting. Since then, we have made numerous improvements to the initial concept and scope.
With the new Budgets, Calibre will guide you in creating performance thresholds that not only make sense for your context but are also in line with industry recommendations for optimal speed.
You will be able to browse your Budgets by status or by the devices they affect, and drill down into specific Pages. You will be notified in Slack or via email at a frequency you select (more or less reactive).
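At its core, a performance budget is a set of per-metric thresholds checked against every test run, with a notification when any threshold is exceeded. A minimal sketch of that check (the metric names and threshold values here are illustrative assumptions, not Calibre's API):

```python
# Hypothetical sketch of a performance-budget check: compare measured
# metrics against per-metric thresholds and report any that are over
# budget. Metric names and limits below are illustrative only.

def check_budgets(measured: dict, budgets: dict) -> dict:
    """Return {metric: (measured_value, budget_limit)} for every metric over budget."""
    return {
        metric: (measured[metric], limit)
        for metric, limit in budgets.items()
        if measured.get(metric, 0) > limit
    }

budgets = {"largest-contentful-paint": 2500, "cumulative-layout-shift": 0.1}
measured = {"largest-contentful-paint": 3100, "cumulative-layout-shift": 0.05}
over_budget = check_budgets(measured, budgets)
```

Here `over_budget` would flag only Largest Contentful Paint, since 3100 ms exceeds the hypothetical 2500 ms limit while CLS stays under its threshold.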
We are excited to share Budgets 2.0 with you within the coming days.