Imagine you’re browsing a website. It loads painfully slowly, the images take ages to appear, the CSS and other assets keep downloading, and the layout breaks on mobile. Would that be a good user experience?


We live in a world where user perception changes every few seconds. If a webpage takes too long to load and render, chances are the user will not wait for it and will move on to another website – a possible loss of business and dollars for the company. To avoid that loss, there is increased scrutiny on how web pages load and how they can be made more efficient.

One school of thought is that the backend engine that powers the site should be optimised to handle the user load. That is true: your backend should be robust enough to return responses as quickly as possible, and performance, stress and load testing help you get there by exposing the bottlenecks. However, a web system rarely relies on the backend alone. In the end, the user interacts with the user interface, generally called the front end.

So it becomes imperative that the front end is optimised too. We have tools like JMeter, k6 and LoadRunner that can drive performance tests against the backend, but what about the front end? That is where Google’s Lighthouse comes into the picture.

What is Lighthouse

As published by Google

Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more.

You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it.
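
To make the Node-module route concrete, here is a minimal sketch. It assumes the lighthouse and chrome-launcher npm packages are installed and a recent Node version with ES-module support; adjust the URL and options to your own setup.

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch a headless Chrome instance for Lighthouse to drive.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

// Audit only the Performance category and ask for an HTML report.
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  output: 'html',
  onlyCategories: ['performance'],
});

if (result) {
  // Category scores are reported between 0 and 1; multiply by 100 for the familiar number.
  console.log('Performance score:', (result.lhr.categories.performance.score ?? 0) * 100);
  // result.report holds the generated HTML report as a string, ready to be saved to disk.
}

await chrome.kill();
```

From the command line, the equivalent is installing the package globally (npm install -g lighthouse) and running lighthouse followed by the URL, which writes the same report to disk.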

When you open Chrome DevTools, you can see the Lighthouse tab –

Google Lighthouse is a tool intended to help web developers improve the quality of the pages on their site.

It checks the loading speed, accessibility and overall search engine optimization (SEO) of a page. These checks matter because the quality of each page is a contributing factor to your website’s SEO as a whole.

Originally intended to analyze Progressive Web Apps (PWAs), Google has expanded the functionality of this tool to make it more accessible to the everyday web designer.

It runs tests, also known as audits, on how well the page loads under simulated conditions. The simulated conditions include a weak data connection on a slow device, packet loss and network throttling.

Seem unfair? It really isn’t.

The goal of Google Lighthouse is to help you to better optimize your site loading speeds, so testing from not-so-ideal conditions is an excellent real-world test to help you on your optimization journey.
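
As a rough illustration of those simulated conditions, a custom Lighthouse configuration can spell the throttling out explicitly. The sketch below follows Lighthouse’s documented config shape (throttlingMethod, throttling, formFactor), but the numbers are illustrative assumptions rather than the project’s exact defaults; you could pass such an object to the Node API or point the CLI at the file with --config-path.

```ts
// lighthouse.config.ts – illustrative values, not Lighthouse’s exact defaults.
export default {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'mobile',
    // 'simulate' models a slow network and CPU instead of really slowing the run down.
    throttlingMethod: 'simulate',
    throttling: {
      rttMs: 150,               // simulated round-trip time of a weak connection
      throughputKbps: 1638.4,   // simulated downlink, roughly a slow 4G link
      cpuSlowdownMultiplier: 4, // pretend the CPU is 4x slower, like a low-end phone
    },
  },
};
```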

Lighthouse Metrics

When you run Lighthouse on your webpage, it performs a number of different audits, and the results of those audits are combined into a score. There are multiple metrics, which can be broadly classified into these categories of audits –

  • Performance – The performance of your site’s pages relates to the overall speed at which your site loads and how well users are able to interact with your site content.

  • Accessibility – The Accessibility test checks your pages to be sure that a page header has been established. It also runs basic checks on the colours on your site to be sure it is easily readable, and whether or not your site text can be scaled for a visually-impaired user.

  • Best Practices – This test is more generalized than the other tests, but focuses heavily on security vulnerabilities.

  • SEO – SEO is something that many users are familiar with already. The test checks to be sure that your web page is easily discoverable by search engines.

  • Progressive Web App – The Progressive Web App (PWA) scan tests your pages to make sure that they can be easily downloaded on a viewer’s mobile device and accessed offline.

In this article we will discuss the first collection of metrics – which contribute to the Performance score of the webpage.

Performance Audits And Scoring

In Performance audits, the main thing being checked is how the webpage performs while it loads, and how soon it becomes available for the user to interact with. Let’s see an example – this is the Lighthouse test that I ran for one of the webpages (link given below) shared by Prashant Hegde – one of the good people to follow on LinkedIn for testing-related content.

URL : https://birdeatsbug.com/blog/five-clever-ideas-to-elevate-your-testing-team-to-the-next-level

As you can see from the photo above, there are multiple pointers given under the METRICS section, and that is something we will try to understand in this post. But before that, let’s understand how the scoring works.

Lighthouse assigns an overall performance score to a page based on how your page performed for all these metrics. The score can be anything from 0 to 100.

As the Lighthouse docs put it:

Once Lighthouse has gathered the performance metrics (mostly reported in milliseconds), it converts each raw metric value into a metric score from 0 to 100 by looking where the metric value falls on its Lighthouse scoring distribution. The scoring distribution is a log-normal distribution derived from the performance metrics of real website performance data on HTTP Archive.

More about this can be read in the Lighthouse scoring docs.
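
To make “where the metric value falls on a log-normal distribution” more tangible, here is a simplified sketch of the idea. It is not the actual Lighthouse source, and the control points used below (the value that should score 0.9 and the value that should score 0.5) are assumptions chosen purely for illustration.

```ts
// Complementary error function (Abramowitz & Stegun approximation 7.1.26).
function erfc(x: number): number {
  const z = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * z);
  const poly = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const result = poly * Math.exp(-z * z);
  return x >= 0 ? result : 2 - result;
}

// Maps a raw metric value (e.g. milliseconds) to a 0–1 score: the share of the
// log-normal distribution that is worse than this value. p10 is the value that
// should score 0.9, and median is the value that should score 0.5.
function logNormalScore(value: number, p10: number, median: number): number {
  const sigma = (Math.log(median) - Math.log(p10)) / 1.281551565545; // inverse Φ at 0.9
  const z = (Math.log(value) - Math.log(median)) / sigma;
  return 0.5 * erfc(z / Math.SQRT2);
}

// Illustrative control points only (loosely inspired by LCP-style thresholds):
console.log(logNormalScore(2400, 2500, 4000).toFixed(2)); // fast page     -> just above 0.90
console.log(logNormalScore(4000, 2500, 4000).toFixed(2)); // at the median -> 0.50
console.log(logNormalScore(8000, 2500, 4000).toFixed(2)); // slow page     -> roughly 0.03
```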

You will also notice that different colours are assigned to the different metrics and their values. The colour coding works like this –

  • 0–49 – Poor; the colour will be red.
  • 50–89 – Needs improvement; the colour will be orange.
  • 90–100 – Good; the colour will be green.

The overall score mentioned above is actually a weighted average of the different metric scores. Each metric is assigned a specific weight, so more heavily weighted metrics have a bigger effect on your overall Performance score. The individual metric scores are not visible in the report, but are calculated under the hood.
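
Here is a tiny sketch of that weighted average, with made-up metric scores and weights (the real weights vary between Lighthouse versions and are covered in Part 2):

```ts
// All numbers below are illustrative assumptions, not Lighthouse’s published weights.
const metrics: { name: string; score: number; weight: number }[] = [
  { name: 'First Contentful Paint',   score: 0.95, weight: 0.10 },
  { name: 'Speed Index',              score: 0.88, weight: 0.10 },
  { name: 'Largest Contentful Paint', score: 0.72, weight: 0.25 },
  { name: 'Total Blocking Time',      score: 0.60, weight: 0.30 },
  { name: 'Cumulative Layout Shift',  score: 0.98, weight: 0.25 },
];

// Overall Performance score = weighted average of the individual metric scores.
const overall = metrics.reduce((sum, m) => sum + m.score * m.weight, 0);
console.log(Math.round(overall * 100)); // 79 with the numbers above
```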

Performance Audit Metrics

If you look at the image above, you can see some of the metrics that come under the Performance audit. Let’s list them and see what they measure; a small sketch of how some of them can be observed in the browser follows the list.

  • First Contentful Paint (FCP): Measures the time it takes for the first piece of content – text or image – to become visible to users.

  • Largest Contentful Paint (LCP): Measures how long the page takes to render the largest element visible in the viewport.

  • Total Blocking Time (TBT): Measures the total amount of time during loading that the page is blocked from reacting to user input, like a mouse click.

  • Cumulative Layout Shift (CLS): Measures the layout shifts that occur as users access a page.

  • Speed Index (SI): Measures how quickly the visible parts of a webpage are loaded and displayed to the user.
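
As promised above, here is a small sketch of how some of these metrics can be watched directly in the browser with the standard PerformanceObserver API. FCP, LCP and layout shifts expose performance entries; TBT and Speed Index are lab metrics that Lighthouse derives itself, so they cannot be observed this way.

```ts
// Run in the browser (for example, pasted into the DevTools console), not in Node.

// First Contentful Paint: look for the 'first-contentful-paint' paint entry.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });

// Largest Contentful Paint: the last candidate reported before user input is the LCP.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate (ms):', entry.startTime);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Cumulative Layout Shift: add up 'layout-shift' entries not caused by recent input.
// (The real CLS uses session windows; this running sum is a simplification.)
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log('CLS so far:', cls.toFixed(3));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```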

Each of these metrics carries a different weight, and together they make up the overall Performance score. In Part 2 we will look at what each of these metrics means in more detail and what its weightage is in the overall result, and work through an actual example.