I was initially sceptical of AVIF – I don't like the idea that the web has to pick up the scraps left by video formats. But wow, I'm seriously impressed with the results above.
Have you ever looked at the Network Panel in DevTools, or a waterfall in WebPageTest and wondered what determines the order of the resources, and how you can influence it?
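As a rough illustration (my own sketch, not from the article), one way to influence that order is to hint priorities to the browser. The preloaded URL is hypothetical, and `fetchpriority` is a newer, Chromium-first hint, so treat its availability as an assumption:

```ts
// Sketch: nudge the browser's download order with a preload hint.
// '/hero.avif' is a hypothetical asset.
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'image';
hint.href = '/hero.avif';
hint.setAttribute('fetchpriority', 'high'); // ask for an early, high-priority fetch
document.head.appendChild(hint);
```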
The techniques at your disposal can be (roughly) grouped into the following types of optimizations:
- Parallel execution (usually with web workers; see the worker sketch after this list)
- GPU acceleration
- Efficient Arrays and linear algebra routines
- Source compilation + optimization (asm.js, WebAssembly)
And each of these is particularly well suited to overcoming a specific performance bottleneck (or use case).
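Picking up the forward reference from the first item, here is a minimal sketch of handing work to a web worker (my own illustration, not from the article; the two-file split and module-worker support are assumptions):

```ts
// worker.ts (hypothetical file): CPU-heavy work runs here, off the main thread
self.onmessage = (event: MessageEvent<number[]>) => {
  const sum = event.data.reduce((acc, n) => acc + n, 0);
  // Cast is a shortcut for the DedicatedWorkerGlobalScope postMessage
  (self as unknown as Worker).postMessage(sum);
};

// main.ts: hand the work off so the UI thread stays responsive
const worker = new Worker(new URL('./worker.ts', import.meta.url), { type: 'module' });
worker.onmessage = (event: MessageEvent<number>) => {
  console.log('sum computed off the main thread:', event.data);
};
worker.postMessage(Array.from({ length: 1_000_000 }, (_, i) => i));
```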
content-visibility
enables the user agent to skip an element's rendering work, including layout and painting, until it is needed. Because rendering is skipped, if a large portion of your content is off-screen, leveraging the content-visibility property makes the initial user load much faster. It also allows for faster interactions with the on-screen content.
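A minimal sketch of how you might apply it (my own, not from the article; the selector and the size estimate are assumptions):

```ts
// Sketch: skip rendering work for sections far below the fold.
document.querySelectorAll<HTMLElement>('.below-fold').forEach((section) => {
  // Let the browser defer layout and paint until the section nears the viewport
  section.style.setProperty('content-visibility', 'auto');
  // Reserve an estimated height so the scrollbar stays stable while skipped
  section.style.setProperty('contain-intrinsic-size', 'auto 500px');
});
```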
Assuming that an optimised 404 page is only required because users will mistype a URL in their browser is short-sighted. As the HTTP Archive data has shown, there are many other reasons why a user may encounter a 404 response (even if they have no idea that they are!). The web performance impact of a user's browser loading an unoptimised 404 page can be huge, and it can have a real impact on their experience of your whole site. All it takes is a forgotten file or a misplaced ; in some JavaScript, and your users could be encountering it.
This is not a React hit piece, but rather a plea for consideration of how we do our work. Some of these performance pitfalls can be avoided if we take care to evaluate what tools make sense for the job, even for apps with a great deal of complex interactivity.
[…] if you use React or any VDOM library, you should spend some time investigating its impact on an array of devices. Get a cheap Android device and see how your app feels to use. Contrast that experience with your high-end devices.
The implementations of Back/Forward caches in popular browsers are helping to improve this experience even further, significantly speeding up the web for up to 20% of navigations!
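If you want to see how often your own pages benefit, a small sketch of mine (not from the article) can detect restores via the standard `pageshow` event:

```ts
// Sketch: detect pages served instantly from the back/forward cache.
window.addEventListener('pageshow', (event: PageTransitionEvent) => {
  if (event.persisted) {
    // persisted === true means this navigation was a bfcache restore
    console.log('restored from back/forward cache');
  }
});
```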
Now that you know these metrics, we can use them to understand what’s happening on our sites and to ask better questions.
Fortunes are made and lost based on how brands thread the needle between site speed and functionality. Despite this, the Retail Systems Research (RSR) survey reveals that the average retailer's websites are still too slow, and their mobile sites are even slower.
Even worse, many have no idea how they stack up.
… ~50% savings compared to JPEG, and ~20% savings compared to WebP.
… can be lossy or lossless, supports an alpha channel (transparency for UI and design elements), and can even store a series of animated frames (think lightweight, high-quality animated GIFs).
… one of the first image formats to support HDR color, offering higher brightness, greater color bit depth, and wider color gamuts.
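In practice you rarely need to feature-detect AVIF yourself: a `<picture>` element lets the browser negotiate the format. A quick sketch of my own (the file names are assumptions):

```ts
// Sketch: serve AVIF with WebP and JPEG fallbacks via <picture>.
// The browser picks the first <source> type it supports.
const picture = document.createElement('picture');

const avif = document.createElement('source');
avif.type = 'image/avif';
avif.srcset = '/images/hero.avif';

const webp = document.createElement('source');
webp.type = 'image/webp';
webp.srcset = '/images/hero.webp';

const img = document.createElement('img');
img.src = '/images/hero.jpg'; // fallback for browsers without AVIF/WebP
img.alt = 'Hero image';

picture.append(avif, webp, img);
document.body.appendChild(picture);
```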
Results show that enabling TLS 1.3 is a good idea. It offers more security and better performance for your users. It's also worth noting that TLS 1.3 will be a requirement to use the QUIC transport layer network protocol in the future. This will pave the way to HTTP/3. And once 0-RTT becomes more prevalent, for repeat website visits the purple on the graphs displayed above will disappear completely. Even faster connections for all (at least for those using a browser that supports it, anyway).
We must start by trying to use the option that damages the environment least, and that is text. Don’t assume that images are automatically more powerful than text. Sometimes, text does the job better.
[…] there is one less round trip until Application Data can be sent in TLS 1.3 as compared to TLS 1.2. This significantly improves performance, especially on high-latency networks.
However, I noticed that our max TLS version was 1.2 rather than the newer and faster 1.3, which removes an extra RTT for a faster handshake. It turns out the version of nginx-ingress we were using still defaulted to TLS 1.2 only. A quick ConfigMap change later, and we were on 1.3.
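If you want to verify which version your own server negotiates, here is a quick Node.js sketch of mine (the hostname is a placeholder):

```ts
// Sketch (Node.js): report the TLS version a server actually negotiates.
import { connect } from 'node:tls';

const socket = connect(
  { host: 'example.com', port: 443, servername: 'example.com' },
  () => {
    console.log('negotiated protocol:', socket.getProtocol()); // e.g. 'TLSv1.3'
    socket.end();
  }
);
socket.on('error', (err) => console.error(err));
```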
track and measure the performance of sites that use popular JavaScript frameworks and libraries.
Automates ImageOptim, ImageAlpha, and JPEGmini for Mac to make batch optimisation of images part of your automated build process.
It would be useful to have insight into the moment when assistive technology is able to interact with and communicate page content, so that we can know when a page is “ready” for all users, and not just some.
[…] It’d be interesting to know which existing metrics are irrelevant to assistive tech
[…] it might be interesting to measure “jank” and stability in the process of arriving at a usable accessibility tree.
[…] how are metrics like First Input Delay translating to the interaction time that someone experiences when using assistive technology?
An explanation of how Matt built the UK government's Synthetic Monitoring dashboards.
Lazy loading strategies for performance gains within React applications.
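One common strategy is component-level code splitting with React.lazy and Suspense; a minimal sketch of my own (the chart module is hypothetical):

```tsx
// Sketch: defer a heavy component's bundle until it first renders.
import React, { Suspense, lazy } from 'react';

// './HeavyChart' is a hypothetical module; its code is fetched on demand
const HeavyChart = lazy(() => import('./HeavyChart'));

export function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
    </Suspense>
  );
}
```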
What these metrics do give you is a baseline to look at to make an informed decision about whether the feature you are shipping is worth the overhead. That's something between the site and its users to decide in many ways, but Web Vitals gives you some tools to help quantify that.
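Collecting those numbers from real users takes only a few lines with the web-vitals library (assuming its v3 API here):

```ts
// Sketch: log field metrics with the web-vitals package (v3 API assumed).
import { onCLS, onFID, onLCP } from 'web-vitals';

onCLS((metric) => console.log('CLS', metric.value));
onFID((metric) => console.log('FID', metric.value));
onLCP((metric) => console.log('LCP', metric.value));
```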