The <div> is the most versatile and most-used element in HTML. It represents nothing, while allowing developers to manipulate it into almost anything by use...
By using techniques that assess the performance impact of a build in relation to the performance characteristics (magnitude, variance, trend) of adjacent builds, we can more confidently distinguish genuine regressions from metrics that are elevated for other reasons (e.g. inherited code, regressions in previous builds, or one-off data spikes due to test irregularities). We also spend less time chasing false positives, and we no longer need to manually assign a threshold to each result: the data itself now sets the thresholds dynamically.
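To make that concrete, here is a minimal sketch (not the actual system described above) of a dynamic threshold derived from the mean and variance of adjacent builds:

```ts
// Hypothetical sketch: flag a build as a regression only if its metric
// sits well outside the recent distribution of adjacent builds.
function isRegression(history: number[], current: number, k = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stddev = Math.sqrt(variance);
  // Dynamic threshold: k standard deviations above the rolling mean,
  // so noisy metrics get wider bands than stable ones.
  return current > mean + k * stddev;
}

// Example: against a stable series, 112 stands out; against a noisy
// series, the same value would fall inside the band.
console.log(isRegression([100, 101, 99, 100, 102], 112)); // true
```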
We see in the data that the presence of certain errors leads to user-frustration actions that have bottom-line implications for the business serving the site. The two most prominent cases of this are reloads and abandonments (page exits).
Why? Because it's pretty hard to “understand” that page exit or reload in the session replay (SR).
Indeed, we just see the error at the very last second and then, boom, the replay ends (or the next replay starts, in the case of a reload).
Today, Lyon, France hosted the We Love Speed conference, which focuses on everything related to web performance. Even though the talks were only in French, I'll write this recap in English so that more people can learn from it.
With the new responsiveness metrics, we measure the latency of user interactions (how your customers navigate and act on your website) rather than individual events. A user interaction, such as a tap (click), drag, or keyboard interaction, usually triggers multiple events.
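For instance, a minimal sketch of observing whole interactions with the Event Timing API, assuming a browser (and TypeScript lib) that supports it, might look like this:

```ts
// Each entry covers a whole interaction event, from input to next paint.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const e = entry as PerformanceEventTiming;
    // duration spans input delay + processing time + presentation delay.
    console.log(`${e.name}: ${Math.round(e.duration)}ms`);
  }
});
// durationThreshold filters out fast interactions; 16ms is the minimum.
observer.observe({ type: 'event', durationThreshold: 16 });
```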
I’ve learned that one reason there isn’t a good reference for the role of the CTO is that the size of the company and the expectations of the CEO define the job.
"AMP Has Irreparably Damaged Publishers’ Trust in Google-led Initiatives", Sarah Gooding (@wptavern)
In summary, it claims that Google falsely told publishers that adopting AMP would improve load times, even though the company’s employees knew that it only improved the “median of performance” and actually loaded more slowly than some speed-optimization techniques publishers had been using.
It’s easy to get excited about new techniques for measuring and improving site speed, but this focus on the technical side of performance can lead us to think of speed as a technical issue, rather than a business issue with technical roots.
Almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds.
[…]
- If you're going to talk about the performance of a production site, use real-user data.
- If you're going to use a single number to cite a performance result, specify where that number falls in the distribution (see the sketch after this list).
- When talking about real-user performance, be specific about the time period.
- If you do want to brag about your Lighthouse score or other lab results, do so in the context of the larger performance story.
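A minimal sketch of that second point, assuming you have an array of real-user samples (the LCP values below are made up):

```ts
// Hypothetical helper: report where a value falls in a RUM distribution
// instead of quoting a single number. Uses the nearest-rank method.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1,
  );
  return sorted[Math.max(0, index)];
}

const lcpSamples = [1200, 1900, 2400, 3100, 4800]; // made-up LCP values (ms)
console.log(`p75 LCP: ${percentile(lcpSamples, 75)}ms`); // p75 LCP: 3100ms
```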
Core Web Vitals are a measurable SEO ranking factor. This data study shows changes seen during the Page Experience Update (July–August 2021).
Automation typically covers purely code-based tasks that never touch a browser, but some tasks need to drive the browser as a human would, like performing a search on a site. How can we leverage browser-automation tools and package them into a serverless API endpoint to make them easily accessible?
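As a rough sketch, assuming a runtime where Puppeteer can launch a Chromium binary, such an endpoint might look like this (the handler shape, URL, and selectors are placeholders, not anything from the article):

```ts
import puppeteer from 'puppeteer';

// Hypothetical serverless handler that runs a search in a headless browser.
export async function handler(req: { query: string }) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto('https://example.com/search', { waitUntil: 'networkidle2' });
    await page.type('input[name="q"]', req.query); // type like a human would
    await page.keyboard.press('Enter');
    await page.waitForSelector('.results');
    const titles = await page.$$eval('.results li', (els) =>
      els.map((el) => el.textContent?.trim()),
    );
    return { statusCode: 200, body: JSON.stringify(titles) };
  } finally {
    await browser.close(); // always release the browser in serverless
  }
}
```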
Render-blocking resources are a common hurdle to rendering your page faster. They impact your Web Vitals, which now affect your SEO. Slow render times also frustrate your users and can cause them to abandon your page.
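One common mitigation, sketched below with a placeholder URL, is to inject non-critical CSS from script so the parser never blocks on it:

```ts
// Sketch: script-inserted stylesheets don't block parsing the way
// <link rel="stylesheet"> in the initial HTML does.
function loadStylesheetAsync(href: string): void {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link);
}

// Defer non-critical CSS until the browser is otherwise idle.
requestIdleCallback(() => loadStylesheetAsync('/css/non-critical.css'));
```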
Writing about browser performance is hard, but it’s not fruitless. I’ve had enough successes over the years (and enough stubbornness and curiosity, I guess) that I keep doing it.
Make use of the requestIdleCallback API to improve input responsiveness while still performing JavaScript work during typing.
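A minimal sketch of that pattern, where '#search' and indexForSearch() are placeholders for your own input and expensive task:

```ts
// Keep the input handler cheap and push the heavy work into idle time.
const input = document.querySelector<HTMLInputElement>('#search')!;
let pendingValue = '';
let idleHandle: number | null = null;

input.addEventListener('input', () => {
  pendingValue = input.value; // cheap: just record the latest value
  if (idleHandle === null) {
    // Defer heavy work until the browser is idle, but no later than
    // 500ms so it can't be postponed forever.
    idleHandle = requestIdleCallback(() => {
      idleHandle = null;
      indexForSearch(pendingValue); // hypothetical expensive task
    }, { timeout: 500 });
  }
});

function indexForSearch(value: string): void {
  console.log(`indexing "${value}"`);
}
```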
You now have a simple, platform-reliant way of preventing unnecessary requests. You have another tool in your belt to save your users time and money. You've also got a way to save a little carbon from being released into our atmosphere to power a server farm. And you can use this tool with any style of website: static-file sites, single-page applications, and server-rendered applications. It's a superpower.
One of the great things about Eleventy is its flexibility and its lack of assumptions about how your projects should be organized. However, in order to preserve my own sanity, I needed to come up with a default file-and-folder architecture that made sense to me.
If your site uses native image lazy-loading, check how it's implemented and run A/B tests to better understand its performance costs. It may benefit from loading above-the-fold images more eagerly.
Putting images on websites is incredibly simple, yes? Actually, yes, it is. You use <img> and link it to a valid source in the src attribute and you’re done. Except that there are (counts fingers) 927 things you could (and some you really should) do that often go overlooked. Let’s see…
As we have seen, the for-of loop beats for, for-in, and .forEach() with respect to usability.
Any difference in performance between the four looping mechanisms should normally not matter. If it does, you are probably doing something very computationally intensive and switching to WebAssembly may make sense.
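For reference, here is the same array traversal with all four mechanisms, with the usability differences noted in comments:

```ts
const langs = ['html', 'css', 'js'];

// for-of: values directly; works with break/continue and await.
for (const lang of langs) {
  if (lang === 'css') continue;
  console.log(lang);
}

// Classic for: the index bookkeeping is on you.
for (let i = 0; i < langs.length; i++) {
  console.log(langs[i]);
}

// for-in: iterates keys (as strings!), including inherited enumerable
// properties, so it is rarely what you want for arrays.
for (const key in langs) {
  console.log(key, langs[Number(key)]);
}

// .forEach(): concise, but you can't break out early or await cleanly.
langs.forEach((lang) => console.log(lang));
```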
By combining the powers of real-user experiences in the Chrome UX Report (CrUX) dataset with web technology detections in HTTP Archive, we can get a glimpse into how architectural decisions like choices of CMS platform or JavaScript framework play a role in sites’ CWV performance.
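For example, you can pull a site's CrUX field data yourself via the CrUX API; in this sketch the API key and origin are placeholders you would supply:

```ts
const API_KEY = 'YOUR_API_KEY'; // placeholder: your own CrUX API key

// Query the CrUX API for an origin's field data for one metric.
async function queryCrux(origin: string) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin, metrics: ['largest_contentful_paint'] }),
    },
  );
  return res.json();
}

queryCrux('https://example.com').then((record) =>
  console.log(JSON.stringify(record, null, 2)),
);
```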