The specification for native lazy-loading of images in web browsers has landed in the HTML Living Standard. This new feature lets web developers tell the browser to defer loading an image until it is scrolled into view, or is about to be.
Images account for 49% of the median webpage’s byte size, according to the HTTP Archive. Lazy image loading can help reduce these images’ impact on page load performance. It can also help lower data costs for clients that never scroll far enough to reach images further down a page.
Historically, lazy-loading was implemented by responding to changes in the scroll position and tracking each image element’s offset from the top of the page. This could degrade page-scrolling performance. By comparison, the new native lazy-loading for images is easier to implement and doesn’t degrade scrolling performance. All it takes is one extra attribute on your images.
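That one attribute is `loading="lazy"`. A minimal example (the file name, alt text, and dimensions here are placeholders):

```html
<!-- loading="lazy" tells the browser to defer fetching this image
     until it is near the viewport. Explicit width and height let the
     browser reserve space for it before it loads. -->
<img src="/images/photo.jpg" alt="A sunset over the bay"
     width="640" height="480" loading="lazy">
```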
However, the current specification is vague on exactly when browsers ought to load a deferred image: it only says the image must be loaded once it is visible to the user or about to become visible.
This ambiguity in the specification has led to implementations with different user experiences. Before digging into the details of how they differ, I first need to explain the Intersection Observer API.
Intersection Observer is the modern replacement for manually handling scroll events to calculate whether an element is visible on the page. The browser now handles that for you and fires an event when a tracked element scrolls into view. An Intersection Observer can be configured with a margin around the visible viewport (IntersectionObserver.rootMargin), causing the event to fire a configurable distance before the element scrolls into view.
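As a sketch of how this works, here is a hand-rolled lazy-loader built on Intersection Observer. The `data-src` attribute and the 500px margin are my own illustrative choices, not anything mandated by the API:

```javascript
// Swap data-src into src once an image nears the viewport.
function onIntersect(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // start the real load
    observer.unobserve(img);   // each image only needs to load once
  }
}

// Fire when an image comes within 500px above/below the viewport.
const options = { rootMargin: "500px 0px" };

// Guarded so the sketch is inert outside a browser environment.
if (typeof IntersectionObserver !== "undefined") {
  const observer = new IntersectionObserver(onIntersect, options);
  document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));
}
```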
Chromium Blink (Chrome), Mozilla Gecko (Firefox), and WebKit (Safari) have all implemented image lazy-loading using an Intersection Observer. However, each implementation sets different margins! These margins are not configurable without recompiling the rendering engine.
Chromium Blink uses a margin of 3000px on low-latency network connections, and up to 8000px on high-latency connections. Depending on the network latency, this can cause all images on the page to be loaded right away. This behavior compromises some of the data-saving and loading-performance benefits you could otherwise get from lazy-loading images.
Mozilla Gecko sets no margin at all. As a result, a lazy-loaded image isn’t loaded until at least 1px of it is visible to the user. Again depending on network latency, this can leave the user looking at a blank area while the image loads. This is the opposite of Blink’s problem: here the lazy-loading behavior is too lazy.
Update: Firefox 81 (due for release in 2021-Q1) will introduce a new default omnidirectional root margin of 300px.
WebKit’s implementation is incomplete as of the time of writing. However, a proposed patch sets its margins to 100px vertical and 0px horizontal. This gives the browser a small heads-up to start loading the image before it’s scrolled into view. This might not be enough depending on network conditions and scrolling speed.
The browser vendors have made different trade-offs between data saving, perceived performance, and how acceptable a temporary blank area is. These margins aren’t set in stone, and they may change over time.
The result is an environment where the end-user experience can vary considerably based on the user’s browser of choice. Worse, web developers don’t have a say in the matter.
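Developers who need control over the margin today can feature-detect native support and, when it matters, fall back to an Intersection Observer of their own. A small detection sketch; the helper name is mine, but checking for the `loading` property on `HTMLImageElement.prototype` is the commonly used test:

```javascript
// Native lazy-loading support exposes a `loading` property on
// image elements; check the prototype for it.
function supportsNativeLazyLoading(imageProto) {
  return "loading" in imageProto;
}

// In a browser you would pass the real prototype; guarded here so the
// sketch is also inert outside a browser environment.
if (typeof HTMLImageElement !== "undefined") {
  const hasNative = supportsNativeLazyLoading(HTMLImageElement.prototype);
  console.log(hasNative ? "use native lazy-loading" : "fall back to IntersectionObserver");
}
```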
I think it’s good for the web if more websites adopt lazy-loading using the new, easy-to-implement web-native method. However, I do wish the different implementors could have had a quick lunch together and agreed on one behavior.
Blink’s behavior has been called “too eager”, and I tend to agree. However, Gecko’s behavior seems too lazy. I believe the ideal default margin lies somewhere between these two implementations.
I’ve often implemented lazy-loading with a margin of one to two times the viewport height. The idea is to have the browser prepare one to two screens’ worth of content. This loads more content, earlier, on devices with large screens (which are presumably more powerful), and less content on devices with smaller form factors. This may not be the best approach in every situation, though.
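That heuristic can be expressed with rootMargin percentages, which (with the default null root) are relative to the viewport’s size, so "100%" vertically means one screen height. The `data-src` attribute and the helper name are illustrative assumptions:

```javascript
// Build a rootMargin string that extends the viewport vertically by a
// given number of screen heights (percentages are relative to the root,
// i.e. the viewport when no root element is set).
function viewportRootMargin(screens) {
  return `${screens * 100}% 0px`;
}

// Guarded so the sketch is inert outside a browser environment.
if (typeof IntersectionObserver !== "undefined") {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src; // start the real load
      obs.unobserve(entry.target);
    }
  }, { rootMargin: viewportRootMargin(1) }); // preload ~1 screen ahead

  document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));
}
```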