For years, web developers have been driven to find ways to mask the performance characteristics of our sites. We know that graphics are getting larger, third-party libraries are more prevalent, and expectations are rising– but at the same time, everyone wants something up in 1.2 seconds after hitting the link. Get something to load in a hurry, even if it’s not the full content, and everyone will feel like it’s faster. However, like most “philosophies for web design”, this one can slide quickly from sensible to downright hazardous.
First, the approach often expresses itself as “if we load the skeleton of the page and a few loading spinner graphics, that’s enough to get us started.” It’s enough to get the browser to say “done”, but the user is still sitting there, looking at spinners. That is immediately frustrating, and the user may be left waiting for a long and unknown time before the block of content they actually wanted to read arrives.
Perhaps the most irritating variation on this theme is the “progress bar” page– the site is smart enough to say “75% of assets are loaded” but deliberately hides the real content instead of letting the user at least start reading. Once everything is finally in place, THEN it will deign to show you the page. This is not Web design anymore, it’s yearning for the days of Flash. It’s also a particularly brittle approach– if it gets stuck on a request late in the process, how long will the user be sitting there, steaming, when they could easily have read everything BUT the one graphic that failed to load?
Second, it tends to encourage “nonconventional” page structures. The recent trend towards “single page websites” comes to mind– everything is neatly read in behind the scenes, so we can have elegant CSS transitions that make it look instantaneous. It works, but only at the cost of greatly reduced accessibility for bookmarking, for users without scripts enabled, and for search engine robots. All the wonderful rich content you wanted Google to see? It’s all in JSON objects with no publicly accessible links, unless you go to extra effort to expose it.
Finally, there are situations where it becomes important to expose the true performance cost of an operation. Think of things like credit card transactions, large report generation, and pinging external services. No matter what you do to hide it behind pretty animations, it’s going to take twenty seconds or more to complete. If a system designed to look seamless freezes– or even shows one of those eternally looping progress bar animations that fool nobody– there’s a significant chance the user will think “broken” and make another request… which will frequently perform even worse than the first one.
In these situations, prominently breaking out and saying “this is a page reload” is a powerful way to convey to the user “yes, something is happening, but you have to wait.”
There are some legitimate tactics for improving the user experience on slow-to-load sites. Load the main content first, and defer the stuff that’s truly secondary, like social widgets and marketing teases. Specify image dimensions where possible so layout and rendering can start before the images arrive. Use progressive images, or even set up fallbacks to CSS colours and gradients, so navigation works while the images are still downloading.
If you have to rely on AJAX-inserted content, like a dynamic table of data, consider “paginating” it. Not as literal “pages” (that just screams clickbait site), but as sensibly structured requests. You can probably serve the first segment of the content statically– explanations and headings, for example– then load the dynamic data one section at a time, starting at the top. With luck, the loading will keep pace with the reader’s progress through your content.
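Loading the dynamic data a section at a time, top first, might look like the sketch below. The names are assumptions for illustration– `fetchChunk` stands in for a real paginated endpoint (which would use `fetch()` with offset/limit query parameters), and `render` stands in for whatever appends rows to the table.

```typescript
type Row = { id: number; value: string };

// Stand-in for a paginated server endpoint; here it fakes a
// 10-row dataset so the sketch is self-contained.
async function fetchChunk(offset: number, limit: number): Promise<Row[]> {
  const total = 10;
  const rows: Row[] = [];
  for (let i = offset; i < Math.min(offset + limit, total); i++) {
    rows.push({ id: i, value: `row ${i}` });
  }
  return rows;
}

// Request the data in small segments, starting at the top, and hand
// each segment to the renderer as soon as it arrives– the reader can
// start scanning rows while later chunks are still in flight.
async function loadIncrementally(
  render: (rows: Row[]) => void,
  chunkSize = 4
): Promise<void> {
  let offset = 0;
  while (true) {
    const chunk = await fetchChunk(offset, chunkSize);
    if (chunk.length === 0) break; // past the end of the data
    render(chunk);                 // append this segment immediately
    offset += chunk.length;
  }
}
```

Because each request is small, a stall late in the sequence only delays the bottom of the table– everything already rendered stays readable, which is exactly the failure mode the “hide everything behind a progress bar” approach gets wrong.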