Tags: rendering

Friday, November 18th, 2022

Remix and the Alternate Timeline of Web Development - Jim Nielsen’s Blog

It sounds like Remix takes a sensible approach to progressive enhancement.

Thursday, October 13th, 2022

Two JavaScripts

There are two JavaScripts.

One for the server - where you can go wild.

One for the client - that should be thoughtful and careful.

Yes! This! I’m always astounded to see devs apply the same mindset to backend and frontend development, just because it happens to be in the same language. I don’t care what you use on your own machine or your own web server, but once you’re sending something down the wire to end users, you need to prioritise their needs over your own.

It’s the JavaScript on the client side that’s the problem. What’s given to the visitor.

I’d ask you, if you’re still reading, that you consider a separation of JavaScript between client and server. If you’re a dev, consider the payload, your bundle and work to reduce the cost to your visitor. Heck, think progressive enhancement.

Wednesday, September 21st, 2022

Will Serving Real HTML Content Make A Website Faster? Let’s Experiment! - WebPageTest Blog

Spoiler: the answer to the question in the title is a resounding “hell yeah!”

Scott brings receipts.

Monday, April 4th, 2022

UA gotta be kidding

Brian recounts the sordid messy history of user-agent strings.

I remember somebody once describing a user-agent string as “a reverse-chronological history of web browsers.”

Thursday, February 10th, 2022

Why Safari does not need any protection from Chromium – Niels Leenheer

Safari is very opinionated about which features they will support and which they won’t. And that is fine for their browser. But I don’t want the Safari team to choose for all browsers on the iOS platform.

A terrific piece from Niels pushing back on the ridiculous assertion that Apple’s ban on rival rendering engines in iOS is somehow a noble battle against a monopoly (rather than the abuse of monopoly power it actually is). If there were any truth to the idea that Apple’s browser ban is the only thing stopping everyone from switching to Chrome, then nobody would be using Safari on macOS, where users are free to choose whichever rendering engine they want.

The Safari team is capable enough not to let their browser become irrelevant. And Apple has enough money to support the Safari team to take on other browsers. It does not need some artificial App Store rule to protect it from the competition.

WebKit-only proponents are worried about losing control and Google becoming too powerful. And they feel preventing Google from controlling the web is more important than giving more power to users. They believe they are protecting users against themselves. But that is misguided.

Users need to be in control because if you take power away from users, you are creating the future you want to prevent, where one company sets the rules for everybody else. It is just somebody else who is pulling the strings.

Tuesday, January 11th, 2022

The monoculture web

Firefox as the asphyxiating canary in the coalmine of the web.

Thursday, October 7th, 2021

Have Single-Page Apps Ruined the Web? | Transitional Apps with Rich Harris, NYTimes - YouTube

This is a terrific and nuanced talk that packs a lot into less than twenty minutes.

I heartily concur with Rich’s assessment that most websites aren’t apps or documents but something in between. It’s a continuum. And I really like Rich’s proposed approach: transitional web apps.

(The secret sauce in transitional web apps is progressive enhancement.)

Wednesday, May 12th, 2021

Google Workspace Updates: Google Docs will now use canvas based rendering: this may impact some Chrome extensions

Yikes!

We’re updating the way Google Docs renders documents. Over the course of the next several months, we’ll be migrating the underlying technical implementation of Docs from the current HTML-based rendering approach to a canvas-based approach to improve performance and improve consistency in how content appears across different platforms.

I’ll be very interested to see how they handle the accessibility of this move.

Sunday, November 29th, 2020

Rendering Spectrum | CSS-Tricks

Sensible advice from Chris:

So what’s the best rendering method? Whatever works best for you, but perhaps a hierarchy like this makes some general sense:

  1. Static HTML as much as you can
  2. Edge functions over static HTML so you can do whatever dynamic things
  3. Server generated HTML what you have to after that
  4. Client-side render only what you absolutely have to
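
To make step two of that hierarchy concrete, here’s a minimal sketch of an edge function, written against the Cloudflare Workers-style fetch handler, that serves pre-built static HTML but swaps in one dynamic value on the way through. The {{country}} placeholder and the page structure are assumptions of the sketch, not anything from Chris’s post:

```js
// A sketch of "edge functions over static HTML": serve the pre-built
// page, but personalise one detail at the edge.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Start from the pre-built static page…
  const response = await fetch(request);
  let html = await response.text();

  // …and swap in one dynamic detail. On Cloudflare, request.cf carries
  // request metadata such as the visitor's country.
  const country = request.cf?.country ?? 'somewhere';
  html = html.replace('{{country}}', country);

  return new Response(html, {
    headers: { 'content-type': 'text/html; charset=utf-8' },
  });
}
```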

Monday, September 7th, 2020

What is the Value of Browser Diversity? - daverupert.com

I’ve thought about these questions for over a year and narrowed my feelings of browser diversity down to two major value propositions:

  1. Browser diversity keeps the Web deliberately slow
  2. Browser diversity fosters consensus and cooperation over corporate rule

Wednesday, August 19th, 2020

radEventListener: a Tale of Client-side Framework Performance | CSS-Tricks

Excellent research by Jeremy Wagner comparing the performance impact of React, Preact, and vanilla JavaScript. The results are simultaneously shocking and entirely unsurprising.

Monday, July 27th, 2020

the Web at a crossroads - Web Directions

John weighs in on the clashing priorities of browser vendors.

Imagine if the web never got CSS. Never got a way to style content in sophisticated ways. It’s hard to imagine its rise to prominence in the early 2000s. I’d not be alone in arguing a similar lack of access to the sort of features inherent to the mobile experience that WebKit and the folks at Mozilla have expressed concern about would (not might) largely consign the Web to an increasingly marginal role.

Thursday, July 9th, 2020

Implementors

The latest newsletter from The History Of The Web is a good one: The Browser Engine That Could. It’s all about the history of browsers and, more specifically, rendering engines.

Jay quotes from a 1992 email by Tim Berners-Lee when there was real concern about having too many different browsers. But as history played out, the concern shifted to having too few different browsers.

I wrote about this—back when Edge switched to using Chromium—in a post called Unity where I compared it to political parties:

If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

I talked about this some more with Brian and Stuart on the Igalia Chats podcast: Web Ecosystem Health (here’s the mp3 file).

In the discussion we dive deeper into the nuances of browser engine diversity; how it’s not the numbers that matter, but representation. The danger with one dominant rendering engine is that it would reflect one dominant set of priorities.

I think we’re starting to see this kind of battle between different sets of priorities playing out in the browser rendering engine landscape.

WebKit published a list of APIs they won’t be implementing in their current form because of security concerns around fingerprinting. Mozilla is taking the same stand. Google is much more gung-ho about implementing those APIs.

I think it’s safe to say that every implementor wants to ship powerful APIs and ensure security and privacy. The issue is with which gets priority. Using the language of principles and priorities, you could crudely encapsulate Apple and Mozilla’s position as:

Privacy, even over capability.

That design principle would pass the reversibility test. In fact, Google’s position might be represented as:

Capability, even over privacy.

I’m not saying Apple and Mozilla don’t value powerful APIs. I’m not saying Google doesn’t value privacy. I’m saying that Google’s priorities are different to Apple’s and Mozilla’s.

Alas, Alex is saying that Apple and Mozilla don’t value capability:

There is a contingent of browser vendors today who do not wish to expand the web platform to cover adjacent use-cases or meaningfully close the relevance gap that the shift to mobile has opened.

That’s very disappointing. It’s a cheap shot. As cheap as saying that, given Google’s business model, Chrome wouldn’t want to expand the web platform to provide better privacy and security.

Tuesday, July 7th, 2020

We need more inclusive web performance metrics | Filament Group, Inc.

Good point. When we talk about perceived performance, the perception in question is almost always visual. We should think more inclusively than that.

Tuesday, May 26th, 2020

as days pass by — Browsers are not rendering engines

You see, diversity of rendering engines isn’t actually in itself the point. What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good?

Stuart responds to a post from Brian that was riffing off a post of mine from a while back. I like this kind of social network.

Thursday, February 6th, 2020

Hydration

As you may have noticed, I’m a fan of progressive enhancement.

It’s not cool. It’s often at odds with “modern” web development, so I end up looking like an old man yelling at a cloud to get off my lawn. Or something.

At its heart though, progressive enhancement seems fairly uncontroversial and inoffensive to me. It’s an approach. A mindset. Here’s how I describe it in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Progressive enhancement makes use of the principle of least power:

Choose the least powerful language suitable for a given purpose.

That’s step two of the three-step process. But the third step is vital.

I think a lot of the hostility towards progressive enhancement comes from a misunderstanding of that three-step process, perhaps thinking that it stops at step two. I’m sure that some have interpreted progressive enhancement as preventing developers from using the latest and greatest technology. Nothing could be further from the truth!

Taking a layered approach to building on the web gives you permission to try cutting-edge JavaScript APIs, regardless of how many or how few browsers currently implement them.
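
For instance, the three steps might play out something like this. It’s a purely hypothetical sketch; the id and the mailto fallback are inventions of the example:

```js
// Steps 1 and 2: the core functionality is a plain link in the HTML.
// It works in every browser, with no JavaScript at all:
//
//   <a id="share-link" href="mailto:?subject=Worth%20a%20read">Share this</a>
//
// Step 3: enhance with a cutting-edge API—but only where it exists.
const link = document.getElementById('share-link');

if (link && navigator.share) {
  link.addEventListener('click', (event) => {
    event.preventDefault();
    // The Web Share API; browsers without it never reach this code.
    navigator.share({
      title: document.title,
      url: window.location.href,
    });
  });
}
```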

The most common misunderstanding of progressive enhancement is that it’s inherently about JavaScript. That’s not true. You can apply progressive enhancement at every step of front-end development: HTML, CSS, and JavaScript.

But because of JavaScript’s strict error-handling model (at least compared to HTML and CSS), it’s in the JavaScript layer that the lack of a progressive enhancement mindset is most often felt.
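
Here’s a small illustration of that difference (the .buy selector and the buy function are invented for the example):

```js
// A browser that meets an HTML element or a CSS declaration it doesn't
// understand skips it and carries on rendering. JavaScript isn't so
// forgiving: one thrown error halts the whole script.

function buy() {
  console.log('Purchased!');
}

// If nothing in the page matches, querySelector returns null…
const button = document.querySelector('.buy');

// …so this line, uncommented, would throw a TypeError and kill
// everything after it:
// button.addEventListener('click', buy);

// Hence the defensive check—the hand-written equivalent of the error
// recovery that HTML and CSS give you for free:
if (button) {
  button.addEventListener('click', buy);
}
```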

That’s why I was saddened by the rise of frameworks and mindsets that assume the availability of JavaScript. Single page apps generally follow this assumption. Everything is delivered via JavaScript: content, markup, styles, and behaviour.

This leads to a terrible situation for performance. The user is left staring at a blank screen, waiting for something—anything!—to appear. Browsers are optimised to stream HTML as soon as they can. Delivering your content via JavaScript rather than HTML means you’re not taking advantage of that optimisation. Your users suffer.
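
In miniature, the pattern looks something like this (the endpoint and the markup are invented):

```js
// The single page app approach: the server sends an empty shell—
// <body><div id="root"></div></body>—and the content the visitor
// actually came for only arrives once this JavaScript has been
// downloaded, parsed, and executed:
fetch('/api/content.json')
  .then((response) => response.json())
  .then((data) => {
    // Until this line runs, the visitor is staring at a blank page.
    document.getElementById('root').innerHTML =
      `<h1>${data.title}</h1><p>${data.body}</p>`;
  });
```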

But I was very heartened when I saw the pendulum start to swing back the other way a bit…

Let’s say you’re using a JavaScript framework like React. But the reason you’re using it isn’t because you’re doing anything particularly complex in the browser involving state management. You might be using React because you really like the way it encourages modularity and componentisation.

A few years ago, making a single page app was pretty much the only way you could use React. For you as a developer to experience the benefits of modularity and componentisation, users had to pay the price in the payload (and fragility) of client-side JavaScript.

That’s no longer the case. Now that we can run JavaScript on the server, it’s possible to build in a modular, componentised way and still use progressive enhancement.

When I first heard about Gatsby and Next.js, I thought that was the selling point. Run React on the server; send pre-generated HTML down the wire to the user; then enhance with client-side JavaScript.

But that’s not exactly how it works. The pre-generated HTML isn’t functional. It still needs a bucketload of JavaScript before it can do anything. The actual process is: Run React on the server; send pre-generated HTML down the wire to the user; then send everything again but this time in JavaScript, bundled with the entire React library.
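
Sketched out with React 17-era APIs and an invented App component (the details vary by framework):

```js
// server.js: run React on the server and send pre-generated HTML.
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App.js'; // hypothetical component

export function renderPage() {
  // This markup looks finished, but nothing in it responds to the user yet.
  return `<div id="root">${renderToString(React.createElement(App))}</div>`;
}
```

```js
// client.js: then ship the same component code again—plus React
// itself—so it can crawl over the server's markup attaching behaviour.
import React from 'react';
import { hydrate } from 'react-dom';
import App from './App.js'; // the same component, delivered a second time

hydrate(React.createElement(App), document.getElementById('root'));
```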

This leads to a situation for users that’s almost worse than before. Instead of staring at a blank screen, now they get HTML lickety-split—excellent! But if they try to interact with what’s on screen, they’ll find that nothing is working yet. Even worse, once the JavaScript is delivered, and is being parsed, they probably can’t even scroll—their device is too busy interpreting all that JavaScript. Your users suffer.

All your content is sent twice. First, HTML is sent from the server. These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser). Then a JavaScript library—plus all your bespoke JavaScript—is loaded. Then all your content is loaded again, this time as JSON.
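
Viewing source on such a page makes the duplication plain. Roughly speaking—all the names and content here are invented, though Next.js really does embed its page data in a __NEXT_DATA__ script element—what comes down the wire is:

```js
// The content as HTML, the content again as JSON, and the bundle that
// will eventually join the two together:
const pageSource = `
  <div id="root">
    <h1>Hello</h1>
    <p>Some content.</p>
  </div>
  <script id="__NEXT_DATA__" type="application/json">
    {"props": {"title": "Hello", "body": "Some content."}}
  </script>
  <script src="/bundle.js"></script>
`;
```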

So you’ve got a facade of an interface that you can’t actually interact with until a deluge of JavaScript has been loaded, parsed and executed. The term used for this stage of the process is “hydration”, which makes it sound more like a relaxing treatment from Gwyneth Paltrow than the horrible user experience it is.

The idea is that subsequent navigations—which will happen with Ajax—should be snappy. But the price has already been paid by then. The initial loading experience is jagged and frustrating.

Don’t get me wrong: server-side rendering is great …if what you’re sending from the server is functional. It’s the combination of hollow HTML sent from the server, followed by a huge browser-freezing dump of JavaScript that is an anti-pattern.

This use of server-side rendering followed by hydration feels like progressive enhancement, because it separates out the delivery of markup and scripts. But it’s missing the mindset.

The layered approach of progressive enhancement echoes the separation of concerns in the front-end stack: HTML, CSS, and JavaScript—each layer expressing more power. But while these concepts are related, they’re not interchangeable. Separating out the layers of your tech stack isn’t necessarily progressive enhancement. If you have some HTML that relies on JavaScript to be useful, then there’s no benefit in separating that HTML into a separate payload. The HTML that you initially send down the wire needs to be functional (at least at a basic level) before the JavaScript arrives.

I was a little disappointed to see Kyle Simpson—who I admire greatly—conflate separation of concerns with progressive enhancement in his talk from JSCamp 2019:

This content is here. I can see it, and it’s even styled. But I can’t click on the damn button because nothing has loaded in the JavaScript layer yet.

Anybody experienced that where you’ve been on a web page and it’s not really fully functional yet? I can see something but I can’t actually make any usage of it yet.

These are all things that cropped out of our thought process that said: “Let’s build the web in layers. Let’s deliver it progressively in layers. Because that’s morally right. We call this progressive enhancement. And let’s not worry too much about all these potential user experience flaws that may happen.”

That’s a spot-on description of server-side rendering and hydration, but it’s a gross mischaracterisation of progressive enhancement.

That button that requires JavaScript to work? That should’ve been generated with JavaScript. (For example, if you’re building a complex web app, consider sending a read-only view down the wire in HTML—then add any interactive interface elements with JavaScript in the browser.)
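
Something along these lines—a hypothetical sketch in which the class name, data attribute, and endpoint are all invented:

```js
// The server sends a read-only view:
//
//   <section class="comment" data-id="42">
//     <p>Great post!</p>
//   </section>
//
// The delete button only works with JavaScript—so JavaScript creates it.
// No user ever sees a button that can't do anything yet.
document.querySelectorAll('.comment').forEach((comment) => {
  const button = document.createElement('button');
  button.textContent = 'Delete';
  button.addEventListener('click', () => {
    fetch(`/comments/${comment.dataset.id}`, { method: 'DELETE' })
      .then(() => comment.remove());
  });
  comment.append(button);
});
```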

If people are equating progressive enhancement with thoughtless server-side rendering and hydration, then I can see why they’d be hostile towards it.

Users would be better served with unprogressive non-enhancement:

You take some structured content, which follows the vertical flow of the document in a way that everyone understands.

Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.

Or if they’re using a touch device, simply flicking backwards and forwards in that easy way that we’ve all become used to. What you do is you take that, and you fucking well leave it alone.

Alas, that’s not what tools like Gatsby offer. The latest post on their blog is called Why Gatsby is better with JavaScript:

But what about sites or pages where there is no client-side interactivity? Even for those pages, Gatsby offers performance benefits by including JavaScript.

I beg to differ.

(By the way, that same blog post also initially tried to equate the performance hit of client-side JavaScript with the performance hit of images. Andy explains why that’s disingenuous.)

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.
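
The details belong to the frameworks, but the underlying idea can be sketched by hand. In this sketch, the data-hydrate attribute and the per-component modules are inventions of the example:

```js
// Partial hydration in spirit: instead of one big hydrate() call over
// the whole page, hydrate only the islands that are actually
// interactive—and only when they're about to be needed.
const islands = document.querySelectorAll('[data-hydrate]');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    observer.unobserve(entry.target);
    // Load just this component's code, just in time.
    import(`/components/${entry.target.dataset.hydrate}.js`)
      .then((module) => module.hydrate(entry.target));
  });
});

islands.forEach((island) => observer.observe(island));
```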

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Switching to Firefox | Brad Frost

Like Brad, I switched to Firefox for web browsing and Duck Duck Go for searching quite a while back. I highly recommend it.

Monday, January 27th, 2020

Diary of an Engine Diversity Absolutist – Dan’s Blog

Dan responds to an extremely worrying sentiment from Alex:

The sentiment about “engine diversity” points to a growing mindset among (primarily) Google employees that are involved with the Chromium project that puts an emphasis on getting new features into Chromium as a much higher priority than working with other implementations.

Needless to say, I agree with this:

Proponents of a “move fast and break things” approach to the web tend to defend their approach as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations has put on the web.

The web’s key differentiator is that it is a part of the commons and that it is multi-stakeholder in nature.