Monday, April 6th, 2020

Performance, security, and ethics: influencing effectively

I wrote something recently about telling the story of performance. Sue Loh emphasises the importance of understanding what makes people tick:

Performance engineers need to be an interesting mix of data-lovers and people-whisperers.

Local-first software: You own your data, in spite of the cloud

The cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds?

We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.

This is a very in-depth look at the mindset and the challenges involved in building truly local-first software—something that Tantek has also been thinking about.

Thursday, April 2nd, 2020

Visual Design Inspiration from Agency Websites–And Other Tangential Observations | Jim Nielsen’s Weblog

Trying to make screenshots of agency websites is tricky if the website is empty HTML with everything injected via JavaScript.

Granted, agencies are usually the ones pushing the boundaries. “Pop” and “pizazz” are what sell for many of them (i.e. “look what we can do!”) Many of these sites pushed the boundaries of what you can do in the browser, and that’s cool. I like seeing that kind of stuff.

But if you asked me what agency websites inspired both parts of me, I’d point to something like Clearleft or Paravel. To me, they strike a great balance of visual design with the craft of building for an accessible, universal web.

CSS Architecture for Modern JavaScript Applications - MadeByMike

Mike sees the church of JS-first ignoring the lessons to be learned from the years of experience accumulated by CSS practitioners.

As the responsibilities of front-end developers have become more broad, some might consider the conventions outlined here to be not worth following. I’ve seen teams spend weeks planning the right combination of framework, build tools, workflows and patterns only to give zero consideration to the way they architect UI components. It’s often considered the last step in the process and not worthy of the same level of consideration.

It’s important! I’ve seen well-planned projects fail or go well over budget because the UI architecture was poorly planned and became unmaintainable as the project grew.

Wednesday, April 1st, 2020

How to build a bad design system | CSS-Tricks

Working in a big organization is shocking to newcomers because of this, as suddenly everyone has to be consulted to make the smallest decision. And the more people you have to consult to get something done, the more bureaucracy exists within that company. In short: design systems cannot be effective in bureaucratic organizations. Trust me, I’ve tried.

Who hurt you, Robin?

Sunday, March 15th, 2020

Twitter thread as blog post: Thoughts on how we write CSS | Lara Schenck

CSS only truly exists in a browser. As soon as we start writing CSS outside of the browser, we rely on guesses and memorization and an intimate understanding of the rules. A text editor will never be able to provide as much information as a browser can.

Wednesday, March 11th, 2020

Why is CSS frustrating? ・ Robin Rendle

CSS is frustrating because you have to actually think of a website like a website and not an app. That mental model is what everyone finds so viscerally upsetting. And so engineers do what feels best to them; they try to make websites work like apps, like desktop software designed in the early naughts. Something that can be controlled.

Friday, February 28th, 2020

Why is CSS Frustrating? | CSS-Tricks

Why do people respect JavaScript or other languages enough to learn them inside-out, and yet constantly dunk on CSS?

The headline begs the question, but Robin makes this very insightful observation in the article itself:

I reckon the biggest issue that engineers face — and the reason why they find it all so dang frustrating — is that CSS forces you to face the webishness of the web. Things require fallbacks. You need to take different devices into consideration, and all the different ways of seeing a website: mobile, desktop, no mouse, no keyboard, etc. Sure, you have to deal with that when writing JavaScript, too, but it’s easier to ignore. You can’t ignore the layout of your site being completely broken on a phone.

Tuesday, February 25th, 2020

Inspiring high school students with HTML and CSS - Stephanie Stimac’s Blog

I love, love, love this encounter that Stephanie had with high school students when she showed them her own website (“Your website? You have a website?”).

I opened the DevTools on my site and there was an audible gasp from the class and excited murmuring.

“That’s your code?” A student asked. “Yes, that’s all my code!” “You wrote all of that?!” “Yes, it’s my website.”

And the class kind of exploded and started talking amongst themselves. I was floored and my perspective readjusted.

When I code, it’s usually in HTML and CSS, and I suppose there’s a part of me that feels like that isn’t special because some tech bros decide to be vocal and loud about HTML and CSS not being special nearly everyday (it is special and tech bros can shut up.)

And the response from that class of high school students delighted me and grounded me in a way I haven’t experienced before. What I view as a simple code was absolute magic to them. And for all of us who code, I think we forget it is magic. Computational magic but still magic. HTML and CSS are magic.

Yes! Yes! Yes!

Saturday, February 15th, 2020

Same HTML, Different CSS

Like a little mini CSS Zen Garden, here’s one component styled five very different ways.

Crucially, the order of the markup doesn’t consider the appearance—it’s concerned purely with what makes sense semantically. And now with CSS grid, elements can be rearranged regardless of source order.

CSS is powerful and capable of doing amazingly beautiful things. Let’s embrace that and keep the HTML semantical instead of adapting it to the need of the next design change.

Tuesday, February 11th, 2020

Utopia

This is the project that Trys and James have been working on at Clearleft. It’s a way of approaching modular scales in web typography that uses CSS locks and custom properties to fantastic effect.

Utopia is not a product, a plugin, or a framework. It’s a memorable/pretentious word we use to refer to a way of thinking about fluid responsive design.

Thursday, February 6th, 2020

On design systems and agency | Andrew Travers

Design systems can often ‘read’ as very top down, but need to be bottom up to reflect the needs of different users of different services in different contexts.

I’ve yet to be involved in a design system that hasn’t struggled to some extent for participation and contribution from the whole of its design community.

Hydration

As you may have noticed, I’m a fan of progressive enhancement.

It’s not cool. It’s often at odds with “modern” web development, so I end up looking like an old man yelling at a cloud to get off my lawn. Or something.

At its heart though, progressive enhancement seems fairly uncontroversial and inoffensive to me. It’s an approach. A mindset. Here’s how I describe it in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Progressive enhancement makes use of the principle of least power:

Choose the least powerful language suitable for a given purpose.

That’s step two of the three-step process. But the third step is vital.

I think a lot of the hostility towards progressive enhancement comes from a misunderstanding of that three-step process, perhaps thinking that it stops at step two. I’m sure that some have interpreted progressive enhancement as preventing developers from using the latest and greatest technology. Nothing could be further from the truth!

Taking a layered approach to building on the web gives you permission to try cutting‐edge JavaScript APIs, regardless of how many or how few browsers currently implement them.
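
To make that permission concrete, here’s a minimal sketch in plain JavaScript, assuming a hypothetical share link in the markup (the a.share selector isn’t from any real page): the link works in every browser as ordinary HTML, and the cutting-edge Web Share API is layered on top only where the browser supports it.

    // Enhancement layer: the link already works everywhere as plain HTML.
    // The Web Share API is only wired up where it actually exists.
    const link = document.querySelector('a.share'); // hypothetical markup: <a class="share" href="…">
    if (link && navigator.share) {
      link.addEventListener('click', (event) => {
        event.preventDefault();
        navigator.share({ title: document.title, url: link.href })
          .catch(() => {}); // if the user dismisses the share sheet, no harm done
      });
    }
    // Browsers without navigator.share never run the enhancement; the link just works.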

The most common misunderstanding of progressive enhancement is that it’s inherently about JavaScript. That’s not true. You can apply progressive enhancement at every step of front-end development: HTML, CSS, and JavaScript.

But because of JavaScript’s strict error-handling model (at least compared to HTML and CSS), it’s in the JavaScript layer that the lack of a progressive enhancement mindset is most often felt.

That’s why I was saddened by the rise of frameworks and mindsets that assume the availability of JavaScript. Single page apps generally follow this assumption. Everything is delivered via JavaScript: content, markup, styles, and behaviour.

This leads to a terrible situation for performance. The user is left staring at a blank screen, waiting for something—anything!—to appear. Browsers are optimised to stream HTML as soon as they can. Delivering your content via JavaScript rather than HTML means you’re not taking advantage of that optimisation. Your users suffer.

But I was very heartened when I saw the pendulum start to swing back the other way a bit…

Let’s say you’re using a JavaScript framework like React. But the reason you’re using it isn’t because you’re doing anything particularly complex in the browser involving state management. You might be using React because you really like the way it encourages modularity and componentisation.

A few years ago, making a single page app was pretty much the only way you could use React. For you as a developer to experience the benefits of modularity and componentisation, users had to pay the price in the payload (and fragility) of client-side JavaScript.

That’s no longer the case. Now that we can run JavaScript on the server, it’s possible to build in a modular, componentised way and still use progressive enhancement.

When I first heard about Gatsby and Next.js, I thought that was the selling point. Run React on the server; send pre-generated HTML down the wire to the user; then enhance with client-side JavaScript.

But that’s not exactly how it works. The pre-generated HTML isn’t functional. It still needs a bucketload of JavaScript before it can do anything. The actual process is: Run React on the server; send pre-generated HTML down the wire to the user; then send everything again but this time in JavaScript, bundled with the entire React library.

This leads to a situation for users that’s almost worse than before. Instead of staring at a blank screen, now they get HTML lickety-split—excellent! But if they try to interact with what’s on screen, they’ll find that nothing is working yet. Even worse, once the JavaScript is delivered, and is being parsed, they probably can’t even scroll—their device is too busy interpreting all that JavaScript. Your users suffer.

All your content is sent twice. First HTML is sent from the server. These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser). Then a JavaScript library—plus all your bespoke JavaScript—is loaded. Then all your content is loaded again as JSON.

So you’ve got a facade of an interface that you can’t actually interact with until a deluge of JavaScript has been loaded, parsed and executed. The term used for this stage of the process is “hydration”, which makes it sound more like a relaxing treatment from Gwyneth Paltrow than the horrible user experience it is.
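
To make the mechanics concrete, here’s a minimal hand-rolled sketch of that server/client round trip (not how Gatsby or Next.js are literally implemented), assuming React’s pre-18 react-dom API; App, loadData, and window.__DATA__ are made-up names.

    // server.js: render the component tree to a string of HTML
    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import App from './App.js';            // hypothetical component
    import { loadData } from './data.js';  // hypothetical data source

    const data = await loadData();
    const markup = renderToString(React.createElement(App, { items: data }));
    // The response embeds `markup`, the same `data` serialised as JSON
    // (exposed as window.__DATA__), and a script tag for the client bundle,
    // which includes React itself.

    // client.js: download it all again, then attach behaviour to the existing DOM
    import React from 'react';
    import { hydrate } from 'react-dom';
    import App from './App.js';

    hydrate(
      React.createElement(App, { items: window.__DATA__ }),
      document.getElementById('root')      // the element wrapping the server-rendered markup
    );

Until that client bundle has been downloaded, parsed, and executed, the markup produced by renderToString is inert: the event listeners only exist once hydrate has walked the entire DOM.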

The idea is that subsequent navigations—which will happen with Ajax—should be snappy. But the price has already been paid by then. The initial loading experience is jagged and frustrating.

Don’t get me wrong: server-side rendering is great …if what you’re sending from the server is functional. It’s the combination of hollow HTML sent from the server, followed by a huge browser-freezing dump of JavaScript that is an anti-pattern.

This use of server-side rendering followed by hydration feels like progressive enhancement, because it separates out the delivery of markup and scripts. But it’s missing the mindset.

The layered approach of progressive enhancement echoes the separation of concerns in the front-end stack: HTML, CSS, and JavaScript—each layer expressing more power. But while these concepts are related, they’re not interchangeable. Separating out the layers of your tech stack isn’t necessarily progressive enhancement. If you have some HTML that relies on JavaScript to be useful, then there’s no benefit in separating that HTML into a separate payload. The HTML that you initially send down the wire needs to be functional (at least at a basic level) before the JavaScript arrives.

I was a little disappointed to see Kyle Simpson—who I admire greatly—conflate separation of concerns with progressive enhancement in his talk from JSCamp 2019:

This content is here. I can see it, and it’s even styled. But I can’t click on the damn button because nothing has loaded in the JavaScript layer yet.

Anybody experienced that where you’ve been on a web page and it’s not really fully functional yet? I can see something but I can’t actually make any usage of it yet.

These are all things that cropped out of our thought process that said: “Let’s build the web in layers. Let’s deliver it progressively in layers. Because that’s morally right. We call this progressive enhancement. And let’s not worry too much about all these potential user experience flaws that may happen.”

That’s a spot-on description of server-side rendering and hydration, but it’s a gross mischaracterisation of progressive enhancement.

That button that requires JavaScript to work? That should’ve been generated with JavaScript. (For example, if you’re building a complex web app, consider sending a read-only view down the wire in HTML—then add any interactive interface elements with JavaScript in the browser.)
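
As a rough sketch of that suggestion (the markup, selectors, and endpoint here are all hypothetical): the server sends the read-only list as plain HTML, and the script generates the only controls that depend on it.

    // The server has already sent the read-only view, something like:
    //   <li class="message" data-id="42">Hello there</li>
    // The delete button is created here, because only this script can make it work.
    document.querySelectorAll('.message').forEach((message) => {
      const button = document.createElement('button');
      button.textContent = 'Delete';
      button.addEventListener('click', async () => {
        await fetch(`/messages/${message.dataset.id}`, { method: 'DELETE' });
        message.remove();
      });
      message.append(button);
    });
    // Without JavaScript, the content is still fully readable; nothing broken is on show.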

If people are equating progressive enhancement with thoughtless server-side rendering and hydration, then I can see why they’d be hostile towards it.

Users would be better served with unprogressive non-enhancement:

You take some structured content, which follows the vertical flow of the document in a way that everyone understands.

Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.

Or if they’re using a touch device, simply flicking backwards and forwards in that easy way that we’ve all become used to. What you do is you take that, and you fucking well leave it alone.

Alas, that’s not what tools like Gatsby offer. The latest post on their blog is called Why Gatsby is better with JavaScript:

But what about sites or pages where there is no client-side interactivity? Even for those pages, Gatsby offers performance benefits by including JavaScript.

I beg to differ.

(By the way, that same blog post also initially tried to equate the performance hit of client-side JavaScript with the performance hit of images. Andy explains why that’s disingenuous.)

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Wednesday, February 5th, 2020

Design System Won’t Fix Your Problems | Viljami Salminen

This is something we’ve learned at Clearleft—you can’t create a design system for an organisation, hand it over to them, and expect it to be maintained.

You can’t just hire an agency to create a design system and expect that the system alone will solve something. It won’t do much before the people in the organization align on this idea as well, believe in it, invest in it, and create a culture of collaboration around it.

The people who will be living with the design system must be (co-)creators of it. That’s very much the area we work in now.

Monday, January 27th, 2020

Diary of an Engine Diversity Absolutist – Dan’s Blog

Dan responds to an extremely worrying sentiment from Alex:

The sentiment about “engine diversity” points to a growing mindset among (primarily) Google employees that are involved with the Chromium project that puts an emphasis on getting new features into Chromium as a much higher priority than working with other implementations.

Needless to say, I agree with this:

Proponents of a “move fast and break things” approach to the web tend to defend their approach as defending the web from the dominance of native applications. I absolutely think that situation would be worse right now if it weren’t for the pressure for wide review that multiple implementations has put on the web.

The web’s key differentiator is that it is a part of the commons and that it is multi-stakeholder in nature.

Monday, January 20th, 2020

Unity

It’s official. Microsoft’s Edge browser is running on the Blink rendering engine and it’s available now.

Just over a year ago, I wrote about my feelings on this decision:

I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

The importance of browser engine diversity is beautifully illustrated (literally) in Rachel’s The Ecological Impact of Browser Diversity.

But I was chatting to Amber the other day, and I mentioned how I can see the theoretical justification for Microsoft’s decision …even if I don’t quite buy it myself.

Picture, if you will, something I’ll call the bar of unity. It’s a measurement of how much collaboration is happening between browser makers.

In the early days of the web, the bar of unity was very low indeed. The two main browser vendors—Microsoft and Netscape—not only weren’t collaborating, they were actively splintering the languages of the web. One of them would invent a new HTML element, and the other would invent a completely different element to do the same thing (remember abbr and acronym?). One of them would come up with one model for interacting with a document through JavaScript, and the other would come up with a completely different model to do the same thing (remember document.all and document.layers?).

There wasn’t enough collaboration. Our collective anger at this situation led directly to the creation of The Web Standards Project.

Eventually, those companies did start collaborating on standards at the W3C. The bar of unity was raised.

This has been the situation for most of the web’s history. Different browser makers agreed on standards, but went their own separate ways on implementation. That’s where they drew the line.

Now that line is being redrawn. The bar of unity is being raised. Now, a number of separate browser makers—Google, Samsung, Microsoft—not only collaborate on standards but also on implementation, sharing a codebase.

The bar of unity isn’t right at the top. Browsers can still differentiate in their user interfaces. Edge, for example, can—and does—offer very sensible defaults for blocking trackers. That’s much harder for Chrome to do, given that Google are amongst the worst offenders.

So these browsers are still competing, but the competition is no longer happening at the level of the rendering engine.

I can see how this looks like a positive development. In fact, from this point of view, Mozilla are getting in the way of progress by having a separate codebase (yes, this is a genuinely-held opinion by some people).

On the face of it, more unity sounds good. It sounds like more collaboration. More cooperation.

But then I think of situations where complete unity isn’t necessarily a good thing. Take political systems, for example. If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!

There’s a sweet spot somewhere in between where there’s a base level of agreement and cooperation, but there’s also plenty of room for disagreement and opposition. Right now, the browser landscape is just about still in that sweet spot. It’s like a two-party system where one party has a crushing majority. Checks and balances exist, but they’re in peril.

Firefox is one of the last remaining representatives offering an alternative. The least we can do is support it.

Tuesday, January 7th, 2020

Life Under The Ice

Here’s the latest wonderful project from Ariel—explore microscopic specimens from Antarctica:

The collected Antarctic microbes were found living within glaciers, under the sea ice, next to frozen lakes, and in subglacial ponds.

Beautiful!

Wednesday, January 1st, 2020

Bound in Shallows: Space Exploration and Institutional Drift

If a human civilization beyond Earth ever comes into being, this will be unprecedented in any historical context we might care to invoke—unprecedented in recorded history, unprecedented in human history, unprecedented in terrestrial history, and so on. There have been many human civilizations, but all of these civilizations have arisen and developed on the surface of Earth, so that a civilization that arises or develops away from the surface of Earth would be unprecedented and in this sense absolutely novel even if the institutional structure of a spacefaring civilization were the same as the institutional structure of every civilization that has existed on Earth. For this civilizational novelty, some human novelty is a prerequisite, and this human novelty will be expressed in the mythology that motivates and sustains a spacefaring civilization.

A deep dive into deep time:

Record-keeping technologies introduce an asymmetry into history. First language, then written language, then printed books, and so on and so forth. Should human history extend as far into the deep future as it now extends into the deep past, the documentary evidence of past beliefs will be a daunting archive, but in an archive so vast there would be a superfluity of resources to trace the development of human mythologies in a way that we cannot now trace them in our past. We are today creating that archive by inventing the technologies that allow us to preserve an ever-greater proportion of our activities in a way that can be transmitted to our posterity.

Monday, December 9th, 2019

Level of Effort | Brad Frost

Brad gets ranty …with good reason.

Monday, December 2nd, 2019

Oddly Amazing Animals from A to Z

This book is a beautiful tribute to Cindy.

Several talented illustrators have come together to create a unique book about unique animals. Each contributor has a special connection to the book’s original illustrator, Cindy Li. When she was unable to complete the illustrations before passing away in 2018, many of Cindy’s talented friends offered to help finish the project.

I think you should get a copy of this book for the little animal lover in your life this Christmas.

Proceeds from the sale of this book benefit Apollo Li Harris and Orion Li Harris, two out-of-this-world kids who had an amazing mom.