Tags: behaviour

Monday, November 30th, 2020

Clean advertising

Imagine if you were told that fossil fuels were the only way of extracting energy. It would be an absurd claim. Not only are other energy sources available—solar, wind, geothermal, nuclear—fossil fuels aren’t even the most efficient source of energy. To say that you can’t have energy without burning fossil fuels would be pitifully incorrect.

And yet when it comes to online advertising, we seem to have meekly accepted that you can’t have effective advertising without invasive tracking. But nothing could be further from the truth. Invasive tracking is to online advertising as fossil fuels are to energy production—an outmoded inefficient means of getting substandard results.

Before the onslaught of third party cookies and scripts, online advertising was contextual. If I searched for property insurance, I was likely to see an advertisement for property insurance. If I was reading an article about pet food, I was likely to be served an advertisement for pet food.

Simply put, contextual advertising ensured that the advertising that accompanied content could be relevant and timely. There was no big mystery about it: advertisers just needed to know what the content was about and they could serve up the appropriate advertisement. Nice and straightforward.

Too straightforward.

What if, instead of matching the advertisement to the content, we could match the advertisement to the person? Regardless of what they were searching for or reading, they’d be served advertisements that were relevant to them not just in that moment, but relevant to their lifestyles, thoughts and beliefs? Of course that would require building up dossiers of information about each person so that their profiles could be targeted and constantly updated. That’s where cross-site tracking comes in, with third-party cookies and scripts.

This is behavioural advertising. It has all but eliminated contextual advertising. It has become so pervasive that online advertising and behavioural advertising have become synonymous. Contextual advertising is seen as laughably primitive compared with the clairvoyant powers of behavioural advertising.

But there’s a problem with behavioural advertising. A big problem.

It doesn’t work.

First of all, it relies on mind-reading powers by the advertising brokers—Facebook, Google, and the other middlemen of ad tech. For all the apocryphal folk tales of spooky second-guessing in online advertising, it mostly remains rubbish.

Forget privacy: you’re terrible at targeting anyway:

None of this works. They are still trying to sell me car insurance for my subway ride.

Have you actually paid attention to what advertisements you’re served? Maciej did:

I saw a lot of ads for GEICO, a brand of car insurance that I already own.

I saw multiple ads for Red Lobster, a seafood restaurant chain in America. Red Lobster doesn’t have any branches in San Francisco, where I live.

Finally, I saw a ton of ads for Zipcar, which is a car sharing service. These really pissed me off, not because I have a problem with Zipcar, but because they showed me the algorithm wasn’t even trying. It’s one thing to get the targeting wrong, but the ad engine can’t even decide if I have a car or not! You just showed me five ads for car insurance.

And yet in the twisted logic of ad tech, all of this would be seen as evidence that they need to gather even more data with even more invasive tracking and surveillance.

It turns out that bizarre logic is at the very heart of behavioural advertising. I highly recommend reading the in-depth report from The Correspondent called The new dot com bubble is here: it’s called online advertising:

It’s about a market of a quarter of a trillion dollars governed by irrationality.

The benchmarks that advertising companies use – intended to measure the number of clicks, sales and downloads that occur after an ad is viewed – are fundamentally misleading. None of these benchmarks distinguish between the selection effect (clicks, purchases and downloads that are happening anyway) and the advertising effect (clicks, purchases and downloads that would not have happened without ads).

Suppose someone told you that they keep tigers out of their garden by turning on their kitchen light every evening. You might think their logic is flawed, but they’ve been turning on the kitchen light every evening for years and there hasn’t been a single tiger in the garden the whole time. That’s the logic used by ad tech companies to justify trackers.

Tracker-driven behavioural advertising is bad for users. The advertisements are irrelevant most of the time, and on the few occasions where the advertising hits the mark, it just feels creepy.

Tracker-driven behavioural advertising is bad for advertisers. They spend their hard-earned money on invasive ad tech that results in no more sales or brand recognition than if they had relied on good ol’ contextual advertising.

Tracker-driven behavioural advertising is very bad for the web. Megabytes of third-party JavaScript are injected at exactly the wrong moment to make for the worst possible performance. And if that doesn’t ruin the user experience enough, there are still invasive overlays and consent forms to click through (which, ironically, gets people mad at the legislation—like GDPR—instead of the underlying reason for these annoying overlays: unnecessary surveillance and tracking by the site you’re visiting).

Tracker-driven behavioural advertising is good for the middlemen doing the tracking. Facebook and Google are two of the biggest players here. But that doesn’t mean that their business models need to be permanently anchored to surveillance. The very monopolies that make them kings of behavioural advertising—the biggest social network and the biggest search engine—would also make them titans of contextual advertising. They could pivot from an invasive behavioural model of advertising to a privacy-respecting contextual advertising model.

The incumbents will almost certainly resist changing something so fundamental. It would be like expecting an energy company to change their focus from fossil fuels to renewables. It won’t happen quickly. But I think that it may eventually happen …if we demand it.

In the meantime, we can all play our part. Just as we can do our bit for the environment at an individual level by sorting our recycling and making green choices in our day to day lives, we can all do our bit for the web too.

The least we can do is block third-party cookies. Some browsers are now doing this by default. That’s good.

Blocking third-party JavaScript is a bit trickier. That requires a browser extension. Most of these extensions to block third-party tracking are called ad blockers. That’s a shame. The issue is not with advertising. The issue is with tracking.

Alas, because this software is labelled under ad blocking, it has led to the ludicrous situation of an ethical argument being made to allow surveillance and tracking! It goes like this: websites need advertising to survive; if you block the ads, then you are denying these sites revenue. That argument would make sense if we were talking about contextual advertising. But it makes no sense when it comes to behavioural advertising …unless you genuinely believe that online advertising has to be behavioural, which means that online advertising has to track you to be effective. Such a belief would be completely wrong. But that doesn’t stop it being widely held.

To argue that there is a moral argument against blocking trackers is ridiculous. If anything, there’s a moral argument to be made for installing anti-tracking software for yourself, your friends, and your family. Otherwise we are collectively giving up our privacy for a business model that doesn’t even work.

It’s a shame that advertisers will lose out if tracking-blocking software prevents their ads from loading. But that’s only going to happen in the case of behavioural advertising. Contextual advertising won’t be blocked. Contextual advertising is also more lightweight than behavioural advertising. Contextual advertising is far less creepy than behavioural advertising. And crucially, contextual advertising works.

That shouldn’t be a controversial claim: the idea that people would be interested in adverts that are related to the content they’re currently looking at. The greatest trick the ad tech industry has pulled is convincing the world that contextual relevance is somehow less effective than some secret algorithm fed with all our data that’s supposed to be able to practically read our minds and know us better than we know ourselves.

Y’know, if this mind-control ray really could give me timely relevant adverts, I might possibly consider paying the price with my privacy. But as it is, YouTube still hasn’t figured out that I’m not interested in Top Gear or football.

The next time someone is talking about the necessity of advertising on the web as a business model, ask for details. Do they mean contextual or behavioural advertising? They’ll probably laugh at you and say that behavioural advertising is the only thing that works. They’ll be wrong.

I know it’s hard to imagine a future without tracker-driven behavioural advertising. But there are no good business reasons for it to continue. It was once hard to imagine a future without oil or coal. But through collective action, legislation, and smart business decisions, we can make a cleaner future.

Tuesday, May 12th, 2020

The History of the Future

It me:

Although some communities have listed journalists as “essential workers,” no one claims that status for the keynote speaker. The “work” of being a keynote speaker feels even more ridiculous than usual these days.

Monday, March 30th, 2020

To-Do Terrarium

I love this little to-do app! Every time you tick something off your list, something grows in your virtual terrarium. Lovely!

Thursday, February 6th, 2020

Hydration

As you may have noticed, I’m a fan of progressive enhancement.

It’s not cool. It’s often at odds with “modern” web development, so I end up looking like an old man yelling at a cloud to get off my lawn. Or something.

At its heart though, progressive enhancement seems fairly uncontroversial and inoffensive to me. It’s an approach. A mindset. Here’s how I describe it in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Progressive enhancement makes use of the principle of least power:

Choose the least powerful language suitable for a given purpose.

That’s step two of the three-step process. But the third step is vital.

I think a lot of the hostility towards progressive enhancement comes from a misunderstanding of that three-step process, perhaps thinking that it stops at step two. I’m sure that some have interpreted progressive enhancement as preventing developers from using the latest and greatest technology. Nothing could be further from the truth!

Taking a layered approach to building on the web gives you permission to try cutting‐edge JavaScript APIs, regardless of how many or how few browsers currently implement them.
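Here’s a rough sketch of the kind of thing I mean (the data-highres attribute is something I’ve made up purely for illustration): the cutting-edge API is used where it’s available, and nothing breaks where it isn’t.

```javascript
// A rough sketch of layered enhancement: the images in the HTML already work
// as ordinary img elements. This script only adds a nicety: swapping in
// high-resolution versions as they scroll into view, and only in browsers
// that support the IntersectionObserver API.
if ('IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.highres; // made-up data-highres attribute
        observer.unobserve(img);
      }
    });
  });
  document.querySelectorAll('img[data-highres]').forEach((img) => {
    observer.observe(img);
  });
}
// Browsers without IntersectionObserver skip all of this and still show
// perfectly usable images from the original markup.
```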

The most common misunderstanding of progressive enhancement is that it’s inherently about JavaScript. That’s not true. You can apply progressive enhancement at every step of front-end development: HTML, CSS, and JavaScript.

But because of JavaScript’s strict error-handling model (at least compared to HTML and CSS), it’s in the JavaScript layer that the lack of a progressive enhancement mindset is most often felt.

That’s why I was saddened by the rise of frameworks and mindsets that assume the availability of JavaScript. Single page apps generally follow this assumption. Everything is delivered via JavaScript: content, markup, styles, and behaviour.

This leads to a terrible situation for performance. The user is left staring at a blank screen, waiting for something—anything!—to appear. Browsers are optimised to stream HTML as soon as they can. Delivering your content via JavaScript rather than HTML means you’re not taking advantage of that optimisation. Your users suffer.

But I was very heartened when I saw the pendulum start to swing back the other way a bit…

Let’s say you’re using a JavaScript framework like React. But the reason you’re using it isn’t because you’re doing anything particularly complex in the browser involving state management. You might be using React because you really like the way it encourages modularity and componentisation.

A few years ago, making a single page app was pretty much the only way you could use React. For you as a developer to experience the benefits of modularity and componentisation, users had to pay the price in the payload (and fragility) of client-side JavaScript.

That’s no longer the case. Now that we can run JavaScript on the server, it’s possible to build in a modular, componentised way and still use progressive enhancement.

When I first heard about Gatsby and Next.js, I thought that was the selling point. Run React on the server; send pre-generated HTML down the wire to the user; then enhance with client-side JavaScript.

But that’s not exactly how it works. The pre-generated HTML isn’t functional. It still needs a bucketload of JavaScript before it can do anything. The actual process is: Run React on the server; send pre-generated HTML down the wire to the user; then send everything again but this time in JavaScript, bundled with the entire React library.

This leads to a situation for users that’s almost worse than before. Instead of staring at a blank screen, now they get HTML lickety-split—excellent! But if they try to interact with what’s on screen, they’ll find that nothing is working yet. Even worse, once the JavaScript is delivered, and is being parsed, they probably can’t even scroll—their device is too busy interpreting all that JavaScript. Your users suffer.

All your content is sent twice. First HTML is sent from the server. These days this is called “server-side rendering”, even though for decades the technical term was “serving a web page” (I’m pretty sure the rendering part happens in a browser). Then a JavaScript library—plus all your bespoke JavaScript—is loaded. Then all your content is loaded again as JSON.

So you’ve got a facade of an interface that you can’t actually interact with until a deluge of JavaScript has been loaded, parsed and executed. The term used for this stage of the process is “hydration”, which makes it sound more like a relaxing treatment from Gwyneth Paltrow than the horrible user experience it is.

The idea is that subsequent navigations—which will happen with Ajax—should be snappy. But the price has already been paid by then. The initial loading experience is jagged and frustrating.

Don’t get me wrong: server-side rendering is great …if what you’re sending from the server is functional. It’s the combination of hollow HTML sent from the server, followed by a huge browser-freezing dump of JavaScript that is an anti-pattern.

This use of server-side rendering followed by hydration feels like progressive enhancement, because it separates out the delivery of markup and scripts. But it’s missing the mindset.

The layered approach of progressive enhancement echoes the separation of concerns in the front-end stack: HTML, CSS, and JavaScript—each layer expressing more power. But while these concepts are related, they’re not interchangeable. Separating out the layers of your tech stack isn’t necessarily progressive enhancement. If you have some HTML that relies on JavaScript to be useful, then there’s no benefit in separating that HTML into a separate payload. The HTML that you initially send down the wire needs to be functional (at least at a basic level) before the JavaScript arrives.

I was a little disappointed to see Kyle Simpson—who I admire greatly—conflate separation of concerns with progressive enhancement in his talk from JSCamp 2019:

This content is here. I can see it, and it’s even styled. But I can’t click on the damn button because nothing has loaded in the JavaScript layer yet.

Anybody experienced that where you’ve been on a web page and it’s not really fully functional yet? I can see something but I can’t actually make any usage of it yet.

These are all things that cropped out of our thought process that said: “Let’s build the web in layers. Let’s deliver it progressively in layers. Because that’s morally right. We call this progressive enhancement. And let’s not worry too much about all these potential user experience flaws that may happen.”

That’s a spot-on description of server-side rendering and hydration, but it’s a gross mischaracterisation of progressive enhancement.

That button that requires JavaScript to work? That should’ve been generated with JavaScript. (For example, if you’re building a complex web app, consider sending a read-only view down the wire in HTML—then add any interactive interface elements with JavaScript in the browser.)
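As a rough sketch of that approach (the comment markup here is my own invention, not taken from any particular app): the server sends a read-only list as plain HTML, and the editing controls only ever exist once the script that powers them has actually arrived.

```javascript
// Sketch only: the server has already sent each comment as read-only HTML.
// The edit buttons are created here, in the browser, so there's never a
// dead button sitting on screen waiting for JavaScript to catch up.
document.querySelectorAll('.comment').forEach((comment) => {
  const button = document.createElement('button');
  button.type = 'button';
  button.textContent = 'Edit';
  button.addEventListener('click', () => {
    const text = comment.querySelector('p');
    const field = document.createElement('textarea');
    field.value = text.textContent;
    text.replaceWith(field); // swap the static text for an editable field
  });
  comment.append(button);
});
```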

If people are equating progressive enhancement with thoughtless server-side rendering and hydration, then I can see why they’d be hostile towards it.

Users would be better served with unprogressive non-enhancement:

You take some structured content, which follows the vertical flow of the document in a way that everyone understands.

Which people traverse easily by either dragging their scroll bar with their mouse, or operating the keyboard using the up and down keys, or using the spacebar.

Or if they’re using a touch device, simply flicking backwards and forwards in that easy way that we’ve all become used to. What you do is you take that, and you fucking well leave it alone.

Alas, that’s not what tools like Gatsby offer. The latest post on their blog is called Why Gatsby is better with JavaScript:

But what about sites or pages where there is no client-side interactivity? Even for those pages, Gatsby offers performance benefits by including JavaScript.

I beg to differ.

(By the way, that same blog post also initially tried to equate the performance hit of client-side JavaScript with the performance hit of images. Andy explains why that’s disingenuous.)

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.
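For a sense of what that could mean, here’s a very loose, framework-agnostic sketch (nothing to do with how React actually implements it, and the data-island attribute is made up): only the components that declare themselves interactive get any client-side JavaScript at all.

```javascript
// Loose sketch of the partial hydration idea, not React's implementation.
// The server-rendered HTML is already readable; only elements marked with a
// (made-up) data-island attribute fetch and run the script they need.
document.querySelectorAll('[data-island]').forEach(async (element) => {
  const module = await import(element.dataset.island); // e.g. "/js/carousel.js"
  module.hydrate(element); // assumes each island module exports a hydrate()
});
```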

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Monday, January 6th, 2020

Browser defaults

I’ve been thinking about some of the default behaviours that are built into web browsers.

First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.

Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?

Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.

But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.
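Just to make the trade-off concrete, here’s a rough sketch of the lookup order under discussion; this isn’t how any real browser is implemented.

```javascript
// Rough sketch of an HTTPS-first default for a bare address like "example.com".
// Not how any real browser resolves addresses; just the order of attempts.
async function resolveBareAddress(hostname) {
  try {
    // Optimistic default: try the secure origin first.
    const response = await fetch(`https://${hostname}/`, { method: 'HEAD' });
    return response.url;
  } catch (error) {
    // Only if the HTTPS attempt fails outright do we fall back to HTTP.
    // This is the lag that HTTP-only sites would pay under the new default.
    const response = await fetch(`http://${hostname}/`, { method: 'HEAD' });
    return response.url;
  }
}
```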

Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.

Here’s another browser default that Rob mentioned recently: the viewport meta tag:

I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.

This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.

So the default behaviour in Mobile Safari is to assume a page width of 1024 pixels, and then shrink that down to fit on the screen …unless the developer overrides that behaviour with a viewport meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.

But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width?

The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.

Wednesday, May 22nd, 2019

Complexity Explorables

A cornucopia of interactive visualisations. You control the horizontal. You control the vertical. Networks, flocking, emergence, diffusion …it’s all here.

Thursday, March 7th, 2019

Optimizing for outrage – A Whole Lotta Nothing

I have no doubt that showing just the top outrageous tweets leads to more engagement. If you’re constantly hitting people with outlandish news stories they’ll open the app more often and interact and post about what they think so the cycle continues.

Wednesday, March 6th, 2019

Unsolved Problems by Beth Dean

An Event Apart in Seattle continues. It’s the afternoon of day two and Beth Dean is here to give a talk called Unsolved Problems:

Technology products are being adapted faster than ever. We’ve spent a lot of time adopting new technology, but not as much time considering the social impact of doing so. This talk looks at large scale system design in the offline world, and takes lessons from them to our online work. You’ll learn how to expand your design approach from self-contained products, to considering the broader systems in which they exist.

Fun fact: An Event Apart was the first conference that Beth attended over ten years ago.

Who recognises this guy on screen? It’s Robert Stack, the creepy host of Unsolved Mysteries. It was kind of like the X-Files. The X-Files taught Beth to be a sceptic. Imagine Beth’s surprise when her job at Facebook led her to actual conspiracies. It’s been a hard year, what with Cambridge Analytica and all.

Beth’s team is focused on how people experience ads, while the whole rest of the company is focused on ads from the opposite end. She’s the Fox Mulder of the company.

Technology today has incredible reach. In recent years, we’ve seen 1:1 harm. That’s when a product negatively affects someone directly. In their book, Eric and Sara point out that Facebook is often the first company to solve these problems.

1:many harm is another use of technology. Designing in isolation isn’t new to tech. We’ve seen 1:many harm in urban planning. Brasilia is a beautiful city that nobody wants to live in. You need messy, mixed-use spaces, not a space designed for cars. Niemeyer planned for efficiency, not reality.

Eichler buildings were supposed to be egalitarian. But everything that makes these single-story homes great places to live also makes them great targets for criminals. Isolation by intentional design leads to a less safe place to live.

One of Frank Gehry’s buildings turned into a deathtrap when it was covered with snow. And in summer, the reflective material makes it impossible to sit on one side of it. His Facebook office building has some “interesting” restroom allocation, which was planned last.

Ohio had a deer overpopulation problem. So the solution they settled on was to introduce coyotes. Now there’s a coyote problem. When coyotes breed with stray dogs, they start to get aggressive and they hunt in packs. This is the cobra effect: when the solution to your problem makes the problem worse. The British government offered a bounty for cobras in India. So people bred snakes for the bounty. So they got rid of the bounty …and then all those snakes were released into the wild.

So-called “ride sharing” apps are about getting one person from point A to point B. They’re not about making getting around easier in general.

Google traffic directions don’t factor in the effect of Google giving everyone the same traffic directions.

AirBnB drives up rent …even though it started out as a way to help people who couldn’t make rent. Sounds like cobra farming.

Automating Inequality by Virginia Eubanks is an excellent book about being dropped by health insurance. An algorithm did it. By taking broken systems and automating them, we accelerate disenfranchisement.

Then there’s Facebook. Psychological warfare is not new. Radio and television have influenced elections long before the internet. Politicians changed their language to fit the medium of radio.

The internet has removed all friction that helps us behave cooperatively. Removing friction was once our goal, but it turns out that friction is sometimes useful. The internet has turned into an outrage machine.

Solving problems in the isolation of our own products ignores the broader context of society.

The Waze map reflects cities as they are, not the way someone wishes them to be.

—Noam Bardin, CEO of Waze

From bulletin boards to today’s web, the internet has always been toxic because human nature is toxic. Maybe that’s the bigger problem to solve.

We can look to other industries…

Ideo redesigned the hospital experience. People were introduced to their entire care staff on their first visit. Sloan Kettering took a similar approach. Artwork serves as wayfinding. Every room has its own bathroom. A Chicago hospital included gardens because they improve recovery.

These hospital examples all:

  • Designed for an intended outcome.
  • Met people where they were.
  • Strengthened existing support networks.

We’ve seen some bad examples from urban planning, but there are success stories too.

A person on a $30 bicycle is as important as someone in a $30,000 car, said Enrique Peñalosa.

Copenhagen once faced awful traffic congestion. Now people cycle everywhere. It’s the fastest way to get around. The city is designed for bicycles first. People rode more when it felt safer. It’s no coincidence that Copenhagen ranks as one of the most livable cities in the world.

Scandinavian prisons use a concept called restorative justice. The staff plays badminton with the inmates. They cook together. Treat people like dirt and they will act like dirt. Treat people like people and they will act like people. Recidivism rates in Norway are now very low.

  • Design for dignity and cooperation.
  • Solve for everyone in a system.
  • Policy should reflect intended outcomes.

The de Havilland Comet was made of metal. After a few blew apart at the seams, they switched away from riveted construction. Airlines today develop a culture of crew resource management that encourages people to speak up.

  • Plan for every point of failure.
  • Empower everyone on a team to solve problems.
  • Adapt.

What can we do?

  • Policies affect design. We need to work more closely with policy makers.
  • Question access. Are all opinions equal? Where are computers making decisions that should involve people?
  • Forget neutrality. Technology is not neutral. Neutrality allows us to abdicate responsibility.
  • Stay a little bit paranoid. Think about what the worst case scenario might be.

Make people better curators. How might we allow people to assess the veracity of information for themselves? What if we gave people better tools to affect their overall experience, not just small customisations?

We can use what we know about people to bring out their best behaviours. We can empower people to take action instead of just outrage.

What if we designed for the good of the community instead of the success of individuals? Like the Vauban in Freiburg! It was squatted, and the city gave control to the squatters to create an eco neighbourhood with affordable housing.

We need to think about what kind of worlds we want to create. What if we made the web less like a mall and more like a public park?

These are hard problems. But we solve hard technology problems every day. We could be the first generation of builders to solve technology’s hard problems.

Thursday, February 21st, 2019

Why Behavioral Scientists Need to Think Harder About the Future - Behavioral Scientist

Speculative fiction as a tool for change:

We need to think harder about the future and ask: What if our policies, institutions, and societies didn’t have to be organized as they are now? Good science fiction taps us into a rich seam of radical answers to this question.

Thursday, December 13th, 2018

Learning to unlearn – The Sea of Ideas

This is the real challenge for service workers:

For 30 years, we taught billions of humans that you need to be connected to the internet to consume the web via a browser! This means web users need to unlearn that web sites can’t be used offline.

Friday, November 23rd, 2018

When to use CSS vs. JavaScript | Go Make Things

Chris Ferdinandi has a good rule of thumb:

If something I want to do with JavaScript can be done with CSS instead, use CSS.

Makes sense, given their differing error-handling models:

A JavaScript error can bring all of the JS on a page to a screeching halt. Mistype a CSS property or miss a semicolon? The browser just skips the property and moves on. Use an unsupported feature? Same thing.

But he also cautions against going too far with CSS. Anything to do with state should be done with JavaScript:

If the item requires interaction from the user, use JavaScript (things like hovering, focusing, clicking, etc.).
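A small sketch of where that line falls (the class names are made up): hover and focus styling stays in the stylesheet, and JavaScript only gets involved when there’s genuine state to manage, like a menu that toggles open and closed.

```javascript
// Sketch: state lives in JavaScript, presentation stays in CSS.
// The .menu-toggle and .menu class names are made up for illustration;
// :hover and :focus styles for the button belong in the stylesheet.
const toggle = document.querySelector('.menu-toggle');
const menu = document.querySelector('.menu');

if (toggle && menu) {
  toggle.addEventListener('click', () => {
    const isOpen = menu.classList.toggle('is-open');
    // Keep assistive technology informed about the state change.
    toggle.setAttribute('aria-expanded', String(isOpen));
  });
}
```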

‘Sfunny; I remember when we got pseudo-classes, I wrote a somewhat tongue-in-cheek post called :hover Considered Harmful:

Presentation and behaviour… the twain have met, the waters are muddied, the issues are confused.

Monday, July 23rd, 2018

On Designing and Building Toggle Switches

Sara shows a few different approaches to building accessible toggle switches:

Always, always start thinking about the markup and accessibility when building components, regardless of how small or simple they seem.

Tuesday, July 10th, 2018

Components and concerns

We tend to like false dichotomies in the world of web design and web development. I’ve noticed one recently that keeps coming up in the realm of design systems and components.

It’s about separation of concerns. The web has a long history of separating structure, presentation, and behaviour through HTML, CSS, and JavaScript. It has served us very well. If you build in that order, ensuring that something works (to some extent) before adding the next layer, the result will be robust and resilient.

But in this age of components, many people are pointing out that it makes sense to separate things according to their function. Here’s Diana Mounter in her excellent article about design systems at GitHub:

Rather than separating concerns by languages (such as HTML, CSS, and JavaScript), we’re working towards a model of separating concerns at the component level.

This echoes a point made previously in a slidedeck by Cristiano Rastelli.

Separating interfaces according to the purpose of each component makes total sense …but that doesn’t mean we have to stop separating structure, presentation, and behaviour! Why not do both?

There’s nothing in the “traditional” separation of concerns on the web (HTML/CSS/JavaScript) that restricts it only to pages. In fact, I would say it works best when it’s applied on smaller scales.

In her article, Pattern Library First: An Approach For Managing CSS, Rachel advises starting every component with good markup:

Your starting point should always be well-structured markup.

This ensures that your content is accessible at a very basic level, but it also means you can take advantage of normal flow.

That’s basically an application of starting with the rule of least power.

In chapter 6 of Resilient Web Design, I outline the three-step process I use to build on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That chapter is filled with examples of applying those steps at the level of an entire site or product, but it doesn’t need to end there:

We can apply the three‐step process at the scale of individual components within a page. “What is the core functionality of this component? How can I make that functionality available using the simplest possible technology? Now how can I enhance it?”
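As a rough sketch of what that looks like for one small component (the expander class names are my own invention): the details are plain, visible HTML by default, and JavaScript only collapses them once it’s also able to reveal them again.

```javascript
// Sketch of the three steps applied to a single component.
// Core functionality: reading the extra details.
// Simplest technology: the details are plain HTML, visible by default.
// Enhancement: JavaScript hides them and adds a working toggle button.
document.querySelectorAll('.expander').forEach((section) => {
  const content = section.querySelector('.expander-content');
  const button = document.createElement('button');
  button.type = 'button';
  button.textContent = 'Show details';
  button.setAttribute('aria-expanded', 'false');

  content.hidden = true; // only hide once we can also reveal
  button.addEventListener('click', () => {
    content.hidden = !content.hidden;
    button.setAttribute('aria-expanded', String(!content.hidden));
    button.textContent = content.hidden ? 'Show details' : 'Hide details';
  });

  section.insertBefore(button, content);
});
```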

There’s another shared benefit to separating concerns when building pages and building components. In the case of pages, asking “what is the core functionality?” will help you come up with a good URL. With components, asking “what is the core functionality?” will help you come up with a good name …something that’s at the heart of a good design system. In her brilliant Design Systems book, Alla advocates asking “what is its purpose?” in order to get a good shared language for components.

My point is this:

  • Separating structure, presentation, and behaviour is a good idea.
  • Separating an interface into components is a good idea.

Those two good ideas are not in conflict. Presenting them as though they were binary choices is like saying “I used to eat Italian food, but now I drink Italian wine.” They work best when they’re done in combination.

Thursday, May 3rd, 2018

Why Silicon Valley can’t fix itself

Backlash backlash:

The nature of human nature is that it changes. It can not, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

The Wisdom and/or Madness of Crowds

The latest explainer/game from Nicky Case is an absolutely brilliant interactive piece on small world networks.

Wednesday, July 19th, 2017

The magical and the mundane

The iPhone—and by extension, the smartphone—is a decade old. Ian Bogost has written an interesting piece in The Atlantic charting our changing relationship with the technology.

First, it was like a toy dog:

A device that could be cared for, and conspicuously so.

Then, it was like a cigarette:

A nervous tic, facilitated by a handheld apparatus that releases relief when operated.

Later, it was like a rosary:

Its toy-dog quirks having been tamed, its compulsive nature having been accepted, the iPhone became the magic wand by which all worldly actions could be performed, all possible information acquired.

Finally, it simply becomes …a rectangle.

Abstract, as a shape. Flat, as a surface. But suggestive of so much. A table for community. A door for entry, or for exit. A window for looking out of, or a picture for looking into. A movie screen for distraction, or a cradle for comfort, or a bed for seduction.

Design dissolves in behaviour. This is something that Ben wrote about recently in his excellent Slapdashery series: “Everything’s amazing and nobody’s happy.”

Technology tweaks our desire for novelty; but as soon as we get it we’re usually bored. There are no technologies that I can think of that haven’t become mundane.

This is something I touched on in my talk last year at An Event Apart. There’s a thread throughout the talk about Arthur C. Clarke, and of course I quote his third law:

Any sufficiently advanced technology is indistinguishable from magic.

I propose an addendum to that:

Any sufficiently advanced technology is indistinguishable from magic at first.

The magical quickly becomes the mundane. That’s exactly the point that Louis CK is making in the piece that Ben references.

Seven years ago Frank wrote his wonderful essay There Is A Horse In The Apple Store:

I have a term called a “tiny pony.” It is a thing that is exceptional that no one, for whatever reason, notices. Or, conversely, it is an exceptional thing that everyone notices, but quickly grows acclimated to despite the brilliance of it all.

We are surrounded by magical tiny ponies. I mean, just think: right now you are reading some words at a URL on the World Wide Web. Even more magically, I just published some words at my own URL on the World Wide Web. That still blows my mind! I hope I never lose that feeling.

Thursday, November 3rd, 2016

Adoption

Tom wrote a post on Ev’s blog a while back called JavaScript Frameworks: Distribution Channels for Good Ideas (I’ve been hoping he’d publish it on his own site so I’d have a more permanent URL to point to, but so far, no joy). It’s well worth a read.

I don’t really have much of an opinion on his central point that browser makers should work more closely with framework makers. I’m not so sure I agree with the central premise that frameworks are going to be around for the long haul. I think good frameworks—like jQuery—should aim to make themselves redundant.

But anyway, along the way, Tom makes this observation:

Google has an institutional tendency to go it alone.

JavaScript not good enough? Let’s create Dart to replace it. HTML not good enough? Let’s create AMP to replace it. I’m just waiting for them to announce Google Style Sheets.

I don’t really mind these inventions. We’re not forced to adopt them, and generally, we don’t. Tom again:

They poured enormous time and money into Dart, even building an entire IDE, without much to show for it. Contrast Dart’s adoption with the adoption of TypeScript and Flow, which layer improvements on top of JavaScript instead of trying to replace it.

See, that’s a really, really good point. It’s so much easier to get people to adjust their behaviour than to change it completely.

Sass is a really good example of this. You can take any .css file, save it as a .scss file, and now you’re using Sass. Then you can start using features (or not) as needed. Very smart.

Incidentally, I’m very curious to know how many people use the scss syntax (which is the same as CSS) compared to how many people use the sass indented syntax (the one with significant whitespace). In his brilliant Sass for Web Designers book, I don’t think Dan even mentioned the indented syntax.

Or compare the adoption of Sass to the adoption of HAML. Now, admittedly, the disparity there might be because Sass adds new features, whereas HAML is a purely stylistic choice. But I think the more fundamental difference is that Sass—with its scss syntax—only requires you to slightly adjust your behaviour, whereas something like HAML requires you to go all in right from the start.

This is something that has been on my mind lately while I’ve been preparing my new talk on evaluating technology (the talk went down very well at An Event Apart San Francisco, by the way—that’s a relief). In the talk, I made a reference to one of Grace Hopper’s famous quotes:

Humans are allergic to change.

Now, Grace Hopper subsequently says:

I try to fight that.

I contrast that with the approach that Tim Berners-Lee and Robert Cailliau took with their World Wide Web project. The individual pieces were built on what people were already familiar with. URLs use slashes so they’d feel similar to UNIX file paths. And the first fledgling version of HTML took its vocabulary almost wholesale from a version of SGML already in use at CERN. In fact, you could pretty much take an existing CERN SGML file and open it as an HTML file in a web browser.

Oh, and that browser would ignore any tags it didn’t understand—behaviour that, in my opinion, would prove crucial to the growth and success of HTML. Because of its familiarity, its simplicity, and its forgiving error handling, HTML turned out to be more successful than Tim Berners-Lee expected, as he wrote in his book Weaving The Web:

I expected HTML to be the basic waft and weft of the Web but documents of all types: video, computer aided design, sound, animation and executable programs to be the colored threads that would contain much of the content. It would turn out that HTML would become amazingly popular for the content as well.

HTML and SGML; Sass and CSS; TypeScript and JavaScript. The new technology builds on top of the existing technology instead of wiping the slate clean and starting from scratch.

Humans are allergic to change. And that’s okay.

Monday, April 13th, 2015

Progressive enhancement with handlers and enhancers | hiddedevries.nl

I like this declarative approach to associating JavaScript behaviours with HTML elements.

Sunday, March 29th, 2015

Native Scrolling by Anselm Hannemann

This gets nothing but agreement from me:

For altering the default scroll speed I honestly couldn’t come up with a valid use-case.

My theory is that site owners are trying to apply app-like whizz-banginess to the act of just trying to read some damn text, and so they end up screwing with the one interaction still left to the reader—scrolling.

Saturday, September 13th, 2014

Why You Want a Code of Conduct & How We Made One | Incisive.nu

A great piece by Erin on the value of a code of conduct for conferences, filled with practical advice.

Once you decide to create a code and do it thoughtfully, you’ll find the internet overflows with resources to help you accomplish your goals, and good people who’ll offer guidance and advice. From my own experience, I can say that specificity and follow-through will make your code practical and give it teeth; humane language and a strong connection to your community will make it feel real and give it a heart.