Global Diversity CFP Day—Brighton edition

There are enough middle-aged straight white men like me speaking at conferences. That’s why the Global Diversity Call-For-Proposals Day is happening this Saturday, February 3rd.

The purpose is two-fold. One is to encourage a diverse range of people to submit talk proposals to conferences. The other is to help with the specifics—coming up with ideas, writing a good title and abstract, preparing the presentation, and all that.

Julie is organising the Brighton edition. Clearleft are providing the venue—68 Middle Street. I’ll be on hand to facilitate. Rosa and Dot will be doing the real work, mentoring the attendees.

If you’ve ever thought about submitting a talk proposal to a conference but just don’t know where to start, or if you’re just interested in the idea, please do come along on Saturday. It starts at 11am and will be all wrapped up by 3pm.

See you there!

GDPR and Google Analytics

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU.

It’s less about data protection and more like a user’s bill of rights. That’s good. Cennydd has written a techie’s rough guide to GDPR.

The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings.

Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break).

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third-party to set a cookie, it’s banner time.
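To make the distinction concrete, here’s a minimal sketch (plain Node, no frameworks, and a made-up session ID) of the kind of first-party, state-managing cookie that the legislation leaves alone:

    // A first-party cookie used purely to manage state (keeping someone
    // logged in, say) doesn't require disclosure. The session ID here is
    // a placeholder.
    const http = require('http');

    http.createServer((request, response) => {
      // Scoped to this site only; no cross-site tracking involved.
      response.setHeader('Set-Cookie', 'sessionid=abc123; Path=/; HttpOnly; SameSite=Lax');
      response.end('Logged in.');
    }).listen(8080);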

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.
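For reference, that snippet looks something like this. It’s the standard async analytics.js loader, give or take; the property ID is a placeholder:

    <!-- The classic analytics.js loader, more or less as Google documents it.
         UA-XXXXX-Y is a placeholder property ID. -->
    <script>
    window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
    ga.l = +new Date();
    ga('create', 'UA-XXXXX-Y', 'auto');
    ga('send', 'pageview');
    </script>
    <script async src="https://www.google-analytics.com/analytics.js"></script>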

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left google.com, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.
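A rough sketch of what that might look like: nothing analytics-related loads until the visitor explicitly agrees. The consent interface itself is elided here; assume a hypothetical button with an ID of consent:

    // Nothing is loaded, and no cookie is set, until the user opts in.
    // The #consent button is a stand-in for a real consent interface.
    document.getElementById('consent').addEventListener('click', function () {
      window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
      ga.l = +new Date();
      ga('create', 'UA-XXXXX-Y', 'auto');
      ga('send', 'pageview');
      // Only now does the third-party script (and its cookie) enter the page.
      var script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.google-analytics.com/analytics.js';
      document.head.appendChild(script);
    });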

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

It’s almost as if the ubiquitous surveillance of people’s every move on the web wasn’t a very good idea in the first place.

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides.

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about.

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).
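For that news site, step two might be nothing more ambitious than this (the headlines and URLs are placeholders):

    <!-- Step two: the core functionality (providing the news) in the
         simplest possible technology for the user. -->
    <h1>The News</h1>
    <ul>
      <li><a href="/stories/1">First headline</a></li>
      <li><a href="/stories/2">Second headline</a></li>
    </ul>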

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Heisenberg

I wrote about Google Analytics yesterday. As usual, I syndicated the post to Ev’s blog, and I got an interesting response over there. Kelly Burgett set me straight on some of the finer details of how goals work, and finished with this thought:

You mention “delivering a performant, accessible, responsive, scalable website isn’t enough” as if it should be, and I have to disagree. It’s not enough for a business to simply have a great website if you are unable to understand performance of channel marketing, track user demographics and behavior on-site, and optimize your site/brand based on that data. I’ve seen a lot of ugly sites who have done exceptionally well in terms of ROI, simply because they are getting the data they need from the site in order make better business decisions. If your site cannot do that (ie. through data collection, often third party scripts), then your beautifully-designed site can only take you so far.

That makes an excellent case for having analytics. But that’s not necessarily the same as having Google Analytics, or even JavaScript-driven analytics at all.

By far the most useful information you get from analytics is where people came from, where they went next, and what kind of device they’re using. None of that information requires JavaScript. It’s all available from your server logs.
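As a rough illustration, here’s how little code it takes to pull referrers and user agents out of an access log. This sketch assumes the widely-used “combined” log format; the file path will vary from server to server:

    // Extract "where people came from" and "what device they're using"
    // from a server log. No JavaScript ever reaches the end user.
    // The log path is an assumption; adjust for your setup.
    const fs = require('fs');

    const lines = fs.readFileSync('/var/log/nginx/access.log', 'utf8').split('\n');
    for (const line of lines) {
      // The combined log format ends each line with: "referrer" "user-agent"
      const match = line.match(/"([^"]*)" "([^"]*)"$/);
      if (match) {
        console.log('from:', match[1], '| using:', match[2]);
      }
    }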

I don’t want to come across all old-man-yell-at-cloud here, but I’m trying to remember at what point self-hosted software for analysing your log traffic became not good enough.

Here’s the thing: logging on the server has no effect on the user experience. It’s basically free, in terms of performance. Logging via JavaScript, by its very nature, has some cost. Even if it’s negligible, that’s one more request, and that’s one more bit of processing for the CPU.

All of the data that you can only get via JavaScript (in-page actions, heat maps, etc.) are, in my experience, better handled by dedicated software. To me, that kind of more precise data feels different to analytics in the sense of funnels, conversions, goals and all that stuff.

So in order to get more fine-grained data to analyse, our analytics software has now doubled down on a technology—JavaScript—that has an impact on the end user, where previously the act of observation could be done at a distance.

There are also blind spots that come with JavaScript-based tracking. According to Google Analytics, 0% of your customers don’t have JavaScript. That’s not necessarily true, but there’s literally no way for Google Analytics—which relies on JavaScript—to even do its job in the absence of JavaScript. That can lead to a dangerous situation where you might be led to think that 100% of your potential customers are getting by, when actually a proportion might be struggling, but you’ll never find out about it.

Related: according to Google Analytics, 0% of your customers are using ad-blockers that block requests to Google’s servers. Again, that’s not necessarily a true fact.

So I completely agree that analytics are a good thing to have for your business. But it does not follow that Google Analytics is a good thing for your business. Other options are available.

I feel like the assumption that “analytics = Google Analytics” is like the slippery slope in reverse. If we’re all agreed that analytics are important, then aren’t we also all agreed that JavaScript-based tracking is important?

In a word, no.

This reminds me of the arguments made in favour of intrusive, bloated advertising scripts. All of the arguments focus on the need for advertising—to stay in business, to pay the writers—which are all great reasons for advertising, but have nothing to do with JavaScript, which is at the root of the problem. Everyone I know who uses an ad-blocker—including me—doesn’t use it to stop seeing adverts, but to stop the performance of the page being degraded (and to avoid being tracked across domains).

So let’s not confuse the means with the ends. If you need to have advertising, that doesn’t mean you need to have horribly bloated JavaScript-based advertising. If you need analytics, that doesn’t mean you need an analytics script on your front end.

Analysing analytics

Hell is other people’s JavaScript.

There’s nothing quite so crushing as building a beautifully performant website only to have it infested with a plague of third-party scripts that add to the weight of each page and reduce the responsiveness, making a mockery of your well-considered performance budget.

Trent has been writing about this:

My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

He’s started the process by itemising third-party scripts. Frustratingly though, there’s rarely one single culprit that you can point to—it’s the cumulative effect of “just one more beacon” and “just one more analytics script” and “just one more A/B testing tool” that adds up to a crappy experience that warms your user’s hands by ensuring your site is constantly draining their battery.

Actually, having just said that there’s rarely one single culprit, Adobe Tag Manager is often at the root of third-party problems. That and adverts. It’s like opening the door of your beautifully curated dream home, and inviting a pack of diarrhetic elephants in: “Please, crap wherever you like.”

But even the more well-behaved third-party scripts can get out of hand. Google Analytics is so ubiquitous that it’s hardly even considered in the list of potentially harmful third-party scripts. On the whole, it’s a fairly well-behaved citizen of your site’s population of third-party scripts (y’know, leaving aside the whole surveillance capitalism business model that allows you to use such a useful tool for free in exchange for Google tracking your site’s visitors across the web and selling the insights from that data to advertisers).

The initial analytics script that you—asynchronously—load into your page isn’t very big. But depending on how you’ve configured your Google Analytics account, that might just be the start of a longer chain of downloads and event handlers.

Ed recently gave a lunchtime presentation at Clearleft on using Google Analytics—he professes modesty but he really knows his stuff. He was making sure that everyone knew how to set up goals’n’stuff.

As I understand it, there are two main categories of goals: events and destinations (there are also durations and pages, but they feel similar to destinations). You use events to answer questions like “Did the user click on this button?” or “Did the user click on that search field?”. You use destinations to answer questions like “Did the user arrive at this page?” or “Did the user come from that page?”

You can add as many goals to your site’s analytics as you want. That’s an intoxicating offer. The problem is that there is potentially a cost for each goal you create. It’s an invisible cost. It’s paid by the user in the currency of JavaScript sent down the wire (I wish that the Google Analytics admin interface were more like the old interface for Google Fonts, where each extra file you added literally pushed a needle higher on a dial).

It strikes me that the event-based goals would necessarily require more JavaScript in order to listen out for those clicks and fire off that information. The destination-based goals should be able to get all the information needed from regular page navigations.
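For example, an event-based goal implies shipping something like this to every user. It uses the classic analytics.js event call; the element and the category and action names are made up:

    // An event-based goal needs extra JavaScript on the page, listening
    // for the interaction and phoning home. A destination-based goal
    // needs none of this: the page navigation itself is the signal.
    document.querySelector('#signup-button').addEventListener('click', function () {
      ga('send', 'event', 'engagement', 'signup-click'); // hypothetical names
    });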

So I have a hypothesis. I think that destination-based goals are less harmful to performance than event-based goals. I might well be wrong about that, and if I am, please let me know.

With that hypothesis in mind, and until I learn otherwise, I’ve got two rules of thumb to offer when it comes to using Google Analytics:

  1. Try to keep the number of goals to a minimum.
  2. If you must create a goal, favour destinations over events.

Words I wrote in 2017

I wrote 78 blog posts in 2017. That works out at an average of six and a half blog posts per month. I’ll take it.

Here are some pieces of writing from 2017 that I’m relatively happy with:

Going Rogue. A look at the ethical questions raised by Rogue One.

In AMP we trust. My unease with Google’s AMP format was growing by the day.

A minority report on artificial intelligence. Revisiting two of Spielberg’s films after a decade and a half.

Progressing the web. I really don’t want progressive web apps to just try to imitate native apps. They can be so much more.

CSS. Simple, yes, but not easy.

Intolerable. A screed. I still get very, very angry when I think about how that manifestbro duped people.

Акула. Recounting a story told by a taxi driver.

Hooked and booked. Does A/B testing lead to dark patterns?

Ubiquity and consistency. Different approaches to building on the web.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.

Food I ate in 2017

I did a fair bit of travelling in 2017, which I always enjoy. I particularly enjoy it when Jessica comes with me and we get to sample the cuisine of other countries.

Portugal will always be a culinary hotspot for me, particularly Porto (“tripas à moda do Porto” is one of the best things I’ve ever tasted). When I was teaching at the New Digital School in Porto back in February, I took full advantage of the culinary landscape. A seafood rice (and goose barnacles) at O Gaveto in Matosinhos was a particular highlight.

Goose barnacles. Seafood rice.

The most unexpected thing I ate in Porto was when I wandered off for lunch on my own one day. I ended up in a little place where, when I walked in, it was kind of like that bit in a Western when the music stops and everyone turns to look. This was clearly a place for locals. The owner didn’t speak any English. I didn’t speak any Portuguese. But we figured it out. She mimed something sandwich-like and said a word I wasn’t familiar with: bifana. Okay, I said. Then she mimed the universal action for drinking, so I said “agua.” She looked at me with a very confused expression. “Agua!? Não. Cerveja!” Who am I to argue? Anyway, she produced this thing which was basically some wet meat in a bun. It didn’t look very appetising. But this was the kind of situation where I couldn’t back out of eating it. So I took a bite and …it was delicious! Like, really, really delicious.

This sandwich was delicious and I have no idea what was in it. I speak no Portuguese and the café owner spoke no English.

Later in February, we went to Pittsburgh to visit Cindy and Matt. We were there for my birthday, so Cindy prepared the most amazing meal. She reproduced a dish from the French Laundry—sous-vide lobster on orzo. It was divine!

Lobster tail on orzo with a Parmesan crisp.

Later in the year, we went to Singapore for the first time. The culture of hawker centres makes it the ideal place for trying lots of different foods. There were some real revelations in there.

Chicken rice. Fishball noodles. Laksa. Grilled pork.

We visited lots of other great places like Reykjavík, Lisbon, Barcelona, and Nuremberg. But as well as sampling the cuisine of distant locations, I had some very fine food right here in Brighton, home to Trollburger, purveyors of the best burger you’ll ever eat.

Checked in at Trollburger. The Hellfire! 🌶 Troll’s Fiery Breath and bolognese fries from @trollburger. Burning crusader. Having a delicious Nightfire burger from @trollburgerBN1.

I also have a thing for hot wings, so it’s very fortunate that The Joker, home to the best wings in Brighton, is just around the corner from the dance studio where Jessica goes for ballet. Regular wing nights became a thing in 2017.

Checked in at The Joker. Lunch break at FFConf. — with Graham. Checked in at The Joker — with Jessica. Checked in at The Joker. Wing night! — with Jessica.

I started a little routine in 2017 where I’d take a break from work in the middle of the afternoon, wander down to the seafront, and buy a single oyster. It only took a few minutes out of the day but it was a great little dose of perspective each time.

Today’s oyster. Today’s oyster on the beach.

But when I think of my favourite meals of 2017, most of them were home-cooked.

Sirloin steak with thyme. 🥩 Sous-vide pork tenderloin stuffed with capers and herbs. Roasting pork, apples, and onions. 🐷🍏 Fabada Asturiana. Rib of beef with potatoes and broad bean, fennel and burrata. Grillin’ chicken. A bountiful table. Grilling lamb. Summertime on a plate. Rib of beef, carrots, carrot-top chimichurri, and kale. The roast chicken angel watches over its flock of side dishes. Ribeye.

Audio I listened to in 2017

I huffduffed 290 pieces of audio in 2017. I’ve still got a bit of a backlog of items I haven’t listened to yet, but I thought I’d share some of my favourite items from the past year. Here are twelve pieces of audio, one for each month of 2017…

Donald Hoffman’s TED talk, Do we see reality as it really is? TED talks are supposed to blow your mind, right? (22:15)

How to Become Batman on Invisibilia. Alix Spiegel and Lulu Miller challenge you to think of blindness as a social construct. Hear ‘em out. (58:02)

Where to find what’s disappeared online, and a whole lot more: the Internet Archive on Public Radio International. I just love hearing Brewster Kahle’s enthusiasm and excitement. (42:43)

Every Tuesday At Nine on Irish Music Stories. I’ve been really enjoying Shannon Heaton’s podcast this year. This one digs into that certain something that happens at an Irish music session. (40:50)

Adam Buxton talks to Brian Eno (part two is here). A fun and interesting chat about Brian Eno’s life and work. (53:10 and 46:35)

Nick Cave and Warren Ellis on Kreative Kontrol. This was far more revealing than I expected: genuine and unpretentious. (57:07)

Paul Lloyd at Patterns Day. All the talks at Patterns Day were brilliant. Paul’s really stuck with me. (28:21)

James Gleick on Time Travel at The Long Now. There were so many great talks from The Long Now’s seminars on long-term thinking. Nicky Case and Jennifer Pahlka were standouts too. (1:20:31)

Long Distance on Reply All. It all starts with a simple phone call. (47:27)

The King of Tears on Revisionist History. Malcolm Gladwell’s style suits podcasting very well. I liked this episode about country songwriter Bobby Braddock. Related: Jon’s Troika episode on tearjerkers. (42:14)

Feet on the Ground, Eyes on the Stars: The True Story of a Real Rocket Man with G.A. “Jim” Ogle. This was easily my favourite podcast episode of 2017. It’s on the User Defenders podcast but it’s not about UX. Instead, host Jason Ogle interviews his father, a rocket scientist who worked on everything from Apollo to every space shuttle mission. His story is fascinating. (2:38:21)

R.E.M. on Song Exploder. Breaking down the song Try Not To Breathe from Automatic For The People. (16:15)

I’ve gone back and added the tag “2017roundup” to each of these items, so if you’d like to subscribe to a podcast of just these episodes, you can follow that tag’s feed on Huffduffer.

Books I read in 2017

Here are the books I read in 2017. It’s not as many as I hoped.

I set myself a constraint this year so that I’d have to alternate between reading fiction and non-fiction: no reading two fiction books back-to-back, and no reading two non-fiction books back-to-back. I quite like the balanced book diet that resulted. I think I might keep it going.

Anyway, in order of consumption, here are those books…

Leviathan Wakes by James S.A. Corey

★★★☆☆

I had already seen—and quite enjoyed—the first series of the television adaptation of The Expanse so I figured I’d dive into the books that everyone kept telling me about. The book was fun …but no more than that. I don’t think I’m invested enough to read any of the further books. In some ways, I think this makes for better TV than reading (despite the TV show’s annoying “slow motion in zero G” trope that somewhat lessens the hard sci-fi credentials).

Black Box Thinking by Matthew Syed

★★★★☆

This was recommended by James Box, and on the whole, I really liked it. There’s a lot of anecdata though. Still, the fundamental premise is a good one, comparing the attitudes towards risk in two different industries: aviation and healthcare. A little bit more trimming down would’ve helped the book—it dragged on just a bit too long.

The Separation by Christopher Priest

★★★★★

I need to read at least one Christopher Priest book a year. They’re in a league of their own, somehow outside the normal rules of criticism. This one is a true stand-out. As ever, it messes with your head and gets weirder as it goes on. If you haven’t read any Christopher Priest, I reckon this would be a great one to start with.

Deep Sea and Foreign Going by Rose George

★★★★☆

Recommended by both Jessica and Danielle, this is a well-crafted look into life on board a cargo ship, as well as an examination of ocean-going logistics. If you liked the Containers podcast, you’ll like this. I found it a little bit episodic—more like a collection of magazine articles sometimes—but still enjoyable.

Bloodchild by Octavia E. Butler

A false start. This is a short story, not a novel—I didn’t know that when I downloaded it to my Kindle. It’s an excellent short story though. Still, I felt it didn’t count in my zigzagging between fiction and non-fiction so I followed it with…

Star Maker by Olaf Stapledon

★★★☆☆

Science fiction from the 1930s. The breadth of imagination is quite staggering, even if the writing is sometimes a bit of a slog. Still, it seems remarkably ahead of its time in many ways.

The Sense Of Style by Steven Pinker

★★★★☆

I spent a portion of 2017 writing a book so I was eager to read Steven Pinker’s take on a style guide, having thoroughly enjoyed The Language Instinct and The Blank Slate. This book starts with a bang—a critique of some examples of great writing. Then there’s some good practical advice, and then there’s a bit of a laundry list of non-rules. Typical of Pinker, the points about unclear writing are illustrated with humorous real-world examples. Overall, a good guide but perhaps a little longer than it needs to be.

Aurora by Kim Stanley Robinson

★★★★★

I loved everything about this book.

Writing On The Wall by Tom Standage

★★☆☆☆

I’ve read all of Tom Standage’s books but none of them have ever matched the brilliance of The Victorian Internet. This one was frustratingly shallow. Every now and then there were glimpses of a better book. There’s a chapter on radio that gets genuinely exciting and intriguing. If Tom Standage wrote a whole book on that, I’d read it in a heartbeat. But in this collection of social media through the ages, it just reminded me of how much better he can be.

Grass by Sheri S. Tepper

★★★☆☆

Recommended by Jessica and Denise, this was my first Sheri S. Tepper book. It took me a while to get into it, but I enjoyed it. There’s nothing groundbreaking here, but it’s a solid planetary romance.

Bird By Bird by Anne Lamott

★★★☆☆

This has been recommended to me by more people than I can recall. I was very glad to finally get to read it (Amber and I did a book swap: I gave her The Sense Of Style and she gave me this). As a guide to writing, it’s got some solid advice, humorously delivered, but there were also moments where I found the style grating. Still worth reading though.

The Gradual by Christopher Priest

★★★★☆

I just can’t get enough of Christopher Priest. I saw that his latest book was in the local library and I snapped it up. This one is set entirely in the Dream Archipelago. Yet again, the weirdness increases as the book progresses. It’s not up there with The Islanders or The Adjacent, but it’s as unsettling as any of his best books.

A Brief History of Everyone Who Ever Lived by Adam Rutherford

★★★★★

I think this was the best non-fiction book I read this year. It’s divided into two halves. The first half, which I preferred, dealt with the sweep of human history as told through our genes. The second half deals with modern-day stories in the press that begin “Scientists say…” It was mostly Adam Rutherford gritting his teeth in frustration as he points out that “it’s a bit more complicated than that.” Thoroughly enjoyable, well written, and educational.

A Closed And Common Orbit by Becky Chambers

★★★☆☆

I had read the first book in this series, A Long Way to a Small, Angry Planet, and thought it was so-so. It read strangely like fan fiction, and didn’t have much of a through-line. But multiple people said that this second outing was a big improvement. They weren’t wrong. This is definitely a better book. The story is relatively straightforward, and as with all good sci-fi, it’s not really telling us about a future society—it’s telling us about the world we live in. The book isn’t remarkable but it’s solid.

The Dream Machine: J.C.R. Licklider And The Revolution That Made Computing Possible by M. Mitchell Waldrop

★★★★☆

This is the kind of book that could have been written just for me. The ARPANET, Turing, Norbert Wiener and Cybernetics, Xerox PARC, the internet, the web …it’s all in here. I enjoyed it, but it was a long slog. I’m not sure if using J.C.R. Licklider as the unifying factor in all these threads really worked. And maybe it was just the length of the book getting to me, but by the time I was two-thirds of the way through, I was getting weary of the dudes. Yes, there were a lot of remarkable men involved in these stories, but my heart sank with every chapter that went by without a single woman being mentioned. I found it ironic that so many intelligent people had the vision to imagine a world of human-computer symbiosis, but lacked the vision to challenge the status quo of the societal structures they were in.

Broken Monsters by Lauren Beukes

★★★☆☆

Lauren defies genre-pigeonholing once again. This is sort of a horror, sort of a detective story, and sort of a social commentary. It works well, although I was nervous about the Detroit setting sometimes veering into ruin porn. I don’t think it’s up there with Zoo City or The Shining Girls, but it’s certainly a page-turner.

Accessibility For Everyone by Laura Kalbag

★★★★☆

Because the previous non-fiction book I read was so long, I really fancied something short and to-the-point. A Book Apart to the rescue. You can be guaranteed that any book from that publisher will be worth reading, and this is no exception.

Ninefox Gambit by Yoon Ha Lee

★★★★☆

There was a lot of buzz around this book, and it came highly recommended by Danielle. It’s thoroughly dizzying in its world-building; you’re plunged right into the thick of things with no word of explanation or exposition. I like that. There were times when I thought that maybe I had missed some important information, because I was having such a hard time following what was going on, but then I’d realise that the sense of disorientation was entirely deliberate. Good stuff …although for some reason I ended up liking it more than loving it.

High Performance Browser Networking by Ilya Grigorik

★★★☆☆

A recommendation from Harry. The whole book is available online for free. That’s how I’ve been reading it—in a browser tab. In fact, I have to confess that I haven’t finished it. I’m dipping in and out. There’s a lot of very detailed information on how networks and browsers work. I’m not sure how much of it is going into my brain, but I very much appreciate having this resource to hand.

A Fire Upon The Deep by Vernor Vinge

I picked up a trade paperback copy of this sci-fi book at The Tattered Cover bookstore in Denver when I was there for An Event Apart earlier this month. I had heard it mentioned often and it sounds like my kind of yarn. I’m about halfway through it now and so far, so good.

There you have it.

It’s tough to pick a clear best. In non-fiction, I reckon Adam Rutherford’s A Brief History of Everyone Who Ever Lived just about pips Steven Pinker’s The Sense of Style. In fiction, Christopher Priest’s The Separation comes close, but Kim Stanley Robinson’s Aurora remains my favourite.

Like I said, not as many books as I would like. And of those twenty works, only seven were written by women—I need to do better in 2018.

The Last Jedi

If you haven’t seen The Last Jedi (yet), please stop reading. Spoilers ahoy.

I’ve been listening to many, many podcast episodes about the latest Star Wars film. They’re all here on Huffduffer. You can subscribe to a feed of just those episodes if you want.

I am well aware that the last thing anybody wants or needs is one more hot take on this film, but what the heck? I figured I’d jot down my somewhat simplistic thoughts.

I loved it.

But I wasn’t sure at first. I’ve talked to other people who felt similarly on first viewing—they weren’t sure if they liked it or not. I know some people who, on reflection, decided they definitely didn’t like it. I completely understand that.

A second viewing helped to cement my positive feelings towards this film. This is starting to become a trend: I didn’t think much of Rogue One on first viewing, but a second watch reversed my opinion completely. Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time—an event that I once thought would never occur again.

My first viewing of The Last Jedi wasn’t helped by having the worst seats in the house. My original plan was to see it with Jessica at a minute past midnight in The Duke Of York’s in Brighton. I bought front-row tickets as soon as they were available. But then it turned out that we were going to be in Seattle at that time instead. We quickly grabbed whatever tickets were left. Those seats were right at the front and far edge of the cinema, so the screen was more trapezoid than rectangular. The lights went down, the fanfare blared, and the opening crawl began its march up …and to the left. My brain tried to compensate for the perspective effects but it was hard. Is Snoke’s face supposed to look like that? Does that person really have such a tiny head?

But while the spectacle was somewhat marred, the story unfolded in all its surprising delight. I thoroughly enjoyed the feeling of having the narrative rug repeatedly pulled out from under me.

I loved the unexpected end of Snoke in his vampiric boudoir. Let’s face it, he was the least interesting part of The Force Awakens—a two-dimensional evil mastermind. To despatch him in the middle of the middle chapter was the biggest signal that The Last Jedi was not simply going to retread the beats of the original trilogy.

I loved the reveal of Rey’s parentage. This was what I had been hoping for—that Rey came from nowhere in particular. After The Force Awakens, I wrote:

Personally, I’d like it if her parentage were unremarkable. Maybe it’s the socialist in me, but I’ve never liked the idea that the Force is based on eugenics; a genetic form of inherited wealth for the lucky 1%. I prefer to think of the Force as something that could potentially be unlocked by anyone who tries hard enough.

But I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned into everyone’s hands! Anil Dash describes this wonderfully in his post Every Last Jedi:

Though it’s well-grounded in the first definitions of The Force that we were introduced to in the original trilogy, The Last Jedi presents a radically inclusive new view of the Force that is bigger and broader than the Jedi religion which has thus-far colored our view of the entire Star Wars universe.

I was less keen on the sudden Force usage by Leia. I think it was the execution more than the idea that bothered me. Still, I realise that the problem lies just as much with me. See, lots of the criticism of this film comes from people (justifiably) saying “That’s not how The Force works!” in relation to Rey, Kylo Ren, or Luke Skywalker. I don’t share that reaction and I want to say, “Hey, who are we to decide how The Force works?”, but then during the Leia near-death scene, I found myself more or less thinking “That’s not how The Force works!”

This would be a good time to remind ourselves that, in the Star Wars universe, you can substitute the words “The Force” for “The Plot”—an invisible agency guiding actions and changing the course of events.

The first time I saw The Last Jedi, I began to really worry during the film’s climactic showdown. I wasn’t so much worried for the fate of the characters in peril; I was worried for the fate of the overarching narrative. When Luke showed up, my heart sank a little. A deus ex machina …and how did he get here exactly? And then when he emerges unscathed from a barrage of walker cannon fire, I thought “Aw no, they’ve changed the Jedi to be like superheroes …but that’s not the way The Force/Plot works!”

And then I had the rug pulled out from under me again. Yes! What a joyous bit of trickery! My faith in The Force/Plot was restored.

I know a lot of people didn’t like the Canto Bight diversion. Jessica described it as being quite prequel-y, and I can see that. And while I agree that any shot involving our heroes riding across the screen (on a Fathier, on a scout walker) just didn’t work, I liked the world-expanding scope of the caper subplot.

Still, I preferred the Galactica-like war of attrition as the Resistance is steadily reduced in size as they try to escape the relentless pursuit of the First Order. It felt like proper space opera. In some ways, it reminded me of Alastair Reynolds, but without the realism of the laws of physics (there’s nothing quite as egregious here as J.J. Abrams’ cosy galaxy where the destruction of a system can be seen in real time from the surface of another planet, but The Last Jedi showed again that Star Wars remains firmly in the space fantasy genre rather than hard sci-fi).

Oh, and of course I loved the porgs. But then, I never had a problem with ewoks, so treat my appraisal with a pinch of salt.

I loved seeing the west coast of Ireland get so much screen time. Beehive huts in a Star Wars film! Mind you, that made it harder for me to get immersed in the story. I kept thinking, “Now, is that Skellig Michael? Or is it on the Dingle peninsula? Or Donegal? Or west Clare?”

For all its global success, Star Wars has always had a very personal relationship with everyone it touches. The films themselves are only part of the reason why people respond to them. The other part is what people bring with them; where they are in life at the moment they’re introduced to this world. And frankly, the films are only part of this symbiosis. As much as people like to sneer at the toys and merchandising as a cheap consumerist ploy, they played a significant part in unlocking my imagination. Growing up in a small town on the coast of Ireland, the Star Wars universe—the world, the characters—was a playground for me to make up stories …just as it was for any young child anywhere in the world.

One of my favourite shots in The Last Jedi looks like it could’ve come from the mind of that young child: an X-wing submerged in the waters of the rocky coast of Ireland. It was as though Rian Johnson had a direct line to my childhood self.

And yet, I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

That said, I completely understand why people might take against this film. Like I said, Rian Johnson makes no concessions. That’s in stark contrast to The Force Awakens. I wrote at the time:

Han Solo picked up the audience like it was a child that had fallen asleep in the car, and he gently tucked us into our familiar childhood room where we can continue to dream. And then, with a tender brush of his hand across the cheek, he left us.

The Last Jedi, on the other hand, thrusts us into this new narrative in the same way you might teach someone to swim by throwing them into the ocean from the peak of Skellig Michael. The polarised reactions to the film are from people sinking or swimming.

I choose to swim. To go with it. To let go. To let the past die.

And yet, one of my favourite takeaways from The Last Jedi is how it offers a healthy approach to dealing with events from the past. Y’see, there was always something that bothered me in the original trilogy. It was one of Yoda’s gnomic pronouncements in The Empire Strikes Back:

Try not. Do. Or do not. There is no try.

That always struck me as a very bro-ish “crushing it” approach to life. That’s why I was delighted that Rian Johnson had Yoda himself refute that attitude completely:

The greatest teacher, failure is.

That’s exactly what Luke needed to hear. It was also what I—many decades removed from my childhood—needed to hear.

Ubiquity and consistency

I keep thinking about this post from Baldur Bjarnason, Over-engineering is under-engineering. It took me a while to get my head around what he was saying, but now that (I think) I understand it, I find it to be very astute.

Let’s take a single interface element, say, a dropdown menu. This is the example Laura uses in her article for 24 Ways called Accessibility Through Semantic HTML. You’ve got two choices, broadly speaking:

  1. Use the HTML select element.
  2. Create your own dropdown widget using JavaScript (working with divs and spans).

The advantage of the first choice is that it’s lightweight, it works everywhere, and the browser does all the hard work for you.

But…

You don’t get complete control. Because the browser is doing the heavy lifting, you can’t craft the details of the dropdown to look identical on different browser/OS combinations.

That’s where the second option comes in. By scripting your own dropdown, you get complete control over the appearance and behaviour of the widget. The disadvantage is that, because you’re now doing the work instead of the browser, all of that work falls to you—that means lots of JavaScript, thinking about edge cases, and making the whole thing accessible.

This is the point that Baldur makes: no matter how much you over-engineer your own custom solution, there’ll always be something that falls between the cracks. So, ironically, the over-engineered solution—when compared to the simple under-engineered native browser solution—ends up being under-engineered.

Is it worth it? Rian Rietveld asks:

It is impossible to style select option. But is that really necessary? Is it worth abandoning the native browser behavior for a complete rewrite in JavaScript of the functionality?

The answer, as ever, is it depends. It depends on your priorities. If your priority is having consistent control over the details, then foregoing native browser functionality in favour of scripting everything yourself aligns with your goals.

But I’m reminded of something that Eric often says:

The web does not value consistency. The web values ubiquity.

Ubiquity; universality; accessibility—however you want to label it, it’s what lies at the heart of the World Wide Web. It’s the idea that anyone should be able to access a resource, regardless of technical or personal constraints. It’s an admirable goal, and what’s even more admirable is that the web succeeds in this goal! But sometimes something’s gotta give, and that something is control. Rian again:

The days that a website must be pixel perfect and must look the same in every browser are over. There are so many devices these days, that an identical design for all is not doable. Or we must take a huge effort for custom form elements design.

So far I’ve only been looking at the micro scale of a single interface element, but this tension between ubiquity and consistency plays out at larger scales too. Take page navigations. That’s literally what browsers do. Click on a link, and the browser fetches that URL, displaying progress as it goes. The alternative, as exemplified by single page apps, is to do all of that for yourself using JavaScript: figure out the routing, show some kind of progress, load some JSON, parse it, convert it into HTML, and update the DOM.

Personally, I tend to go for the first option. Partly that’s because I like to apply the rule of least power, but mostly it’s because I’m very lazy (I also have qualms about sending a whole lotta JavaScript down the wire just so the end user gets to do something that their browser would do for them anyway). But I get it. I understand why others might wish for greater control, even if it comes with a price tag of fragility.

I think Jake’s navigation transitions proposal is fascinating. What if there were a browser-native way to get more control over how page navigations happen? I reckon that would cover the justification of 90% of single page apps.

That’s a great way of examining these kinds of decisions and questioning how this tension could be resolved. If people are frustrated by the lack of control in browser-native navigations, let’s figure out a way to give them more control. If people are frustrated by the lack of styling for select elements, maybe we should figure out a way of giving them more control over styling.

Hang on though. I feel like I’ve painted a divisive picture, like you have to make a choice between ubiquity or consistency. But the rather wonderful truth is that, on the web, you can have your cake and eat it. That’s what I was getting at with the three-step approach I describe in Resilient Web Design:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Like, say…

  1. The user needs to select an item from a list of options.
  2. Use a select element.
  3. Use JavaScript to replace that native element with a widget of your own devising.

Or…

  1. The user needs to navigate to another page.
  2. Use an a element with an href attribute.
  3. Use JavaScript to intercept that click, add a nice transition, and pull in the content using Ajax.
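Here’s a minimal sketch of that third step for the link example. If anything fails, or if the JavaScript never arrives, step two still works. The .content selector is an assumed wrapper around the main content of each page:

    // Step three: enhance page navigations, falling back to the default
    // behaviour if anything goes wrong. A real version would also check
    // for same-origin links and modifier keys.
    document.addEventListener('click', function (event) {
      var link = event.target.closest('a');
      if (!link) return;
      event.preventDefault();
      fetch(link.href)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          var doc = new DOMParser().parseFromString(html, 'text/html');
          // ".content" is a hypothetical wrapper element.
          document.querySelector('.content').innerHTML =
            doc.querySelector('.content').innerHTML;
          history.pushState(null, '', link.href);
          // A nice transition could be triggered here.
        })
        .catch(function () { window.location.href = link.href; });
    });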

The pushback I get from people in the control/consistency camp is that this sounds like more work. It kinda is. But honestly, in my experience, it’s not that much more work. Also, and I realise I’m contradicting the part where I said I’m lazy, but that’s why it’s called work. This is our job. It’s not about what we prefer; it’s about serving the needs of the people who use what we build.

Anyway, if I were to rephrase my three-step process in terms of under-engineering and over-engineering, it might look something like this:

  1. Start with user needs.
  2. Build an under-engineered solution—one that might not offer you much control, but that works for everyone.
  3. Layer on a more over-engineered solution—one that might not work for everyone, but that offers you more control.

Ubiquity, then consistency.

Origin story

In an excellent piece called The First Web Apps: 5 Apps That Shaped the Internet as We Know It, Matthew Guay wrote:

The world wide web wasn’t supposed to be this fun. Berners-Lee imagined the internet as a place to collaborate around text, somewhere to share research data and thesis papers.

In his somewhat confused talk at FFConf this year, James Kyle said:

The web was designed to share documents.

Douglas Crockford said:

The web was not designed to do any of the things it is doing. It was intended to be a simple—even primitive—document retrieval system.

Some rando on Hacker News declared:

Essentially every single aspect of the web is terrible. It was designed as a static document presentation system with hyperlinks.

It appears to be a universally accepted truth. The web was designed for sharing documents, and was never meant for the kind of applications we can build these days.

I don’t think that’s quite right. I think it’s fairer to say that the first use case for the web was document retrieval. And yes, that initial use case certainly influenced the first iteration of HTML. But right from the start, the vision for the web wasn’t constrained by what it was being asked to do at the time. (I mean, if you need an example of vision, Tim Berners-Lee called it the World Wide Web when it was just on one computer!)

The original people working on the web—Tim Berners-Lee, Robert Cailliau, Jean-François Groff, etc.—didn’t try to define the edges of what the web would be capable of. Quite the opposite. All of them really wanted a more interactive read-write web where documents could not only be read, but also edited and updated.

As for the idea of having a programming language in browsers (as well as a markup language), Tim Berners-Lee was all for it …as long as it could be truly ubiquitous.

To say that the web was made for sharing documents is like saying that the internet was made for email. It’s true in the sense that it was the most popular use case, but that never defined the limits of the system.

The secret sauce of the internet lies in its flexibility—it’s a deliberately dumb network that doesn’t care about the specifics of what runs on it. This lesson was then passed on to the web—another deliberately simple system designed to be agnostic to use cases.

It’s true that the web of today is very, very different to its initial incarnation. We got CSS; we got JavaScript; HTML has evolved; HTTP has evolved; URLs have …well, cool URIs don’t change, but you get the idea. The web is like the ship of Theseus—so much of it has been changed and added to over time. That doesn’t mean its initial design was flawed—just the opposite. It means that its initial design wasn’t unnecessarily rigid. The simplicity of the early web wasn’t a bug, it was a feature.

The web (like the internet upon which it runs) was designed to be flexible, and to adjust to future use-cases that couldn’t be predicted in advance. The best proof of this flexibility is the fact that we can and do now build rich interactive applications on the World Wide Web. If the web had truly been designed only for documents, that wouldn’t be possible.

Nosediving

Nosedive is the first episode of season three of Black Mirror.

It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.

Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).

“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.

The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.

Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.

Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:

It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.

Lenders need not reveal their credit score threshold, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.

Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.

How about this for a nightmare scenario:

Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.

“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”

Getaway

It had been a while since we had a movie night at Clearleft so I organised one for last night. We usually manage to get through two movies, and there’s always a unifying theme decided ahead of time.

For last night, I decided that the broad theme would be …transport. But then, through voting on Slack, people could decide what the specific mode of transport would be. The choices were:

  • taxi,
  • getaway car,
  • truck, or
  • submarine.

Nobody voted for submarines. That’s a shame, but in retrospect it’s easy to understand—submarine films aren’t about transport at all. Quite the opposite. Submarine films are about being trapped in a metal womb/tomb (and many’s the spaceship film that qualifies as a submarine movie).

There were some votes for taxis and trucks, but the getaway car was the winner. I then revealed which films had been pre-selected for each mode of transport.

Taxi

Getaway car

Shorts: Getaway Driver, The Getaway

Truck

Submarine

I thought Baby Driver would be a shoo-in for the first film, but enough people had already seen it quite recently to put it out of the running. We watched Wheelman instead, which was like Locke meets Drive.

So what would the second film be?

Well, some of those films in the full list could potentially fall into more than one category. The taxi in Collateral is (kinda) being used as a getaway car. And if you expand the criterion to getaway vehicle, then Furiosa’s war rig surely counts, right?

Okay, we were just looking for an excuse to watch Fury Road again. I mean, c’mon, it was the black and chrome edition! I had the great fortune of seeing that on the big screen a while back and I’ve been raving about it ever since. Besides, you really don’t need an excuse to rewatch Fury Road. I loved it the first time I saw it, and it just keeps getting better and better each time. The editing! The sound! The world-building!

With every viewing, it feels more and more like the film for our time. It may have been a bit of a stretch to watch it under the thematic umbrella of getaway vehicles, but it’s a getaway for our current political climate: instead of the typical plot involving a gang driving at full tilt from a bank heist, imagine one where the gang turns around, ousts the bankers, and replaces the whole banking system with a matriarchal community.

“Hope is a mistake”, Max mansplains (maxplains?) to Furiosa at one point. He’s wrong. Judicious hope is what drives us forward (or, in this case, back …to the citadel). Watching Fury Road again, I drew hope from the character of Nux. An alt-warboy in thrall to a demagogue and raised on a diet of fake news (Valhalla! V8!) can not only be turned by tenderness, he can become an ally to those working for a better world.

Witness!

An associative trail

Every now and then, I like to revisit Vannevar Bush’s classic article from the July 1945 edition of the Atlantic Monthly called As We May Think in which he describes a theoretical machine called the memex.

A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk.

1945! Apart from its analogue rather than digital nature, it’s a remarkably prescient vision. In particular, there’s the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.

Many decades later, Anne Washington ponders what a legal memex might look like:

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability.

As John Sheridan from the UK’s National Archives points out, hypertext is the perfect medium for laws:

Despite the drafter’s best efforts to create a narrative structure that tells a story through the flow of provisions, legislation is intrinsically non-linear content. It positively lends itself to a hypertext based approach. The need for legislation to escape the confines of the printed form predates all the major innovators and innovations in hypertext, from Vannevar Bush’s vision in “As We May Think”, to Ted Nelson’s coining of the term “hypertext”, through to Berners-Lee’s breakthrough world wide web. I like to think that Nelson’s concept of transclusion was foreshadowed several decades earlier by the textual amendment (where one Act explicitly alters – inserts, omits or amends – the text of another Act, an approach introduced to UK legislation at the beginning of the 20th century).

That’s from a piece called Deeply Intertwingled Laws. The verb “to intertwingle” was another one of Ted Nelson’s neologisms.

There’s an associative trail from Vannevar Bush to Ted Nelson that takes some other interesting turns…

Picture a new American naval recruit in 1945, getting ready to ship out to the Pacific to fight against the Japanese. Just as the ship was leaving the harbour, word came through that the war was over. And so instead of fighting across the islands of the Pacific, this young man found himself in a hut in the Philippines, reading whatever was to hand. There was a copy of The Atlantic Monthly, the one with an article called As We May Think. The sailor was Douglas Engelbart, and a few years later when he was deciding how he wanted to spend the rest of his life, that article led him to pursue the goal of augmenting human intellect. He gave the mother of all demos, featuring NLS, a working hypermedia system.

Later, thanks to Bill Atkinson, we’d get another system called HyperCard. It was advertised with the motto Freedom to Associate, in an advertising campaign that directly referenced Vannevar Bush.

And now I’m using the World Wide Web, a hypermedia system that takes in the whole planet, to create an associative trail. In this post, I’m linking (without asking anyone for permission) to six different sources, and in doing so, I’m creating a unique associative trail. And because this post has a URL (that won’t change), you are free to take it and make it part of your own associative trail on your digital memex.

Hooked and booked

At Booking.com, they do a lot of A/B testing.

At Booking.com, they’ve got a lot of dark patterns.

I think there might be a connection.

A/B testing is a great way of finding out what happens when you introduce a change. But it can’t tell you why.

The problem is that, in a data-driven environment, decisions ultimately come down to whether something works or not. But just because something works doesn’t mean it’s a good thing.

If I were trying to convince you to buy a product, or use a service, one way I could accomplish that would be to literally put a gun to your head. It would work. Except it’s not exactly a good solution, is it? But if we were to judge by the numbers (100% of people threatened with a gun did what we wanted), it would appear to be the right solution.
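To make the “what, not why” point concrete, here’s a minimal sketch of the mechanics of an A/B test—deterministic bucketing plus a conversion metric. The hash function and the 50/50 split are my own illustrative choices, not any particular tool’s API:

  // Assign each user to a variant, deterministically, based on their ID.
  // (Illustrative only—real tools use better hash functions.)
  function assignVariant(userId) {
    let hash = 0;
    for (const char of userId) {
      hash = (hash * 31 + char.charCodeAt(0)) % 1000;
    }
    return hash < 500 ? 'A' : 'B';
  }

  // At the end of the experiment, each variant boils down to one number.
  function conversionRate(conversions, visitors) {
    return conversions / visitors;
  }

If variant B’s number is bigger, B “works”. Nothing in that number explains why—or whether the why is something to be proud of.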

When speaking about A/B testing at Booking.com, Stuart Frisby emphasised why it’s so central to their way of working:

One of the core principles of our organisation is that we want to be very customer-focused. And A/B testing is really a way for us to institutionalise that customer focus.

I’m not so sure. I think A/B testing is a way to institutionalise a focus on business goals—increasing sales, growth, conversion, and all of that. Now, ideally, those goals would align completely with the customer’s goals; happy customers should mean more sales …but more sales doesn’t necessarily mean happy customers. Using business metrics (sales, growth, conversion) as a proxy for customer satisfaction doesn’t always work …and it clearly isn’t working on sites like this. Whatever the company values might say, a company’s true focus is on whatever it measures as success criteria. If that’s customer satisfaction, then the company is indeed customer-focused. But if the measurements are entirely about what works for sales and conversions, then that’s the real focus of the company.

I’m not saying A/B testing is bad—far from it! (although it can sometimes be taken to the extreme). I feel it’s best wielded in combination with usability testing with real users—seeing their faces, feeling their frustration, sharing their joy.

In short, I think that A/B testing needs to be counterbalanced. There should be some kind of mechanism for getting the answer to “why?” whenever A/B testing provides the answer to “what?” In-person testing could be one way of providing that balance. Or it could be somebody’s job to always ask “why?” and determine if a solution is qualitatively—and not just quantitatively—good. (And if you look around at your company and don’t see anyone doing that, maybe that’s a role for you.)

If there really is a connection between having a data-driven culture of A/B testing, and a product that’s filled with dark patterns, then the disturbing conclusion is that dark patterns work …at least in the short term.

Brighton conferences

I’ve been to four conferences in two weeks. I wasn’t speaking at any of them so I was able to relax and enjoy the talks.

There was UX Brighton on November 3rd, featuring a terrific opening keynote from Boxman.

James Box speaking at UX Brighton 2017

One week later, I was in the Duke of York’s cinema for FFConf along with all the other Clearleft frontend devs—it’s always a thought-provoking day out.

FFConf 2017 Day 2

Yesterday, I went to Meaning in the daytime, and Bytes in the evening.

It was amazing to get to see @ambrwlsn90 speak at #bytesconf tonight. She is a brilliant speaker! 🙌🏻

Every one of those events was in Brighton. That’s pretty good going for a town this size …and that’s not even counting the regular events like Async, Codebar, and Ladies That UX.

What is a Progressive Web App?

It seems like any new field goes through an inevitable growth spurt that involves “defining the damn thing.” For the first few years of the IA Summit, every second presentation seemed to be about defining what Information Architecture actually is. See also: UX. See also: Content Strategy.

Now it seems to be happening with Progressive Web Apps …which is odd, considering the damn thing is defined damn well.

I’ve written before about the naming of Progressive Web Apps. On the whole, I think it’s a pretty good term, especially if you’re trying to convince the marketing team.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.
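To make that concrete, here’s a minimal sketch of how those last two ingredients get wired up—one line in the HTML head for the manifest, a few lines of JavaScript for the service worker. The file names sw.js and manifest.json are my own illustrative choices:

  // The manifest is referenced from the HTML head:
  // <link rel="manifest" href="/manifest.json">

  // Register a service worker, if the browser supports it.
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
      .then(registration => {
        console.log('Service worker registered at scope:', registration.scope);
      })
      .catch(error => {
        console.error('Service worker registration failed:', error);
      });
  }

Serve that over HTTPS and the checklist is complete.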

Except, for some reason, that clarity is being lost.

Here’s a post by Ben Halpern called What the heck is a “Progressive Web App”? Seriously.

I have a really hard time describing what a progressive web app actually is.

He points to Google’s intro to Progressive Web Apps:

Progressive Web Apps are user experiences that have the reach of the web, and are:

  • Reliable - Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast - Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging - Feel like a natural app on the device, with an immersive user experience.

Those are great descriptions of the benefits of Progressive Web Apps. Perfect material for convincing your clients or your boss. But that appears on developers.google.com …surely it would be more beneficial for that audience to know the technologies that comprise Progressive Web Apps?

Ben Halpern again:

Google’s continued use of the term “quality” in describing things leaves me with a ton of confusion. It really seems like they want PWA to be a general term that doesn’t imply any particular implementation, and have it be focused around the user experience, but all I see over the web is confusion as to what they mean by these things. My website is already “engaging” and “immersive”, does that mean it’s a PWA?

I think it’s important to use the right language for the right audience.

If you’re talking to the business people, tell them about the return on investment you get from Progressive Web Apps.

If you’re talking to the marketing people, tell them about the experiential benefits of Progressive Web Apps.

But if you’re talking to developers, tell them that a Progressive Web App is a website served over HTTPS with a service worker and manifest file.
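And if any developer doubts how low the bar is, here’s a sketch of just about the smallest service worker that could accompany that definition—it does nothing clever, simply passing every request through to the network (a real one would add offline caching):

  // sw.js — a deliberately minimal service worker.
  self.addEventListener('fetch', event => {
    // Intercept every request and hand it straight to the network.
    event.respondWith(fetch(event.request));
  });

Everything beyond that—caching strategies, offline pages, push notifications—is enhancement, not definition.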