


Friday, March 9th, 2018

A workshop on building for resilience

In February, I tried out a new workshop twice—once at Webstock in New Zealand, and once in Hong Kong.

The workshop is called The Progressive Web: Building for Resilience. Here’s an excerpt from the blurb:

This workshop will show you how to think in a progressive way that works with the grain of the web. Together we’ll peel back the layers of the web and build upwards, creating experiences that work for everyone while making the best of cutting-edge browser technologies. From URL design to Progressive Web Apps, this journey will cover each stage of technological advancement.

Basically, it’s the workshop version of Resilient Web Design. If that book is the theory, this workshop is the practice.

Tim recently posted his tips for running workshops and there’s a lot in there that resonates with me. Like Tim, I’ve become less and less reliant on slides. In fact, this workshop—like my workshop on evaluating technology—has no slides. Instead it’s all about the exercises and going with the flow.

After starting with a warm-up, I canvass the room to see if there are any specific topics, tools or technologies that people are particularly interested in covering. I’ll note those (on post-its slapped on the wall) for reference throughout the day, to try to make sure that those particular things are touched on at some point. Then I start with a thought experiment…

First of all, I get everyone to call out websites, services and apps that they use almost every day: Twitter, Facebook, Gmail, Slack, Google Docs, and so on. Those all get documented on the wall. Then it’s time to ask of each product, “What is the core functionality?” The idea here is to get beneath the surface-level verbs like swiping, tapping and dragging to get to the real purpose of a service: buying, selling, sharing, reading, writing, collaborating, and so on.

At this point I inform the attendees that the year is 1995. And now we’re going to build these services using the technology of this time. This is a playful way of getting answers to the question “What’s the simplest technology to enable the core functionality?” It’s mostly forms, links, and lots of heavy lifting on the server.

Then the real fun begins. “Enhance!” Moving forward in time, we get to add styles, we add interactivity with JavaScript, then Ajax, and then we get to really have fun with technologies like web sockets, geolocation, local storage, right the way up to service workers, notifications, and background sync. And the beauty of it all is that, if any of those technologies aren’t supported in a particular browser or device, the core functionality is still available.

Next, we apply this layered mindset to a new service. I split the attendees into groups, and each of them gets a procedurally-generated startup idea …generated by shuffling some cards. This is an exercise I first tried when I was teaching in Porto:

I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”

The first few exercises are good creative fun: come up with a name, then a logo, then a business model. Then it’s time to build. It starts with URL design. Then it’s content prioritisation (for a representative URL). Then it’s layout (sketching!). The enhancements have begun. “How might this URL benefit from Ajax?” “How might this URL benefit from geolocation?” “How might this URL benefit from offline storage?” “How might this URL benefit from a service worker?”


At this point, we’ve applied the layered, progressive approach at the scale of an entire service, and at the scale of an individual URL. Finally, we apply the same approach at the level of a component. It might be a navigation, or a carousel, or an interactive widget. In each case, the same process applies: “What’s the core functionality? What’s the simplest technology to enable that functionality? Enhance!”

Along the way, there are plenty of rabbit holes we can go down. Whether it’s accessibility, or progressive web apps, or pattern libraries, I go along with whatever people are curious about. But all of it ties back to the progressive, layered mindset I’m hoping to foster.

By the end of the day, I’m hoping that an attendee has one of two reactions:

  1. “What a waste of time! Everything in that workshop was blindingly obvious!” (in which case, excellent!—they’re already thinking in a progressive way), or
  2. “That workshop has completely changed the way I think about building on the web!” (I’m being hyperbolic here, but at the very least I’m hoping to impart a new perspective).

Having given the workshop a few times, I’m really pleased with how it went (and more importantly, I’m pleased that people enjoyed it). If this sounds like something that your company or team would enjoy, get in touch and we can take it from there.

Tuesday, March 6th, 2018

Minimal viable service worker

I really, really like service workers. They’re one of those technologies that have such clear benefits to users that it seems like a no-brainer to add a service worker to just about any website.

The thing is, every website is different. So the service worker strategy for every website needs to be different too.

Still, I was wondering if it would be possible to create a service worker script that would work for most websites. Here’s the script I came up with.

The logic works like this:

  • If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).
  • For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).

So HTML files are served network-first, while all other files are served cache-first, but in both cases a fresh copy is always put in the cache. The idea is that HTML content will always be fresh (unless there’s a problem with the network), while all other content—images, style sheets, scripts—might be slightly stale, but get refreshed with every request.
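Translated into code, that logic looks something like this. To be clear, this is a simplified sketch of the approach rather than the actual script: the cache name is a placeholder, and checking the Accept header is just one way of spotting requests for HTML.

const cacheName = 'files';

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.method !== 'GET') {
    return;
  }
  // HTML: network first, falling back to the cache.
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open(cacheName)
        .then(cache => cache.put(request, copy));
        return responseFromFetch;
      })
      .catch(() => caches.match(request))
    );
    return;
  }
  // Everything else: cache first, but always fetch a fresh copy
  // from the network to update the cache.
  fetchEvent.respondWith(
    caches.match(request)
    .then(responseFromCache => {
      const fetchPromise = fetch(request)
      .then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open(cacheName)
        .then(cache => cache.put(request, copy));
        return responseFromFetch;
      });
      return responseFromCache || fetchPromise;
    })
  );
});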

My original attempt was riddled with errors. Jake came to my rescue and we revised the script into something that actually worked. In the process, my misunderstanding of how await works led Jake to write a great blog post on await vs return vs return await.

I got there in the end and the script seems solid enough. It’s a fairly simplistic strategy that could work for quite a few sites, but it has some issues…

Service workers don’t perform any automatic cleanup of caches—that’s up to you to do (usually during the activate event). This script doesn’t do any cleanup so the cache might grow and grow and grow. For that reason, I think the script is best suited for fairly small sites.
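If you did want to add cleanup, the usual pattern is to version the cache name and delete any out-of-date caches during the activate event. A minimal sketch, assuming a hypothetical versioned cache name:

const cacheName = 'files-v2';

addEventListener('activate', activateEvent => {
  activateEvent.waitUntil(
    caches.keys()
    .then(keys => Promise.all(
      // Delete every cache that isn’t the current version.
      keys.filter(key => key !== cacheName)
      .map(key => caches.delete(key))
    ))
  );
});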

The strategy also assumes that a file will either be fetched from the network or the cache. There’s no contingency for when both attempts fail. So there’s no fallback offline page, for example.

I decided to test it in the wild, but I expanded it slightly to fix the fallback issue. The version on the Ampersand 2018 website includes a worst-case-scenario option to show a custom offline page that has been pre-cached. (By the way, if you haven’t got a ticket for Ampersand yet, get a ticket now—it’s going to be a superb day of web typography nerdery.)

Anyway, this fairly basic script seems to be delivering some good performance improvements. If you’ve got a site that you think would benefit from this network/caching strategy, and it’s served over HTTPS, then:

  1. Feel free to download the script or copy and paste it into a file called serviceworker.js,
  2. Put that file in the root directory of your website,
  3. Add this in a script element at the bottom of your HTML pages:

if (navigator.serviceWorker && !navigator.serviceWorker.controller) { navigator.serviceWorker.register('/serviceworker.js'); }

You can also use the script as a starting point. You might find issues specific to your particular website. That’s okay—you can tweak and adjust the script to suit your needs.

If this minimal service worker script proves in any way useful to you, thank Jake.

Friday, March 2nd, 2018

Just change it

Amber and I often have meta conversations about the nature of learning and teaching. We swap books and share ideas and experiences whenever we’re trying to learn something or trying to teach something. A topic that comes up again and again is the idea of “the curse of knowledge”—it’s the focus of Steven Pinker’s book The Sense Of Style. That’s when the author/teacher can’t remember what it’s like not to know something, which makes for a frustrating reading/learning experience.

This is one of the reasons why I encourage people to blog about stuff as they’re learning it; not when they’ve internalised it. The perspective that comes with being in the moment of figuring something out is invaluable to others. I honestly think that most explanatory books shouldn’t be written by experts—the “curse of knowledge” can become almost insurmountable.

I often think about this when I’m reading through the installation instructions for frameworks, libraries, and other web technologies. I find myself put off by documentation that assumes I’ve got a certain level of pre-existing knowledge. But now instead of letting it get me down, I use it as an opportunity to try and bridge that gap.

The brilliant Safia Abdalla wrote a post a while back called How do I get started contributing to open source?. I definitely don’t have the programming chops to contribute much to a codebase, but I thoroughly agree with Safia’s observation:

If you’re interested in contributing to open source to improve your communication and empathy skills, you’re definitely making the right call. A lot of open source tools could definitely benefit from improvements in the documentation, accessibility, and evangelism departments.

What really jumps out at me is when instructions use words like “simply” or “just”. I’m with Brad:

“Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.

But rather than letting that feeling overwhelm me, I now try to fix the text. Here are a few examples of changes I’ve suggested, usually via pull requests on GitHub repos:

They all have different codebases in different programming languages, but they’re all intended for humans, so having clear and kind documentation is a shared goal.

I like suggesting these kinds of changes. That initial feeling of frustration I get from reading the documentation gets turned into a warm fuzzy feeling from lending a helping hand.

Thursday, March 1st, 2018

Faraway February

For the shortest month of the year, February managed to pack a lot in. I was away for most of the month. I had the great honour of being asked back to speak at Webstock in New Zealand this year—they even asked me to open the show!

I had no intention of going straight to New Zealand and then turning around to get on the first flight back, so I made sure to stretch the trip out (which also helps to mitigate the inevitable jet lag). Jessica and I went to Hong Kong first, stayed there for a few nights, then went on to Sydney for a while (and caught up with Charlotte while we were out there), before finally making our way to Wellington. Then, after Webstock was all wrapped up, we retraced the same route in reverse. Many flat whites, dumplings, and rays of sunshine later, we arrived back in the UK.

As well as giving the opening keynote at Webstock, I did a full-day workshop, and I also ran a workshop in Hong Kong on the way back. So technically it was a work trip, but I am extremely fortunate that I get to go on adventures like this and still get to call it work.

Wednesday, February 28th, 2018

Offline itineraries with service workers

The Trivago website is a progressive web app. That means it

  1. is served over HTTPS,
  2. has a web app manifest JSON file, and it
  3. has a service worker script.

The service worker provides an opportunity for a nice bit of fun branding—if you lose your internet connection, the site provides a neat little maze game you can play. Cute!

That’s a fairly simple example of how service workers can enhance the user experience when the dreaded offline situation arises. But it strikes me that the travel industry is the perfect place to imagine other opportunities for offline enhancements.

Travel sites often provide itineraries—think airlines, trains, or hotels. The itineraries consist of places, times, and contact information. This is exactly the kind of information that you might find yourself trying to retrieve in an emergency situation, like maybe in a cab on the way to the airport or train station. Perhaps you’re stuck in traffic, in a tunnel. Or maybe you don’t have a data plan for the country you’re currently in. Either way, wouldn’t it be great if you could hit the website for your airline or hotel and get your itinerary, even if you’re offline?

Alright, let’s think this through…

Let’s assume that an individual itinerary has its own URL. That URL is a web page of information, mostly text, with perhaps an image or two (like a map). Now when you make your booking, let’s have the service worker cache that URL (and its assets) for offline access.

Hmm …but there’s a good chance that the device you make the booking on is not the same device that you’d have with you out and about. Because caches are local to the browser, that’s a problem.

Okay, but most of these kinds of sites have some kind of log-in mechanism. So we could update the log-in flow a bit: when a user logs in, check to see if they have any itineraries assigned to them, and if they do, fire off an event to the service worker (using postMessage) to cache the URLs of the itineraries.
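Here’s a sketch of how that hand-off might work. The message format, cache name, and itinerary URLs are all invented for illustration:

// In the page, once the user has successfully logged in:
if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  navigator.serviceWorker.controller.postMessage({
    action: 'cacheItineraries',
    urls: ['/itineraries/abc123/', '/itineraries/def456/']
  });
}

// In the service worker, listen for that message and cache the URLs:
addEventListener('message', messageEvent => {
  if (messageEvent.data.action === 'cacheItineraries') {
    caches.open('itineraries')
    .then(cache => cache.addAll(messageEvent.data.urls));
  }
});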

Now that the itineraries are cached, the final step is to create a custom offline page. As well as the usual “Sorry, the internet’s down” message, we can say “Sorry, the internet’s down …but here are your itineraries”. (This is kind of like the pattern you see on blogs like mine, Ethan’s, or Mike’s—a custom offline page that lists cached URLs of articles you’ve previously visited).
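The offline page itself could use a little JavaScript to read that cache and list its contents. Again, just a sketch: it assumes the cache name from above and an empty list element with a hypothetical ID in the markup.

// On the offline page: list every cached itinerary as a link.
caches.open('itineraries')
.then(cache => cache.keys())
.then(requests => {
  const list = document.querySelector('#itineraries');
  requests.forEach(request => {
    const item = document.createElement('li');
    const link = document.createElement('a');
    link.href = request.url;
    link.textContent = request.url;
    item.appendChild(link);
    list.appendChild(item);
  });
});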

That’s just one pattern off the top of my head. It’s fun to imagine the different ways that service workers could be used to enhance the experience of just about any site, but they seem particularly relevant to travel sites—dodgy internet connections and travelling go hand-in-hand. At Clearleft, we’ve been working with quite a few travel-related clients lately so that’s why these scenarios are on my mind: booking holidays, flights, and so on. But, as I’ve said before and I’ll say again, every website can benefit from becoming a progressive web app.

Monday, February 26th, 2018

Ends and means

The latest edition of the excellent History Of The Web newsletter is called The Day(s) The Web Fought Back. It recounts the first time that websites stood up against bad legislation in the form of the Communications Decency Act (CDA), and goes on to describe the even more effective use of blackout protests against SOPA and PIPA.

I remember feeling very heartened to see Wikipedia, Google and others take a stand on January 18th, 2012. But I also remember feeling uneasy. In this particular case, companies were lobbying for a cause I agreed with. But what if they were lobbying for a cause I didn’t agree with? Large corporations using their power to influence politics seems like a very bad idea. Isn’t it still a bad idea, even if I happen to agree with the cause?

Cloudflare quite rightly kicked The Daily Stormer off their roster of customers. Then the CEO of Cloudflare quite rightly wrote this in a company-wide memo:

Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

There’s an uncomfortable tension here. When do the ends justify the means? Isn’t the whole point of having principles that they hold true even in the direst circumstances? Why even claim that corporations shouldn’t influence politics if you’re going to make an exception for net neutrality? Why even claim that free speech is sacrosanct if you make an exception for nazi scum?

Those two examples are pretty extreme and I can easily justify the exceptions to myself. Net neutrality is too important. Stopping fascism is too important. But where do I draw the line? At what point does something become “too important”?

There are more subtle examples of corporations wielding their power. Google are constantly using their monopoly position in search and browser market share to exert influence over website-builders. In theory, that’s bad. But in practice, I find myself agreeing with specific instances. Prioritising mobile-friendly sites? Sounds good to me. Penalising intrusive ads? Again, that seems okey-dokey to me. But surely that’s not the point. So what if I happen to agree with the ends being pursued? The fact that a company the size and power of Google is using their monopoly for any influence is worrying, regardless of whether I agree with the specific instances. But I kept my mouth shut.

Now I see Google abusing their monopoly again, this time with AMP. They may call the preferential treatment of Google-hosted AMP-formatted pages a “carrot”, but let’s be honest, it’s an abuse of power, plain and simple.

By the way, I have no doubt that the engineers working on AMP have the best of intentions. We are all pursuing the same ends. We all want a faster web. But we disagree on the means. If Google search results gave preferential treatment to any fast web pages, that would be fine. But by only giving preferential treatment to pages written in a format that they created, and hosted on their own servers, they are effectively forcing everyone to use AMP. I know for a fact that there are plenty of publications who are producing AMP content, not because they are sold on the benefits of the technology, but because they feel strong-armed into doing it in order to compete.

If the ends justify the means, then it’s easy to write off Google’s abuse of power. Those well-intentioned AMP engineers honestly think that they have the best interests of the web at heart:

We were worried about the web not existing anymore due to native apps and walled gardens killing it off. We wanted to make the web competitive. We saw a sense of urgency and thus we decided to build on the extensible web to build AMP instead of waiting for standards and browsers and websites to catch up. I stand behind this process. I’m a practical guy.

There’s real hubris and audacity in thinking that one company should be able to tackle fixing the whole web. I think the AMP team are genuinely upset and hurt that people aren’t cheering them on. Perhaps they will dismiss the criticisms as outpourings of “Why wasn’t I consulted?” But that would be a mistake. The many thoughtful people who are extremely critical of AMP are on the same side as the AMP team when it comes to the end-goal of better, faster websites. But burning the web to save it? No thanks.

Ben Thompson goes into more detail on the tension between the ends and the means in The Aggregator Paradox:

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sounds pretty appealing!

From that perspective, there’s a moral argument to be made for wielding monopoly power like Google is doing. No doubt the AMP team feel it would be morally wrong for Google not to use its influence in search to give preferential treatment to AMP pages.

Going back to the opening examples of online blackouts, was it morally wrong for companies to use their power to influence politics? Or would it have been morally wrong for them not to have used their influence?

When do the ends justify the means?

Here’s a more subtle example than Google AMP, but one which has me just as worried for the future of the web. Mozilla announced that any new web features they add to their browser will require HTTPS.

The end-goal here is one I agree with: HTTPS everywhere. On the face of it, the means of reaching that goal seem reasonable. After all, we already require HTTPS for sensitive JavaScript APIs like geolocation or service workers. But the devil is in the details:

Effective immediately, all new features that are web-exposed are to be restricted to secure contexts. Web-exposed means that the feature is observable from a web page or server, whether through JavaScript, CSS, HTTP, media formats, etc. A feature can be anything from an extension of an existing IDL-defined object, a new CSS property, a new HTTP response header, to bigger features such as WebVR.

Emphasis mine.

This is a step too far. Again, I am in total agreement that we should be encouraging everyone to switch to HTTPS. But requiring HTTPS in order to use CSS? The ends don’t justify the means.

If there were valid security reasons for making HTTPS a requirement, I would be all for enforcing this. But these are two totally separate areas. Enforcing HTTPS by withholding CSS support is no different to enforcing AMP by withholding search placement. In some ways, I think it might actually be worse.

There’s an assumption in this decision that websites are being made by professionals who will know how to switch to HTTPS. But the web is for everyone. Not just for everyone to use. It’s for everyone to build.

One of my greatest fears for the web is that building it becomes the domain of a professional priesthood. Anything that raises the bar to writing some HTML or CSS makes me very worried. Usually it’s toolchains that make things more complex, but in this case the barrier to entry is being brought right into the browser itself.

I’m trying to imagine future Codebar evenings, helping people to make their first websites, but now having to tell them that some CSS will be off-limits until they meet the entry requirements of HTTPS …even though CSS and HTTPS have literally nothing to do with one another. (And yes, there will be an exception for localhost and I really hope there’ll be an exception for file: as well, but that’s simply postponing the disappointment.)

No doubt Mozilla (and the W3C Technical Architecture Group) believe that they are doing the right thing. Perhaps they think it would be morally wrong if browsers didn’t enforce HTTPS even for unrelated features like new CSS properties. They believe that, in this particular case, the ends justify the means.

I strongly disagree. If you also disagree, I encourage you to make your voice heard. Remember, this isn’t about whether you think that we should all switch to HTTPS—we’re all in agreement on that. This is about whether it’s okay to create collateral damage by deliberately denying people access to web features in order to further a completely separate agenda.

This isn’t about you or me. This is about all those people who could potentially become makers of the web. We should be welcoming them, not creating barriers for them to overcome.

Thursday, February 1st, 2018

Global Diversity CFP Day—Brighton edition

There are enough middle-aged straight white men like me speaking at conferences. That’s why the Global Diversity Call-For-Proposals Day is happening this Saturday, February 3rd.

The purpose is two-fold. One is to encourage a diverse range of people to submit talk proposals to conferences. The other is to help with the specifics—coming up with ideas, writing a good title and abstract, preparing the presentation, and all that.

Julie is organising the Brighton edition. Clearleft are providing the venue—68 Middle Street. I’ll be on hand to facilitate. Rosa and Dot will be doing the real work, mentoring the attendees.

If you’ve ever thought about submitting a talk proposal to a conference but just don’t know where to start, or if you’re just interested in the idea, please do come along on Saturday. It starts at 11am and will be all wrapped up by 3pm.

See you there!

How to cross post to Medium

Remy outlines the process he uses for POSSEing to Medium now that they’ve removed their IFTTT integration.

At some point during 2017, Medium decided to pull their IFTTT applets that allows content to be posted into Medium. Which I think is a pretty shitty move since there was no notification that the applet was pulled (I only noticed after Medium just didn’t contain a few of my posts), and it smacks of “Medium should be the original source”…which may be fine for some people, but I’m expecting my own content to outlast the Medium web site.

Tuesday, January 30th, 2018

Famous first words

Monday, January 29th, 2018

GDPR and Google Analytics

Enforcement of the European Union’s General Data Protection Regulation is coming very, very soon. Look busy. This regulation is not limited to companies based in the EU—it applies to any service anywhere in the world that can be used by citizens of the EU.

It’s less about data protection and more like a user’s bill of rights. That’s good. Cennydd has written a techie’s rough guide to GDPR.

The Open Data Institute’s Jeni Tennison wrote down her thoughts on how it could change data portability in particular. While she welcomes GDPR, she has some misgivings.

Blaine—who really needs to get a blog—shared his concerns in the form of the online equivalent of interpretive dance …a Twitter thread (it’s called a thread because it inevitably gets all tangled, and it’s easy to break).

The interesting thing about the so-called “cookie law” is that it makes no mention of cookies whatsoever. It doesn’t list any specific technology. Instead it states that any means of tracking or identifying users across websites requires disclosure. So if you’re setting a cookie just to manage state—so that users can log in, or keep items in a shopping basket—the legislation doesn’t apply. But as soon as your site allows a third-party to set a cookie, it’s banner time.

Google Analytics is a classic example of a third-party service that uses cookies to track people across domains. That’s pretty much why it exists. We, as site owners, get to use this incredibly powerful tool, and all we have to do in return is add one little snippet of JavaScript to our pages. In doing so, we’re allowing a third party to read or write a cookie from their domain.

Before Google Analytics, Google—the search engine business—was able to identify and track what users were searching for, and which search results they clicked on. But as soon as the user left google.com, the trail went cold. By creating an enormously useful analytics product that only required site owners to add a single line of JavaScript, Google—the online advertising business—gained the ability to keep track of users across most of the web, whether they were on a site owned by Google or not.

Under the old “cookie law”, using a third-party cookie-setting service like that meant you had to inform any of your users who were citizens of the EU. With GDPR, that changes. Now you have to get consent. A dismissible little overlay isn’t going to cut it any more. Implied consent isn’t enough.

Now this situation raises an interesting question. Who’s responsible for getting consent? Is it the site owner or the third party whose script is the conduit for the tracking?

In the first scenario, you’d need to wait for an explicit agreement from a visitor to your site before triggering the Google Analytics functionality. Suddenly it’s not as simple as adding a single line of JavaScript to your site.
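To make that concrete, here’s a rough sketch of the first scenario. The consent button is hypothetical, and UA-XXXXX-Y is the usual placeholder for a property ID; the stub function and script URL come from the standard analytics.js async snippet.

document.querySelector('#consent-button').addEventListener('click', () => {
  // The standard command-queue stub, so ga() calls can be queued immediately.
  window.ga = window.ga || function () { (ga.q = ga.q || []).push(arguments); };
  ga.l = +new Date();
  ga('create', 'UA-XXXXX-Y', 'auto');
  ga('send', 'pageview');
  // Then, only now that the user has explicitly consented,
  // load the analytics script itself.
  const script = document.createElement('script');
  script.async = true;
  script.src = 'https://www.google-analytics.com/analytics.js';
  document.body.appendChild(script);
});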

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

It’s almost as if the ubiquitous surveillance of people’s every move on the web wasn’t a very good idea in the first place.

Thursday, January 25th, 2018

Design ops for design systems

Leading Design was one of the best events I attended last year. To be honest, that surprised me—I wasn’t sure how relevant it would be to me, but it turned out to be the most on-the-nose conference I could’ve wished for.

Seeing as the event was all about design leadership, there was inevitably some talk of design ops. But I noticed that the term was being used in two different ways.

Sometimes a speaker would talk about design ops and mean “operations, specifically for designers.” That means all the usual office practicalities—equipment, furniture, software—that designers might need to do their jobs. For example, one of the speakers recommended having a dedicated design ops person rather than trying to juggle that yourself. That’s good advice, as long as you understand what’s meant by design ops in that context.

There’s another context of use for the phrase “design ops”, and it’s one that we use far more often at Clearleft. It’s related to design systems.

Now, “design system” is itself a term that can be ambiguous. See also “pattern library” and “style guide”. Quite a few people have had a stab at disambiguating those terms, and I think there’s general agreement—a design system is the overall big-picture “thing” that can contain a pattern library, and/or a style guide, and/or much more besides:

None of those great posts attempt to define design ops, and that’s totally fair, because they’re all attempting to define things—style guides, pattern libraries, and design systems—whereas design ops isn’t a thing, it’s a practice. But I do think that design ops follows on nicely from design systems. I think that design ops is the practice of adopting and using a design system.

There are plenty of posts out there about the challenges of getting people to use a design system, and while very few of them use the term design ops, I think that’s what all of them are about:

Clearly design systems and design ops are very closely related: you really can’t have one without the other. What I find interesting is that a lot of the challenges relating to design systems (and pattern libraries, and style guides) might be technical, whereas the challenges of design ops are almost entirely cultural.

I realise that tying design ops directly to design systems is somewhat limiting, and the truth is that design ops can encompass much more. I like Andy’s description:

Design Ops is essentially the practice of reducing operational inefficiencies in the design workflow through process and technological advancements.

Now, in theory, that can encompass any operational stuff—equipment, furniture, software—but in practice, when we’re dealing with design ops, 90% of the time it’s related to a design system. I guess I could use a whole new term (design systems ops?) but I think the term design ops works well …as long as everyone involved is clear on the kind of design ops we’re all talking about.

Saturday, January 20th, 2018

Needs must

I got a follow-up comment to my follow-up post about the follow-up comment I got on my original post about Google Analytics. Keep up.

I made the point that, from a front-end performance perspective, server logs have no impact whereas a JavaScript-based analytics solution must have some impact on the end user. Paul Anthony says:

Google won the analytics war because dropping one line of JS in the footer and handing a tried and tested interface to customers is an obvious no brainer in comparison to setting up an open source option that needs a cron job to parse the files, a database to store the results and doesn’t provide mobile interface.

Good point. Dropping one snippet of JavaScript into your front-end codebase is certainly an easier solution …easier for you, that is. The cost is passed on to your users. This is a classic example of where user needs and developer needs are in opposition. I’ve said it before and I’ll say it again:

Given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time.

It’s true that this often means doing more work. That’s why it’s called work. This is literally what our jobs are supposed to entail: we put in the work to make life easier for users. We’re supposed to be saving them time, not passing it along.

The example of Google Analytics is pretty extreme, I’ll grant you. The cost to the user of adding that snippet of JavaScript—if you’ve configured things reasonably well—is pretty small (again, just from a performance perspective; there’s still the cost of allowing Google to track them across domains), and the cost to you of setting up a comparable analytics system based on server logs can indeed be disproportionately high. But this tension between user needs and developer needs is something I see play out again and again.

I’ve often thought the HTML design principle called the priority of constituencies could be adopted by web developers:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors.

In Resilient Web Design, I documented the three-step approach I take when I’m building anything on the web:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

Now I’m wondering if I should’ve clarified that second step further. When I talk about choosing “the simplest possible technology”, what I mean is “the simplest possible technology for the user”, not “the simplest possible technology for the developer.”

For example, suppose I were going to build a news website. The core functionality is fairly easy to identify: providing the news. Next comes the step where I choose the simplest possible technology. Now, if I were a developer who had plenty of experience building JavaScript-driven single page apps, I might conclude that the simplest route for me would be to render the news via JavaScript. But that would be a fragile starting point if I’m trying to reach as many people as possible (I might well end up building a swishy JavaScript-driven single page app in step three, but step two should almost certainly be good ol’ HTML).

Time and time again, I see decisions that favour developer convenience over user needs. Don’t get me wrong—as a developer, I absolutely want developer convenience …but not at the expense of user needs.

I know that “empathy” is an over-used word in the world of user experience and design, but with good reason. I think we should try to remind ourselves of why we make our architectural decisions by invoking who those decisions benefit. For example, “This tech stack is the best option for our team”, or “This solution is the best for the widest range of users.” Then, given the choice, favour user needs in the decision-making process.

There will always be situations where, given time and budget constraints, we end up choosing solutions that are easier for us, but not the best for our users. And that’s okay, as long as we acknowledge that compromise and strive to do better next time.

But when the best solutions for us as developers become enshrined as the best possible solutions, then we are failing the people we serve.

That doesn’t mean we must become hairshirt-wearing martyrs; developer convenience is important …but not as important as user needs. Start with user needs.

Friday, January 19th, 2018


I wrote about Google Analytics yesterday. As usual, I syndicated the post to Ev’s blog, and I got an interesting response over there. Kelly Burgett set me straight on some of the finer details of how goals work, and finished with this thought:

You mention “delivering a performant, accessible, responsive, scalable website isn’t enough” as if it should be, and I have to disagree. It’s not enough for a business to simply have a great website if you are unable to understand performance of channel marketing, track user demographics and behavior on-site, and optimize your site/brand based on that data. I’ve seen a lot of ugly sites who have done exceptionally well in terms of ROI, simply because they are getting the data they need from the site in order make better business decisions. If your site cannot do that (ie. through data collection, often third party scripts), then your beautifully-designed site can only take you so far.

That makes an excellent case for having analytics. But that’s not necessarily the same as having Google Analytics, or even JavaScript-driven analytics at all.

By far the most useful information you get from analytics is where people have come from, where they went next, and what kind of device they’re using. None of that information requires JavaScript. It’s all available from your server logs.

I don’t want to come across all old-man-yell-at-cloud here, but I’m trying to remember at what point self-hosted software for analysing your log traffic became not good enough.

Here’s the thing: logging on the server has no effect on the user experience. It’s basically free, in terms of performance. Logging via JavaScript, by its very nature, has some cost. Even if it’s negligible, that’s one more request, and that’s one more bit of processing for the CPU.

All of the data that you can only get via JavaScript (in-page actions, heat maps, etc.) are, in my experience, better handled by dedicated software. To me, that kind of more precise data feels different to analytics in the sense of funnels, conversions, goals and all that stuff.

So in order to get more fine-grained data to analyse, our analytics software has now doubled down on a technology—JavaScript—that has an impact on the end user, where previously the act of observation could be done at a distance.

There are also blind spots that come with JavaScript-based tracking. According to Google Analytics, 0% of your customers don’t have JavaScript. That’s not necessarily true, but there’s literally no way for Google Analytics—which relies on JavaScript—to even do its job in the absence of JavaScript. That can lead to a dangerous situation where you might be led to think that 100% of your potential customers are getting by, when actually a proportion might be struggling, but you’ll never find out about it.

Related: according to Google Analytics, 0% of your customers are using ad-blockers that block requests to Google’s servers. Again, that’s not necessarily a true fact.

So I completely agree that analytics are a good thing to have for your business. But it does not follow that Google Analytics is a good thing for your business. Other options are available.

I feel like the assumption that “analytics = Google Analytics” is like the slippery slope in reverse. If we’re all agreed that analytics are important, then aren’t we also all agreed that JavaScript-based tracking is important?

In a word, no.

This reminds me of the arguments made in favour of intrusive, bloated advertising scripts. All of the arguments focus on the need for advertising—to stay in business, to pay the writers—which are all great reasons for advertising, but have nothing to do with JavaScript, which is at the root of the problem. Everyone I know who uses an ad-blocker—including me—doesn’t use it to stop seeing adverts, but to stop the performance of the page being degraded (and to avoid being tracked across domains).

So let’s not confuse the means with the ends. If you need to have advertising, that doesn’t mean you need to have horribly bloated JavaScript-based advertising. If you need analytics, that doesn’t mean you need an analytics script on your front end.

Thursday, January 18th, 2018

Analysing analytics

Hell is other people’s JavaScript.

There’s nothing quite so crushing as building a beautifully performant website only to have it infested with a plague of third-party scripts that add to the weight of each page and reduce the responsiveness, making a mockery of your well-considered performance budget.

Trent has been writing about this:

My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

He’s started the process by itemising third-party scripts. Frustratingly though, there’s rarely one single culprit that you can point to—it’s the cumulative effect of “just one more beacon” and “just one more analytics script” and “just one more A/B testing tool” that adds up to a crappy experience that warms your user’s hands by ensuring your site is constantly draining their battery.

Actually, having just said that there’s rarely one single culprit, Adobe Tag Manager is often at the root of third-party problems. That and adverts. It’s like opening the door of your beautifully curated dream home, and inviting a pack of diarrhetic elephants in: “Please, crap wherever you like.”

But even the more well-behaved third-party scripts can get out of hand. Google Analytics is so ubiquitous that it’s hardly even considered in the list of potentially harmful third-party scripts. On the whole, it’s a fairly well-behaved citizen of your site’s population of third-party scripts (y’know, leaving aside the whole surveillance capitalism business model that allows you to use such a useful tool for free in exchange for Google tracking your site’s visitors across the web and selling the insights from that data to advertisers).

The initial analytics script that you—asynchronously—load into your page isn’t very big. But depending on how you’ve configured your Google Analytics account, that might just be the start of a longer chain of downloads and event handlers.

Ed recently gave a lunchtime presentation at Clearleft on using Google Analytics—he professes modesty but he really knows his stuff. He was making sure that everyone knew how to set up goals’n’stuff.

As I understand it, there are two main categories of goals: events and destinations (there are also durations and pages, but they feel similar to destinations). You use events to answer questions like “Did the user click on this button?” or “Did the user click on that search field?”. You use destinations to answer questions like “Did the user arrive at this page?” or “Did the user come from that page?”

You can add as many goals to your site’s analytics as you want. That’s an intoxicating offer. The problem is that there is potentially a cost for each goal you create. It’s an invisible cost. It’s paid by the user in the currency of JavaScript sent down the wire (I wish that the Google Analytics admin interface were more like the old interface for Google Fonts, where each extra file you added literally pushed a needle higher on a dial).

It strikes me that the event-based goals would necessarily require more JavaScript in order to listen out for those clicks and fire off that information. The destination-based goals should be able to get all the information needed from regular page navigations.
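To illustrate the difference: an event-based goal implies extra code in the page, something like this sketch (the selector and labels are invented; the ga('send', 'event', …) call is the standard analytics.js one):

document.querySelector('#signup-button').addEventListener('click', () => {
  // Report the click so it can be matched against an event-based goal.
  ga('send', 'event', 'engagement', 'click', 'signup button');
});

A destination-based goal, by contrast, needs nothing beyond the pageview that’s already being reported on every navigation.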

So I have a hypothesis. I think that destination-based goals are less harmful to performance than event-based goals. I might well be wrong about that, and if I am, please let me know.

With that hypothesis in mind, and until I learn otherwise, I’ve got two rules of thumb to offer when it comes to using Google Analytics:

  1. Try to keep the number of goals to a minimum.
  2. If you must create a goal, favour destinations over events.


Monday, January 1st, 2018

Words I wrote in 2017

I wrote 78 blog posts in 2017. That works out at an average of six and a half blog posts per month. I’ll take it.

Here are some pieces of writing from 2017 that I’m relatively happy with:

Going Rogue. A look at the ethical questions raised by Rogue One.

In AMP we trust. My unease with Google’s AMP format was growing by the day.

A minority report on artificial intelligence. Revisiting two of Spielberg’s films after a decade and a half.

Progressing the web. I really don’t want progressive web apps to just try to imitate native apps. They can be so much more.

CSS. Simple, yes, but not easy.

Intolerable. A screed. I still get very, very angry when I think about how that manifestbro duped people.

Акула. Recounting a story told by a taxi driver.

Hooked and booked. Does A/B testing lead to dark patterns?

Ubiquity and consistency. Different approaches to building on the web.

I hope there’s something in there that you like. It’s always a nice bonus when other people like something I’ve written, but I write for myself first and foremost. Writing is how I figure out what I think. I will, of course, continue to write and publish on my website in 2018. I’d really like it if you did the same.

Food I ate in 2017

I did a fair bit of travelling in 2017, which I always enjoy. I particularly enjoy it when Jessica comes with me and we get to sample the cuisine of other countries.

Portugal will always be a culinary hotspot for me, particularly Porto (“tripas à moda do Porto” is one of the best things I’ve ever tasted). When I was teaching at the New Digital School in Porto back in February, I took full advantage of the culinary landscape. A seafood rice (and goose barnacles) at O Gaveto in Matosinhos was a particular highlight.


The most unexpected thing I ate in Porto was when I wandered off for lunch on my own one day. I ended up in a little place where, when I walked in, it was kind of like that bit in a Western when the music stops and everyone turns to look. This was clearly a place for locals. The owner didn’t speak any English. I didn’t speak any Portuguese. But we figured it out. She mimed something sandwich-like and said a word I wasn’t familiar with: bifana. Okay, I said. Then she mimed the universal action for drinking, so I said “agua.” She looked at me with a very confused expression. “Agua!? Não. Cerveja!” Who am I to argue? Anyway, she produced this thing which was basically some wet meat in a bun. It didn’t look very appetising. But this was the kind of situation where I couldn’t back out of eating it. So I took a bite and …it was delicious! Like, really, really delicious.


Later in February, we went to Pittsburgh to visit Cindy and Matt. We were there for my birthday, so Cindy prepared the most amazing meal. She reproduced a dish from the French Laundry—sous-vide lobster on orzo. It was divine!

Lobster tail on orzo with a Parmesan crisp.

Later in the year, we went to Singapore for the first time. The culture of hawker centres makes it the ideal place for trying lots of different foods. There were some real revelations in there.

Chicken rice. Fishball noodles. Laksa. Grilled pork.

We visited lots of other great places like Reykjavík, Lisbon, Barcelona, and Nuremberg. But as well as sampling the cuisine of distant locations, I had some very fine food right here in Brighton, home to Trollburger, purveyors of the best burger you’ll ever eat.


I also have a thing for hot wings, so it’s very fortunate that The Joker, home to the best wings in Brighton, is just around the corner from the dance studio where Jessica goes for ballet. Regular wing nights became a thing in 2017.


I started a little routine in 2017 where I’d take a break from work in the middle of the afternoon, wander down to the seafront, and buy a single oyster. It only took a few minutes out of the day but it was a great little dose of perspective each time.


But when I think of my favourite meals of 2017, most of them were home-cooked.

Sirloin steak with thyme. 🥩 Sous-vide pork tenderloin stuffed with capers and herbs. Roasting pork, apples, and onions. 🐷🍏 Fabada Asturiana. Rib of beef with potatoes and broad bean, fennel and burrata. Grillin’ chicken. A bountiful table. Grilling lamb. Summertime on a plate. Rib of beef, carrots, carrot-top chimichurri, and kale. The roast chicken angel watches over its flock of side dishes. Ribeye.

Saturday, December 30th, 2017

Audio I listened to in 2017

I huffduffed 290 pieces of audio in 2017. I’ve still got a bit of a backlog of items I haven’t listened to yet, but I thought I’d share some of my favourite items from the past year. Here are twelve pieces of audio, one for each month of 2017…

Donald Hoffman’s TED talk, Do we see reality as it really is?. TED talks are supposed to blow your mind, right? (22:15)

How to Become Batman on Invisibilia. Alix Spiegel and Lulu Miller challenge you to think of blindness as a social construct. Hear ‘em out. (58:02)

Where to find what’s disappeared online, and a whole lot more: the Internet Archive on Public Radio International. I just love hearing Brewster Kahle’s enthusiasm and excitement. (42:43)

Every Tuesday At Nine on Irish Music Stories. I’ve been really enjoying Shannon Heaton’s podcast this year. This one digs into that certain something that happens at an Irish music session. (40:50)

Adam Buxton talks to Brian Eno (part two is here). A fun and interesting chat about Brian Eno’s life and work. (53:10 and 46:35)

Nick Cave and Warren Ellis on Kreative Kontrol. This was far more revealing than I expected: genuine and unpretentious. (57:07)

Paul Lloyd at Patterns Day. All the talks at Patterns Day were brilliant. Paul’s really stuck with me. (28:21)

James Gleick on Time Travel at The Long Now. There were so many great talks from The Long Now’s seminars on long-term thinking. Nicky Case and Jennifer Pahlka were standouts too. (1:20:31)

Long Distance on Reply All. It all starts with a simple phone call. (47:27)

The King of Tears on Revisionist History. Malcolm Gladwell’s style suits podcasting very well. I liked this episode about country songwriter Bobby Braddock. Related: Jon’s Troika episode on tearjerkers. (42:14)

Feet on the Ground, Eyes on the Stars: The True Story of a Real Rocket Man with G.A. “Jim” Ogle. This was easily my favourite podcast episode of 2017. It’s on the User Defenders podcast but it’s not about UX. Instead, host Jason Ogle interviews his father, a rocket scientist who worked on everything from Apollo to every space shuttle mission. His story is fascinating. (2:38:21)

R.E.M. on Song Exploder. Breaking down the song Try Not To Breathe from Automatic For The People. (16:15)

I’ve gone back and added the tag “2017roundup” to each of these items. So if you’d like to subscribe to a podcast of just these episodes, here are the links:

Thursday, December 28th, 2017

Books I read in 2017

Here are the books I read in 2017. It’s not as many as I hoped.

I set myself a constraint this year so that I’d have to alternate between reading fiction and non-fiction: no reading two fiction books back-to-back, and no reading two non-fiction books back-to-back. I quite like the balanced book diet that resulted. I think I might keep it going.

Anyway, in order of consumption, here are those books…

Leviathan Wakes by James S.A. Corey


I had already seen—and quite enjoyed—the first series of the television adaptation of The Expanse so I figured I’d dive into the books that everyone kept telling me about. The book was fun …but no more than that. I don’t think I’m invested enough to read any of the further books. In some ways, I think this makes for better TV than reading (despite the TV show’s annoying “slow motion in zero G” trope that somewhat lessens the hard sci-fi credentials).

Black Box Thinking by Matthew Syed


This was recommended by James Box, and on the whole, I really liked it. There’s a lot of anecdata though. Still, the fundamental premise is a good one, comparing the attitudes towards risk in two different industries: aviation and healthcare. A little bit more trimming down would’ve helped the book—it dragged on just a bit too long.

The Separation by Christopher Priest


I need to read at least one Christopher Priest book a year. They’re in a league of their own, somehow outside the normal rules of criticism. This one is a true stand-out. As ever, it messes with your head and gets weirder as it goes on. If you haven’t read any Christopher Priest, I reckon this would be a great one to start with.

Deep Sea and Foreign Going by Rose George


Recommended by both Jessica and Danielle, this is a well-crafted look into life on board a cargo ship, as well as an examination of ocean-going logistics. If you liked the Containers podcast, you’ll like this. I found it a little bit episodic—more like a collection of magazine articles sometimes—but still enjoyable.

Bloodchild by Octavia E. Butler

A false start. This is a short story, not a novel—I didn’t know that when I downloaded it to my Kindle. It’s an excellent short story though. Still, I felt it didn’t count in my zigzagging between fiction and non-fiction so I followed it with…

Star Maker by Olaf Stapledon


Science fiction from the 1930s. The breadth of imagination is quite staggering, even if the writing is sometimes a bit of a slog. Still, it seems remarkably ahead of its time in many ways.

The Sense Of Style by Steven Pinker


I spent a portion of 2017 writing a book so I was eager to read Steven Pinker’s take on a style guide, having thoroughly enjoyed The Language Instinct and The Blank Slate. This book starts with a bang—a critique of some examples of great writing. Then there’s some good practical advice, and then there’s a bit of a laundry list of non-rules. Typical of Pinker, the points about unclear writing are illustrated with humorous real-world examples. Overall, a good guide but perhaps a little longer than it needs to be.

Aurora by Kim Stanley Robinson


I loved everything about this book.

Writing On The Wall by Tom Standage


I’ve read all of Tom Standage’s books but none of them have ever matched the brilliance of The Victorian Internet. This one was frustratingly shallow. Every now and then there were glimpses of a better book. There’s a chapter on radio that gets genuinely exciting and intriguing. If Tom Standage wrote a whole book on that, I’d read it in a heartbeat. But in this collection of social media through the ages, it just reminded me of how much better he can be.

Grass by Sheri S. Tepper


Recommended by Jessica and Denise, this was my first Sheri S. Tepper book. It took me a while to get into it, but I enjoyed it. There’s nothing groundbreaking here, but it’s a solid planetary romance.

Bird By Bird by Anne Lamott


This has been recommended to me by more people than I can recall. I was very glad to finally get to read it (Amber and I did a book swap: I gave her The Sense Of Style and she gave me this). As a guide to writing, it’s got some solid advice, humorously delivered, but there were also moments where I found the style grating. Still worth reading though.

The Gradual by Christopher Priest


I just can’t get enough of Christopher Priest. I saw that his latest book was in the local library and I snapped it up. This one is set entirely in the Dream Archipelago. Yet again, the weirdness increases as the book progresses. It’s not up there with The Islanders or The Adjacent, but it’s as unsettling as any of his best books.

A Brief History of Everyone Who Ever Lived by Adam Rutherford


I think this was the best non-fiction book I read this year. It’s divided into two halves. The first half, which I preferred, deals with the sweep of human history as told through our genes. The second half deals with modern-day stories in the press that begin “Scientists say…” It’s mostly Adam Rutherford gritting his teeth in frustration as he points out that “it’s a bit more complicated than that.” Thoroughly enjoyable, well written, and educational.

A Closed And Common Orbit by Becky Chambers


I had read the first book in this series, A Long Way to a Small, Angry Planet, and thought it was so-so. It read strangely like fan fiction, and didn’t have much of a through-line. But multiple people said that this second outing was a big improvement. They weren’t wrong. This is definitely a better book. The story is relatively straightforward, and as with all good sci-fi, it’s not really telling us about a future society—it’s telling us about the world we live in. The book isn’t remarkable but it’s solid.

The Dream Machine: J.C.R. Licklider And The Revolution That Made Computing Possible by M. Mitchell Waldrop


This is the kind of book that could have been written just for me. The ARPANET, Turing, Norbert Wiener and Cybernetics, Xerox PARC, the internet, the web …it’s all in here. I enjoyed it, but it was a long slog. I’m not sure if using J.C.R. Licklider as the unifying factor in all these threads really worked. And maybe it was just the length of the book getting to me, but by the time I was two-thirds of the way through, I was getting weary of the dudes. Yes, there were a lot of remarkable men involved in these stories, but my heart sank with every chapter that went by without a single woman being mentioned. I found it ironic that so many intelligent people had the vision to imagine a world of human-computer symbiosis, but lacked the vision to challenge the status quo of the societal structures they were in.

Broken Monsters by Lauren Beukes


Lauren defies genre-pigeonholing once again. This is sort of a horror, sort of a detective story, and sort of a social commentary. It works well, although I was nervous about the Detroit setting sometimes veering into ruin porn. I don’t think it’s up there with Zoo City or The Shining Girls, but it’s certainly a page-turner.

Accessibility For Everyone by Laura Kalbag


Because the previous non-fiction book I read was so long, I really fancied something short and to-the-point. A Book Apart to the rescue. You can be guaranteed that any book from that publisher will be worth reading, and this is no exception.

Ninefox Gambit by Yoon Ha Lee


There was a lot of buzz around this book, and it came highly recommended by Danielle. It’s thoroughly dizzying in its world-building; you’re plunged right into the thick of things with no word of explanation or exposition. I like that. There were times when I thought that maybe I had missed some important information, because I was having such a hard time following what was going on, but then I’d realise that the sense of disorientation was entirely deliberate. Good stuff …although for some reason I ended up liking it more than loving it.

High Performance Browser Networking by Ilya Grigorik


A recommendation from Harry. The whole book is available online for free. That’s how I’ve been reading it—in a browser tab. In fact, I have to confess that I haven’t finished it. I’m dipping in and out. There’s a lot of very detailed information on how networks and browsers work. I’m not sure how much of it is going into my brain, but I very much appreciate having this resource to hand.

A Fire Upon The Deep by Vernor Vinge

I picked up a trade paperback copy of this sci-fi book at The Tattered Cover bookstore in Denver when I was there for An Event Apart earlier this month. I had heard it mentioned often and it sounds like my kind of yarn. I’m about halfway through it now and so far, so good.

There you have it.

It’s tough to pick a clear best. In non-fiction, I reckon Adam Rutherford’s A Brief History of Everyone Who Ever Lived just about pips Steven Pinker’s The Sense of Style. In fiction, Christopher Priest’s The Separation comes close, but Kim Stanley Robinson’s Aurora remains my favourite.

Like I said, not as many books as I would like. And of those twenty works, only seven were written by women—I need to do better in 2018.

Tuesday, December 26th, 2017

The Last Jedi

If you haven’t seen The Last Jedi (yet), please stop reading. Spoilers ahoy.

I’ve been listening to many, many podcast episodes about the latest Star Wars film. They’re all here on Huffduffer. You can subscribe to a feed of just those episodes if you want.

I am well aware that the last thing anybody wants or needs is one more hot take on this film, but what the heck? I figured I’d jot down my somewhat simplistic thoughts.

I loved it.

But I wasn’t sure at first. I’ve talked to other people who felt similarly on first viewing—they weren’t sure if they liked it or not. I know some people who, on reflection, decided they definitely didn’t like it. I completely understand that.

A second viewing helped to cement my positive feelings towards this film. This is starting to become a trend: I didn’t think much of Rogue One on first viewing, but a second watch reversed my opinion completely. Maybe I just find it hard to really get into the flow when I’m seeing a new Star Wars film for the very first time—an event that I once thought would never occur again.

My first viewing of The Last Jedi wasn’t helped by having the worst seats in the house. My original plan was to see it with Jessica at a minute past midnight in The Duke Of York’s in Brighton. I bought front-row tickets as soon as they were available. But then it turned out that we were going to be in Seattle at that time instead. We quickly grabbed whatever tickets were left. Those seats were right at the front and far edge of the cinema, so the screen was more trapezoidal than rectangular. The lights went down, the fanfare blared, and the opening crawl began its march up …and to the left. My brain tried to compensate for the perspective effects but it was hard. Is Snoke’s face supposed to look like that? Does that person really have such a tiny head?

But while the spectacle was somewhat marred, the story unfolded in all its surprising delight. I thoroughly enjoyed the feeling of having the narrative rug repeatedly pulled out from under me.

I loved the unexpected end of Snoke in his vampiric boudoir. Let’s face it, he was the least interesting part of The Force Awakens—a two-dimensional evil mastermind. To despatch him in the middle of the middle chapter was the biggest signal that The Last Jedi was not simply going to retread the beats of the original trilogy.

I loved the reveal of Rey’s parentage. This was what I had been hoping for—that Rey came from nowhere in particular. After The Force Awakens, I wrote:

Personally, I’d like it if her parentage were unremarkable. Maybe it’s the socialist in me, but I’ve never liked the idea that the Force is based on eugenics; a genetic form of inherited wealth for the lucky 1%. I prefer to think of the Force as something that could potentially be unlocked by anyone who tries hard enough.

But I had resigned myself to the inevitable reveal that would tie her heritage into an existing lineage. What an absolute joy, then, that The Force is finally returned to everyone’s hands! Anil Dash describes this wonderfully in his post Every Last Jedi:

Though it’s well-grounded in the first definitions of The Force that we were introduced to in the original trilogy, The Last Jedi presents a radically inclusive new view of the Force that is bigger and broader than the Jedi religion which has thus-far colored our view of the entire Star Wars universe.

I was less keen on the sudden Force usage by Leia. I think it was the execution more than the idea that bothered me. Still, I realise that the problem lies just as much with me. See, lots of the criticism of this film comes from people (justifiably) saying “That’s not how The Force works!” in relation to Rey, Kylo Ren, or Luke Skywalker. I don’t share that reaction and I want to say, “Hey, who are we to decide how The Force works?”, but then during the Leia near-death scene, I found myself more or less thinking “That’s not how The Force works!”

This would be a good time to remind ourselves that, in the Star Wars universe, you can substitute the words “The Plot” for “The Force”—an invisible agency guiding actions and changing the course of events.

The first time I saw The Last Jedi, I began to really worry during the film’s climactic showdown. I wasn’t so much worried for the fate of the characters in peril; I was worried for the fate of the overarching narrative. When Luke showed up, my heart sank a little. A deus ex machina …and how did he get here exactly? And then when he emerged unscathed from a barrage of walker cannon fire, I thought “Aw no, they’ve changed the Jedi to be like superheroes …but that’s not the way The Force/Plot works!”

And then I had the rug pulled out from under me again. Yes! What a joyous bit of trickery! My faith in The Force/Plot was restored.

I know a lot of people didn’t like the Canto Bight diversion. Jessica described it as being quite prequel-y, and I can see that. And while I agree that any shot involving our heroes riding across the screen (on a Fathier, on a scout walker) just didn’t work, I liked the world-expanding scope of the caper subplot.

Still, I preferred the Galactica-like war of attrition as the Resistance is steadily reduced in size while trying to escape the relentless pursuit of the First Order. It felt like proper space opera. In some ways, it reminded me of Alastair Reynolds but without the realism of the laws of physics (there’s nothing quite as egregious here as J.J. Abrams’ cosy galaxy where the destruction of a system can be seen in real time from the surface of another planet, but The Last Jedi showed again that Star Wars remains firmly in the space fantasy genre rather than hard sci-fi).

Oh, and of course I loved the porgs. But then, I never had a problem with ewoks, so treat my appraisal with a pinch of salt.

I loved seeing the west coast of Ireland get so much screen time. Beehive huts in a Star Wars film! Mind you, that made it harder for me to get immersed in the story. I kept thinking, “Now, is that Skellig Michael? Or is it on the Dingle peninsula? Or Donegal? Or west Clare?”

For all its global success, Star Wars has always had a very personal relationship with everyone it touches. The films themselves are only part of the reason why people respond to them. The other part is what people bring with them; where they are in life at the moment they’re introduced to this world. And frankly, the films are only part of this symbiosis. As much as people like to sneer at the toys and merchandising as a cheap consumerist ploy, they played a significant part in unlocking my imagination. Growing up in a small town on the coast of Ireland, the Star Wars universe—the world, the characters—was a playground for me to make up stories …just as it was for any young child anywhere in the world.

One of my favourite shots in The Last Jedi looks like it could’ve come from the mind of that young child: an X-wing submerged in the waters of the rocky coast of Ireland. It was as though Rian Johnson had a direct line to my childhood self.

And yet, I think the reason why The Last Jedi works so well is that Rian Johnson makes no concessions to my childhood, or anyone else’s. This is his film. Of all the millions of us who were transported by this universe as children, only he gets to put his story onto the screen and into the saga. There are two ways to react to this. You can quite correctly exclaim “That’s not how I would do it!”, or you can go with it …even if that means letting go of some deeply-held feelings about what could’ve, should’ve, would’ve happened if it were our story.

That said, I completely understand why people might take against this film. Like I said, Rian Johnson makes no concessions. That’s in stark contrast to The Force Awakens. I wrote at the time:

Han Solo picked up the audience like it was a child that had fallen asleep in the car, and he gently tucked us into our familiar childhood room where we can continue to dream. And then, with a tender brush of his hand across the cheek, he left us.

The Last Jedi, on the other hand, thrusts us into this new narrative in the same way you might teach someone to swim by throwing them into the ocean from the peak of Skellig Michael. The polarised reactions to the film are from people sinking or swimming.

I choose to swim. To go with it. To let go. To let the past die.

And yet, one of my favourite takeaways from The Last Jedi is how it offers a healthy approach to dealing with events from the past. Y’see, there was always something that bothered me in the original trilogy. It was one of Yoda’s gnomic pronouncements in The Empire Strikes Back:

Try not. Do. Or do not. There is no try.

That always struck me as a very bro-ish “crushing it” approach to life. That’s why I was delighted that Rian Johnson had Yoda himself refute that attitude completely:

The greatest teacher, failure is.

That’s exactly what Luke needed to hear. It was also what I—many decades removed from my childhood—needed to hear.