Sunday, February 21st, 2021

Reading resonances

In today’s world of algorithmic recommendation engines, it’s nice to experience some serendipity every now and then. I remember how nice it was when two books I read in sequence had a wonderful echo in their descriptions of fermentation:

There’s a lovely resonance in reading @RobinSloan’s Sourdough back to back with @EdYong209’s I Contain Multitudes. One’s fiction, one’s non-fiction, but they’re both microbepunk.

Robin agreed:

OMG I’m so glad these books presented themselves to you together—I think it’s a great pairing, too. And certainly, some of Ed’s writing about microbes was in my head as I was writing the novel!

I experienced another resonant echo when I finished reading Rebecca Solnit’s A Paradise Built in Hell and then started reading Rutger Bregman’s Humankind. Both books share a common theme—that human beings are fundamentally decent—but the first chapter of Humankind mentions the exact same events that are chronicled in A Paradise Built in Hell: the Blitz, September 11th, Katrina, and more. Then he cites that book directly. The two books were published a decade apart, and it was just happenstance that I ended up reading them in quick succession.

I recommend both books. Humankind is thoroughly enjoyable, but it has one maddeningly frustrating flaw. A Paradise Built in Hell isn’t the only work that influenced Bregman—he also cites Yuval Noah Harari’s Sapiens. Here’s what I thought of Sapiens:

Yuval Noah Harari has fixated on some ideas that make a mess of the narrative arc of Sapiens. In particular, he believes that the agricultural revolution was, as he describes it, “history’s biggest fraud.” In the absence of any recorded evidence for this, he instead provides idyllic descriptions of the hunter-gatherer lifestyle that have as much foundation in reality as the paleo diet.

Humankind echoes this fabrication. Again, the giveaway is that the footnotes dry up when the author is describing the idyllic prehistoric nomadic lifestyle. Compare it with, for instance, this description of the founding of Jericho—possibly the world’s oldest city—where researchers are at pains to point out that we can’t possibly know what life was like before written records.

I worry that Yuval Noah Harari’s imaginings are being treated as “truthy” by Rutger Bregman. It’s not a trend I like.

Still, apart from that annoying detour, Humankind is a great read. So is A Paradise Built in Hell. Try them together.

Thursday, February 18th, 2021

Checked in at Queen’s Park. The pond in the park — with Jessica

Wednesday, October 14th, 2020

Saving forms

I added a long-overdue enhancement to The Session recently. Here’s the scenario…

You’re on a web page with a comment form. You type your well-considered thoughts into a textarea field. But then something happens. Maybe you accidentally navigate away from the page or maybe your network connection goes down right when you try to submit the form.

This is a textbook case for storing data locally on the user’s device …at least until it has safely been transmitted to the server. So that’s what I set about doing.

My first decision was choosing how to store the data locally. There are multiple APIs available: sessionStorage, IndexedDB, localStorage. It was clear that sessionStorage wasn’t right for this particular use case: I needed the data to be saved across browser sessions. So it was down to IndexedDB or localStorage. IndexedDB is the more versatile and powerful—because it’s asynchronous—but localStorage is nice and straightforward so I decided on that. I’m not sure if that was the right decision though.

Alright, so I’m going to store the contents of a form in localStorage. It accepts key/value pairs. I’ll make the key the current URL. The value will be the contents of that textarea. I can store other form fields too. Even though localStorage technically only stores one value per key—and that value has to be a string—the string can be a JSON representation of an object, so in reality you can store multiple values with one key (just remember to stringify when you store and parse when you retrieve).
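
Something along these lines (a minimal sketch; the field name here is just for illustration):

// The key is the current URL, the value is a JSON string.
const key = window.location.href;
const data = {
  comment: document.querySelector('textarea').value
};
// localStorage only stores strings, so stringify the object…
localStorage.setItem(key, JSON.stringify(data));
// …and parse it again when retrieving.
const stored = JSON.parse(localStorage.getItem(key));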

Now I know what I’m going to store (the textarea contents) and how I’m going to store it (localStorage). The next question is when should I do it?

I could play it safe and store the comment whenever the user presses a key within the textarea. But that seems like overkill. It would be more efficient to only save when the user leaves the current page for any reason.

Alright then, I’ll use the unload event. No! Bad Jeremy! If I use that then the browser can’t reliably add the current page to the cache it uses for faster back-forwards navigations. The page life cycle is complicated.

So beforeunload then? Well, maybe. But modern browsers also support a pagehide event that looks like a better option.

In either case, just adding a listener for the event could screw up the caching of the page for back-forwards navigations. I should only listen for the event if I know that I need to store the contents of the textarea. And in order to know if the user has interacted with the textarea, I’m back to listening for key presses again.

But wait a minute! I don’t have to listen for every key press. If the user has typed anything, that’s enough for me. I only need to listen for the first key press in the textarea.

Handily, addEventListener accepts an object of options. One of those options is called “once”. If I set that to true, then the event listener is only fired once.

So I set up a cascade of event listeners. If the user types anything into the textarea, that fires an event listener (just once) that then adds the event listener for when the page is unloaded—and that’s when the textarea contents are put into localStorage.
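
Here’s a rough sketch of how that cascade might look, assuming a single textarea on the page and preferring pagehide with a beforeunload fallback:

const textarea = document.querySelector('textarea');
const key = window.location.href;

function storeContents() {
  localStorage.setItem(key, textarea.value);
}

// The first key press (and only the first, thanks to the once option)
// adds the listener for the page being unloaded.
textarea.addEventListener('keydown', function () {
  const unloadEvent = ('onpagehide' in window) ? 'pagehide' : 'beforeunload';
  window.addEventListener(unloadEvent, storeContents);
}, { once: true });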

I’ve abstracted my code into a gist. Here’s what it does:

  1. Cut the mustard. If this browser doesn’t support localStorage, bail out.
  2. Set the localStorage key to be the current URL.
  3. If there’s already an entry for the current URL, update the textarea with the value in localStorage.
  4. Write a function to store the contents of the textarea in localStorage but don’t call the function yet.
  5. The first time that a key is pressed inside the textarea, start listening for the page being unloaded.
  6. When the page is being unloaded, invoke that function that stores the contents of the textarea in localStorage.
  7. When the form is submitted, remove the entry in localStorage for the current URL.

That last step isn’t something I’m doing on The Session. Instead I’m relying on getting something back from the server to indicate that the form was successfully submitted. If you can do something like that, I’d recommend that instead of listening to the form submission event. After all, something could still go wrong between the form being submitted and the data being received by the server.
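
If you were submitting the form with fetch, for example, that clean-up might look something like this (just a sketch with an illustrative flow, not what The Session actually does):

const form = document.querySelector('form');
form.addEventListener('submit', async function (event) {
  event.preventDefault();
  const response = await fetch(form.action, {
    method: 'POST',
    body: new FormData(form)
  });
  // Only remove the stored copy once the server has confirmed receipt.
  if (response.ok) {
    localStorage.removeItem(window.location.href);
  }
});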

Still, this bit of code is better than nothing. Remember, it’s intended as an enhancement. You should be able to drop it into any project and improve the user experience a little bit. Ideally, no one will ever notice it’s there—it’s the kind of enhancement that only kicks in when something goes wrong. A little smidgen of resilient web design. A defensive enhancement.

Monday, September 7th, 2020

Checked in at Duke of York’s Picturehouse. A cinema to ourselves for T E N E T — with Jessica

Saturday, September 5th, 2020

Playing The Pride Of Petravore (hornpipe) on mandolin:

https://thesession.org/tunes/82

https://www.youtube.com/watch?v=Ew5DDYJq6cE

Monday, August 17th, 2020

Design Principles For The Web—the links

I’m speaking today at an online edition of An Event Apart called Front-End Focus. I’ll be opening up the show with a talk called Design Principles For The Web, which ironically doesn’t have much of a front-end focus:

Designing and developing on the web can feel like a never-ending crusade against the unknown. Design principles are one way of unifying your team to better fight this battle. But as well as the design principles specific to your product or service, there are core principles underpinning the very fabric of the World Wide Web itself. Together, we’ll dive into applying these design principles to build websites that are resilient, performant, accessible, and beautiful.

That explains why I’ve been writing so much about design principles …well, that and the fact that I’m mildly obsessed with them.

To avoid technical difficulties, I’ve pre-recorded the talk. So while that’s playing, I’ll be spamming the accompanying chat window with related links. Then I’ll do a live Q&A.

Should you be interested in the links that I’ll be bombarding the attendees with, I’ve gathered them here in one place (and they’re also on the website of An Event Apart). The narrative structure of the talk might not be clear from scanning down a list of links, but there’s some good stuff here that you can dive into if you want to know what the inside of my head is like.

Friday, July 31st, 2020

Checked in at Pelicano. Iced latte — with Jessica

Saturday, June 13th, 2020

Gormless

I sometimes watch programmes on TG4, the Irish language broadcaster that posts most shows online. Even though I’m watching with subtitles on, I figure it can’t be bad for keeping my very rudimentary Irish from atrophying completely.

I’m usually watching music programmes but occasionally I’ll catch a bit of the news (or “nuacht”). Their coverage of the protests in America reminded me of a peculiar quirk of the Irish language. The Black community would be described as “daoine gorm” (pronounced “deenee gurum”), which literally translated would mean “blue people”. In Irish, the skin colour is referred to as “gorm”—blue.

This isn’t one of those linguistic colour differences like the way the Japanese word ao means blue and green. Irish has a perfectly serviceable word for the colour black, “dubh” (pronounced “duv”). But the term “fear dubh” (“far duv”), which literally means “black man”, was already taken. It’s used to describe the devil. Not ideal.

In any case, this blue/black confusion in Irish reminded me of a delicious tale of schadenfreude. When I was writing about the difference between intentions and actions, I said:

Sometimes bad outcomes are the result of good intentions. Less often, good outcomes can be the result of bad intentions.

Back in 2017, the Geeky Gaeilgeoir wrote a post called Even Racists Got the Blues. In it, she dissects the terrible translation job done by an Irish-American racist sporting a T-shirt that reads:

Gorm Chónaí Ábhar.

That’s completely nonsensical in Irish, but the intent behind the words was to say “Blue Lives Matter.” Except… even if it made grammatical sense, what this idiot actually wrote would translate as:

Black Lives Matter.

What a wonderful chef’s kiss of an own goal!

If only it were a tattoo.

Saturday, May 30th, 2020

Programming CSS to perform Sass colour functions

I wrote recently about moving away from Sass to using native CSS features. I had this to say on the topic of mixins in Sass:

These can be very useful, but now there’s a lot that you can do just in CSS with calc(). The built-in darken() and lighten() mixins are handy though when it comes to colours.

I know we will be getting these in the future but we’re not there yet with CSS.

Anyway, I had all this in the back of my mind when I was reading Lea’s excellent feature in this month’s Increment: A user’s guide to CSS variables. She’s written about a really clever technique of combining custom properties with hsl() colour values for creating colour palettes. (See also: Una’s post on dynamic colour theming with pure CSS.)

As so often happens when I’m reading something written by Lea—or seeing her give a talk—light bulbs started popping over my head (my usual response to Lea’s knowledge bombs is either “I didn’t know you could do that!” or “I never thought of doing that!”).

I immediately set about implementing this technique over on The Session. The trick here is to use separate custom properties for the hue, saturation, and lightness parts of hsl() colour values. Then, when you want to lighten or darken the colour—say, on hover—you can update the lightness part.

I’ve made a Codepen to show what I’m doing.

Let’s say I’m styling a button element. I make custom properties for hsl() values:

button {
  --button-colour-hue: 19;
  --button-colour-saturation: 82%;
  --button-colour-lightness: 38%;
  background-color: hsl(
    var(--button-colour-hue),
    var(--button-colour-saturation),
    var(--button-colour-lightness)
  );
}

For my buttons, I want the borders to be slightly darker than the background colour. When I was using Sass, I used the darken() function to do this. Now I use calc(). Here’s how I make the borders 10% darker:

border-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) - 10%)
);

That calc() function is subtracting a percentage from a percentage: 38% minus 10% in this case. The borders will have a lightness of 28%.

I make the bottom border even darker and the top border lighter to give a feeling of depth.
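
Something like this, for example (the exact percentages here are just illustrative):

/* darker bottom border for a feeling of depth… */
border-bottom-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) - 15%)
);
/* …and a lighter top border */
border-top-color: hsl(
  var(--button-colour-hue),
  var(--button-colour-saturation),
  calc(var(--button-colour-lightness) + 10%)
);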

On The Session there’s a “cancel” button style that’s deep red.

Here’s how I set its colour:

.cancel {
  --button-colour-hue: 0;
  --button-colour-saturation: 100%;
  --button-colour-lightness: 40%;
}

That’s it. The existing button declarations take care of assigning the right shades for the border colours.

Here’s another example. Site admins see buttons for some actions only available to them. I want those buttons to have their own colour:

.admin {
  --button-colour-hue: 45;
  --button-colour-saturation: 100%;
  --button-colour-lightness: 40%;
}

You get the idea. It doesn’t matter how many differently-coloured buttons I create, the effect of darkening or lightening their borders is all taken care of.

So it turns out that the lighten() and darken() functions from Sass are available to us in CSS by using a combination of custom properties, hsl(), and calc().

I’m also using this combination to lighten or darken background and border colours on :hover. You can poke around the Codepen if you want to see that in action.
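
A hover state that bumps up the lightness might look something like this (the exact amount is illustrative):

button:hover {
  background-color: hsl(
    var(--button-colour-hue),
    var(--button-colour-saturation),
    calc(var(--button-colour-lightness) + 10%)
  );
}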

I love seeing the combinatorial power of these different bits of CSS coming together. It really is a remarkably powerful programming language.

Tuesday, April 28th, 2020

Modified machete

The Rise Of Skywalker arrives on Disney Plus on the fourth of May (a date often referred to as Star Wars Day, even though May 25th is and always will be the real Star Wars Day). Time to begin a Star Wars movie marathon. But in which order?

Back when there were a mere two trilogies, this was already a vexing problem if someone were watching the films for the first time. You could watch the six films in episode order:

  1. The Phantom Menace
  2. Attack Of The Clones
  3. Revenge Of The Sith
  4. A New Hope
  5. The Empire Strikes Back
  6. Return Of The Jedi

But then you’re spoiling the grand reveal in episode five.

Alright then, how about release order?

  1. A New Hope
  2. The Empire Strikes Back
  3. Return Of The Jedi
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith

But then you’re front-loading the big pay-off, and you’re finishing with a big set-up.

This conundrum was solved with the machete order. It suggests omitting The Phantom Menace, not because it’s crap, but because nothing happens in it that isn’t covered in the first five minutes of Attack Of The Clones. The machete order is:

  1. A New Hope
  2. The Empire Strikes Back
  3. Attack Of The Clones
  4. Revenge Of The Sith
  5. Return Of The Jedi

It’s kind of brilliant. You get to keep the big reveal in The Empire Strikes Back, and then through flashback, you see how this came to be. Best of all, the pay-off in Return Of The Jedi has even more resonance because you’ve just seen Anakin’s downfall in Revenge Of The Sith.

With the release of the new sequel trilogy, an adjusted machete order is a pretty straightforward way to see the whole saga:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. The Last Jedi
  9. The Rise Of Skywalker

Done. But …what if you want to include the standalone films too?

If you slot them in in release order, they break up the flow:

  1. A New Hope
  2. The Empire Strikes Back
  3. The Phantom Menace (optional)
  4. Attack Of The Clones
  5. Revenge Of The Sith
  6. Return Of The Jedi
  7. The Force Awakens
  8. Rogue One
  9. The Last Jedi
  10. Solo
  11. The Rise Of Skywalker

I’m planning to watch all eleven films. This was my initial plan:

  1. Rogue One
  2. A New Hope
  3. The Empire Strikes Back
  4. The Phantom Menace
  5. Attack Of The Clones
  6. Revenge Of The Sith
  7. Solo
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

I definitely want to have Rogue One lead straight into A New Hope. The problem is where to put Solo. I don’t want to interrupt the Sith/Jedi setup/payoff.

So here’s my current plan, which I have already begun:

  1. Solo
  2. Rogue One
  3. A New Hope
  4. The Empire Strikes Back
  5. The Phantom Menace
  6. Attack Of The Clones
  7. Revenge Of The Sith
  8. Return Of The Jedi
  9. The Force Awakens
  10. The Last Jedi
  11. The Rise Of Skywalker

This way, the two standalone films work as world-building for the saga and don’t interrupt the flow once the main story is underway.

I think this works pretty well. Neither Solo nor Rogue One require any prior knowledge to be enjoyed.

And just in case you’re thinking that perhaps I’m overthinking it a bit and maybe I’ve got too much time on my hands …the world has too much time on its hands right now! Thanks to The Situation, I can not only take the time to plan and execute the viewing order for a Star Wars movie marathon, I can feel good about it. Stay home, they said. Literally saving lives, they said. Happy to oblige!

Wednesday, March 11th, 2020

A curl in every port

A few years back, Zach Bloom wrote The History of the URL: Path, Fragment, Query, and Auth. He recently expanded on it and republished it on the Cloudflare blog as The History of the URL. It’s well worth the time to read the whole thing. It’s packed full of fascinating tidbits.

In the section on ports, Zach says:

The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by Jon Postel at the IANA) at the request of Tim Berners-Lee sometime between 1990 and 1992.

Ooh, I can give you an exact date! It was January 24th, 1992. I know this because of the hack week in CERN last year to recreate the first ever web browser.

Kimberly was spelunking down the original source code, when she came across this line in the HTUtils.h file:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

We showed this to Jean-François Groff, who worked on the original web technologies like libwww, the forerunner to libcurl. He remembers that day. It felt like they had “made it”, receiving the official blessing of Jon Postel (in the same RFC, incidentally, that gave port 70 to Gopher).

Then he told us something interesting about the next line of code:

#define OLD_TCP_PORT 2784 /* Try the old one if no answer on 80 */

Port 2784? That seems like an odd choice. Most of us would choose something easy to remember.

Well, it turns out that 2784 is easy to remember if you’re Tim Berners-Lee.

Those were the last four digits of his parents’ phone number.

Monday, December 9th, 2019

Checked in at Plough & Stars. Sunday night session 🎻🎵

Sunday, November 24th, 2019

Checked in at British Airways First Lounge

Sunday, August 25th, 2019

Checked in at Chicago Brewhouse. Chicago dog! — with Jessica

Saturday, August 10th, 2019

Crossing

I’m going to America. But this time it’s going to be a bit different.

Here’s the backstory: I need to get to Chicago for An Event Apart in a couple of weeks. Jessica and I were talking about maybe going to Florida first to hang out with her family on the beach for a bit. We just needed to figure out the travel logistics.

Here’s the next variable to add in to the mix: Jessica is really into ballet. Like, really into ballet. She also likes boats, ships, and all things nautical.

Those two things are normally unrelated, but then a while back, Jessica tweeted this:

OMG @ENBallet on a SHIP crossing the ATLANTIC.

Dance the Atlantic 2019 Cruise

I chuckled at that, and almost immediately dismissed it as being something from another world. But then I looked at the dates, and wouldn’t you know it, it would work out perfectly for our planned travel to Florida and Chicago.

Sooo… we’re crossing the Atlantic Ocean on the Queen Mary 2. With ballet dancers.

It’s not a cruise. It’s a crossing:

The first rule about traveling between America and England aboard the Queen Mary 2, the flagship of the Cunard Line and the world’s largest ocean liner, is to never refer to your adventure as a cruise. You are, it is understood, making a crossing. The second rule is to refrain, when speaking to those who travel frequently on Cunard’s ships, from calling them regulars. The term of art — it is best pronounced while approximating Maggie Smith’s cut-glass accent on “Downton Abbey” — is Cunardists.

Because of the black-tie gala dinners taking place during the voyage, I am now the owner of a tuxedo. I think all this dressing up is kind of like cosplay for the class system. This should be …interesting.

By all accounts, internet connectivity is non-existent on the crossing, so I’m going to be incommunicado. Don’t bother sending me any email—I won’t see it.

We sail from Southampton tomorrow. We arrive in New York a week later.

See you on the other side!

Saturday, June 29th, 2019

Checked in at Royal Pavilion Gardens. with Jessica

Wednesday, June 19th, 2019

Toast

Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable.

First off, this all kicked off with the announcement of “intent to implement”. That makes it sound like Google are intending to, well, …implement this. In fact “intent to implement” really means “intend to mess around with this behind a flag”. The language is definitely confusing and this is something that will hopefully be addressed.

Secondly, Chrome isn’t going to ship a toast element. Instead, this is a proposal for a custom element currently called std-toast. Even if the experiment proves successful, it’s not a foregone conclusion that the final element will be called toast (minus the sexually-transmitted-disease prefix). If this turns out to be a useful feature, there will surely be a discussion between implementers about the naming of the finished element.

This is the ideal candidate for a web component. It makes total sense to create a custom element along the lines of std-toast. At first I was confused about why this was happening inside of a browser instead of first being created as a standalone web component, but it turns out that there’s been a fair bit of research looking at existing implementations in libraries and web components. So this actually looks like a good example of paving an existing cowpath.

But it didn’t come across that way. The timing of announcements felt like this was something that was happening without prior discussion. Terence Eden writes:

It feels like a Google-designed, Google-approved, Google-benefiting idea which has been dumped onto the Web without any consideration for others.

I know that isn’t the case. And I know how many dedicated people have worked hard on this proposal.

Adrian Roselli also remarks on the optics of this situation:

To be clear, while I think there is value in minting a native HTML element to fill a defined gap, I am wary of the approach Google has taken. A repo from a new-to-the-industry Googler getting a lot of promotion from Googlers, with Googlers on social media doing damage control for the blowback, WHATWG Googlers handling questions on the repo, and Google AMP strongly supporting it (to reduce its own footprint), all add up to raise alarm bells with those who advocated for a community-driven, needs-based, accessible web.

Dave Cramer made a similar point:

But my concern wasn’t so much about the nature of the new elements, but of how we learned about them and what that says about how web standardization works.

So there’s a general feeling (outside of Google) that there’s something screwy here about the order of events. A lot of discussion and research seems to have happened in isolation before announcing the intent to implement:

It does not appear that any discussions happened with other browser vendors or standards bodies before the intent to implement.

Why is this a problem? Google is seeking feedback on a solution, not on how to solve the problem.

Going back to my early confusion about putting a web component directly into a browser, this question on Discourse echoes my initial reaction:

Why not release std-toast (and other elements in development) as libraries first?

It turns out that std-toast and other in-browser web components are part of an idea called layered APIs. In theory this is an initiative in the spirit of the extensible web manifesto.

The extensible web movement focused on exposing low-level APIs to developers: the fetch API, the cache API, custom elements, Houdini, and all of those other building blocks. Layered APIs, on the other hand, focuses on high-level features …like, say, an HTML element for displaying “toast” notifications.

Layered APIs is an interesting idea, but I’m worried that it could be used to circumvent discussion between implementers. It’s a route to unilaterally creating new browser features first and standardising after the fact. I know that’s how many features already end up in browsers, but I think that the sooner that authors, implementers, and standards bodies get a say, the better.

I certainly don’t think this is a good look for Google given the debacle of AMP’s “my way or the highway” rollout. I know that’s a completely different team, but the external perception of Google amongst developers has been damaged by the AMP project’s anti-competitive abuse of Google’s power in search.

Right now, a lot of people are jumpy about Microsoft’s move to Chromium for Edge. My friends at Microsoft have been reassuring me that while it’s always a shame to reduce browser engine diversity, this could actually be a good thing for the standards process: Microsoft could theoretically keep Google in check when it comes to what features are introduced to the Chromium engine.

But that only works if there is some kind of standards process. Layered APIs in general—and std-toast in particular—hint at a future where a single browser vendor can plough ahead on their own. I sincerely hope that’s a misreading of the situation and that this has all been an exercise in miscommunication and misunderstanding.

Like Dave Cramer says:

I hear a lot about how anyone can contribute to the web platform. We’ve all heard the preaching about incubation, the Extensible Web, working in public, paving the cowpaths, and so on. But to an outside observer this feels like Google making all the decisions, in private, and then asking for public comment after the feature has been designed.

Monday, March 11th, 2019

T minus one

I’m back at CERN.

I’m back at CERN because tomorrow, March 12th, 2019, is exactly thirty years on from when Tim Berners-Lee submitted his original “vague but exciting” Information Management: A Proposal. Tomorrow morning, bright and early, there’s an event at CERN called Web@30.

Thanks to my negligible contribution to the recreation of the WorldWideWeb browser, I’ve wrangled an invitation to attend. Remy and Martin are here too, and I know that the rest of the team are with us in spirit.

I’m so excited about this! I’m such a nerd for web history, it’s going to be like Christmas for me.

If you’re up early enough, you can watch the event on a livestream. The whole thing will be over by mid-morning. Then, Remy and I will take an afternoon flight back to England …just in time for the evening event at London’s Science Museum.

Saturday, March 9th, 2019

Updating email addresses with Mailchimp’s API

I’ve been using Mailchimp for years now to send out a weekly newsletter from The Session. But I never visit the Mailchimp website. Instead, I use the API to create a campaign each week, and then send it out. I also use the API whenever a member of The Session updates their email preferences (or changes their details).

I got an email from Mailchimp that their old API was being deprecated and I’d need to update to their more recent one. The code I was using had been happily running for about seven years, but now I’d have to change it.

Luckily, Drew has written a really handy Mailchimp API wrapper for PHP, the language that The Session’s codebase is in. Thanks, Drew! I downloaded that wrapper and updated my code accordingly.

Everything went pretty smoothly. I was able to create campaigns, send campaigns, add new subscribers, and delete subscribers. But I ran into an issue when I wanted to update someone’s email address (on The Session, you can edit your details at any time, including your email address).

Here’s the set up:

use \DrewM\MailChimp\MailChimp;
$MailChimp = new MailChimp('abc123abc123abc123abc123abc123-us1');
$list_id = 'b1234346';
$subscriber_hash = $MailChimp -> subscriberHash('currentemail@example.com');
$endpoint = 'lists/'.$list_id.'/members/'.$subscriber_hash;

Now to update details, according to the API, I can use the patch method on that endpoint:

$MailChimp -> patch($endpoint, [
    'email_address' => 'newemail@example.com'
]);

But that doesn’t work. Mailchimp effectively treats email addresses as unique IDs for subscribers. So the only way to change someone’s email address appears to be to delete them, and then subscribe them fresh with the new email address:

$MailChimp -> delete($endpoint);
$newendpoint = 'lists/'.$list_id.'/members';
$MailChimp -> post($newendpoint, [
    'email_address' => 'newemail@example.com',
    'status' => 'subscribed'
]);

That’s somewhat annoying, as the previous version of the API allowed email addresses to be updated, but this workaround isn’t too arduous.

Anyway, I figured I’d share this just in case it’s useful for anyone else migrating to the newer API.

Update: Belay that. Turns out that you can update email addresses, but you have to be sure to include the status value:

$MailChimp -> patch($endpoint, [
    'email_address' => 'newemail@example.com',
    'status' => 'subscribed'
]);

Okay, that’s a lot more straightforward. Ignore everything I said.

Tuesday, February 19th, 2019

Interaction 19

Right before heading to Geneva to spend the week hacking at CERN, I was in Seattle with a sizable Clearleft contingent to attend Interaction 19, the annual conference put on by the Interaction Design Association.

Ben has rounded up the highlights from my fellow Clearlefties. There are some good talks listed there: John Maeda, Nelly Ben Hayoun, and Jon Bell were thoroughly enjoyable. Some other talks were just okay, and there was one talk, by IXDA president Alok Nandi, that was almost impressive in how rambling and incoherent it was. It was like being in a scene from Silicon Valley. I remember clapping at the end; not out of appreciation, but out of relief.

If truth be told, Interaction 19 had about a day’s worth of really great content …spread out over three days. To be fair, that’s par for the course. When we went to Interaction 17 in New York, the hit/miss ratio was about the same:

There were some really good talks at the event, but alas, the multi-track format made it difficult to see all of them. Continuous partial FOMO was the order of the day.

And as I said at the time:

To be honest, the conference was only part of the motivation for the trip. Spending a week in New York with a gaggle of Clearlefties was its own reward.

So I’m willing to cut Interaction 19 a lot of slack. Even if quite a few of the talks were just so-so, getting to hang with Clearlefties in Seattle during snowmageddon was a lot of fun (and you’ll be pleased to hear that we didn’t even resort to cannibalism to survive).

But while the content of the conference was fair to middling, the organisation of it was a shambles:

Imagine the Fyre Festival but in downtown Seattle in winter. Welcome to @ixdconf. #ixd19

They sold more tickets than there were seats. I ended up watching the first morning’s keynotes being streamed to a screen in a conference room in a different building.

Now, I’ve been at events with keynotes that have overflow rooms—South by Southwest does this. But that’s at a different scale. This is a conference with a known number of attendees, each one of them spending over a thousand dollars to attend. I’m pretty sure that a first-come, first-served policy isn’t the best way of treating those attendees.

Anyway, here’s what I submitted for that round-up of the best talks, but which, for reasons of prudence, was omitted from the final post:

I really enjoyed the keynote by Liz Jackson on inclusive design. I would’ve enjoyed it even more if I could’ve seen it in person. Instead I watched it live-streamed to a meeting room two buildings over because the conference sold more tickets than they had seats for. This was after queueing in the cold for registration. So I feel like I learned a lot from Interaction 19 …about how not to organise a conference.

Still, as Ben notes:

We all enjoyed ourselves thoroughly, despite best efforts by the West Coast snow to disrupt the entire city.

I’m going to be back in Seattle in just under two weeks for An Event Apart. Now that’s a conference! It runs like a well-oiled machine, and every talk in its single track has been curated for excellence …with one exception.