


Deep linking with fragmentions

When I was marking up Resilient Web Design I wanted to make sure that people could link to individual sections within a chapter. So I added IDs to all the headings. There’s no UI to expose that though—like the hover pattern that some sites use to show that something is linkable—so unless you know the IDs are there, there’s no way of getting at them other than “view source.”

But if you’re reading a passage in Resilient Web Design and you highlight some text, you’ll notice that the URL updates to include that text after a hash symbol. If that updated URL gets shared, then anyone following it should be sent straight to that string of text within the page. That’s fragmentions in action:

Fragmentions find the first matching word or phrase in a document and focus its closest surrounding element. The match is determined by the case-sensitive string following the # (or ## double-hash)

It’s a similar idea to Eric and Simon’s proposal to use CSS selectors as fragment identifiers, but using plain text instead. You can find out more about the genesis of fragmentions from Kevin. I’m using Jonathon Neal’s script with some handy updates from Matthew.
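For the curious, the core of the idea fits in just a few lines. This is a bare-bones sketch rather than Jonathon’s actual script: take the text after the hash, find the first text node containing it, and bring its parent element into view.

(function () {
  var fragment = decodeURIComponent(window.location.hash.replace(/^#+/, ''));
  if (!fragment || document.getElementById(fragment)) {
    return; // no fragmention, or an ordinary fragment identifier already matches
  }
  var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  while (walker.nextNode()) {
    if (walker.currentNode.textContent.indexOf(fragment) !== -1) {
      walker.currentNode.parentNode.scrollIntoView();
      break;
    }
  }
})();

The real script does more than this (updating the URL as you select text, for instance), but the matching part really is that straightforward.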

I’m using the fragmention support to power the index of the book. It relies on JavaScript to work though, so Matthew has come to the rescue again and created a version of the site with IDs for each item linked from the index (I must get around to merging that).

The fragmention functionality is ticking along nicely with one problem…

I’ve tweaked the typography of Resilient Web Design to within an inch of its life, including a crude but effective technique to avoid widowed words at the end of a paragraph. The last two words of every paragraph are separated by a UTF-8 no-break space character instead of a regular space.
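In case you’re wondering what that substitution looks like, here’s a rough sketch of the idea (not the exact code used for the book):

function preventWidow(text) {
  // swap the last regular space for a no-break space (U+00A0)
  // so the final two words always wrap together
  return text.replace(/ (?=\S+\s*$)/, '\u00A0');
}

preventWidow('for people of the web'); // "for people of the\u00A0web"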

That solves the widowed words problem, but it confuses the fragmention script. Any selected text that includes the last two words of a paragraph fails to match. I’ve tried tweaking the script, but I’m stumped. If you fancy having a go, please have at it.

Update: And fixed! Thanks to Lee.

Exploring web technologies

Last week, I had two really enjoyable experiences discussing completely opposite ends of the web technology stack.

Tuesday is Codebar day here in Brighton. Clearleft hosted it at 68 Middle Street last week. I really, really enjoy coaching at Codebar. I particularly like teaching the absolute basics of HTML and CSS. There’s something so rewarding about seeing the “a-ha!” moments when concepts click with people. I also love answering the inevitable questions that arise, like “why is it like that?”, or “how do I do this?”

Fantastic coding tonight! Great to see you all. Thanks for coming and thanks @68MiddleSt & @clearleft for having us.

Thursday was devoted to the opposite end of the spectrum. I ran a workshop at Clearleft with some developers from one of our clients. The whole day was dedicated to exploring and evaluating up-and-coming web technologies. Basically, it was a chance to geek out about all the stuff I’ve been linking to or writing about. During the workshop I ended up making a lot of use of my tagging system here on adactio.com:

Prioritising topics for discussion.

Web components and service workers ended up at the top of the list of technologies to tackle, which was fortuitous, given my recent thoughts on comparing the two:

First of all, ask the question “who benefits from this technology?” In the case of service workers, it’s the end users. They get faster websites that handle network failure better. In the case of web components, there are no direct end-user benefits. Web components exist to make developers’ lives easier. That’s absolutely fine, but any developer convenience gained by the use of web components can’t come at the expense of the user—that price is too high.

The next question we usually ask when we’re evaluating a technology is “how well does it work?” Personally, I think it’s just as important to ask “how well does it fail?”

Those two questions turned out to be a good framework for the whole workshop. The question of how to evaluate technologies is something I’ve been thinking about a lot lately. I’m pretty sure it will be what my next conference talk is going to be all about.

You can read more about the structure of the workshop over on the Clearleft site. I’m looking forward to running it again sometime. But I’m equally looking forward to getting back to the basics at the next Codebar.

Independently published

Jessica writes about The Heroine’s Journey.

Remy explains Why I love working with the web.

Ludwig dreams of designers and developers working Together.

Charlotte documents her technique Teaching the order of margins in CSS.

Craig field-tests The Leica Q.

Robin thinks about The New Web Typography.

Michael dives deep into A Complete History of the Millennium Falcon.

What do they all have in common? Nothing …other than the fact that each person chose to write on their own website. I’m grateful for that. These are all wonderful pieces of writing—they deserve a long life.

Huffduffing for podcasters

I was pointed to this discussion thread which is talking about how to make podcast episodes findable for services like Huffduffer.

The logic behind Huffduffer’s bookmarklet goes something like this…

  1. Find any a elements that have href values ending in “.mp3” or “.m4a”.
  2. If there’s just one audio file on the page, use that.
  3. If there are multiple audio files, offer a list to the user and have them choose.

If that doesn’t work…

  1. Look for a link element with a rel value of “enclosure”.
  2. Look for a meta element property value of “og:audio”.
  3. Look for audio elements and grab either the src attribute of the element itself, or the src attribute of any source elements within the audio element.

If that doesn’t work…

  1. Try to find a link to an RSS feed (a link that looks like “rss” or “feed” or “atom”).
  2. If there is a feed, parse that for enclosure elements and present that list to the user.
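To make the first of those passes concrete, here’s roughly what it looks like in code. This is a simplified sketch, not Huffduffer’s actual bookmarklet:

var audioLinks = Array.prototype.slice.call(document.querySelectorAll('a[href]'))
  .map(function (anchor) { return anchor.href; })
  .filter(function (href) { return /\.(mp3|m4a)$/i.test(href); });

if (audioLinks.length === 1) {
  // just one audio file: use it
} else if (audioLinks.length > 1) {
  // multiple audio files: present the list and let the user choose
}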

That covers 80-90% of use cases. There are still situations where the actual audio file for a podcast episode is heavily obfuscated—either with clickjacking JavaScript “download” links, or links that point to a redirection to the actual file.

If you have a podcast and you want your episodes to be sharable and huffduffable, you have a few options:

Have a link to the audio file for the episode somewhere on the page, something like:

<a href="/path/to/file.mp3">download</a>

That’s the simplest option. If you’re hosting with Soundcloud, this is pretty much impossible to accomplish: they deliberately obfuscate and time-limit the audio file, even if you want it to be downloadable (that “download” link literally only allows a user to download that file in that moment).

If you don’t want a visible link on the page, you could use metadata in the head of your document. Either:

<link rel="enclosure" href="/path/to/file.mp3">

Or:

<meta property="og:audio" content="/path/to/file.mp3">

And if you want to encourage people to huffduff an episode of your podcast, you can also include a “huffduff it” link, like this:

<a href="https://huffduffer.com/add?page=referrer">huffduff it</a>

You can also use ?page=referer—that misspelling has become canonised thanks to HTTP.

There you go, my podcasting friends. However you decide to do it, I hope you’ll make your episodes sharable.

Small lessons, loosely learned

When Charlotte published her end-of-year report, she outlined her plans for 2016, which included “Document my JavaScript learning journey.”

I want to get into the habit of writing one JavaScript post every week to make sure I keep up with learning it. Even if it’s just a few words about some relevant terminology; it can be as long or short as desired or time allows, as long as it happens.

An excellent plan! If you really want to make sure you’ve understood something, write down an explanation of it. There’s nothing quite like writing to really test your grasp of an idea. Even if nobody else ever reads it, it’s still an extremely valuable exercise for yourself.

Charlotte has already started. Here’s a short post on using JavaScript to pick an item at random from an array:

Math.round(Math.random() * (array.length - 1))

It might seem like a small thing, but look what you have to understand:

  • How Math.round works (pretty straightforward—it rounds a floating point number to the nearest whole number).
  • How Math.random works (less straightforward—it gives a random number between zero and one, meaning you have to multiply it to do anything useful with the result).
  • How array.length works (seems straightforward—it gives you the total number of items in an array, but then catches you out when you realise there can never be an index with that total value because the indices are counted from zero …which gives rise to an entire class of programming error).
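Here’s that expression in context, just to make it concrete (the array is obviously made up):

var adjectives = ['responsive', 'resilient', 'progressive'];
var randomAdjective = adjectives[Math.round(Math.random() * (adjectives.length - 1))];
// Math.floor(Math.random() * adjectives.length) is the more common idiom,
// but both expressions give you a valid index.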

I really like this approach to learning: document each small thing as you go instead of waiting until all those individual pieces click together. That’s the approach Andy took when he was learning CSS and it led to him writing a book on the subject.

When it comes to problem-solving in general (and JavaScript in particular), I have a similar bias towards single-purpose solutions. Rather than creating a monolithic framework that attempts to solve all possible problems, I prefer a collection of smaller scripts that only do one thing, but do it really well; the UNIX philosophy.

Take, for example, Remy’s bind.js. It does two-way data-binding and nothing else. If you only need one-way data-binding, there’s Simulacra.js, which takes a similar minimalist, hands-off approach.

This approach of breaking large problems down into a collection of smaller problems also came up in a completely unrelated discussion at work recently. I floated the idea of starting a book club at Clearleft. Quite a few people are into the idea, but they’re not sure they could commit the time to reading a book on a deadline. Fair enough. Perhaps we could have the book club on a chapter-by-chapter basis? I don’t think that would work all that well for novels, but it did make me think of something related to Charlotte’s stated goal of learning more JavaScript…

Graham has been raving about the You Don’t Know JS book series by the supersmart Kyle Simpson. I suggested to Charlotte that we read through the books at the rate of one chapter a week. The first book is called Up and Going and our first chapter will be Into Programming, starting this week. Then at the end of the week we’ll get together to compare notes.

I’m hoping that by doing this together, there’s more chance that we’ll actually see it through to completion:

Why can I hit deadlines imposed by others, but not those imposed by myself?


We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.

One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.

In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.

By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.

In the past couple of years, Andy concocted a new internship scheme:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high level brief.

The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.

This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.

I’ll miss having this lot in the Clearleft office.

Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.

They got stuck in to researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.

Playback at Clearleft

As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.

Chloe, Monika and Chris at dConstruct

The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.

There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:

Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!

Hmm… sounds like something that could enrich the lives of local residents.

Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the 5 or 6 that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.

That was the genesis of Notice. After that, they pulled out all the stops.

Exciting things are afoot with the @Clearleftintern project.

Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.

Monika and Chris, hacking

Last week marked the end of the project and the grand unveiling.

Playing with the @notice_city prototype. Chris breaks it down. Playback time. Unveiling.

They’ve done a great job. All the details are on the website, including this little note I wrote about the project:

This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.

We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.

We hereby declare this experiment a success!

Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.

Out on the streets of Brighton Prototype

As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.

I’m going to miss them.

The terrific trio!

My first Service Worker

I’ve made no secret of the fact that I’m really excited about Service Workers. I’m not alone. At the Coldfront conference in Copenhagen, pretty much every talk mentioned Service Workers.

Obviously I’m excited about what Service Workers enable: offline caching, background processes, push notifications, and all sorts of other goodies that allow the web to compete with native. But more than that, I’m really excited about the way that the Service Worker spec has been designed. Instead of being an all-or-nothing technology that you have to bet the farm on, it has been deliberately crafted to be used as an enhancement on top of existing sites (oh, how I wish that web components would follow a similar path).

I’ve got plenty of ideas on how Service Workers could be used to enhance a community site like The Session or the kind of events sites that we produce at Clearleft, but to begin with, I figured it would make sense to use my own personal site as a playground.

To start with, I’ve already conquered the first hurdle: serving my site over HTTPS. Service Workers require a secure connection. But you can play around with running a Service Worker locally if you run a copy of your site on localhost.

That’s how I started experimenting with Service Workers: serving on localhost, and stopping and starting my local Apache server with apachectl stop and apachectl start on the command line.

That reminds me of another interesting use case for Service Workers: it’s not just about the user’s network connection failing (say, going into a train tunnel); it’s also about your web server not always being available. Both scenarios are covered equally.

I would never have even attempted to start if it weren’t for the existing examples from people who have been generous enough to share their work:

Also, I knew that Jake was coming to FF Conf so if I got stumped, I could pester him. That’s exactly what ended up happening (thanks, Jake!).

So if you decide to play around with Service Workers, please, please share your experience.

It’s entirely up to you how you use Service Workers. I figured for a personal site like this, it would be nice to:

  1. Explicitly cache resources like CSS, JavaScript, and some images.
  2. Cache the homepage so it can be displayed even when the network connection fails.
  3. For other pages, have a fallback “offline” page to display when the network connection fails.

So now I’ve got a Service Worker up and running on adactio.com. It will only work in Chrome, Android, Opera, and the forthcoming version of Firefox …and that’s just fine. It’s an enhancement. As more and more browsers start supporting it, this Service Worker will become more and more useful.

How very future friendly!

The code

If you’re interested in the nitty-gritty of what my Service Worker is doing, read on. If, on the other hand, code is not your bag, now would be a good time to bow out.

If you want to jump straight to the finished code, here’s a gist. Feel free to take it, break it, copy it, improve it, or do anything else you want with it.

To start with, let’s establish exactly what a Service Worker is. I like this definition by Matt Gaunt:

A service worker is a script that is run by your browser in the background, separate from a web page, opening the door to features which don’t need a web page or user interaction.


From inside my site’s global JavaScript file—or I could do this from a script element inside my pages—I’m going to do a quick bit of feature detection for Service Workers. If the browser supports it, then I’m going to register my Service Worker by pointing to another JavaScript file, which sits at the root of my site:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js', {
    scope: '/'
  });
}

The serviceworker.js file sits in the root of my site so that it can act on any requests to my domain. If I put it somewhere like /js/serviceworker.js, then it would only be able to act on requests to the /js directory.

Once that file has been loaded, the installation of the Service Worker can begin. That means the script will be installed in the user’s browser …and it will live there even after the user has left my website.


I’m making the installation of the Service Worker dependent on a function called updateStaticCache that will populate a cache with the files I want to store:

self.addEventListener('install', function (event) {
  event.waitUntil(updateStaticCache());
});

That updateStaticCache function will be used for storing items in a cache. I’m going to make sure that the cache has a version number in its name, exactly as described in the Guardian’s use case. That way, when I want to update the cache, I only need to update the version number.

var staticCacheName = 'static';
var version = 'v1::';

Here’s the updateStaticCache function that puts the items I want into the cache. I’m storing my JavaScript, my CSS, some images referenced in the CSS, the home page of my site, and a page for displaying when offline.

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      // placeholder paths standing in for my JavaScript, CSS, images, home page, and offline page
      return cache.addAll([ '/path/to/javascript.js', '/path/to/stylesheet.css', '/path/to/image.png', '/', '/offline' ]);
    });
}

Because those items are part of the return statement for the Promise created by caches.open, the Service Worker won’t install until all of those items are in the cache. So you might want to keep them to a minimum.

You can still put other items in the cache, and not make them part of the return statement. That way, they’ll get added to the cache in their own good time, and the installation of the Service Worker won’t be delayed:

function updateStaticCache() {
  return caches.open(version + staticCacheName)
    .then(function (cache) {
      cache.addAll([ '/path/to/nice-to-have.jpg' ]); // not part of the return: won't delay installation (placeholder path)
      return cache.addAll([ '/path/to/javascript.js', '/path/to/stylesheet.css', '/', '/offline' ]); // installation waits for these
    });
}

Another option is to use completely different caches, but I’ve decided to just use one cache for now.


When the activate event fires, it’s a good opportunity to clean up any caches that are out of date (by looking for anything that doesn’t match the current version number). I copied this straight from Nicolas’s code:

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      .then(function (keys) {
        // delete any cache whose name doesn't start with the current version
        return Promise.all(keys
          .filter(function (key) {
            return key.indexOf(version) !== 0;
          })
          .map(function (key) {
            return caches.delete(key);
          })
        );
      })
  );
});

The fetch event is fired every time the browser is going to request a file from my site. The magic of Service Worker is that I can intercept that request before it happens and decide what to do with it:

self.addEventListener('fetch', function (event) {
  var request = event.request;

POST requests

For a start, I’m going to just back off from any requests that aren’t GET requests:

if (request.method !== 'GET') {
  event.respondWith(fetch(request));
  return;
}

That’s basically just replicating what the browser would do anyway. But even here I could decide to fall back to my offline page if the request doesn’t succeed. I do that using a catch clause appended to the fetch statement:

if (request.method !== 'GET') {
  event.respondWith(fetch(request)
    .catch(function () {
      return caches.match('/offline');
    }));
  return;
}

HTML requests

I’m going to treat requests for pages differently to requests for files. If the browser is requesting a page, then here’s the order I want:

  1. Try fetching the page from the network first.
  2. If that doesn’t work, try looking for the page in the cache.
  3. If all else fails, show the offline page.

First of all, I need to test to see if the request is for an HTML document. I’m doing this by sniffing the Accept headers, which probably isn’t the safest method:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {

Now I try to fetch the page from the network:

fetch(request, { credentials: 'include' })

If the network is working fine, this will return the response from the site and I’ll pass that along.

But if that doesn’t work, I’m going to look for a match in the cache. Time for a catch clause:

.catch(function () {
  return caches.match(request);
})

So now the whole event.respondWith statement looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request);
    })
);

Finally, I need to take care of the situation when the page can’t be fetched from the network and it can’t be found in the cache.

Now, I first tried to do this by adding a catch clause to the caches.match statement, like this:

return caches.match(request)
  .catch(function () {
    return caches.match('/offline');
  });

That didn’t work and for the life of me, I couldn’t figure out why. Then Jake set me straight. It turns out that caches.match will always return a response …even if that response is undefined. So a catch clause will never be triggered. Instead I need to return the offline page if the response from the cache is falsey:

return caches.match(request)
  .then(function (response) {
    return response || caches.match('/offline');
  });

With that cleared up, my code for handling HTML requests looks like this:

event.respondWith(
  fetch(request, { credentials: 'include' })
    .catch(function () {
      return caches.match(request)
        .then(function (response) {
          return response || caches.match('/offline');
        });
    })
);

Actually, there’s one more thing I’m doing with HTML requests. If the network request succeeds, I stash the response in the cache.

Well, that’s not exactly true. I stash a copy of the response in the cache. That’s because you’re only allowed to read the value of a response once. So if I want to do anything with it, I have to clone it:

var copy = response.clone();
caches.open(version + staticCacheName)
  .then(function (cache) {
    cache.put(request, copy);
  });

I do that right before returning the actual response. Here’s how it fits together:

if (request.headers.get('Accept').indexOf('text/html') !== -1) {
  event.respondWith(
    fetch(request, { credentials: 'include' })
      .then(function (response) {
        // stash a copy of this page in the cache
        var copy = response.clone();
        caches.open(version + staticCacheName)
          .then(function (cache) {
            cache.put(request, copy);
          });
        return response;
      })
      .catch(function () {
        return caches.match(request)
          .then(function (response) {
            return response || caches.match('/offline');
          });
      })
  );
  return;
}

Okay. So that’s requests for pages taken care of.

File requests

I want to handle requests for files differently to requests for pages. Here’s my list of priorities:

  1. Look for the file in the cache first.
  2. If that doesn’t work, make a network request.
  3. If all else fails, and it’s a request for an image, show a placeholder.

Step one: try getting the file from the cache:

caches.match(request)

Step two: if that didn’t work, go out to the network. Now remember, I can’t use a catch clause here, because caches.match will always return something: either a response or undefined. So here’s what I do:

.then(function (response) {
  return response || fetch(request);
})

Now that I’m back to dealing with a fetch statement, I can use a catch clause to take care of the third and final step: if the network request doesn’t succeed, check to see if the request was for an image, and if so, display a placeholder:

.catch(function () {
  if (request.headers.get('Accept').indexOf('image') !== -1) {
    return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
  }
})

I could point to a placeholder image in the cache, but I’ve decided to send an SVG on the fly using a new Response object.

Here’s how the whole thing looks:

event.respondWith(
  caches.match(request)
    .then(function (response) {
      return response || fetch(request)
        .catch(function () {
          if (request.headers.get('Accept').indexOf('image') !== -1) {
            return new Response('<svg>...</svg>', { headers: { 'Content-Type': 'image/svg+xml' }});
          }
        });
    })
);

The overall shape of my code to handle fetch events now looks like this:

self.addEventListener('fetch', function (event) {
  var request = event.request;
  // Non-GET requests
  if (request.method !== 'GET') {
    // ...respond with a straight fetch, falling back to the offline page
    return;
  }
  // HTML requests
  if (request.headers.get('Accept').indexOf('text/html') !== -1) {
    // ...try the network first, then the cache, then the offline page
    return;
  }
  // Non-HTML requests: try the cache first, then the network, then (for images) a placeholder
});

Feel free to peruse the code.

Next steps

The code I’m running now is fine for a first stab, but there’s room for improvement.

Right now I’m stashing any HTML pages the user visits into the cache. I don’t think that will get out of control—I imagine most people only ever visit just a handful of pages on my site. But there’s the chance that the cache could get quite bloated. Ideally I’d have some way of keeping the cache nice and lean.

I was thinking: maybe I should have a separate cache for HTML pages, and limit the number in that cache to, say, 20 or 30 items. Every time I push something new into that cache, I could pop the oldest item out.

I could imagine doing something similar for images: keeping a cache of just the most recent 10 or 20.

If you fancy having a go at coding that up, let me know.
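To give a flavour of what I mean, here’s a rough, untested sketch of how that trimming might work (the cache name and the limit are hypothetical):

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
    .then(function (cache) {
      cache.keys()
        .then(function (keys) {
          if (keys.length > maxItems) {
            // keys come back in insertion order, so the first one is the oldest
            cache.delete(keys[0])
              .then(function () {
                trimCache(cacheName, maxItems);
              });
          }
        });
    });
}

trimCache(version + 'pages', 25);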

Lessons learned

There were a few gotchas along the way. I already mentioned the fact that caches.match will always return something so you can’t use catch clauses to handle situations where a file isn’t found in the cache.

Something else worth noting is that this:

fetch(request);

…is functionally equivalent to this:

fetch(request)
  .then(function (response) {
    return response;
  });

That’s probably obvious but it took me a while to realise. Likewise:

caches.match(request);

…is the same as:

caches.match(request)
  .then(function (response) {
    return response;
  });

Here’s another thing… you’ll notice that sometimes I’ve used:

fetch(request);

…but sometimes I’ve used:

fetch(request, { credentials: 'include' } );

That’s because, by default, a fetch request doesn’t include cookies. That’s fine if the request is for a static file, but if it’s for a potentially-dynamic HTML page, you probably want to make sure that the Service Worker request is no different from a regular browser request. You can do that by passing through that second (optional) argument.

But probably the trickiest thing is getting your head around the idea of Promises. Writing JavaScript is generally a fairly procedural affair, but once you start dealing with then clauses, you have to come to grips with the fact that the contents of those clauses will return asynchronously. So statements written after the then clause will probably execute before the code inside the clause. It’s kind of hard to explain, but if you find problems with your Service Worker code, check to see if that’s the cause.
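A trivial example of what I mean:

console.log('one');
fetch('/').then(function (response) {
  console.log('two');
});
console.log('three');
// logs "one", then "three", and only later "two", once the response has arrived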

And remember, please share your code and your gotchas: it’s early days for Service Workers so every implementation counts.


I got some very useful feedback from Jake after I published this…

Expires headers

By default, JavaScript files on my server are cached for a month. But a Service Worker script probably shouldn’t be cached at all (or cached for a very, very short time). I’ve updated my .htaccess rules accordingly:

<FilesMatch "serviceworker.js">
  ExpiresDefault "now"
</FilesMatch>

If a request is initiated by the browser, I don’t need to say:

fetch(request, { credentials: 'include' } );

It’s enough to just say:

fetch(request);

I set the scope parameter of my Service Worker to be “/” …but because the Service Worker is sitting in the root directory anyway, I don’t really need to do that. I could just register it with:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/serviceworker.js');
}

If, on the other hand, the Service Worker file were sitting in a folder, but I wanted it to act on the whole site, then I would need to specify the scope:

if (navigator.serviceWorker) {
  navigator.serviceWorker.register('/path/to/serviceworker.js', {
    scope: '/'
  });
}

…and I’d also need to send a special header. So it’s probably easiest to just put Service Worker scripts in the root directory.

Syndicating to Medium

When I brainpuked my thoughts on Google’s AMP project, I finished up by saying it was one more option for the Indie Web approach to syndication:

When I publish something on adactio.com in HTML, it already gets syndicated to different places. This is the Indie Web idea of POSSE: Publish (on your) Own Site, Syndicate Elsewhere. As well as providing RSS feeds, I’ve also got Twitter bots that syndicate to Twitter. An If This, Then That script pushes posts to Facebook. And if I publish a photo, it goes to Flickr. Now that Medium is finally providing a publishing API, I’ll probably start syndicating articles there as well. The more, the merrier.

Until Medium provided an API, I didn’t see much point in Medium. Let me clarify: I didn’t see much point in it for me. I’ve already got a website where I can publish whatever I like. For someone who doesn’t have their own website, I guess Medium—like Facebook, Twitter, Tumblr, etc.—provides a place to publish. I think this is what people mean when they use the word “platform” in a digital—rather than a North Sea oil drilling—sense.

Publishing exclusively on somebody else’s site works pretty well right up until the day the platform turns out to be a trap door and disappears from under you.

But I’m really puzzled by people who already have their own website choosing to publish on Medium instead. A shiny content farm is still a content farm.

“It’s the reach!” I’m told. That makes me sad. The whole point of the World Wide Web is that everybody has an equal opportunity to share their thoughts. You don’t need to ask anyone for permission. The gatekeepers of the previous century—record labels, book publishers, film producers—can’t stop you from making whatever you want and putting it out there for the world to see. And thanks to the principle of net neutrality baked into the design of TCP/IP, no one gets preferential treatment.

Notice that I said “people who already have their own website choosing to publish on Medium instead.” That last bit is important. Using Medium to publish copies of what you’ve already published on your own site gives you the best of both worlds: ownership and reach. That’s what Kevin does, for example. And Jeffrey. Until recently that was quite a pain in the ass, requiring a manual copy’n’paste process.

Back when Medium first launched, Dave Winer said:

Let me enter the URL of something I write in my own space, and have it appear here as a first class citizen. Indistinguishable to readers from something written here.

It still isn’t quite that simple, but now that Medium has a publishing API, it’s relatively straightforward to syndicate copies of your posts to Medium at the moment you publish on your own site.

Here’s what I did…

First of all, I signed up for a Medium account. For the longest time, even this simple step was off-limits for me because Medium used to require authentication using Twitter. By itself, that’s not a problem. The problem was that Medium demanded write permissions for my Twitter account. Just say no.

Now it’s possible to sign up for Medium using email so that rudeness is less of an issue (although I’d really like to see Medium stop being so demanding when it comes to Twitter permissions, especially as the interface copy bends over backwards to promise that Medium would never post to Twitter on my behalf …so why ask for permission to do just that?).

Once I had a Medium account, I needed two pieces of secret information in order to use the API.

The first piece is an access token.

I went to my settings on Medium and scrolled all the way to the bottom to the heading “Integration tokens”. I entered a description (“Syndication from adactio.com”) and pressed the “Get integration token” button.

Now I could use that token to get the second piece of information: my user ID.

I opened up a browser tab and went to this URL: https://api.medium.com/v1/me?accessToken= …adding my new secret integration token to the end.

That returns a JSON response. One of the fields in the JSON object has the name “id”. The value of that field is my user ID on Medium.

With those two pieces of information, I could make an authenticated POST request using cURL. Here’s the PHP code I’m using. It’s probably terrible but please feel free to use it, copy it, fork it, or do anything else you want with it.

When I run that code, I get a JSON response back from Medium’s API. Assuming I get a successful response, I can store the URL of the Medium copy and link out to it from here. That copy on Medium has a corresponding link rel="canonical" in the head of the document pointing back here to adactio.com.
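If PHP isn’t your thing, here’s the same request sketched out with the fetch API, just to show its shape. The field names come from Medium’s API documentation; the token, user ID, and URLs here are placeholders:

var integrationToken = 'YOUR-INTEGRATION-TOKEN'; // placeholder
var mediumUserID = 'YOUR-USER-ID'; // placeholder

fetch('https://api.medium.com/v1/users/' + mediumUserID + '/posts', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + integrationToken,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    title: 'Title of the post',
    contentFormat: 'html',
    content: '<p>The full HTML of the post goes here.</p>',
    canonicalUrl: 'https://example.com/path/to/the/original/post',
    publishStatus: 'public'
  })
})
.then(function (response) { return response.json(); })
.then(function (data) {
  // the response includes the URL of the copy on Medium
});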

That’s pretty much it. I added a checkbox to my posting interface so that sending a copy of a post to Medium is just a toggle away. I’ll tick that checkbox when I post this. You could be reading this on my site or you could be reading the copy on Medium.

The code I wrote is pretty similar to how I post notes to Twitter and photos to Flickr. In fact, posting to Medium is more straightforward: Flickr requires three bits of secret information; Twitter requires four.

What would make this cross-posting with Medium really interesting would be if it could work in both directions. Then I’d be able to use the (very nice) writing interface on Medium to publish on adactio.com.

That’s not so far-fetched. I’ve already got a micropub endpoint here on my site (here’s the code). That’s how I’m able to use Instagram to post photos to my own site (using OwnYourGram). I let Instagram keep a copy of my photo. I’d be happy to let Medium keep a copy of my post.

We could make history:

We need to break out of the model where all these systems are monolithic and standalone. There’s art in each individual system, but there’s a much greater art in the union of all the systems we create.

Someone will read this

After Responsive Field Day I had the chance to spend some extra time in Portland. I was staying with one Andy, with occasional welcome opportunities to hang out with the other Andy.

Over an artisanal, hand-crafted, free-range lunch one day, I took a moment to thank Andy B. I thanked him for a link. Links are very much his stock-in-trade, but there was one in particular that he had shared which stuck in my soul.

It started when he offered a bribe for a good link:

Paul Thompson won the bounty:

The link was to a page on Tilde Town, one of the many old-school web rings set up in the spirit of Paul Ford’s Tilde Club. The owner of this page had taken it upon himself to perform a really interesting—and surprisingly moving—experiment:

  1. Find blog posts where people have written “no one will ever read this”, and
  2. Read them aloud.

I’ve written before about how powerful the sound of a human voice can be. There was something about hearing these posts—which were written with a resigned acceptance of indifference—being given the time and respect to be read aloud. I listened to every single one, sometimes bemused, sometimes horrified, always fascinated.

You should listen to all of them too. They deserve it.

One in particular haunted me. It was written in 2008. After listening to it, I had to know more. I felt creepy and voyeuristic, but I transcribed a sentence from the audio file and pasted it into Google.

I found her blog on the old my-diary.org domain. She only wrote nine entries in total. Her last one was in November 2009.

That was six years ago. I wonder how things turned out for her. I wonder if life got better for her when she left her teenage years behind. I wonder if she ever found peace.

I hope she’s okay.

Far afield

I spoke at Responsive Field Day here in Portland on Friday. It was an excellent event. All the talks were top notch.

The day flew by, with each talk clocking in at just 20 minutes, in batches of three followed by a quick panel discussion. It was a great format …but I knew it would be. See, Responsive Field Day was basically Responsive Day Out relocated to Portland.

Jason told me last year how inspired he was by the podcast recordings from Responsive Day Out and how much he and Lyza wanted to do a Responsive Day Out in Portland. I said “Go for it!” although I advised changing the name to something a bit more American (having a “day out” at the seaside feels very British—a “field day” works perfectly as the US equivalent). Well, Jason, Lyza, and everyone at Cloud Four should feel very proud of their Responsive Field Day—it was wonderful.

As the day unfolded on Friday, I found myself being quite moved. It was genuinely touching to see my conference template replicated not only in format, but also in spirit. It was affordable (“Every expense spared!” was my motto), inclusive, diverse, and fast-paced. It was a lovely, lovely feeling to think that I had, in some small way, provided some inspiration for such a great event.

Jessica pointed out that this isn’t the first time I’ve set up an event template for others to follow. When I organised the first Science Hack Day in London a few years ago, I never could have predicted how amazingly far Ariel would take the event. Fifty Science Hack Days in multiple countries—fifty! I am in awe of Ariel’s dedication. And every time I see pictures or video from a Science Hack Day in some far-flung location I’ve never been to, and I see the logo festooning the venue …I get such a warm fuzzy glow.

Y’know, when you’re making something—whether it’s an event, a website, a book, or anything else—it’s hard to imagine what kind of lifespan it might have. It’s probably just as well. I think it would be paralysing and overwhelming to even contemplate in advance. But in retrospect …it sure feels nice.

Small independent pieces, loosely joined

It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.

For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using Wordpress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.

I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.

This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.

By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:

I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.

It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.

Save the dates.

Forgetting again

In an article entitled The future of loneliness Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on Friendfeed. Everything that was ever posted to Friendfeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle

URLy warning

I’m genuinely shocked that Jake thinks that Chrome hiding URLs is a good thing. On the one hand, he says:

The URL is the share button of the web, and it does that better than any other platform. Linkability and shareability is key to the web, we must never lose that…

I absolutely agree with him there. But I very much disagree when he says:

…and these changes do not lose that.

The method he describes for getting at a URL to share is this:

clicking the origin chip or hitting ⌘-L.

Your average user is no more likely to figure out how to do that than they are to figure out how to view source (something that Chrome buried as a “developer” feature some time ago).

Cennydd recently said of URLs:

I mostly agree with him. The protocol portion of the URL is pretty pointless, and the domain name and TLD are never what I would describe as “beautiful”. No, when I talk about beautiful URLs, I mean the path that comes after the protocol, domain name, and TLD gumpf …the very bit that Chrome is looking to hide.

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes. They are the one universal syntax of the web. Don’t take that for granted.

URLs are for humans. Design them for humans.

Of course your average user probably won’t even know what a URL is, and nor should they. But they know what a link is. They know that, until now, they could copy the “link” from the top of their browser and paste it into an email, or a text message, or a word processing document.

If this Chrome experiment goes forward, we can kiss all that goodbye.

The security issue that Jake outlines is that browsers need to make the domain name portion of the URL clearly visible. I hope that the smart folks working on Chrome can figure out a way to do that without castrating the browser’s ability to easily share links.

It’s a classic case of:

  1. Something must be done!
  2. This (killing URLs) is something.
  3. Something has been done.

Technically, obfuscating the URL seems to solve the security issue. But technically, decapitation seems to solve a headache.

Writing from home

I’m not saying that this is a trend (the sample size is far too small to draw any general conclusions), but I’ve noticed some people make a gratifying return to publishing on their own websites.

Phil Coffman writes about being home again:

I wasn’t short on ideas or thoughts, but I had no real place to express them outside of Twitter.

I struggled to express my convictions on design and felt stifled in my desire to share my interests like I once had. I needed an online home again. And this is it.

Tim Kadlec echoes the importance of writing:

Someone recently emailed me asking for what advice I would give to someone new to web development. My answer was to get a blog and write. Write about everything. It doesn’t have to be some revolutionary technique or idea. It doesn’t matter if someone else has already talked about it. It doesn’t matter if you might be wrong—there are plenty of posts I look back on now and cringe. You don’t have to be a so called “expert”—if that sort of label even applies anymore to an industry that moves so rapidly. You don’t even have to be a good writer!

Writer Neil Gaiman is taking a hiatus from Twitter, but not from blogging:

I’m planning a social media sabbatical for the first 6 months … It’s about writing more and talking to the world less. It’s time. I plan to blog here MUCH more, as a way of warming up my fingers and my mind, and as a way of getting important information out into the world. I’m planning to be on Tumblr and Twitter and Facebook MUCH less.

If you are used to hanging out with me on Tumblr or Twitter or Facebook, you are very welcome here. Same me, only with more than 140 characters. It’ll be fun.

Joschi has been making websites for 14 years, and just started writing on his own website, kicking things off with an epic post:

I know that there will be a lot of work left when I’m going to publish this tomorrow. But in this case, I believe that even doing it imperfectly is still better than just talking about it.

That’s an important point. I’ve watched as talented, articulate designers and developers put off writing on their own website because they feel that it needs to be perfect (we are our own worst clients sometimes). That’s something that Greg talks about over on the Happy Cog blog:

The pursuit of perfection must be countered by the very practical need to move forward. Our world seems to be spinning faster and faster, leaving less and less time to fret over every detail. “Make, do” doesn’t give any of us license to create crap. The quality still needs to be there but within reason, within the context of priorities.

And finally, I’ll repeat what Frank wrote at the cusp of the year:

I’m doubling down on my personal site in 2014. In light of the noisy, fragmented internet, I want a unified place for myself—the internet version of a quiet, cluttered cottage in the country. I’ll have you over for a visit when it’s finished.


A funny thing happened when I was in Berlin two weekends ago. I was walking down the street that my AirBnB apartment was on when I heard someone say “Jeremy Keith?” It turned out it was Andre Jay Meissner, one of the founders of the excellent Open Device Lab website. We had emailed but never met before. Small world!

The Twitter account for the open device lab in Nuremberg pointed out that it’s been one year since I wrote a blog post about the open device lab I set up:

Much as I’d love to take credit for the idea of an open device lab, it simply isn’t true. Jason and Lyza had been working on setting up the open device lab in Portland for quite a while when I flung open the doors of the Clearleft test lab. But I will take credit for the “Ah, fuck it!” attitude that I introduced to the idea of sharing test devices with the community. Partly because I had seen how long it was taking the Portland device lab to get off the ground while they did everything by the book, I decided to just wait for the worst to happen instead of planning for it:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

It proved to be a great policy. So far, nothing has gone wrong. And it also served as an example to other people thinking about opening up device labs at their companies: “don’t sweat it; I didn’t!”

But as far as anniversaries go, the one-year birthday of the Clearleft device lab is not the most significant event of April 30th. Today is the twentieth anniversary of the publication of one of the most important documents in technological history: the document that officially put the World Wide Web into the public domain.

Open device labs are a small, small part of working on the web but I like to think that they represent the same kind of spirit of openness and sharing that Tim Berners-Lee and his colleagues demonstrated at CERN:

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

At UX London I had dinner with a Swiss entrepreneur who was showing off his proprietary native app on his phone and proudly declaring that he had been granted a patent. He seemed like a nice chap, but his attitude kind of made my skin crawl. It seemed so antithetical to the spirit of sharing and openness that I’m used to from the web.

James Gleick once described the web as the patent that never was:

Tim Berners-Lee invented the Web and the Web browser — that is, the world as we now know it — pretty much single-handedly, starting in about 1989, when he was working as a software engineer at CERN, the particle-physics laboratory in Geneva. He didn’t patent it, or any part of it. On the contrary, he has labored tirelessly to keep cyberspace open and nonproprietary.

We are all reaping the benefits of Sir Tim’s kindness and generosity.

And be damned

Representing Clearleft, I wrote a little something for The Pastry Box. As Tantek pointed out, it’s somewhat ironic that it’s published on a third-party site, considering that I explicitly said “I really encourage you to publish on your own site.”

So here it is.

I had the great pleasure of organising the Responsive Day Out here in Brighton last month. It was a lovely gathering of front-end developers and designers getting together to swap stories and cry on one another’s shoulders about the challenges involved in responsive design.

There were some well-known names on the roster: people who speak at international conferences and whose work you’d be familiar with. But there were also some first-timers: people who had never spoken at a conference before.

So why would I, as a conference organiser, ask someone who has never spoken before to get up on stage and share their thoughts?

The answer is simple: their writing. Reading the intelligent and cogent blog posts and articles that they had published made me want to hear what they had to say …and I wanted to introduce their smarts to an audience. These people took the time to write down and publish their thoughts, and that led directly to their appearance at a web conference.

I really encourage you to publish on your own site. If you don’t have your own site, I think you should. In the meantime, there are plenty of other wonderful online publications: 24 Ways, Smashing Magazine, A List Apart. Why not get in touch with them if you’ve got an idea for an article?

To say that communication is a valuable skill when you’re working on the web would be quite an understatement. In a very real sense, the web was made to allow us all to share and communicate. Anybody can do it. That’s one of the great things about the web. You don’t need to ask anybody for permission. If you have an idea or a technique or a question that you want to share, all you need to do is publish it …as long as you take the time to write your thoughts down.

Write. Publish. Share. Speak.

Open device labs

It’s been just nine months since I threw open the doors to the device lab in the Clearleft office. The response in just the first week was fantastic — people started donating their devices to the communal pool, doubling, then tripling the number of phones and tablets.

The idea of having a communal device lab wasn’t new; Jason had been talking about setting up a lab in Portland but the paperwork involved was bogging it down. So when I set up the Brighton lab, I deliberately took an “ah, fuck it!” attitude …and that was new:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

So far, so good.

Since then I’ve been vocally encouraging others to set up communal device labs wherever they may be—linking and tweeting whenever anybody so much as mentioned the possibility of getting a device lab up and running. Then the Lab Up! site was established to help people do just that.

Now there’s a brand new site that’s not just for people setting up device labs, but also for people looking for a device lab to use: OpenDeviceLab.com.

  • Help people to locate the right Open Device Lab for the job,
  • explain and promote the Open Device Lab movement,
  • attract Contributors and Sponsors to help and donate to ODLs.

It’s an excellent resource. Head on over there and find out where your nearest device lab is located. And if you can’t find one, think about setting one up.

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

Sharing pattern libraries

I’ve been huffduffing talks from this year’s South by Southwest, revisiting some of the ones I saw and catching up with some of the ones I missed.

There are some really good design and development resources in there that I didn’t get to see in person.

One talk I did get to see was Andy’s CSS for Grown Ups: Maturing Best Practices.

CSS for Grown Ups: Maturing Best Practices on Huffduffer

It was excellent.

You can have a look through the slides.

He talks about different approaches to creating maintainable CSS for large-scale projects. He touches on naming conventions for classes, something that Nicolas Gallagher wrote about recently. And while he makes reference to Sass and Compass, Andy makes the compelling point that what’s more interesting than powerful tools is the arrival of powerful approaches like SMACSS and OOCSS.
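
To give a flavour of those approaches, here’s a minimal sketch of a reusable pattern written in the spirit of OOCSS and SMACSS. The class names (.media, .media-img, .media-body, .is-hidden) are my own illustrative choices, not anything taken from Andy’s slides:

    /* A module is named for what it is, not where it happens to appear. */
    .media {
        overflow: hidden;
    }

    /* Sub-parts of the module share the module’s name as a prefix. */
    .media-img {
        float: left;
        margin-right: 1em;
    }

    .media-body {
        overflow: hidden;
    }

    /* State rules are layered on separately rather than baked into the module. */
    .is-hidden {
        display: none;
    }

Because a pattern like this doesn’t know or care which page it ends up on, the same handful of rules can be reused anywhere a thumbnail sits alongside some text.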

Over and over again, Andy makes the point that we must think in terms of creating design systems, not simply styling individual pages—something that Paul has also spoken about. One of the most powerful tools for making that change in thinking is the creation of style guides for the web, and Paul has even created a blog dedicated to the BBC’s style guide.

Andy referenced the BBC GEL style guide in his talk but pointed out that because it’s published as a PDF rather than markup, it isn’t as powerful as it could be—there’s inevitably a loss of signal when the patterns are translated into HTML and CSS. Someone from the BBC was in the audience, and in the Q and A portion, acknowledged that that was a really good point.

After the talk I got chatting with Lincoln Mongillo who worked on the recent responsive redesign of Starbucks.com. He mentioned that they created a markup and CSS style guide for the project. “You know what would be really cool?” I said. “If you published it!”

Here it is. It’s a comprehensive library of markup patterns; exactly the kind of resource that Anna wrote about in 24 Ways.

In my experience, creating a pattern library for any project is immensely valuable, whether you’re working in a team of two or a team of two hundred. I’ve found they work well as the next step after Style Tiles: a way of translating the visual vocabulary of a site into markup and CSS without getting bogged down in the specifics of page structure or layout (very handy for a Content First approach). The modularity of a pattern library also helps enforce a healthy separation between the patterns themselves and the specifics of any one page.

I’m really pleased to see more and more pattern libraries being made public. That’s one of the reasons why I shared my pattern primer and Dan shared his Pears theme for WordPress:

Breaking interfaces down into patterns has been immensely helpful in learning and re-evaluating the best possible code to implement them.

But Pears isn’t about how I code these patterns—it’s a tool for creating your own.

I love that. These style guides and pattern libraries aren’t being published in an attempt to provide ready-made solutions—every project should have its own distinct pattern library. Instead, these pattern libraries are being published in a spirit of openness and sharing …a way of saying “Hey, this is what worked for us in these particular circumstances.”

For that, I am very grateful.

What technology wants

Technology enabled Sarah Churman to hear for the first time.

Technology enabled her to capture that moment.

Networked technology enabled her to share that moment with the world.

Networked technology enabled me to share it with you.

Re-finding five numbers

So, remember when I posted all those episodes of Simon Singh’s Five Numbers radio series on Pownce so that they’d have permanent URLs? Yeah, well, so much for that.

Fortunately Brian had saved all the MP3s. I’ve posted them on S3 and huffduffed them all. I can be fairly confident that Huffduffer won’t be going the way of Pownce, Magnolia, Geocities, and so many more.

Anyway, if you want to listen to the fifteen episodes of the three radio series on mathematics, you can subscribe to the podcast at https://huffduffer.com/adactio/tags/five+numbers/rss.

Or you can listen to each episode at these permanent URLs:

  1. Five Numbers

    1. A Countdown to Zero
    2. Simple as Pi
    3. The Golden Ratio
    4. The Imaginary Number
    5. Infinity
  2. Another Five Numbers

    1. The Number Four
    2. The Number Seven
    3. The Largest Prime Number
    4. Kepler’s Conjecture
    5. Game Theory
  3. A Further Five Numbers

    1. 1 — The Most Popular Number
    2. 2 — At The Double
    3. 6 Degrees of Separation
    4. 6.67 x 10^-11 — The Number That Defines the Universe
    5. 1729 — The First Taxicab Number