Tags: web

Tuesday, June 12th, 2018

Any Site can be a Progressive Web App - Jeremy Keith | DeltaV 2018 - YouTube

Here’s a really quick (ten minute) talk about the offline user experience that I gave at the Delta V conference recently. I’m quite happy with how it turned out—there’s something to be said for having a short and snappy time slot.

There’s a common misconception that making a Progressive Web App means creating a Single Page App with an app-shell architecture. But the truth is that literally any website can benefit from the performance boost that results from the combination of HTTPS + Service Worker + Web App Manifest.

Tuesday, June 5th, 2018

Clearleft.com is a progressive web app

What’s that old saying? The cobbler’s children have no shoes that work offline. Or something.

It’s been over a year since the Clearleft site relaunched and I listed some of the next steps I had planned:

Service worker. It’s a no-brainer. Now that the Clearleft site is (finally!) running on HTTPS, having a simple service worker to cache static assets like CSS, JavaScript and some images seems like the obvious next step.

You know how it is. Those no-brainer tasks are exactly the kind of thing that end up on a to-do list without ever quite getting to-done. Meanwhile I’ve been writing and speaking about how any website can be a progressive web app. I think Alanis Morissette used to sing about this sort of situation.

Enough is enough! Clearleft.com is now a progressive web app. It has a manifest file and a service worker script.

The service worker logic is fairly straightforward, and taken almost verbatim from Going Offline. As you navigate around the site, the service worker applies different logic depending on the kind of file you’re requesting:

  • Pages are served fresh from the network, falling back to the cache when there’s a problem.
  • Everything else is served from the cache where possible, resorting to the network only if there’s no match in the cache—quite the performance boost!

In both cases, if a page or a file is retrieved from the network, it gets put into a cache. I’ve got one cache for pages, and another for everything else. And even if a file is retrieved from that cache, I still fire off a fetch request to grab a fresh copy for the cache. So while there’s a chance that a stale file might be served up, it will only ever be slightly stale, and the next time it’s requested, it’ll be fresh.
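If you’re curious what that looks like, here’s a rough sketch of the approach (not the actual Clearleft script; the cache names and the /offline URL are placeholders):

// Inside the service worker script. A sketch, not the real thing:
// cache names and the '/offline' URL are placeholders.
const pagesCacheName = 'pages';
const assetsCacheName = 'assets';

addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  // Pages: fresh from the network, stashing a copy in the cache,
  // falling back to the cache (or an offline page) on failure.
  if ((request.headers.get('Accept') || '').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open(pagesCacheName)
        .then(pagesCache => pagesCache.put(request, copy));
        return responseFromFetch;
      })
      .catch(() =>
        caches.match(request)
        .then(responseFromCache => responseFromCache || caches.match('/offline'))
      )
    );
    return;
  }
  // Everything else: straight from the cache where possible, while
  // fetching a fresh copy to update the cache for next time.
  fetchEvent.respondWith(
    caches.match(request)
    .then(responseFromCache => {
      const fetchPromise = fetch(request)
      .then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open(assetsCacheName)
        .then(assetsCache => assetsCache.put(request, copy));
        return responseFromFetch;
      });
      return responseFromCache || fetchPromise;
    })
  );
});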

In the worst-case scenario, when a page can’t be retrieved from the network or the cache, you end up seeing a custom offline page. There you can see a list of any pages that are cached (meaning you can revisit them even without an internet connection).

A custom offline page showing a list of URLs.

It’s not ideal—page titles would be friendlier than URLs—but it’s a start. I’m sure I’ll revisit it soon. Honest.
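For anyone wondering how that list gets generated, it only takes a few lines of JavaScript on the offline page itself. A sketch (the cache name and the list element are my assumptions):

// On the offline page: list every page that's in the cache.
// The 'pages' cache name and the <ul> element are assumptions.
caches.open('pages')
.then(pagesCache => pagesCache.keys())
.then(requests => {
  const list = document.querySelector('ul.cachedpages');
  requests.forEach(request => {
    const listItem = document.createElement('li');
    const anchor = document.createElement('a');
    anchor.href = request.url;
    anchor.textContent = request.url; // page titles would be friendlier
    listItem.appendChild(anchor);
    list.appendChild(listItem);
  });
});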

Oh, and after a year of procrastinating about doing this, guess how long it took? About half a day. Admittedly, this isn’t my first progressive web app, and the more you build ‘em, the easier it gets. Still, it’s a classic example of a small investment of time leading to a big improvement in performance and user experience.

If you think your company’s website could benefit from being a progressive web app (and believe me, it definitely could), you have a couple of options:

  1. Arm yourself with a copy of Going Offline and give it a go yourself. Or
  2. Get in touch with Clearleft. We can help you. (See, I can say that with a straight face now that we’re practicing what we preach.)

Either way, don’t dilly dally …like I did.

Monday, June 4th, 2018

Get to Know Jeremy Keith

I had fun answering these questions.

Web Components Club – A journal about learning web components

Andy Bell is documenting his journey of getting to grips with web components. I think it’s so valuable to share like this as you’re learning, instead of waiting until you’ve learned it all—the fresh perspective is so useful!

Brendan Dawes - Holding the Door Open for Others

I continue to write stuff down on my little corner of the Web (does it have corners?) and I encourage you to do the same, as all these little bits of flotsam and jetsam become something a lot lot bigger.

Friday, June 1st, 2018

Document

A little while back, I showed Paul what I was working on with The Gęsiówka Story. I value his opinion and I really like the Bradshaw’s Guide project that he’s been working on. We’re both in complete agreement with Russell Davies’ call for an internet of unmonetisable enthusiasms. Call them side projects if you like, but for me, these are the things that the World Wide Web excels at.

These unmonetisable enthusiasms/side projects are what got me hooked on the web in the first place. Fray.com—back when it was a website for personal stories—was what really made the web click for me. I had seen brochure sites, I had seen e-commerce sites, but it was seeing something built purely for the love of it that caused that lightbulb moment for me.

I told Paul about another site I remembered from that time (we’re talking about the mid-to-late nineties here). It was called Private Art. It was the work of one family, the children of Private Art Pranger who served in World War Two and wrote letters from the front. Without any expectations, I did a quick search, and amazingly, the site is still up!

Yes, it’s got tiled background images, and the framesetted content is in a pop-up window, but it works. The site hasn’t been updated for fifteen years but it works perfectly in a web browser today. That’s kind of amazing. We really shouldn’t take the longevity of our materials for granted. Could you imagine trying to open a word processing document from the late nineties on your computer today? You’d have a bad time.

Working on The Gęsiówka Story helped to remind me of some of the things that made me fall in love with the web in the first place. What I wrote about it is equally true of Private Art:

When we talk about documents on the web, we usually use the word “document” as a noun. But working on The Gęsiówka Story, I came to think of the word “document” as a verb.

The World Wide Web is a medium that works for quick, short-term, lightweight bits of fun and also for long-term, deeper, slower, thoughtful archives of our collective culture.

The web is a many-splendoured thing.

Tuesday, May 29th, 2018

Scripting News: The Internet is going the wrong way

The Internet is a place for the people, like parks, libraries, museums, historic places. It’s okay if corporations want to exploit the net, like DisneyLand or cruise lines, but not at the expense of the natural features of the net.

Web Push Notifications Demo | Microsoft Edge Demos

Push notifications explained using astrology. But don’t worry, there’s also some code, just in case you prefer your explanations to also include models that actually work.

Saturday, May 26th, 2018

What you’re proud of

I wish there was a place where I could read the story of a person. Everybody’s journey is so different and beautiful; each one leads to who we are. It would be the anti-LinkedIn. And because you wouldn’t “engage with brands”, it would be the anti-Facebook, too. Instead, it would be a record of the beauty and diversity of humanity, and a thing to point to when someone asks, “who are you?”

Monday, May 21st, 2018

Tending the Digital Commons: A Small Ethics toward the Future

It is common to refer to universally popular social media sites like Facebook, Instagram, Snapchat, and Pinterest as “walled gardens.” But they are not gardens; they are walled industrial sites, within which users, for no financial compensation, produce data which the owners of the factories sift and then sell. Some of these factories (Twitter, Tumblr, and more recently Instagram) have transparent walls, by which I mean that you need an account to post anything but can view what has been posted on the open Web; others (Facebook, Snapchat) keep their walls mostly or wholly opaque. But they all exercise the same disciplinary control over those who create or share content on their domain.

Professor Alan Jacobs makes the case for the indie web:

We need to revivify the open Web and teach others—especially those who have never known the open Web—to learn to live extramurally: outside the walls.

What do I mean by “the open Web”? I mean the World Wide Web as created by Tim Berners-Lee and extended by later coders. The open Web is effectively a set of protocols that allows the creating, sharing, and experiencing of text, sounds, and images on any computer that is connected to the Internet and has installed on it a browser that can interpret information encoded in conformity with these protocols.

This resonated strongly with me:

To teach children how to own their own domains and make their own websites might seem a small thing. In many cases it will be a small thing. Yet it serves as a reminder that the online world does not merely exist, but is built, and built to meet the desires of certain very powerful people—but could be built differently.

I Don’t Know How to Waste Time on the Internet Anymore

I started a Twitter account, and fell into a world of good, dumb, weird jokes, links to new sites and interesting ideas. It was such an excellent place to waste time that I almost didn’t notice that the blogs and link-sharing sites I’d once spent hours on had become less and less viable. Where once we’d had a rich ecosystem of extremely stupid and funny sites on which we might procrastinate, we now had only Twitter and Facebook.

And then, one day, I think in 2013, Twitter and Facebook were not really very fun anymore. And worse, the fun things they had supplanted were never coming back. Forums were depopulated; blogs were shut down. Twitter, one agent of their death, became completely worthless: a water-drop-torture feed of performative outrage, self-promotion, and discussion of Twitter itself. Facebook had become, well … you’ve been on Facebook.

Friday, May 18th, 2018

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated who’s worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bear that in mind when helping others find their own path through the knotted weeds of JavaScript.

How to display a “new version available” of your Progressive Web App

This is a good walkthrough of the flow you’d need to implement if you want to notify users of an updated service worker.
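The gist of that flow, as a sketch (the message format is an assumption; the walkthrough itself goes into far more detail):

// In the page: watch for a new service worker being installed
// while an old one is still in charge of the page.
navigator.serviceWorker.register('/serviceworker.js')
.then(registration => {
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        // A new version is waiting. Show your own notice here;
        // the confirm() is just a placeholder. The waiting worker
        // needs to call skipWaiting() when it gets this
        // (hypothetical) message.
        if (confirm('A new version is available. Load it?')) {
          newWorker.postMessage({action: 'skipWaiting'});
        }
      }
    });
  });
});

// Once the new worker takes over, reload to get the fresh version.
navigator.serviceWorker.addEventListener('controllerchange', () => {
  window.location.reload();
});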

Wednesday, May 16th, 2018

HTTPS + service worker + web app manifest = progressive web app

I gave a quick talk at the Delta V conference in London last week called Any Site can be a Progressive Web App. I had ten minutes, but frankly I only needed enough time to say the title of the talk because, well, that was also the message.

There’s a common misconception that making a Progressive Web App means creating a Single Page App with an app-shell architecture. But the truth is that literally any website can benefit from the performance boost that results from the combination of HTTPS + Service Worker + Web App Manifest.

See how I define a progressive web app as being HTTPS + service worker + web app manifest? I’ve been doing that for a while. Here’s a post from last year called Progressing the web:

Literally any website can be a progressive web app:

  1. Switch over to HTTPS,
  2. Add a web app manifest file, and
  3. Add a service worker.

That last step can be tricky if you’re new to service workers, but it’s not insurmountable. It’s certainly a lot easier than completely rearchitecting your existing website to be a JavaScript-driven single page app.

Later I wrote a post called What is a Progressive Web App? where I compared the definition to responsive web design.

Regardless of the specifics of the name, what I like about Progressive Web Apps is that they have a clear definition. It reminds me of Responsive Web Design. Whatever you think of that name, it comes with a clear list of requirements:

  1. A fluid layout,
  2. Fluid images, and
  3. Media queries.

Likewise, Progressive Web Apps consist of:

  1. HTTPS,
  2. A service worker, and
  3. A Web App Manifest.

There’s more you can do in addition to that (just as there’s plenty more you can do on a responsive site), but the core definition is nice and clear.

But here’s the thing. Outside of the confines of my own website, it’s hard to find that definition anywhere.

On Google’s developer site, their definition uses adjectives like “reliable”, “fast”, and “engaging”. Those are all great adjectives, but more useful to a salesperson than a developer.

Over on the Mozilla Developer Network, their section on progressive web apps states:

Progressive web apps use modern web APIs along with traditional progressive enhancement strategy to create cross-platform web applications. These apps work everywhere and provide several features that give them the same user experience advantages as native apps. 

Hmm …I’m not so sure about that comparison to native apps (and I’m a little disturbed that the URL structure is /Apps/Progressive). So let’s click through to the introduction:

PWAs are web apps developed using a number of specific technologies and standard patterns to allow them to take advantage of both web and native app features.

Okay. Specific technologies. That’s good to hear. But instead of then listing those specific technologies, we’re given another list of adjectives (discoverable, installable, linkable, etc.). Again, like Google’s chosen adjectives, they’re very nice and desirable, but not exactly useful to someone who wants to get started making a progressive web app. That’s why I like to cut to the chase and say:

  • You need to be running on HTTPS,
  • Then you can add a service worker,
  • And don’t forget to add a web app manifest file.

If you do that, you’ve got a progressive web app. Now, to be fair, there’s a lot that I’m leaving out. Your site should be fast. Your site should be responsive (it is, after all, on the web). There’s not much point mucking about with service workers if you haven’t sorted out the basics first. But those three things—HTTPS + service worker + web app manifest—are specifically what distinguishes a progressive web app. You can—and should—have a reliable, fast, engaging website before turning it into a progressive web app.

Jason has been thinking about progressive web apps a lot lately (he should write a book or something), and he said to me:

I agree with you on the three things that comprise a PWA, but as far as I can tell, you’re the first to declare it as such.

I was quite surprised by that. I always assumed that I was repeating the three ingredients of a progressive web app, not defining them. But looking through all the docs out there, Jason might be right. It’s surprising because I assumed it was obvious why those three things comprise a progressive web app—it’s because they’re testable.

Lighthouse, PWA Builder, Sonarwhal and other tools that evaluate your site will measure its progressive web app score based on the three defining factors (HTTPS, service worker, web app manifest). Then there’s Android’s Add to Home Screen prompt. Here finally we get a concrete description of what your site needs to do to pass muster:

  • Includes a web app manifest…
  • Served over HTTPS (required for service workers)
  • Has registered a service worker with a fetch event handler

(Although, as of this month, Chrome will no longer show the prompt automatically—you also have to write some JavaScript to handle the beforeinstallprompt event.)
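That JavaScript doesn’t amount to much. A sketch (the install button is hypothetical):

// Stash the event so the prompt can be shown on your own terms.
let deferredPrompt;

window.addEventListener('beforeinstallprompt', event => {
  event.preventDefault(); // stop the automatic prompt
  deferredPrompt = event;
  // ...reveal your own install button here...
});

// Later, when the user clicks that (hypothetical) button:
document.querySelector('#install').addEventListener('click', () => {
  if (deferredPrompt) {
    deferredPrompt.prompt();
    deferredPrompt = null;
  }
});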

Anyway, if you’re looking to turn your website into a progressive web app, here’s what you need to do (assuming it’s already performant and responsive):

  1. Switch over to HTTPS. Certbot can help you here.
  2. Add a web app manifest.
  3. Add a service worker to your site so that it responds even when there’s no network connection.

That last step might sound like an intimidating prospect, but help is at hand: I wrote Going Offline for exactly this situation.
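And for what it’s worth, the page-level hooks for steps two and three are tiny; something like this (the filenames are placeholders):

<!-- In the head of every page; the filenames are placeholders. -->
<link rel="manifest" href="/manifest.json">
<script>
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}
</script>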

Monday, May 14th, 2018

Taking Back The Web

This presentation on the indie web was delivered as the opening keynote at Webstock in Wellington, New Zealand in February 2018.

Thank you. Thank you very much, Mike; it really is an honour and a privilege to be here. Kia Ora! Good morning.

This is quite intimidating. I’m supposed to open the show, and there are all these amazing, exciting speakers going to be speaking for the next two days on amazing, exciting topics, so I’d better level up: I’d better talk about something exciting. So, let’s do it!

Economics

Yeah, we need to talk about capitalism. Capitalism is one of a few competing theories on how to structure an economy and the theory goes that you have a marketplace and the marketplace rules all and it is the marketplace that self-regulates through its kind of invisible hand. Pretty good theory, the idea being that through this invisible hand in the marketplace, wealth can be distributed relatively evenly; sort of a bell-curve distribution of wealth where some people have more, some people have less, but it kind of evens out. You see bell-curve distributions right for things like height and weight or IQ. Some have more, some have less, but the difference isn’t huge. That’s the idea, that economies could follow this bell-curve distribution.

A graph showing a bell-curve distribution.

But, without any kind of external regulation, what tends to happen in a capitalistic economy is that the rich get richer, the poor get poorer and it runs out of control. Instead of a bell-curve distribution you end up with something like this which is a power-law distribution, where you have the wealth concentrated in a small percentage and there’s a long tail of poverty.

A graph showing a power-law distribution.

That’s the thing about capitalism: it sounds great in theory, but not so good in practice.

Monopoly

The theory, I actually agree with, the theory being that competition is good and that competition is healthy. I think that’s a sound theory. I like competition: I think there should be a marketplace of competition, to try and avoid these kind of monopolies or duopolies that you get in these power-law distributions. The reason I say that is that we as web designers and web developers, we’ve seen what happens when monopolies kick in.

The icon for Microsoft Internet Explorer.

We were there: we were there when there was a monopoly; when Microsoft had an enormous share of the browser market, it was like in the high nineties, percentage of browser share with Internet Explorer, which they achieved because they had an enormous monopoly in the desktop market as well, they were able to bundle Internet Explorer with the Windows Operating System. Not exactly an invisible hand regulating there.

But we managed to dodge this bullet. I firmly believe that the more browsers we have, the better. I think a diversity of browsers is a really good thing. I know sometimes as developers, “Oh wouldn’t it be great if there was just one browser to develop for: wouldn’t that be great?” No. No it would not! We’ve been there and it wasn’t pretty. Firefox was pretty much a direct result of this monopoly situation in browsers. But like I say, we dodged this bullet. In some ways the web interpreted this monopoly as damage and routed around it. This idea of interpreting something as damaged and routing around it, that’s a phrase that comes from network architecture.

Networks

As with economies, there are competing schools of thought on how you would structure a network; there’s many ways to structure a network.

A ring of small circles, each one connected to a larger circle in the middle.

One way, sort of a centralised network approach is, you have this hub and spoke model, where you have lots of smaller nodes connecting to a single large hub and then those large hubs can connect with one another and this is how the telegraph worked, then the telephone system worked like this. Airports still pretty much work like this. You’ve got regional airports connecting to a large hub and those large hubs connect to one another, and it’s a really good system. It works really well until the hub gets taken out. If the hub goes down, you’ve got these nodes that are stranded: they can’t connect to one another. That’s a single point of failure, that’s a vulnerability in this network architecture.

A circle of small circles, with no connections between them.

It was the single point of failure, this vulnerability, that led to the idea of packet-switching and different network architectures that we saw in the ARPANET, which later became the Internet. The impetus there was to avoid the single point of failure. What if you took these nodes and gave them all some connections?

A scattering of circles, each one with some connections to other circles.

This is more like a bell-curve distribution of connections now. Some nodes have more connections than others, some have fewer, but there isn’t a huge difference between the amount of connections between nodes. Then the genius of packet-switching is that you just need to get the signal across the network by whatever route works best at the time. That way, if a node were to disappear, even a relatively well-connected one, you can route around the damage. Now you’re avoiding the single point of failure problem that you would have with a hub and spoke model.

The same scattering of circles, with one of them removed. The other circles are still connected.

Like I said, this kind of thinking came from the ARPANET, later the Internet and it was as a direct result of trying to avoid having that single point of failure in a command-and-control structure. If you’re in a military organisation, you don’t want to have that single point of failure. You’ve probably heard that the internet was created to withstand a nuclear attack and that’s not exactly the truth but the network architecture that we have today on the internet was influenced by avoiding that command-and-control, that centralised command-and-control structure.

Some people kind of think there’s sort of blood on the hands of the internet because it came from this military background, DARPA…the Defense Advanced Research Projects Agency. But actually, the thinking behind this was not to give one side the upper hand in case of a nuclear conflict: quite the opposite. The understanding was, if there were the chance of a nuclear conflict and you had a hub and spoke model of communication, then actually you know that if they take out your hub, you’re screwed, so you’d better strike first. Whereas if you’ve got this kind of distributed network, then if there’s the possibility of attack, you might be more willing to wait it out and see because you know you can survive that initial first strike. And so this network approach was not designed to give any one side the upper hand in case of a nuclear war, but to actually avoid nuclear war in the first place. On the understanding that this was in place for both sides, so Paul Baran and the other people working on the ARPANET, they were in favour of sharing this technology with the Russians, even at the height of the Cold War.

The World Wide Web

In this kind of network architecture, there’s no hubs, there’s no regional nodes, there’s just nodes on the network. It’s very egalitarian and the network can grow and shrink infinitely; it’s scale-free, you can just keep adding nodes to the network and you don’t need to ask permission to add a node to this network. That same sort of architecture then influenced the World Wide Web, which is built on top of the Internet and Tim Berners-Lee uses this model where anybody can add a website to the World Wide Web: you don’t need to ask for permission, you just add one more node to the network; there’s no plan to it, there’s no structure and so it’s a mess! It’s a sprawling, beautiful mess with no structure.

Walled Gardens

And it’s funny, because in the early days of the Web it wasn’t clear that the Web was going to win; not at all. There was competition. You had services like Compuserve and AOL. I’m not talking about AOL the website. Before it was a website, it was this thing separate to the web, which was much more structured, much safer; these were kind of the walled gardens and they would make wonderful content for you and warn you if you tried to step outside the bounds of their walled gardens into the wild, lawless lands of the World Wide Web and say, ooh, you don’t want to go out there. And yet the World Wide Web won, despite its chaoticness, its lawlessness. People chose the Web and despite all the content that Compuserve and AOL and these other walled gardens were producing, they couldn’t compete with the wild and lawless nature of the Web. People left the walled gardens and went out into the Web.

And then a few years later, everyone went back into the walled gardens. Facebook, Twitter, Medium. Very similar in a way to AOL and Compuserve: the nice, well-designed places, safe spaces not like those nasty places out in the World Wide Web and they warn you if you’re about to head out into the World Wide Web. But there’s a difference this time round: AOL and Compuserve, they were producing content for us, to keep us in the walled gardens. With the case of Facebook, Medium, Twitter, we produce the content. These are the biggest media companies in the world and they don’t produce any media. We produce the media for them.

How did this happen? How did we end up with this situation when we returned into the walled gardens? What happened?

Facebook, I used to wonder, what is the point of Facebook? I mean this in the sense that when Facebook came along, there were lots of different social networks, but they all kinda had this idea of being about a single, social object. Flickr was about the photograph and upcoming.org was about the event and Dopplr was about travel. And somebody was telling me about Facebook and saying, “You should get on Facebook.” I was like, “Oh yeah? What’s it for?” He said: “Everyone’s on it.” “Yeah, but…what’s it for? Is it photographs, events, what is it?” And he was like, “Everyone’s on it.” And now I understand that it’s absolutely correct: the reason why everyone is on Facebook is because everyone is on Facebook. It’s a direct example of Metcalfe’s Law.

The icon for Facebook, superimposed on a graph showing a power-law distribution.

Again, it’s a power-law distribution: that the value of the network is proportional to the square of the number of nodes on a network. Basically, the more people on the network the better. The first person to have a fax machine, that’s a useless lump of plastic. As soon as one more person has a fax machine, it’s exponentially more useful. Everyone is on Facebook because everyone is on Facebook. Which turns it into a hub. It is now a centralised hub, which means it is a single point of failure, by the way. In security terms, I guess you would talk about it having a large attack surface.

Let’s say you wanted to attack media outlets, I don’t know, let’s say you were trying to influence an election in the United States of America… Instead of having to target hundreds of different news outlets, you now only need to target one because it has a very large attack surface.

It’s just like, if you run WordPress as a CMS, you have to make sure to keep it patched all the time because it’s a large attack surface. It’s not that it’s any more or less secure or vulnerable than any other CMS, it’s just that it’s really popular and therefore is more likely to be attacked. Same with a hub like Facebook.

OK. Why then? Why did we choose to return to these walled gardens? Well, the answer’s actually pretty obvious, which is: they’re convenient. Walled gardens are nice to use. The user experience is pretty great; they’re well-designed, they’re nice.

The disadvantage is what you give up when you gain this convenience. You give up control. You no longer have control over the content that you publish. You don’t control who’s going to even see what you publish. Some algorithm is taking care of that. These silos—Facebook, Twitter, Medium—they now have control of the hyperlinks. Walled gardens give you convenience, but the cost is control.

The Indie Web

This is where this idea of the indie web comes in, to try and bridge this gap that you could somehow still have the convenience of using these beautiful, well-designed walled gardens and yet still have the control of owning your own content, because let’s face it, having your own website, that’s a hassle: it’s hard work, certainly compared to just opening up Facebook, opening a Facebook account, Twitter account, Medium account and just publishing, boom. The barrier to entry is really low whereas setting up your own website, registering a domain, do you choose a CMS? There’s a lot of hassle involved in that.

But maybe we can bridge the gap. Maybe we can get both: the convenience and the control. This is the idea of the indie web. At its heart, there’s a fairly uncontroversial idea here, which is that you should have your own website. I mean, there would have been a time when that would’ve been a normal statement to make and these days, it sounds positively disruptive to even suggest that you should have your own website.

Longevity

I have my own personal reasons for wanting to publish on my own website. If anybody was here back in the…six years ago, I was here at Webstock which was a great honour and I was talking about digital preservation and longevity and for me, that’s one of the reasons why I like to have the control over my own content, because these things do go away. If you published your content on, say, MySpace: I’m sorry. It’s gone. And there was a time when it was unimaginable that MySpace could be gone; it was like Facebook, you couldn’t imagine the web without it. Give it enough time. MySpace is just one example, there’s many more. I used to publish in GeoCities. Delicious; Magnolia, Pownce, Dopplr. Many, many more.

Now, I’m not saying that everything should be online for ever. What I’m saying is, it should be your choice. You should be able to choose when something stays online and you should be able to choose when something gets taken offline. The web has a real issue with things being taken offline. Linkrot is a real problem on the web, and partly that’s to do with the nature of the web, the fundamental nature of the way that linking works on the web.

Hyperlinks

When Tim Berners-Lee and Robert Cailliau were first coming up with the World Wide Web, they submitted a paper to a hypertext conference, I think it was 1991, 92, about this project they were building called the World Wide Web. The paper was rejected. Their paper was rejected by these hypertext experts who said, this system: it’ll never work, it’s terrible. One of the reasons why they rejected it was it didn’t have this idea of two-way linking. Any decent hypertext system, they said, has a concept of two-way linking where there’s knowledge of the link at both ends, so in a system that has two-way linking, if a resource happens to move, the link can move with it and the link is maintained. Now that’s not the way the web works. The web has one-way linking; you can just link to something, that’s it and the other resource has no knowledge that you’re linking to it but if that resource moves or goes away, the link is broken. You get linkrot. That’s just the way the web works.

But. There’s a little technique that if you sort of squint at it just the right way, it sort of looks like two-way linking on the web and involves a very humble bit of HTML. The rel attribute. Now, you’ve probably seen the rel attribute before, you’ve probably seen it on the link element. Rel is short for relationship, so the value of the rel attribute will describe the relationship of the linked resource, whatever’s inside the href; so I’m sure you’ve probably typed this at some point where you say rel=stylesheet on a link element. What you’re saying is, the linked resource, what’s in the href, has the relationship of being a style sheet for the current document.

<link rel="stylesheet" href="...">

You can use it on A elements as well. There’s rel values like prev for previous and next, say this is the relationship of being the next document, or this is the relationship of being the previous document. Really handy for pagination of search results, for example.

<a rel="prev" href="...">
<a rel="next" href="...">

And then there’s this really silly value, rel=me.

<a rel="me" href="...">

Now, how does that work? The linked document has a relationship of being me? Well, I use this. I use this on my own website. I have A elements that link off to my profiles on these other sites, so I’m saying, that Twitter profile over there: that’s me. And that’s me on Flickr and that’s me on GitHub.

<a rel="me" href="https://twitter.com/adactio">
<a rel="me" href="https://flickr.com/adactio">
<a rel="me" href="https://github.com/adactio">

OK, but still, these are just regular, one-way hyperlinks I’m making. I’ve added a rel value of “me” but so what?

Well, the interesting thing is, if you go to any of those profiles, when you’re signing up, you can add your own website: that’s one of the fields. There’s a link from that profile to your own website and in that link, they also use rel=me.

<a rel="me" href="https://adactio.com">

I’m linking to my profile on Twitter saying, rel=me; that’s me. And my Twitter profile is linking to my website saying, rel=me; that’s me. And so you’ve kind of got two-way linking. You’ve got this confirmed relationship, these claims have been verified. Fine, but what can you do with that?

RelMeAuth

Well, there’s a technology called RelMeAuth that uses this, kind of piggy-backs on something that all these services have in common: Twitter and Flickr and GitHub. All of these services have OAuth authentication. Now, if I wanted to build an API, a write API, I’d probably need to be an OAuth provider. I am not smart enough to become an OAuth provider; that sounds way too much like hard work for me. But I don’t need to, because Twitter and Flickr and GitHub are already OAuth providers, so I can just piggy-back on the functionality that they provide, just by adding rel=me.

Here’s an example of this in action. There’s an authentication service called IndieAuth and I literally sign in with my URL. I type in my website name, it then finds the rel=me links, the ones that are reciprocal; I choose which one I feel like logging in with today, let’s say Twitter, I get bounced to Twitter, I have to be logged in on Twitter for this to work and then I’ve authenticated. I’ve authenticated with my own website; I’ve used OAuth without having to write OAuth, just by adding rel=me to a couple of links that were already on my site.

Micropub

Why would I want to authenticate? Well, there’s another piece of technology called micropub. Now, this is definitely more complicated than just adding rel=me to a few links. This is an end-point on my website and can be an end-point on your website and it accepts POST requests, that’s all it does. And if I’ve already got authentication taken care of and now I’ve got an end-point for POST requests, I’ve basically got an API, which means I can post to my website from other places. Once that end-point exists, I can use somebody else’s website to post to my website, as long as they’ve got this micropub support. I log in with that IndieAuth flow and then I’m using somebody else’s website to post to my website. That’s pretty nice. As long as these services have micropub support, I can post from somebody else’s posting interface to my own website and choose how I want to post.
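Under the hood, a micropub request is about as simple as HTTP gets: one authenticated POST. A sketch (the endpoint URL and the token are placeholders):

// A sketch of a micropub request. The endpoint URL and the token
// (obtained through the IndieAuth flow) are placeholders.
const token = '…';
fetch('https://example.com/micropub', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + token
  },
  body: new URLSearchParams({
    h: 'entry',
    content: 'Posted to my own site from somewhere else!'
  })
});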

In this example there, I was using a service called Quill; it’s got a nice interface. You can do long-form writing on it. It’s got a very Medium-like interface for long-form writing because a lot of people—when you talk about why are they on Medium—it’s because the writing experience is so nice, so it’s kind of been reproduced here. This was made by a friend of mine named Aaron Parecki and he makes some other services too. He makes OwnYourGram and OwnYourSwarm, and what they are is they’re kind of translation services between Instagram and Swarm to micropub.

Instagram and Swarm do not provide micropub support but by using these services, authenticating with these services using the rel=me links, I can then post from Instagram and from Swarm to my own website, which is pretty nice. If I post something on Swarm, it then shows up on my own website. And if I post something on Instagram, it goes up on my own website. Again I’m piggy-backing. There’s all this hard work of big teams of designers and engineers building these apps, Instagram and Swarm, and I’m taking all that hard work and I’m using it to post to my own websites. It feels kind of good.

Screenshots of posts on Swarm and Instagram, accompanied by screenshots of the same content on adactio.com.

Syndication

There’s an acronym for this approach, and it’s PESOS, which means you Publish Elsewhere and Syndicate to your Own Site. There’s an alternative to this approach and that’s POSSE, or you Publish on your Own Site and then you Syndicate Elsewhere, which I find preferable, but sometimes it’s not possible. For example, you can’t publish on your own site and syndicate it to Instagram; Instagram does not allow any way of posting to it except through the app. It has an API but it’s missing a very important method which is post a photograph. But you can syndicate to Medium and Flickr and Facebook and Twitter. That way, you benefit from the reach, so I’m publishing the original version on my own website and then I’m sending out copies to all these different services.

For example, I’ve got this section on my site called Notes which is for small little updates of say, oh, I don’t know, 280 characters and I’ve got the option to syndicate if I feel like it to say, Twitter or Flickr. When I post something on my own website—like this lovely picture of an amazingly good dog called Huxley—I can then choose to have that syndicated out to other places like Flickr or Facebook. The Facebook one’s kind of a cheat because I’m just using an “If This Then That” recipe to observe my site and post any time I post something. But I can syndicate to Twitter as well. The original URL is on my website and these are all copies that I’ve sent out into the world.

Webmention

OK, but what about…what about when people comment or like or retweet, fave, whatever it is they’re doing, the copy? They don’t come to my website to leave a comment or a fave or a like, they’re doing it on Twitter or Flickr. Well, I get those. I get those on my website too and that’s possible because of another building block called webmention. Webmention is another end-point that you can have on your site but it’s very, very simple: it just accepts pings. It’s basically a ping tracker. Anyone remember pingback? We used to have pingbacks on blogs; and it was quite complicated because it was XML-RPC and all this stuff. This is literally just a post that goes “ping”.

Let’s say you link to something on my website; I have no way of knowing that you’ve linked to me, I have no way of knowing that you’ve effectively commented on something I’ve posted, so you send me a ping using webmention and then I can go check and see, is there really a link to this article or this post or this note on this other website and if there is: great. It’s up to me now what I do with that information. Do I display it as a comment? Do I store it to the database? Whatever I want to do.
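The ping itself is just a form-encoded POST with two URLs. A sketch (both URLs are placeholders):

// A sketch of sending a webmention: "the page at source has
// linked to the page at target".
fetch('https://example.com/webmention', {
  method: 'POST',
  body: new URLSearchParams({
    source: 'https://mywebsite.com/replies/123',
    target: 'https://example.com/original-post'
  })
});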

And you don’t even have to have your own webmention end-point. There’s webmentions as a service that you can subscribe to. Webmention.io is one of those; it’s literally like an answering service for pings. You can check in at the end of the day and say, “Any pings for me today?” Like a telephone answering service but for webmentions.

And then there’s this really wonderful service, a piece of open source software called Bridgy, which acts as a bridge. Places like Twitter and Flickr and Facebook, they do not send webmentions every time someone leaves a reply, but Bridgy—once you’ve authenticated with the rel=me values—Bridgy monitors your social media accounts and when somebody replies, it’ll take that and translate it into a webmention and send it to your webmention end-point. Now, any time somebody makes a response on one of the copies of your post, you get that on your own website, which is pretty neat.

It’s up to you what you do with those webmentions. I just display them in a fairly boring manner, the replies appear as comments and I just say how many shares there were, how many likes, but this is a mix of things coming from Twitter, from Flickr, from Facebook, from anywhere where I’ve posted copies. But you could make them look nicer too. Drew McLellan has got this kind of face-pile of the user accounts of the people who are responding out there on Twitter or on Flickr or on Facebook and he displays them in a nice way.

Drew, along with Rachel Andrew, is one of the people behind Perch CMS; a nice little CMS where a lot of this technology is already built in. It has support for webmention and all these kinds of things, and a lot of CMSs have done the same, so you don’t have to invent this stuff from scratch. If you like what you see and you think, “Oh, I want to have a webmention end-point, I want to have a micropub end-point”, chances are it already exists for the CMS you’re using. If you’re using something like WordPress or Perch or Jekyll or Kirby: a lot of these CMSs already have plug-ins available for you to use.

Building Blocks

Those are a few technologies that we can use to try and bridge that gap, to try and still get the control of owning your own content on your own website and still have the convenience of those third party services that we get to use their interfaces, that we get to have those conversations, the social effects that come with having a large network. Relatively simple building blocks: rel=me, micropub and webmention.

But they’re not the real building blocks of the indie web; they’re just technologies. Don’t get too caught up with the technologies. I think the real building blocks of the indie web can be found here in the principles of the indie web.

There’s a great page of design principles about why we’re even doing this. There are principles like own your own data; focus on the user experience first; make tools for yourself and then see how you can scale them and share them with other people. But the most important design principle, I think, that’s on that list comes at the very end and it’s this: that we should 🎉 have fun (and the emoji is definitely part of the design principle).

Your website is your playground; it’s a place for you to experiment. You hear about some new technologies, you want to play with them? You might not get the chance at work but you can try out that CSS grid and the service worker or the latest JavaScript APIs you want to play with. Use your website as a playground to do that.

I also think we should remember the original motto of the World Wide Web, which was: let’s share what we know. And over the next few days, you’re going to hear a lot of amazing, inspiring ideas from amazing, inspiring people and I hope that you would be motivated to maybe share your thoughts. You could share what you know on Mark Zuckerberg’s website. You could share what you know on Ev Williams’s website. You could share what you know on Biz Stone and Jack Dorsey’s website. But I hope you’ll share what you know on your own website.

Thank you.

A group of smiling people, gathered together at an Indie Web Camp.

Friday, May 4th, 2018

Webstock ‘18: Jeremy Keith - Taking Back The Web on Vimeo

Here’s the talk I gave at Webstock earlier this year all about the indie web:

In these times of centralised services like Facebook, Twitter, and Medium, having your own website is downright disruptive. If you care about the longevity of your online presence, independent publishing is the way to go. But how can you get all the benefits of those third-party services while still owning your own data? By using the building blocks of the Indie Web, that’s how!

Thursday, May 3rd, 2018

Building Progressive Web Apps | nearForm

It is very disheartening to read misinformation like this:

A progressive web app is an enhanced version of a Single Page App (SPA) with a native app feel.

To quote from The Last Jedi, “Impressive. Everything you just said in that sentence is wrong.”

But once you get over that bit of misinformation at the start, the rest of this article is a good run-down of planning and building a progressive web app using one possible architectural choice—the app shell model. Other choices are available.

Saturday, April 28th, 2018

The Illusion of Control in Web Design · An A List Apart Article

Aaron gives a timely run-down of all the parts of a web experience that are out of our control. But don’t despair…

Recognizing all of the ways our carefully-crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.

Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.

Monday, April 23rd, 2018

Sara Soueidan: Going Offline

Sara describes the process of turning her site into a progressive web app, and has some very kind words to say about my new book:

Jeremy covers literally everything you need to know to write and install your first Service Worker, tweak it to your site’s needs, and then write the Web App Manifest file to complete the offline experience, all in a ridiculously easy to follow style. It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

Too, too kind!

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.