May 27th, 2015

May 26th, 2015

100 words 065

As a conference organiser, it’s easy to see yourself as being in a position of weakness. You’re hustling hard to put on a great event, but you are a victim to the whims of the ticket-buying public. So you might well be tempted to make whatever compromises are necessary just to break even.

But the truth is that, as a conference organiser, you are in a position of power. You decide which voices will be amplified. You might think that your conference line-up needs to reflect the current state of the world. But it could also highlight a better world.

Responsive Day Out 3: The Final Schedule

There’s just a few more weeks to go until the third and final Responsive Day Out and I can’t wait! It’s going to be unmissable so, like, don’t miss it. If you haven’t already got your ticket, it’s not too late. And remember: it’s a measly £80.

On June 19th, follow the trail of eager geeks to the Corn Exchange at the Brighton Dome, a short walk from the train station. We’ll be using the main Dome entrance on Church Street and registration starts at 9am, with the first talk at 10am.

I’ve already talked about the topics that will be covered on the day. Here’s what I’m planning for the day’s schedule (subject to change):

09:00 - 10:00 Registration
10:00 - 10:20 Alice
10:20 - 10:40 Rachel
10:40 - 11:00 Alla
11:00 - 11:15 Chat with Alice, Rachel, and Alla
11:15 - 11:45 Break
11:45 - 12:05 Zoe
12:05 - 12:25 Jason
12:25 - 12:45 Heydon
12:45 - 13:00 Chat with Zoe, Jason, and Heydon
13:00 - 14:30 Lunch
14:30 - 14:50 Jake
14:50 - 15:10 Ruth
15:10 - 15:30 Peter
15:30 - 15:45 Chat with Jake, Ruth, and Peter
15:45 - 16:15 Break
16:15 - 16:35 Rosie
16:35 - 16:55 Lyza
16:55 - 17:15 Aaron
17:15 - 17:30 Chat with Rosie, Lyza, and Aaron
17:30 - ??:?? Pub!

Now, what with it being a measly £80, don’t expect much in the way of swag. In fact, don’t expect anything in the way of swag. You won’t even get a lanyard; just a sticker. There won’t be any after-party; we can all just wander off to the nearby pubs and cafés instead. And lunch won’t be provided. But that’s okay, because Street Diner will be happening just up the road that day, and I’ve already confirmed that The Troll’s Pantry will be present—best burgers in Brighton (or anywhere else for that matter).

It’s going to be such a great day! Like I said …unmissable.

May 25th, 2015

100 words 064

Jessica and I went to see Mad Max: Fury Road at the Dukes At Komedia last week. We both thoroughly enjoyed it. There’s the instant thrill of being immersed in a rollicking good action movie but this film also stayed with me long after leaving the cinema.

This isn’t really Max’s movie at all—it’s Furiosa’s. And oh, what a wonderful protagonist she is.

Max’s role in this movie is to be an ally. And for that reason, I see him as a role model—one who offers a shoulder, not to cry on, but to steady a rifle’s aim.

The Long Web

A presentation on long-term thinking and the web, from An Event Apart 2013.

Hello Austin! Good morning!

Good morning!

It is an absolute pleasure to be here. I love coming to Austin. Who here is from Austin? Nice, excellent; I like your town a lot. I’ve been here many times but usually it’s been SXSW so that doesn’t really count, so I’m very happy to be here when SXSW isn’t on, so I get to actually go to all the places I want to go to.

Anyway, so, today I want to talk to you about the web because I’m a big fan. I like the web a lot. It’s this wonderful, giant, sprawling mess of a web, this beautiful chaotic thing. It’s almost too much to grasp, I think, the sheer scale of it. So what I’m going to do is zoom in on one particular website, to try and talk about the web in general by focusing on one website in particular. This website we’re going to look at is made up of a number of formats, like most websites are; a number of different files, and those files are in different formats. Some of these formats will be familiar to you, some of them may be less familiar. But what’s interesting about all of these particular formats is that they are all text formats. They’re not binary formats. They’re made of human-readable text. And if we’re going to look at text files on the web, I think it’s interesting to look at where text came from, as in: text that we use to communicate with.

Text really started here with cuneiform clay tablets, these scratches and markings. These are maybe about four thousand years old. If you ever get the chance to see these, they’re beautifully intricate but quite small. The Pergamon Museum in Berlin has a wonderful collection. Quite different from any text we would use today, but this evolved; we got hieratic text, later demotic and that led to Greek and you can see how, as it evolved, it starts to get closer to the text that we would recognise today, we start to get closer to the letter forms that we would use for the Western alphabet, for example, in these kind of texts. So this is something from about 330AD and at this point, knowledge is being stored in texts and this is the amazing thing about text: it’s kind of the next step on from language.

Language is amazing because it allows us to communicate ideas from brain to brain. I have an idea in my head and, by flapping the meat in my mouth and passing air over my vocal cords, I can vibrate the air, and as long as you can decode the format that I’m speaking in, in this case English, then I can get an idea from my brain into your brain. That’s pretty remarkable. And what text allows us to do is to do that over time: I can get an idea out from my brain into a text format, and then you could come along and read that text format, again, once you can decode it, in this case English again. You could get that idea transferred to your brain, the difference being that you could be reading it months, years, decades or centuries after it was written. And this allows us to store and pass on ideas over time, which is remarkable.

So, about this time, ideas are being stored in text and the texts themselves are being stored in these remarkable repositories of information. Like, for example, the Library of Alexandria, which is where a lot of the world’s knowledge was stored. Libraries: wonderful things. Right up there with the web in Things I Am A Fan Of. But the Library of Alexandria is most famous for one thing, and it’s not so much the texts that were stored there; it’s famous for the fact that it burned down. We’re not even clear exactly when it burned down. Maybe around 391AD. Who knows? The fact that it’s famous for being destroyed means all the knowledge about it is also destroyed, but we don’t know what was stored there. We don’t know how much knowledge was lost to humankind, whether if the Library of Alexandria still existed, would we have colonies on Mars at this point? Who knows? That timeline was shut off to us.

It was a pretty bad time for knowledge, for text files, for Europe in general, although off the western coast of Europe, Ireland was where some monks were starting to store knowledge, store some of these text files, mostly because it was just such an inaccessible place that marauding hordes were unlikely to visit. This has been written about and talked about and maybe over-exaggerated. There was a book called How the Irish Saved Civilisation by Thomas Cahill, which may have exaggerated the point, but still: what did get saved tended to get saved here on the western coast of Europe.

Now, Ireland itself was beset with its own problems centuries later. In the mid-nineteenth century came the Irish Famine, which was absolutely devastating; around 1846, 1847. Millions died and millions left the country, starting an exodus which never really stopped. The population of Ireland has been decreasing since the middle of the nineteenth century: it’s one of the few countries in Europe where the net change in population has been a loss. As you know, they all come here, right? They come to America, they go to other countries. New York, Philadelphia, Boston, Chicago: these are the kinds of places where the Irish ended up.

Like this guy. This is Francis O’Neill and he comes from County Cork, which is where I’m from as well. He was born the same time as the famine was going on, 1848, and like many others, he left the country later. He had all sorts of adventures: he was shipwrecked, travelled across the Wild West before settling in Chicago, and it was in Chicago that he became Chief of Police. He was also a fan of traditional Irish music and I can kind of relate to this; maybe he wasn’t so much a fan when he was back in Ireland, but once you leave the country, suddenly you’re like the most Irish person ever! You know what I’m talking about, Boston! He wanted to capture some of the Irish music, so what he ended up doing was, if you were in Chicago around this time in the nineteenth century, and you could play the pipes or flute or fiddle, you were pretty much guaranteed a job on the Chicago Police Force. You know the clichéd old movies, where…”Oh, it’s always the Irish Coppers on the beat.” There’s actually some truth to that: it was pretty much entirely made up of Irish policemen.

So, he didn’t play music himself, but he started transcribing the tunes that these musicians/police officers would play to him, and he collected these tunes. He collected them into a book: O’Neill’s 1001: jigs, reels, hornpipes, the dance music of Ireland. This was published around the start of the twentieth century. And I remember when I was learning Irish music, this book was still very much the place you’d go to find sheet music for tunes. In fact, we just called it The Book. If somebody played a tune and you wanted to learn it, you might ask, “Oh, is that tune in The Book?” and they would know what you meant. You meant this book.

So what ended up happening was actually back home in Ireland, the music was kind of dying out, because people were leaving the country and the fact that the knowledge got stored in this book meant that the music could be revived, because it had been noted down. It contributed to a revival.

Now, the music was stored in sheet music, which is an interesting format, because it’s not exactly a text format: it’s a symbolic set in the same way the alphabet is, but it’s kind of more like a graphic than a text format, I think: I picture it that way. So this is how sheet music looks, which has its own history of being noted down; it goes back to medieval times, essentially.

There was an interesting development with the transmission of music when the internet came along, or when the web came along, but even before that: email lists, bulletin boards. People wanted to transfer music from one computer to another, but you can’t send an image file; that’s going to be way too big. You certainly can’t send an audio file, even a MIDI file; that’s certainly way too big to be sending over the kind of networks we had back then: modems with really slow dial-up. So there was this new format, created in 1990, 1991 by John Chambers, called ABC, which is purely text: really nice and lightweight and cheap to send. And you can see it’s almost JSON-like to begin with, with the metadata: what’s the title of the tune, information about how it should be played. And then the notes themselves are literally just letters of the alphabet, like any other text format. It’s re-using the Western alphabet, where lower case and upper case mean different octaves; we’ve got things like the vertical pipe symbol to denote bar lines. So I’ll try and demonstrate what this tune is that we’re looking at here, if you’ll bear with me…

Let’s see, so this is Chief O’Neill’s Favourite… (plays mandolin) …So, that’s a hornpipe. (Applause.) Thank you! Thank you very much. That’s a hornpipe, and all that information (the notes of the tune at least, maybe not the nuances of how it should be played) is encoded into this text file, which is pretty remarkable.

So, what you could do over email lists, bulletin boards and then later the web was you could transmit just text files like this and the person at the other end could unpack that, could decode it, either by looking directly at it or using software on their machine that could convert the ABC file into sheet music, or maybe even a Midi file.
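To give a flavour of the format, a minimal ABC file might look something like this (a made-up example for illustration, not a real tune from a collection):

```
X: 1
T: A Hypothetical Hornpipe
M: 4/4
L: 1/8
K: Dmaj
|: d2AF DFAd | fdfa gfed | c2ec B2dB | A2FA D4 :|
```

The header lines carry the metadata (X for index, T for title, M for metre, L for default note length, K for key), upper- and lower-case letters are different octaves, the pipe symbols are bar lines, and `|:` and `:|` mark a repeated section. All of it is plain text.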

When I was first getting into the web, I was living in Germany in the nineties and like I was saying, it’s once you leave Ireland that you start to get really into the…oh, I’m so Irish…so I’m getting really into Irish folk music because I’m not living in Ireland any more and I decided I wanted to put together a site about Irish music with tunes, so I made the site called thesession.org and the idea would be that I would put on a tune every week, in ABC format, but also converted to sheet music and so on. So, I launched it in, gosh, ‘98, ‘97: I can’t remember, and initially it looked like this, please forgive me, this is fifteen years old now. And it was nice and it actually started to get a bit of a following because it had this weekly structure, this weekly release cycle.

Of course, there was an issue here and that was with scaling it, because I’m putting out a tune every week but there’s only so many tunes I know, so eventually I hit that wall of that’s it: I’ve run out of tunes I know. So, I decided I was going to completely revamp the site to make it much more of a write/read kind of site where people could contribute tunes in ABC format and other things like location of Irish music events or sessions and recordings and stuff. So, I re-launched the site in 2001 as much more of a community site where everything was being contributed by the people. So, people could contribute tunes in ABC format and then I would convert it into sheet music, so I was a bit of a bottleneck there; I was doing it manually: every time somebody contributed a tune I had to convert the whole lot.

I was really pleased with this site initially; I was really, really proud of it because it was really becoming the go-to place on the web for Irish music. But like I said, that was 2001 and I kind of let it stagnate, which is a real shame. So as browsers evolved and the web evolved, there was so much more I could have been doing with the site and I wasn’t doing it and The Session was simultaneously my proudest achievement and my most shameful achievement because I was like, “Oh, it could have been so much better!”

For years I was saying, I really need to re-launch The Session, I need to revamp it, but it was this huge monolithic task and I really wasn’t looking forward to it. I think for two years running, my New Year’s resolution was “Re-launch The Session”. But I finally got round to it last year. So, after ten years of the previous design, the previous way things were done, I re-launched the site. And this is the site I want to look at: how I approached that from the long-term view. It’s a site that’s been online for over a decade. It’s a site that will be online for, hopefully, much longer than a decade. And so: how I evaluated technologies, and how I evaluated approaches to building a site for the long term, not just for the here and now.

Of course, one of the things when you’re building a website today that’s clear is I can no longer rely on the fact that somebody’s just going to be looking at it on a desktop or a laptop computer, which would have been a safe bet maybe ten years ago. Because these days, of course, the site has got to go everywhere because people will use whatever device they have to hand to view the site, to use the site. Exactly what Karen was talking about yesterday.

Luckily, I’ve got a whole bunch of devices I can test on because at Clearleft we’ve got this Device Lab which I was able to use to do some testing; it’s really handy. This Device Lab is also open to the public. Anybody can come by and test on these devices. So, we’re in Brighton in the UK, and this started because some friends of mine in Portland had this idea that they were going to have this Open Device Lab where people could come and use it, and I thought, that is a great idea, and every time I saw them I’d say to them, “Oh, how’s the Open Device Lab thing going?” and they’d say, “Great. We just signed the paperwork to get funded as a non-profit, and next we’re sending our lawyer to do this…” And I thought: it’s never going to actually launch, is it?

So, at Clearleft, I was gathering together a few devices, a handful of crappy devices from second-hand shops, and I had them on a table for testing responsive designs with, but I felt really bad that they were just sitting on the table most of the time, not being used: it feels wasteful. And I thought of the idea of the Open Device Lab, and I thought, yeah, they had all that paperwork to do and all the insurance stuff I guess we’d have to cover, and I thought, you know what? Screw it. So I wrote a blog post and I tweeted, “Hey, anybody in Brighton who wants to come and use these devices: help yourself.” I didn’t worry about insurance, I didn’t worry about liability, any of that; I just did it. And what I didn’t expect was that straight away, other designers and developers in Brighton responded with tweets like, “Oh, I’ve got this phone lying on my desk and I feel really bad about it being there. Do you want to take it?” or “I’ve got a phone in a drawer that’s just sitting there gathering dust, can I bring it by and drop it off?” and I was like, “Hell, yeah!” So within twenty-four hours, the number of devices had doubled. And since then, we’ve got forty or fifty devices there and hardly any of them are actually mine. They’ve all been contributed by other people. And this idea of an Open Device Lab has taken off, so if you go to opendevicelab.com you can find out if there’s an Open Device Lab near you where you can come by and test on devices. And if there isn’t, you can get advice on starting one, which I highly recommend you do.

This is really handy; these days you’re trying to test your work on so many different devices. And I will point out: this is testing, not optimising. It’s not like I’m trying to optimise for every possible device; that doesn’t scale. But testing on as many devices as possible does.

It was clear that, re-launching a website today as opposed to ten years ago, I was going to go mobile first. Who here has read the book by Luke, Mobile First? Great book. Excellent. And of course, one of the things he talks about is the fact that by going mobile first, you have to prioritise; you have to be pretty ruthless about figuring out what’s the most important thing on this page. And so for me, mobile first, when you follow it to its conclusion, is content first; it’s figuring out what’s most important. And when I say content, I don’t necessarily mean copy or images. Just as Luke was saying, this is about tasks. The content could be adding something to a shopping cart; the content could very much be actions, as opposed to something you consume.

If you really want to take this content-first approach to its ultimate limit, something I like to do, if I ever get the chance to do this on products, is to start with the URLs; really bring it down to the most basic webiness of what you’re building: what is the URL structure? That’s something I think people don’t think about enough. And yet, URLs are so, so important. Some people treat them like an implementation detail of the web, like, “oh yeah, we’ve got native, we’ve got the web, web has URLs, whatever,” whereas I think it’s the most powerful part of the web. In fact, once you have the name of something, an address that you can pull up on any device, as long as it’s connected to a network, that is amazing. That immediately makes it part of this huge, big, chaotic mess of a web. It was Tim Berners-Lee who said that when you have a URL, it’s part of the web, it’s part of the discourse of humanity, this giant Library of Alexandria that we’re all collectively building.

URL design as a skill is something I feel we’re losing, which is a real shame because I will admit, I’m a URL fetishist. I love a good URL. But I think, rightly so, because they are this fundamental unit of the web. Kyle Neath who works at GitHub—where they have beautiful URLs—he said:

URLs are universal. They work in Firefox, Chrome, Safari, Internet Explorer, cURL, wget, your iPhone, Android and even written down on sticky notes, they are the one universal syntax of the web.

That’s so important to remember: written down on sticky notes, written on a Post-it. They’re for humans. URLs are for humans. Yes, they’re used by machines to fetch a resource, but they’re very much for humans to use. URLs should be hackable, guessable, readable.

On The Session I’ve got this kind of structure with my URLs. It’s sort of RESTful: you can drill down, and this URL structure is repeated throughout the site. In a way it’s almost like an API for the content; the content just happens to be in an HTML format. In fact, there is a read-only API for The Session, and rather than have it on a separate sub-domain or a completely different URL, the API uses the same URLs, just adding on a query string to say, I’ll take this in RSS, or I’ll take this in JSON.
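As a sketch, that kind of drill-down structure looks something like this (illustrative paths and parameter names, not necessarily the exact URLs the site uses):

```
/tunes                  all the tunes
/tunes/123              a single tune, as HTML
/tunes/123?format=json  the same resource, as JSON
/tunes/123?format=rss   the same resource, as RSS
```

One address per resource; the query string only negotiates the format, so the API and the website are literally the same URLs.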

Thinking about your content that way, almost like an API first approach is really good for stopping yourself thinking too much about the appearance, thinking about how things are going to look, where things are positioned. This idea of content first, literally the content devoid of where it’s going to appear. Whether it’s even going to be showing up on a visual medium at all.

Something else I do to drive home the content-first approach is try and break things down into their fundamental units. Instead of thinking about layout and how things fit together, which is important, but I feel like that needs to come later, I try to break things down into the building blocks. I do this at work, but I decided to also do this on a personal project like The Session.

Here I’ve broken things down into buttons, feedback messages, form fields, headings, all this kind of stuff: breaking it down into individual units. And I have this one document where you’ve got what each unit looks like, and the mark-up that’s generating that pattern. I’ve thrown this up on GitHub. I call it a pattern primer; it’s just a little PHP script that looks in a folder full of little HTML snippets. It’s been ported to Ruby and Python and other languages too, so go ahead and take it, use it, whatever you like.

I just find this so useful to think in terms of the building blocks rather than thinking of the whole picture to start with because then you tend to have more robust units, and it forces you and your CSS as well to not rely on context when you don’t know whether a building block is going to appear in the main column or a sidebar or what that even means these days.

There’s a corollary to the content-first approach which is, navigation second. If you’re going to have your content first: navigation second. And again, this is an idea that I first saw from Luke; his previous start-up, Bagcheck, he had this content first, navigation second approach and I’ve shamelessly ripped it off for The Session.

On The Session you can see there’s this trigger at the top and that brings up the navigation and the way it’s working is, that trigger, that little downward arrow, is a hyperlink; that’s all it is. It’s a hyperlink pointing to a fragment identifier that’s at the bottom of the page, which is the navigation. And the back-to-top link is just another hyperlink. And the great thing about this pattern is that this will work everywhere. And I mean, everywhere: any browser connected to the internet understands hyperlinks, so it’s a very robust pattern. And there’s all sorts of other patterns you can use, off-canvas and overlays and progressive disclosure …and the nice thing about this is, you could start with this as your baseline, as your default, and enhance up to using any of those other patterns, but if anything ever goes wrong, you’ve got this great fallback which is this content-first, navigation-second approach I really like.
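Stripped right down, the pattern is something like this (simplified, illustrative markup; the ids and links are made up):

```html
<!-- Content first, navigation second: both the trigger and the
     back-to-top link are nothing more than plain hyperlinks. -->
<header id="top">
  <a href="#navigation">menu</a>
</header>

<main>
  <!-- the actual content of the page comes first in source order -->
</main>

<nav id="navigation">
  <ul>
    <li><a href="/tunes">Tunes</a></li>
    <li><a href="/sessions">Sessions</a></li>
  </ul>
  <a href="#top">back to top</a>
</nav>
```

Because it relies on nothing but fragment identifiers, any browser that understands hyperlinks gets a working menu; CSS and JavaScript can then enhance it into an off-canvas panel or an overlay.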

And of course, once we get more screen real estate to play with, I can start putting the navigation further up; I can put it back up to the top of the page, using CSS we’ve got absolute positioning, we’ve got display: table, we’ve got flexbox, we’ve got all sorts of ways that we can now move things around, regardless of their source order. CSS has gotten really good at that.

You’ll notice one of the other things that changed as I got more real estate: it wasn’t just that the navigation was changing but the logo as well; the logo starts being this kind of strip and then becomes more like this tag off to the side. I’m not swapping out images there because the logo is actually just text; it’s just CSS and markup, which is obviously nice and lightweight and CSS is great and of course, not every browser’s going to get these styles. You know what? I’m fine with that. That’s OK, it’s an enhancement.

Something else I wanted to make sure I really got right for The Session, because as I said, everything on The Session these days is contributed by somebody else: it’s not me putting the content in. I wanted to make sure that that input was as good as I could get it. It could still be better; I think there’s always room for improvement, but I’d learned a few things from previous work I’d done that I decided to try and apply.

There’s this site Huffduffer that Jeffrey mentioned, which I built a few years back, and on the Huffduffer log-in I provided a little toggle, you can see there, to show your password. So it’s input type="password", but if you click the checkbox then you can reveal the password so you can see what you’re typing. Just a little usability enhancement, the same way that works in operating systems when you’re connecting to wifi and stuff like that. Nice little pattern.

That was a few years ago, and for The Session, I decided I was actually going to reverse it. I decided I would make the password field input type="text" by default, and if you want to hide it, then you have to tick the box. Now I knew this was going to be interesting from a psychological perspective, how people would react to this. But honestly, I believed the whole password thing with the dots where you’re obscuring what’s written, it’s security theatre. I don’t think it’s helping anyone. When you’re typing in…people looking over your shoulder…and also, if you are worried about that, I’ve got the toggle. So I put this out there, and I did receive emails from people saying, “Oh, you’ve got the password in input type="text": you shouldn’t do that” and I responded, genuinely curious, “Why not?” I really wanted to know. Please tell me why not. And nobody could tell me why not. All they could say was, no other site does that. Yeah, I know that, but why not? Why shouldn’t I do this? And they brought up situations like, “Oh, somebody could be looking over my shoulder”, I’m like, “Yes, that’s why I’ve provided a toggle.” So there was a little bit of pushback, but on the whole not that much and let’s face it, this makes it a heck of a lot easier to input something on a mobile device, which is very much something where you’re not going to have people looking over your shoulder and heck, if they are, there’s a toggle.
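A minimal version of that reversed pattern might look something like this (a sketch only; the ids and wording are made up, and the real form on The Session may differ):

```html
<label for="password">Password</label>
<!-- Plain text by default; ticking the box obscures it -->
<input type="text" id="password" name="password">

<label>
  <input type="checkbox" id="hide-password"> Hide password
</label>

<script>
document.getElementById('hide-password').addEventListener('change', function () {
  document.getElementById('password').type = this.checked ? 'password' : 'text';
});
</script>
```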

I think this is going to take a while to percolate, but I think we will drop the security theatre of obscuring inputs, which gives this illusion of security that isn’t there. It will be interesting to hear your thoughts on that, whether I’ve pushed it too far, but I think the time is right. The world is ready for this!

As well as input type="text" and password and all that stuff, I’m going crazy with all these great new input types we’ve got now with HTML5. These are wonderful. The great thing about all of these is that if you use them, you might get rewarded in the browsers that understand them, but if they’re read by a browser that doesn’t understand them, that’s absolutely fine: they will just be treated as input type="text", because of the way HTML works. It’s like if you type input type="bar", a browser will see that and go, “I don’t understand that; I’m defaulting to input type="text",” which is great. It means we can start using the new shiny stuff, safe in the knowledge that it’s not going to break in older browsers. So I can use input type="number"; it’s going to just be like input type="text" for older browsers, but browsers that understand it will use it: on an iOS device, you get a keypad with the numbers already displayed, along with the symbols and stuff.
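For example (hypothetical field names):

```html
<!-- Browsers that understand these get better widgets and keyboards; -->
<!-- older browsers silently treat every one of them as type="text". -->
<input type="number" name="tempo">
<input type="email"  name="email">
<input type="url"    name="website">
<input type="date"   name="eventdate">
```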

It’s not quite a numeric keypad, because input type="number" can still accept commas, decimal points and stuff. If you want a true numeric keypad, there is an attribute called inputmode, and you can specify that this should be in numeric input mode. This isn’t supported in any browsers yet; it’s in HTML5, but it’s on the way. You can just start adding this stuff now because, again, the way browsers work is: they see something they don’t understand, like an attribute like this, and they just ignore it. It’s like you’re putting something in there for future browsers.

Also, we do have a little hack we can use today if you want to get the same effect, which is the pattern attribute. Using the pattern attribute, you provide a regular expression (now you’ve got two problems) which specifies what you expect from the input. In this case I’m saying, I only want numbers here. Be careful with this: think about whether your input truly is just numbers, because a lot of things that look like numbers actually take commas, decimal points or spaces. Or minus signs, things like that. But if you truly only need the digits zero to nine, then go ahead and use this pattern. Pattern zero to nine, and you’ll get that numeric keypad, because iOS is smart enough to realise it doesn’t need to use one of the other keyboards, so that’s handy.
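Putting the two together might look like this (a hypothetical field; the inputmode attribute is there for future browsers, while the pattern attribute is what triggers the numeric keypad on iOS today):

```html
<!-- Browsers that don't understand inputmode or pattern just ignore them -->
<input type="text" name="quantity" inputmode="numeric" pattern="[0-9]*">
```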

One of the other new things in HTML5 that I’ve decided to use on The Session is the datalist element, which I really like. I don’t see that many people using it yet, but I really, really like it. This is a way of turning a regular input into a combo-box, which is like a cross between a text input and a select. Somebody can type in the text, but if they start typing something that’s in this list of options, then that will come up like an auto-complete suggestion. So, you’ve got a regular input, and you’ve got this list attribute that points to an ID. The ID is on the datalist element, and then you’ve got a bunch of options, and it’s the value in the options that counts. So, as the user is typing, they get the ability to choose from the selection or, if what they’re typing isn’t in the selection, they can just keep on typing, OK? This is perfect for those situations where you’ve got, let’s say, a question like, “How did you hear about us?” and it’s a select list, and the last option in the select list is “Other”, and then the next form field is, “If Other, please specify,” with a text field. This replaces both: you’ve given them a choice, but you’ve also given them the option to type whatever they want. Really, really handy little pattern using the datalist element.
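The markup for that “How did you hear about us?” example might look like this (the option values are made up):

```html
<label for="source">How did you hear about us?</label>
<input type="text" id="source" name="source" list="sources">
<datalist id="sources">
  <option value="Search engine">
  <option value="A friend told me">
  <option value="Conference talk">
</datalist>
```

In a browser without datalist support the suggestions simply don’t appear, but the text input carries on working.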

Another attribute we can use these days is placeholder. Like datalist, the placeholder pattern is something we could do before, but it required JavaScript, probably a jQuery plug-in: more stuff to download. When there’s nothing in the form field, you want some kind of greyed-out text that shows an example of the expected input. We used to do that with JavaScript; now we can do it in HTML. We just say: here’s the placeholder text.

And I noticed that, for anything using the datalist element, the options in the datalist are effectively possible inputs, and what you’re supposed to put into the placeholder is a possible input, so for any input with a datalist associated, you could programmatically just grab any one of those options and put it in there as the placeholder. That’s what I’ve done there: it just randomly grabs one of those suggestions and puts it in as the placeholder text for the input. I’ve put that JavaScript up online as well, so feel free to grab it and drop it into your site and it’ll just work straight away.
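A sketch of that idea (not necessarily the exact script mentioned): for every input with an associated datalist, use one of the datalist’s own option values as the placeholder text.

```javascript
// Pure helper: pick a random item from an array of values.
function randomOption(values) {
  return values[Math.floor(Math.random() * values.length)];
}

// Browser-only wiring: enhance each input that points at a datalist.
if (typeof document !== 'undefined') {
  document.querySelectorAll('input[list]').forEach(function (input) {
    var datalist = document.getElementById(input.getAttribute('list'));
    if (!datalist || input.placeholder) {
      return; // no datalist found, or an author-supplied placeholder exists
    }
    var values = Array.prototype.map.call(datalist.options, function (option) {
      return option.value;
    });
    if (values.length) {
      input.placeholder = randomOption(values);
    }
  });
}
```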

It won’t work in every browser; not every browser supports datalist; not every browser supports placeholder. Do you know what? That’s absolutely fine, because the core content here, the task, is inputting something and hitting that button and going to the next screen. And none of this affects that, other than to make it a bit better, to enhance it. And this is the approach I’ve been taking with The Session, it’s the approach I’ve been taking online for as long as I’ve been making websites, this idea of progressive enhancement.

There are some myths about progressive enhancement: I think there’s this idea that progressive enhancement means designing for the lowest common denominator, which isn’t true. Progressive enhancement is about starting from the lowest common denominator and then building up, but there’s no limit to where you can go.

The idea is, progressive enhancement in a nutshell is, you’ve got these layers of technology and you begin with your HTML and you structure your HTML well and it works as a standalone thing; the hyperlinks are proper hyperlinks, the forms work as forms. Then you can add your CSS, you’re enhancing using styles. And then you can add your JavaScript, add in all sorts of whizzy goodness: that’s great. But, if the JavaScript fails or the CSS fails, that’s fine, you’ve still got well-structured content, well-structured markup.

But progressive enhancement goes further than that because at each level, at each part of the stack, HTML, CSS, JavaScript, you can use progressive enhancement. placeholder attribute, datalist element, new input types: I can use those in HTML today as an enhancement, and that’s great. That allows us to evolve the language. And the reason I can do that, as I said, is that browsers won’t choke on that stuff; older browsers won’t throw an error. If a browser sees an element it doesn’t recognise or an attribute it doesn’t recognise, it doesn’t halt parsing of the page and throw an error and say the user can’t read the page, it just goes, “Nah, don’t understand it, going to carry on.” That’s actually really, really powerful, that kind of error handling.

It’s the same with CSS; if a parser sees a selector it doesn’t understand or a value or a property, it just moves onto the next one. That’s enormously powerful, because that allows us to keep on expanding CSS. This is how we can keep adding stuff to CSS; you can start using the new stuff today, even when it’s only in one or two browsers, because you’re safe in the knowledge that no browser’s going to choke on it; it’s not going to break any other browser.
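That error handling is what makes fallback declarations possible; a browser that doesn’t understand the second value simply keeps the first (the values here are illustrative):

```css
.main {
  width: 85%;               /* understood by every browser */
  width: calc(100% - 2em);  /* ignored where calc() is unsupported */
}
```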

Now, it’s a bit different with JavaScript because with JavaScript if you use a property or a method that the browser doesn’t understand, it will throw an error; it has a different error handling model, so you have to be a bit more careful in JavaScript and always test: “do you understand this property?” like geolocation or something. If so, do this stuff. But once you do that, you can apply progressive enhancement at the JavaScript level too, but that error handling of JavaScript can be tricky.
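That test-before-you-use-it pattern, with geolocation as the example, might look like this (a sketch; the success handler is just an illustration):

```javascript
// Feature detection: because an unknown property throws an error in
// JavaScript (unlike unknown HTML or CSS, which is silently ignored),
// check for support first and only enhance if the check passes.
function supports(host, feature) {
  // The 'in' operator checks for the property without invoking it.
  return Boolean(host) && feature in host;
}

// Browser-only usage, with geolocation as the example from the talk.
if (typeof navigator !== 'undefined' && supports(navigator, 'geolocation')) {
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  });
}
```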

So, progressive enhancement, the idea is that it’s about being robust; it’s about catering for the situations you can’t imagine where something goes wrong that you haven’t foreseen.

One of the descriptions that’s always given for progressive enhancement; it’s like escalators: they can never break, they can just become stairs. Although in this case, clearly they didn’t get the memo because I can’t climb these now-stairs. But it’s kind of a good way of thinking about how we should use JavaScript. It’s like electricity: it should be used to enhance things, so an escalator is an electric stairs or a moving walkway is just a floor. How can a floor be out of service? I don’t know. A moving walkway is just a progressively enhanced floor. Or here we are: an electric toothbrush is just a progressively enhanced toothbrush; it doesn’t break, it just goes back to being an old-fashioned toothbrush.

I think with JavaScript, that’s definitely the way we should approach it. Enhancements. There’s no limit to what we can do with JavaScript but I think from a philosophical point of view, we need to treat it as a technology that’s used to enhance what’s already there which is in the markup, which is where the content is.

And yet, I see sites that use JavaScript for literally everything: for the markup, for the content, for the core tasks. They rely on JavaScript, which I think is dangerous. This is Squarespace if you have JavaScript turned off: go to squarespace.com without JavaScript and this is what you get; the page has finished loading at this point. This is my Facebook screen without JavaScript: a nice big white blank expanse. Although they have a bug report for this saying, what do I do when my homepage is blank? They’ve got a fix for that: it’s turn on JavaScript!

But this actually isn’t the point. It isn’t about people turning off JavaScript or browsers that don’t support JavaScript. Frankly, it’s actually getting harder and harder to switch off JavaScript in most browsers; that’s not what progressive enhancement is about. It’s about how you use the JavaScript you’re using; it’s about not relying on the JavaScript. And you might think, well, I’ve looked at my audience and I know the kind of devices they’re using; all those devices are JavaScript-capable. Yeah, but stuff happens that you can’t predict, and progressive enhancement allows you to be ready for the unpredictable.

As Scott Jehl said:

Every user is a non-JavaScript user while a page is still loading.

You want to make sure your page works; maybe it’s not a great experience without JavaScript; that’s fine, but as long as it works and then you use JavaScript to make it a great experience.

This is the page for downloading Chrome from Google. I took this screenshot, I think it was last year. For two hours, nobody could download Chrome; it didn’t work, because there was an error somewhere in a big JavaScript file. That button is nominally a link, but look at the href value: instead of pointing to an actual file or a page like a proper link, it’s the JavaScript pseudo-protocol, so effectively it’s not really a link at all. Because of one error, completely unrelated, somewhere in that JavaScript file, this link did not work as a link and for two hours, nobody in the world could download Google Chrome. I’m pretty sure heads rolled for that one.
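The difference between the fragile and the robust version of a button like that might be sketched like this (the URLs and the handler name are hypothetical):

```html
<!-- Fragile: not really a link; one unrelated script error kills it. -->
<a href="javascript:void(0)" onclick="startDownload()">Download Chrome</a>

<!-- Robust: a real link that works regardless; JavaScript can still
     intercept the click as an enhancement. -->
<a href="/download/installer" id="download-link">Download Chrome</a>
```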

My friend Andy Hume said:

Progressive enhancement is more about dealing with technology failing than dealing with technology not being supported.

And I think that’s very true actually; like I said, it’s preparing for the unexpected.

I see this kind of stuff all the time, this assumption that JavaScript will be available; this reliance on JavaScript which, like I said, just from a purely engineering perspective doesn’t make sense because of the error handling in HTML and CSS. If you make a mistake in your HTML, the browser’s going to be very forgiving. If you make a mistake in your CSS, the browser’s going to be very forgiving. They just ignore stuff they don’t understand. If you make a mistake in your JavaScript, the browser is not going to be forgiving: it will throw an error and it will stop executing the script at that point. So just from an engineering perspective, from Occam’s razor, if you want a robust page, it makes sense not to rely on JavaScript.

Don’t get me wrong: I’m not saying, don’t use JavaScript. I love JavaScript. It’s just how you use it, how you deploy it, as an enhancement, not this or this on Flickr, this JavaScript pseudo-protocol. Or this on Foursquare. It’s not even a link or a button: it’s a span where they’ve got styles to make it look like a link, and they’ve probably got JavaScript to make it act exactly like a link. Just use a link! Or a button. This stuff makes me angry, it really does!

On The Session, I wanted to make sure I was using progressive enhancement, and probably the most complex part of The Session now is converting the ABC to sheet music. As I said, I used to be the bottleneck there; I used to have to do this by hand, I wanted this to be automated. By default you’ve got your ABC file there and there’s a button to turn it into sheet music and that is a proper button inside a form, and by default it just generates something on the server side; it generates a gif. This is actually on a third party server; I should really bring it in-house and do it on The Session itself but by default, without JavaScript, without any technology, this still works, and it will generate an image file like that.

But if you have JavaScript available, then what I did is I capture that click on the button and instead of going off to another page, we stay on the page we’re on, we use a little bit of Ajax and we bring in the sheet music into the page, so this is an enhancement. Everyone will be able to look at sheet music; it’s just the experience is much nicer if you’ve got JavaScript and if you support the technologies.
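That capture-the-click enhancement might be sketched like this (the element ids, field name, and GET-based endpoint are assumptions, not the actual code from The Session):

```javascript
// Hijack the "sheet music" button: without JavaScript the form submits to
// the server and a new page shows the generated notation; with JavaScript
// we stay on the page and pull the notation in with Ajax instead.

// Pure helper: turn the form values into a query string for the request.
function queryString(params) {
  return Object.keys(params).map(function (key) {
    return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
  }).join('&');
}

// Browser-only wiring; ids and field names are hypothetical.
if (typeof document !== 'undefined' && typeof fetch === 'function') {
  var form = document.querySelector('#abc-form');
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // don't leave the page
    fetch(form.action + '?' + queryString({ abc: form.elements.abc.value }))
      .then(function (response) { return response.text(); })
      .then(function (markup) {
        document.querySelector('#notation').innerHTML = markup;
      });
  });
}
```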

One of the ways that this is much nicer, I think, is that this sheet music that’s generated here isn’t a gif: this is SVG so it’s nice and crisp because some genius out there has written an ABC to SVG converter in JavaScript, which is just insane. Atwood’s law states that anything that can be written in JavaScript will be written in JavaScript! I think that’s very true. Somebody has ported UNIX to JavaScript; it’s getting pretty crazy.

But SVG I love. I love SVG; it’s so crisp. You can bump up the font size and it still stays crisp. Look at it on any device: it’s automatically responsive; it just fits the container. I’m using SVG somewhere else as well, actually: on the member profiles I have these sparklines.

I love sparklines. Sparklines, it’s a term coined by Edward Tufte and he describes them as:

A small, intense, simple word-size graphic with typographic resolution.

Again, this kind of enhancement, the information is already there in text and you just enhance it with this nice little visual enhancement with a sparkline.

I’d used sparklines before on Huffduffer, where I was using the Google Charts API, because you could use that API as an image source: you point the img src at a Google Charts URL, pass it all the parameters, and it’ll generate a chart for you, and it could generate a sparkline for you. It was a really, really useful service, really good API. And now Google’s shutting it down. Because that’s what Google do.

When you’ve been making websites long enough, you learn to trust no one. Especially when it comes to APIs being available. No, no, no. Over a long enough timescale, all APIs disappear.

So I did not want to rely on any third party service for this, so OK, I’ll figure out how to make sparklines myself. I thought, somebody must have done it: I’m looking on GitHub, I’m looking on Stack Overflow. Well, on Stack Overflow of course, all the answers are, “use this jQuery plug-in.” Not knocking jQuery, but jQuery is kind of like the spam of Stack Overflow: if you’re trying to solve any problem and you don’t want to use jQuery, good luck!

I had to write it myself, so I wrote this little script that generates sparklines, it generates a little canvas element with a sparkline and I put it up on GitHub and I wrote a blogpost saying, here’s this thing I wrote to make sparklines using canvas.

But I finished the blogpost by saying, this doesn’t feel right; I don’t think canvas is the right element here because it’s not a dynamic image, it’s not going to move or anything; it’s just staying still and actually, SVG would be better, so if someone wanted to make an SVG version, that would be great. I swear to God, two hours later, somebody had converted it to SVG. I love the web! I love the fact that this kind of sharing and evolution…I love that kind of stuff. And with GitHub now, especially, it makes sharing so much easier and people should improve other people’s code: I love it.

Anyway, somebody else had basically made a service out of this where you can just pass in the numbers and get back an SVG. So now I’ve got sparklines nice and crisp using SVG and this is the crazy thing: every sparkline, no matter how different it looks, is the same SVG file, and I just pass in different numbers.
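The pass-in-the-numbers idea can be sketched as one function that scales any list of numbers into a fixed-size SVG polyline (a sketch of how such a service could work, not the actual service’s code):

```javascript
// Map a list of numbers (at least two) onto "x,y" pairs for an SVG
// polyline, scaled to fit the given width and height.
function sparklinePoints(values, width, height) {
  var max = Math.max.apply(null, values);
  var min = Math.min.apply(null, values);
  var range = (max - min) || 1; // avoid dividing by zero on flat data
  var step = width / (values.length - 1);
  return values.map(function (value, i) {
    // SVG's y-axis points down, so invert the value.
    var y = height - ((value - min) / range) * height;
    return (i * step) + ',' + y;
  }).join(' ');
}

// Wrap the points in a word-sized, automatically scalable SVG document.
function sparklineSVG(values, width, height) {
  return '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ' +
    width + ' ' + height + '"><polyline fill="none" stroke="currentColor" ' +
    'points="' + sparklinePoints(values, width, height) + '"/></svg>';
}
```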

Now wait a minute: SVG, that’s an image format, right? No, not quite. SVG is simultaneously an image, with the output that you see, and text, because it’s a markup document; it’s a form of XML. You can view source on SVG. And because it’s a markup document, that means you can put scripts inside the SVG file, which is what’s happening here. It’s looking at the query string and figuring out what to draw; the script is coming from inside the SVG file! You can put style declarations inside SVG files too. So, you know I showed earlier how I made my logo look different at different sizes? I was using media queries there, but you could put media queries inside a style sheet inside an SVG file to make an automatically responsive logo, or any other image that will respond to its containing element. SVG is awesome! I love it!

Initially when I was adding these SVG sparklines on the member profiles, it was computationally kind of expensive, so it was holding up the loading of the page, so I didn’t download them initially; I waited ‘til the page was loaded and then I fired off a new request using Ajax to grab those sparklines and put them into the page because I wanted to get better performance. This idea of having your core content loaded and then you do some loading afterwards, for the enhancements for the nice to have stuff, it’s this idea of conditional loading, where maybe you load in the stuff, maybe you won’t, by doing a test in JavaScript. Conditional loading I feel is something really, really important, especially for responsive design and it doesn’t get talked about enough, in my opinion.

A lot of people kind of pooh-pooh responsive design because they make the mistake of thinking that you’re serving up exactly the same thing to every browser, but using conditional loading, what you can do is serve up your core content to every browser, your basic core content, and then after the DOM is ready, then you can do some testing and say okay, if the screen is wide enough or some other parameters, then I want to load in this other content as well. Not core content, but nice to have content. Like on the front page of The Session, this is just the content, it’s loaded in, it’s grabbed from the server but down at the bottom you’d see a bunch of links off to Twitter or Facebook, Flickr, because there’s a Flickr pool of photos of Irish music and stuff. So, after the page has already loaded, then I use some JavaScript and I say, you know what, if the screen is wider than a certain width, I’ve got enough room, let’s pull in some of those photos—this is conditional loading—and then just display them in-line. But it’s not going to hold up the displaying of the page. This will happen after page load, the user can already carry on with what they’re doing.
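That photos-if-there’s-room behaviour can be sketched like this (the breakpoint, URL, and element id are assumptions):

```javascript
// Conditional loading: the core content is already in the page; after the
// page has loaded, test the conditions and only then fetch the extras.

// Pure helper: is there enough room for the nice-to-have content?
function shouldLoadExtras(viewportWidth, breakpoint) {
  return viewportWidth >= breakpoint;
}

// Browser-only wiring.
if (typeof window !== 'undefined' && typeof fetch === 'function') {
  window.addEventListener('load', function () {
    if (shouldLoadExtras(window.innerWidth, 640)) {
      fetch('/extras/photos')
        .then(function (response) { return response.text(); })
        .then(function (markup) {
          document.querySelector('#extras').innerHTML = markup;
        });
      // If this request fails, no harm done: the core content is already there.
    }
  });
}
```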

In fact, this is kind of the perfect place to use conditional loading because it’s third party content. I’m not now relying on Flickr’s servers to be up all the time to render my page. If something goes wrong with the third party service, that’s okay. So a great place to use conditional loading is if you have those buttons somewhere, like saying “Like” or “+1” or “Tweet this” or “Follow me”; all that kind of stuff. First of all, it makes you look really desperate—just sayin’—but secondly, you are relying on a third party service for the rendering of your document. The way that most of those little widgets work is they say, “Oh, just insert this script element into the middle of your page.” A script element, which will block rendering until the source has been retrieved. So you might think, that’s okay, Twitter’s servers are always going to be up, right? And then someone tries to look at your page in China and your page never finishes loading because that script element never finishes loading because Twitter is blocked in China.

That’s just one example, but again, like I said, you’ve been in this business long enough to get paranoid; you don’t want to rely on a third party service for anything. Conditional loading allows us to have the best of both worlds; I’m going to get the value from these third party services, but I’m not going to be reliant on them. Conditional loading: really handy for that.

To speed things up a little when you know you’re going to be making a request out to some other server, in the head you can do a DNS pre-fetch. So this is a rel value in the link element, DNS pre-fetch and then you point to the third party service you’re going to be using and this is just a hint to the browser that you’re going to be getting something from this domain, so you might want to do the DNS look-up for that domain when you’ve got a chance, when the browser’s ready and has a moment.

There’s a whole bunch of these kind of rel values that help you squeeze a little bit more performance from the browser. You can pre-fetch, if you’re pretty sure that the user is going to go to a specific URL next, you’re pretty confident about that, you can tell the browser that it might want to pre-fetch that page in the background, when it’s ready, when it’s got time. It’s not a command to the browser, it’s a suggestion to the browser; it’s still left to the discretion of the browser. And if you’re very confident that the user is going to visit a particular page next, you can even use pre-render, though I would say be very careful with this one. Be careful because that’s a big assumption to be making, but could result in very, very snappy loading in the browsers that support this.
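Gathered together, those hints look like this in the head (the URLs are placeholders; each one is a suggestion the browser is free to ignore):

```html
<link rel="dns-prefetch" href="//thirdparty.example.com">
<link rel="prefetch" href="/probably-next-page">
<link rel="prerender" href="/almost-certainly-next-page">
```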

I find myself trying to squeeze every bit of performance out of the browsers; the performance is so, so important. It always was: performance always was important, but somehow we got lazy there for a while, like I remember back in my day …in the nineties, we were making the smallest image files we possibly could, we only had 216 colours to work with. Tell that to the young people today: they wouldn’t believe you. And we would keep our page sizes really low as well, right? These days we’ve got page sizes in the megabytes. What happened to us? Some time in the last ten years, a memo went around saying, “Everybody’s on broadband now; everybody’s got great big monitors now, we do not have to worry about optimising anything, it’s all taken care of: good job.” I missed that memo. We all got really lazy and thank heaven for responsive design and the rise of the mobile and all these other devices because now, “Oh no, I have to optimise everything and make it really snappy.” And you think, hell yeah, you should’ve been doing that anyway. That was always a skill of being a web designer and somehow we lost that skill and now we’re getting it back. Now, performance is so, so important, you can’t ignore it. It’s the single most important part of the user’s experience. If you are a UX designer and you’re not thinking about performance, you’re not doing your job.

I like the nerdiness of trying to squeeze every bit of performance out. I was really disappointed actually that Google Page Speed have taken the score out of the reports they give you when you hit the page speed thing; I used to love trying to get that number as high as I could. It’s gameification, I know, but I used to love trying to get the highest score I possibly could. Now, they don’t give you a score any more, it’s a shame. But still: performance. I can’t repeat it enough.

And this is the interesting thing: with the timescales I’m concerned with here, with this website which has been online for over a decade and is going to be online for much longer than that, I hope, I’ve got two very different timescales I’m looking at. One is decades, maybe even centuries; at the other end, nanoseconds, microseconds, the really, really short: how fast does this load? How quickly can the user accomplish their task? These two very different timescales are what interest me. And all the stuff in the middle, I kind of don’t care about as much: days, weeks, months. And yet, in our work, those are the timescales we tend to concentrate on: when is this shipping? It’s shipping next month. When is the deadline? Get those files to me by Friday. This website needs to be live by this particular date this year. We’re thinking in pretty short timescales. And actually, if you want to think long term, if you really want to be prepared for what’s coming (and we don’t know what’s coming; we don’t know what kinds of devices are out there), then thinking in terms of the past can actually be your best bet.

It’s this idea of being future-friendly, trying to somehow prepare for the future. You can’t be future-proof: nobody can predict the future, but there will be more variation in devices, more variation in the ways that people will be accessing and using your content. You can’t predict that, but you can prepare for it, and the best way to be future-friendly is to be backwards compatible. Stick to the robust technologies, starting with well-formed markup, adding your CSS as an enhancement. Remember that, every time you write a style declaration, you are not telling the browser what to do: you are suggesting something to the browser; it’s important to remember that. And don’t rely on JavaScript, or any other technology; don’t put all your eggs in one basket, particularly a basket with an error-handling model like JavaScript’s.

Progressive enhancement is a way to be future friendly, in my opinion. And there are people thinking about the long term view here; like The Long Now Foundation. Who’s familiar with The Long Now Foundation? Okay, one or two people. Any members of The Long Now Foundation here? Not today. It’s kind of an obscure thing, but they concentrate on long-term timescales; they have projects that are thinking in the hundreds and thousands of years. Probably the most famous one is the Clock of the Long Now. This is a clock that will tell time for ten thousand years. It’s a scale-free clock, it’s being built inside a disused mine in Nevada. It’s being built; this isn’t some theoretical thing, they’re actually building something that’s going to tell time for ten thousand years, which I really like because it gets you thinking about those kind of timescales. You’re thinking about the engineering problems, you’re thinking about long-term problems.

Looking at the web, all these different formats I mentioned at the start, the ones I’m using on this one particular website, are text formats, and I think that increases their longevity, because binary formats tend to last for shorter periods of time: image formats, video formats. That concerns me a lot; we’re putting so much of our collective culture up online in binary formats. Whether the JPEG format will still be supported in a few decades’ time is not clear, and when it comes to video, yeah, that’s even more of a mess. But text formats, because they’re human-readable, probably stand a slightly better chance of surviving.

But the most important thing is to just be thinking about this stuff. Next time somebody says to you, “The internet never forgets”, just call bullshit on that. It’s absolute bollocks! Look at the data. The internet forgets all the time. The average lifespan of a web page is months, and yet people are like, “Oh, you’ve got to be careful what you put online, it’ll be there forever: Facebook never forgets, Google never forgets.” No, I would not entrust our collective culture, our society’s memory to some third party servers we don’t even know. Certainly not to the Cloud, whatever that means, the Cloud. What a bullshit term! I mean, it’s just…(applause) it’s just another word for somebody else’s server. Next time somebody talks about the Cloud, just substitute Somebody Else’s Server. It’s on a hard-drive somewhere. What I do is I mentally substitute the word “Moon” when someone says “Cloud” and it makes just as much sense but it’s way more entertaining!

But like I said, just thinking about it, thinking about how long stuff is going to be online, thinking about what formats you’re going to use. These formats, I don’t know whether they’ll last. A lot of these formats will probably disappear, although this one I have high hopes for: HTML, because they are thinking about the long-term picture when it comes to HTML. Håkon Wium Lie, the co-founder of Opera Software and one of the creators of CSS, placed a bet that HTML, the format Tim Berners-Lee created along with URLs and HTTP when he was first creating the web, would still be around in fifty years. Fifty years. That’s a ridiculously long timescale for a computer format. If you know anything about the history of computing, you’ll know that formats die off all the time. Fifty years: a crazy timescale. Now, I think that bet actually looks pretty safe, and this is not by accident. HTML is in it for the long term, and it’s got to be backwards compatible. Future friendly and backwards compatible: by design.

I found an old email from Ian Hickson to a mailing list, years ago; he was talking about why he got into HTML, he’s the editor of the HTML spec at the WHATWG, and he said:

The original reason I got involved in this work is that I realised that the human race has written literally billions of electronic documents but without ever actually saying how they should be processed. I decided that, for the sake of our future generations, we should document exactly how to process today’s documents so that when they look back, they can still re-implement HTML browsers and get our data back.

That is thinking about our culture, about our society, about our preserving what we’re putting online, and that’s kind of all I ask of you, is to think about The Long Web, to think about the long term consequences of what we’re doing because I don’t think we do it enough.

It isn’t just about what we’re doing today. We are building something greater than the Library of Alexandria could ever have been and that is an awesome—in the true sense of the word—an awesome responsibility.

You’re going to be hearing about more technologies today; you’ve heard about technologies yesterday, techniques, processes. And as you’re evaluating all of the things you’re learning over these two days, I want you to just think also about the longevity, the consequences and the long term effects of what we’re building and think about The Long web.

Thank you.

Licence

This presentation is licensed under a Creative Commons attribution licence. You are free to:

Share
Copy, distribute and transmit this presentation.
Remix
Adapt the presentation.

Under the following conditions:

Attribution
You must attribute the presentation to Jeremy Keith.

Sending @Brad_Frost off to London.

I just couldn’t take any more of his incessant cheerful whistling and knuckle-cracking.

The typo in this image caption has me imagining Hungaryon Friday as a character in Game Of Thrones.

May 24th, 2015

“Sounds like a funfair—hamburger menus and carousels.”

100 words 063

Brad is visiting Brighton this weekend after his stint at UX London. I’ve been showing him around town, introducing him to the finest coffee, burgers, and beers that Brighton has to offer.

We travelled out to Lewes yesterday evening to partake in Jamie’s birthday celebrations. There followed a night of dancing to a wonderfully fun punk covers band, complete with guest vocal appearances from the extended Freeman family: Jamie doing Elvis Costello, his brother Tim doing The Sex Pistols, his other brother Martin doing The Jam, and his cousin Ben doing The Stranglers.

Ah, so much nostalgia and revisited youth!