Tags: smashingconf


Thursday, November 9th, 2017

Jeremy Keith on Evaluating Technology at SmashingConf Barcelona 2017 on Vimeo

I think this is the best delivery of this talk I’ve ever given. It was something about being in that wonderful venue.

I got quite worked up around the 32-minute mark.

Tuesday, September 19th, 2017

Evaluating Technology

A presentation from the Beyond Tellerrand conference held in Düsseldorf in May 2017. I also presented a version of this talk at An Event Apart, Smashing Conference, Render, Frontend United, and From The Front.

I’m going to show you some code. Who wants to see some code?

All right, I’ll show you some code. This is code. This is a picture of code.

Photograph 51

The code base in this case is the deoxyribonucleic acid. This is literally a photograph of code. It’s the famous Photograph 51, which was taken by Rosalind Franklin, the X-ray crystallographer. And it was thanks to her work that we were able to decode the very structure of DNA.

Rosalind Franklin

The code is base-4, unlike the binary code we work with in computers: A, C, G, T, standing for Adenine, Cytosine, Guanine, Thymine. From those four simple ingredients we get DNA, and from DNA we get every single life form on our planet: mammals, birds, fish, plants. Everything is made of DNA. This huge variety from such simple building blocks.

Apollo 11 Mission Image - Earth view over Central and North America

What’s interesting, though, is if you look at this massive variety of life on our planet, you start to see some trends over time as life evolves through the process of natural selection. You see a trend towards specialisation, a species becoming really, really good at something as the environment selects for fitness. A trend towards ubiquity as life attempts to spread as far as possible. And, interestingly, a trend towards cooperation, that a group could be more powerful than an individual.

Now we’re no different to any other life form, and this is how we have evolved over time from simpler beginnings. I mean, we like to think of ourselves as being a more highly evolved species than other species, but the truth is that every species of life on this planet is the most highly evolved species of life on this planet because they’re still here. Every species is fit for its environment. Otherwise they wouldn’t be here.

This is the process, this long, slow process of natural selection. It’s messy. It takes a long time. It relies on errors in the code getting selected for. This is the process that we human beings have gone through, the same as every other species on the planet.

But then we figured out a way to hack the process. We figured out a way to get a jumpstart on evolution, and that’s through technology. Through technology we can bypass the process of natural selection and augment ourselves, extend our capabilities like this.

Acheulean hand ax

This is a very early example of technology. It existed for millions of years in this form, ubiquitous, across the planet. This is the Acheulean hand ax. We didn’t need to evolve a sharp cutting tool at the end of our limb because, through technology, we were able to create a sharp cutting tool at the end of our limb. Then through that we were able to extend our capabilities and shape our environment.

We shape our tools and, thereafter, the tools shape us.

And we have other tools. This is a modern tool, the pencil. I’m sure you’re all familiar with it; you use it all the time. I think it’s a great piece of technology, with great affordances: a built-in progress bar, and it’s got an undo at the end.

I, Pencil

What’s interesting is if you look at the evolution of technology and you compare it to the evolution of biology, you start to see some of the same trends; trends towards specialisation, ubiquity, and cooperation.

The pencil does one thing really, really well. The Acheulean hand ax does one thing really, really well.

All over the world you found Acheulean hand axes, and all over the world you will find the pencil in pretty much the same form.

And, most importantly of all, cooperation. No human being can make a pencil. Not by themselves. It requires cooperation.

There’s a famous essay by Leonard Read called I, Pencil. It’s told from the point of view of a pencil, and it describes how making a pencil requires cooperation. It requires human beings to come together: to fell the trees to get the wood, to get the graphite, to put it all together. No single human being can do that by themselves. We have to cooperate to create technology.

You can try to create technology by yourself, but you’re probably going to have a hard time. Like Thomas Thwaites, he’s an artist in the U.K. You might have seen his most recent project. He tried to live as a goat for a year.

The toaster project

This is from a while back where he attempted to make a toaster from scratch. When I say from scratch, I mean from scratch. He wanted to mine his own metals. He wanted to smelt the steel. He wanted to create the plastic, wire it all up, and do it all by himself. It was a very interesting process. It didn’t really work out. I mean it worked for like a second or two when he plugged it in and then completely burned out, and it was prohibitively expensive.

When it comes to technology, cooperation is built in, along with those other trends: specialisation, ubiquity.

It’s easy to think when we compare these trends in biology and technology and we see the overlap, to fall into the trap of thinking they’re basically the same process, but they’re not. Underneath the hood the process is very different.

In biology it’s natural selection, this long, messy, slow process. But, kind of like DNA, very simple building blocks result in amazing complexity. With technology it’s kind of the other way around. Nature doesn’t imagine the end result of a species and then work towards that end result. Nature doesn’t imagine an elephant or an ostrich; those are just the end results of evolution. Whereas with technology, we can imagine things, design things, and then build them. We can picture something in our mind that we want to exist in the world and then work together to build it.

Now one of my favourite examples of imagining technology and then creating it is a school of design called Chindogu, created by Kenji Kawakami, who started the International Chindogu Society in 1995. There are goals and principles behind Chindogu, and the main one is that these pieces of technology must be not exactly useful, but somehow not altogether useless.

Noodle cooler

I’ll show you what I mean, and you’ll get the idea. You look at these things and you think: that’s crazy. But actually, is it crazy or is it brilliant? Like this one. I think, well, that’s ridiculous. Well, actually: not entirely useless, not exactly useful, but, you know, keeping your shoes dry in the rain? That seems sort of useful.

Butter stick Shoe umbrellas

They’re described as being un-useless. These are un-useless objects. But why not? I mean why not harvest the kinetic energy of your child to clean the floors? If you don’t have a child, that’s fine. It works other ways.

Toddler mop Cat mop

These things are fun to imagine and to create, but you couldn’t imagine them actually being used out in the world. You couldn’t imagine mass adoption. Like, I found this thing in the book of Chindogu from 1995, and it describes a device where you put a camera on the end of a stick so you can take self portraits. But you couldn’t really imagine anyone actually using something like that out in the world, right?

Selfie stick

These are all examples of what we see in the history of technology. From Acheulean hand axes to pencils to Chindogu, these are bits of hardware. When we think of technology, that’s what we tend to think of: bits of hardware. And the hardware is augmenting the human; the human is using the hardware to gain benefit.

Something interesting happened in the 20th Century when we started to get another layer in between the human and the hardware, and that’s software. Then the human can interact with the software, and the software can interact with the hardware. I would say the best example of this, looking back through the history of technology of the last 100 years or so, would be the Apollo Program, the perfect mixture of human, software, and hardware.

Apollo 11 Mission Image - View of Moon limb and Lunar Module during ascent, Mare Smythii, Earth on horizon

By the way, seeing as we were just talking about selfies and selfie sticks, I just want to point out that this picture is one of the very few examples of an everyone-elsie. It was taken by Michael Collins in the Command Module. Neil Armstrong and Buzz Aldrin are in that spaceship, and every human being alive on planet Earth is also in this picture, with one exception: Michael Collins, the person taking the picture. It’s an everyone-elsie.

I think the Apollo Program is the pinnacle of human achievement so far, and a perfect example of this mixture: amazing humans required to do it, amazing hardware to get them there, and amazing software. It’s hard to imagine how it would have been possible to send people to the moon without the work of Margaret Hamilton, who wrote the onboard flight software and helped establish entire schools of thought in software engineering.

Margaret Hamilton

Since then, looking at the trend of technology from that point onwards, you start to notice that the hardware becomes less and less important and the software is what really starts to count. With Moore’s law and everything like that, we can put more and more complexity into the software. Maybe the end goal of technology is that the hardware eventually becomes completely irrelevant and fades away: this idea of design dissolving in behaviour.

WWW

This idea of making the hardware irrelevant was, in a way, at the heart of the World Wide Web project created by Tim Berners-Lee when he was at CERN. CERN is an amazing place, but everybody just kind of does whatever they want. There’s almost no hierarchy, which means everybody uses whatever kind of computer they like; you can’t dictate to people at CERN that they all must use a particular operating system. That was at the heart of the World Wide Web project: the idea of making the hardware irrelevant. It shouldn’t matter what kind of computer you’ve got; you should still be able to access information.

Tim Berners-Lee

We kind of take that for granted today, but it was quite a revolutionary thought. We don’t worry about it now: you make a website, and of course you can look at it on a Windows device or a Mac or a Linux machine, on an iOS device or an Android device. Of course. But it wasn’t clear at the time. Back then you made software for specific operating systems, so this idea of making the hardware irrelevant was revolutionary.

The World Wide Web project is a classic example of a piece of technology that didn’t come out of nowhere. Like every other piece of technology, it built on what was already there. You can’t have Twitter or Facebook without the World Wide Web; you can’t have the World Wide Web without the Internet; you can’t have the Internet without computers; you can’t have computers without electricity; and you can’t have electricity without the Industrial Revolution. It’s building on the shoulders of giants all the way up.

There’s also this idea of the adjacent possible: the point at which these things become possible. You couldn’t have had the World Wide Web right after the Industrial Revolution because the intervening steps hadn’t yet taken place. It’s something the author Steven Johnson talks about: the adjacent possible. It was impossible to invent the microwave oven in 16th-century Holland because too many other things needed to be invented first.

It’s easy to see this as an inevitable process that, of course electricity follows industrialisation. Of course computers come, and of course the Internet comes. And there is a certain amount of inevitability. This happens all the time in the history of technology where there’s simultaneous inventions and people are beating one another to the patent office by hours to patent, whether it’s radio or the telephone or any of these devices that it seemed inevitable.

I don’t think the specifics are inevitable. Something like the World Wide Web was inevitable, but the World Wide Web we got was not. Something like the Internet was inevitable, but not the Internet that we got.

The World Wide Web project itself has these building blocks: HTTP as the protocol, URLs as identifiers, and HTML as a simple format. Again, these formats are built upon what came before. Because it turns out that making the technology (creating a format or a protocol or a spec for identifying things) is, not to belittle the work, actually not the hard part. The hard part is convincing people to use the protocol, convincing people to use the format.

Grace Hopper

That’s where you butt up against humans. How do you convince humans? Which always reminds me of Grace Hopper, an amazing computer scientist: Rear Admiral Grace Hopper, co-inventor of COBOL and inventor of the compiler, without which we wouldn’t have computing as we know it today. She bumped up against this all the time; people were reluctant to try new things. She had this phrase: “Humans are allergic to change.” Now, she used to try and fight that. In fact, she had a clock on her wall that ran backwards, simply to demonstrate that it’s an arbitrary convention. You could change the convention.

She said the most dangerous phrase in the English language is, “We’ve always done it that way.” So she was right to notice that humans are allergic to change. I think we could all agree on that.

But her tactic was, “I try to change that,” whereas with Tim Berners-Lee and the World Wide Web, he sort of embraced it. He sort of went with it. He said, “Okay. I’ve got these things I want to convince people to use, but humans are allergic to change,” and that’s why he built on top of what was already there.

He didn’t create these things from scratch. HTTP, the protocol, is built on top of TCP/IP, the work of Bob Kahn and Vint Cerf. The URLs work on top of the Domain Name System and the work of Jon Postel. And HTML, this very simple format, was built on top of a format, a flavour of SGML, that everybody at CERN was already using. So it wasn’t a hard sell to get people to use HTML because it was very familiar.

In fact, if you were to look at SGML back then in use at CERN, you would see these elements.

<body> <title> <p> <h1> <h2> <h3> <ol> <ul> <li> <dl> <dt> <dd>

These are SGML elements used in CERN SGML. You could literally take a CERN SGML document, change the file extension to .htm, and it was an HTML document.

It’s true. Humans are allergic to change, so go with that. Don’t make it hard for them.

Now of course, we got these elements in HTML. This is where they came from: they were taken wholesale from SGML. Over time, we got a whole bunch more elements. We got more semantic richness added to HTML, so we can structure our documents more clearly.

<article> <section> <aside> <figure> <main> <header> <footer>

Where it gets really interesting is that we also got more behavioural elements added to HTML, the elements that browsers recognise and do quite advanced things with like video and audio and canvas.

<canvas> <video> <audio> <picture> <datalist>

What I find fascinating is that we can evolve a format like this; we can just keep adding things to it. The reason we can do that is that these elements were designed with backwards compatibility built in. If you have an opening video tag and a closing video tag, you can put content in between for the browsers that don’t understand the video element.

The same with canvas. You can put fallback content in there, so you don’t have to wait for every browser to support one of these elements. You can start using it straight away and still provide something for older browsers. That’s very deliberate.
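As a rough sketch of that fallback pattern (the file name and the fallback wording here are placeholders, not from the talk), the markup might look something like this:

<video src="talk.mp4" controls>
  <p>Your browser doesn't support the video element.
  <a href="talk.mp4">Download the video</a> instead.</p>
</video>

A browser that understands video plays the file and ignores the fallback; an older browser ignores the tags it doesn’t recognise and simply renders the paragraph and the link inside.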

The canvas element was actually a proprietary element created by Apple and other browsers saw it and said, “Oh, yeah, we like that. We’re going to take that,” and they started standardising on it. To begin with, it was a standalone element like img. You put a closing slash there or whatever. But when it got standardised, they deliberately added a closing tag so that people could put fallback content in there. What I’m saying is it wasn’t an accident. It was designed.

Now Chris yesterday mentioned the HTML design principles, and this is one of them—that when you’re creating new elements, new attributes, you should design them in such a way that “the content can degrade gracefully in older or less capable user agents even when making use of these new elements, attributes, APIs, content models.” It is a design decision. There are HTML design principles. They’re very good.

I like design principles. I like design principles a lot. I actually collect them. I’m a bit of a nerd for design principles, and I collect them at this URL:

principles.adactio.com

There you will find design principles for software, for organisations, for people, for schools of thought. There are even Chindogu design principles collected there.

I guess why I’m fascinated by principles is where they sit. Jina talked about this yesterday in relation to a design system, in that you begin with the goals. This is like the vision, what you’re trying to achieve, and then the principles define how you’re going to achieve that. Then the patterns are the result of the principles. The principles are based on the goals, which result in the patterns.

In the case of the World Wide Web, the goal is to make hardware irrelevant. Access to information regardless of hardware. The principles are encoded in the HTML design principles, and then the patterns are those elements that we get, those elements that are designed with backwards compatibility in mind.

Now when we look at new things added to HTML, new features, new browser APIs, what we tend to ask, of course, is: how well does it work?

How well does this thing do what it claims it’s going to do? That’s an excellent question to ask whenever you’re evaluating a new technology or tool. But I don’t think it’s the most important question. I think it’s just as important to ask: how well does it fail?

How well does it fail?

If you look at those HTML elements, which have been designed that way, they fail well. They fail well in older browsers. You can have that fallback content. I think this is a good lens to look at technology through because what we tend to do, when there’s a new browser API, we go to Can I Use, and we see, well, what’s the support like? We see some green, and we see some red. But the red doesn’t tell you how well it fails.

Here’s an example: CSS shapes. If you go to caniuse.com and look at the support, there’s some green, and there’s some red. You might think there’s not enough green, so you’re not going to use it. But what you should really be asking is: how well does it fail?

In the case of CSS shapes, here’s an example in action. I’ve got a border radius on this image, and I’ve applied shape-outside: circle to the image, so the text wraps around that circle. How well does it fail? Well, let’s look at it in a browser that doesn’t support CSS shapes, and we see the text runs in a straight line.
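For reference, here’s a minimal sketch of the styling in that example (the class name round and the dimensions are invented for illustration):

img.round {
  float: left;
  width: 150px;
  height: 150px;
  border-radius: 50%;
  shape-outside: circle();
}

Note that shape-outside only applies to floated elements, and a non-supporting browser simply ignores the declaration.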

I’d say it fails pretty well because this is what would have happened anyway, and the text wrapping around the circle was kind of an enhancement on top of what would have happened anyway. Actually, it fails really well, so you might as well go ahead and use it. You might as well go ahead and use it even if it was only supported in one browser or two browsers because it fails well.

Let’s use that lens of asking how well does it work and how well does it fail to look at some of the technologies that you’ve probably been hearing about—some of the buzzwords in the world of front-end development. Let’s start with this. This is a big buzzword these days: service workers.

Service Workers

Who has heard of service workers? Okay. Quite a few.

Who is using service workers? Not so many. Interesting.

The rest of you, you’ve heard of it, and you’re currently probably in the state of evaluating the technology, trying to decide whether you should use this technology.

I’m not going to explain how service workers work; I’ll just describe what they can do. It’s an amazing piece of technology that you install on the user’s machine, and then it sits there like a virus, intercepting requests. That sounds scary, but it’s actually really powerful, because you can really improve performance. You can serve things from a cache (you get access to the cache API), and you can make things work offline, which is kind of amazing, because you’ve got access to those requests.

I was trying to describe it the other day, and the best way I could think of was this: a service worker is like doing a man-in-the-middle attack on your own website, but in a good way. There are endless possibilities with this technology; it’s very powerful. And, at the very least, you can make a nice custom offline page instead of the dinosaur game or whatever people normally get when they’re offline. You can have a custom offline page in the same way you’d have a custom 404 page.

The Guardian have a service worker on their site, and their offline page gives you a crossword puzzle. You’re on the train trying to read an article and there’s no internet connection? Well, you can play the crossword puzzle. Little things like that, so it can be used for real delight. It’s a great technology.

How well does it work? It does what it says. You don’t get anything for free with service workers, though. A service worker file is JavaScript, which can actually be quite confusing, because you’ll be tempted to treat it like your other JavaScript files and do what you would do to them. Don’t do that. Service worker scripts happen to be written in JavaScript, but they require a whole new mindset, so it’s kind of hard to get your head around. It’s a new technology to learn, but it’s powerful.

Well, let’s see what the support is like on Can I Use. Not bad. Not bad at all. Some good green there, but there’s quite a bit of red. If the reason you haven’t used service workers yet is that you saw that support table and thought, “Not enough support; I’m not going to invest my time,” then you haven’t asked the question: how well does it fail? This is where the absolute genius of service workers comes in.

Service workers fail superbly, because here’s what happens. The first time someone visits your site, there is, of course, no service worker installed on the client. They must first visit your site and download the service worker script before it gets installed, which means that, on the first visit, effectively no browser supports service workers.

Then, on subsequent visits, you can use the service worker as an enhancement for the browsers that support it. Provide the custom offline page. Cache those assets. Do offline-first stuff. But you’re not going to harm any of those browsers that are in the red on Can I Use, and that’s deliberate in the design of service workers. It’s been designed that way. I think service workers fail really well.
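As a concrete illustration, here’s a minimal sketch of that enhancement pattern (the file names sw.js and offline.html and the cache name are placeholder choices, not anything the talk prescribes). The registration is wrapped in a feature check, so non-supporting browsers skip it entirely:

// In your page's JavaScript: only register if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In sw.js: cache an offline page during install...
addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('static').then(function (cache) {
      return cache.add('/offline.html');
    })
  );
});

// ...and fall back to it when a page request fails.
addEventListener('fetch', function (event) {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request).catch(function () {
        return caches.match('/offline.html');
      })
    );
  }
});

Browsers in the red on Can I Use never run any of this; browsers in the green get the custom offline page as a bonus.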

Let’s look at another hot topic.

Web Components

Who has heard of web components? Who is using web components—the real thing now? Okay. Wow. Brave. Brave person.

Web components actually aren’t a specific technology; web components is an umbrella term. In a way, service workers is kind of an umbrella term too, because it’s what you get access to through service workers that counts: the fetch API, the cache API, even notifications.

With web components, it’s a term for a combination of specs, a combination of APIs: custom elements, the very sinister-sounding shadow DOM (which is not as scary as it sounds), and other things too, like HTML imports and the template element. All of this stuff together is given the label web components. The idea is that we’ve already got these very powerful elements in HTML, and it’s great when new ones get added, but it takes a long time; the standards process is slow. What if we could just make our own elements? That’s what you get to do with custom elements. You get to make shit up.

<mega-menu> <slippy-map> <image-gallery> <modal-lightbox> <off-canvas>

These are common patterns; you keep having to reinvent the wheel, so let’s make an element for each of them. The only requirement with a custom element is that it must have a hyphen in its name. This is a long-term agreement with the spec makers: they will never make a standard HTML element with a hyphen in it, so a hyphenated name is a safe space for made-up elements.

Okay, but if you just make up an element like this, it’s effectively the same as having a span in your document: it doesn’t do anything. It’s the other specs that make it come to life, like an HTML import that links off to a file describing what the browser is supposed to do with this new element you’ve created.

Then in that file you could have your HTML, your CSS, and your JavaScript. And, crucially, it’s modular: those styles won’t leak through to the rest of the page. This is the dream we’ve been chasing: encapsulation. This is the problem React is solving. This is the reason we have design systems: to try to be modular and to encapsulate styles, behaviours, semantics, and meaning.
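As a rough sketch of the custom elements part of that picture (the image-gallery element and the class it adds are hypothetical, just to show the shape of the API):

// Define behaviour for a hypothetical <image-gallery> element.
customElements.define('image-gallery', class extends HTMLElement {
  connectedCallback() {
    // Runs when the element is attached to the document;
    // this is where you'd enhance the images inside into a carousel.
    this.classList.add('gallery-enhanced');
  }
});

In a browser without custom element support, the element just sits there like a span, which is why what you put between the tags matters so much.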

Web components are intended as a solution to this, so it sounds pretty great. How well does it work? Well, let’s see what the browser support is like for some parts of web components. Let’s take custom elements. Yeah, some green, but there’s an awful lot of red. Never mind: as we’ve learned from looking at things like CSS shapes and service workers, the red doesn’t tell us anything, because lack of support in a browser doesn’t answer the question, how well does it fail? How well do web components fail?

This is where it gets interesting because the answer to the question, “How well do web components fail?” is …it depends.

It depends on how you use the web components. It depends on if you applied the same kind of design principles that the creators of HTML applied when they’re making new elements.

Let’s say you make an image-gallery element, and you make it so that the content of the image gallery is inside the open and closing tag.

<image-gallery>
  <img src="..." alt="...">
  <img src="..." alt="...">
  <img src="..." alt="...">
</image-gallery>

Now in a non-supporting browser this is actually acceptable. The browser won’t understand what this image-gallery thing is, but it won’t throw an error either, because HTML is very tolerant of stuff it doesn’t understand. It will just display the images as images. That’s acceptable.

Now in a browser that supports web components, all those different specs, you can take these images and whiz-bang them up into a swishy carousel with all sorts of cool stuff going on that’s encapsulated; that you can share with other people; that people can just drop into their site. If you do this, web components fail very well. However, what I tend to see when I see web components in use is more like this, where it’s literally an opening tag and a closing tag, and all of the content, all the behaviour, and all the styling is off somewhere else, being pulled in through JavaScript, creating a single point of failure.

<image-gallery>
</image-gallery>

In fact, there are demo sites built to demonstrate the power of web components that do exactly this. The Polymer Project is a whole collection of web components, and they created an entire online shop to demonstrate how cool web components are. This is the HTML of that shop.

<body>
  <shop-app>
  </shop-app>
  <script>...</script>
</body>

The body element simply contains a shop-app custom element and then a script, and all the power is in the script. Here the web component fails really badly because you get absolutely nothing. That’s what I mean when I say it depends. It depends entirely on how we use them.

Now the good news is, as we saw from looking at Can I Use, it’s very early days for web components. We haven’t figured out the best practices yet, so we can set the course of the future here. We can decide that there should be design principles for how we collectively use a powerful technology like web components.

See, the exciting thing about web components is that they give us developers the same power that previously only browser makers had. But the scary thing about web components is that they give us developers the same power that previously only browser makers had. With great power, et cetera, et cetera, and we should rise to the challenge of that responsibility.

What’s interesting about both these things we’re looking at is that, like I said, they’re not really a single technology in themselves. They’re kind of these umbrella terms. With service worker it’s an umbrella term for fetch and cache and notifications, background sync — very cool stuff. With web components it’s an umbrella term for custom elements and HTML imports and shadow DOM and all this stuff.

But they’re both coming from the same place, the same sort of point of view, which is this idea that we, web developers, should be given that power and that responsibility to have access to these low-level APIs rather than just waiting for standards bodies to give us access through new APIs. This is all encapsulated in a school of thought called The Extensible Web, that we should have access to these low-level APIs.

The Extensible Web is, effectively, a manifesto; there’s literally an Extensible Web Manifesto. It’s just a phrase, not a technology. Just words. But words are very powerful when it comes to technology, when it comes to adopting technology; words can get you very far. Ajax was just a word for technologies that already existed at the time, but Jesse James Garrett put a word on it, which made it easier to talk about, and that helped the adoption of those technologies.

Responsive Web Design: what Ethan did was put a phrase to a collection of technologies: media queries, fluid layouts, fluid images. He wrapped it all up in a very powerful term, Responsive Web Design, and the web was never the same.

Progressive Web Apps

Here’s a term you’ve probably heard over the last couple of days: progressive web apps. Anybody who went to the Microsoft talk yesterday at lunchtime will have heard about progressive web apps. It’s just a term, an umbrella term for other technologies underneath. A progressive web app is the combination of having your site run over HTTPS, so it’s secure (which, by the way, is a requirement for running a service worker), having a service worker, and having a manifest file, which contains all this metadata. Chris mentioned it yesterday: you point to your icons and metadata about your site. All of that adds up to: hey, you’ve got a progressive web app.
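For illustration, a minimal manifest file might look something like this (the name, colours, and icon path are invented placeholders):

{
  "name": "Example Site",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#663399",
  "icons": [
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}

You’d point to it from every page with <link rel="manifest" href="/manifest.json">.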

It’s a good-sounding term; I like it. It was created by Frances Berriman and her husband, Alex Russell, to describe this. Again, it’s a little bit of a manifesto, in that these sites should be responsive and intuitive; they need to fulfil these criteria. But I worry sometimes about the phrasing. I mean, all the technologies are great, and you actually get rewarded for using them. If you use HTTPS, you’ve got a service worker, and you’ve got a manifest file, then on Chrome for Android, when someone visits your site a couple of times, they’ll be prompted to add the site to the home screen just as though it were a native app, and it will behave like a native app in the app switcher. You’re being rewarded for these best practices.

But when I see the poster children for progressive web apps, my heart sinks when I see stuff like this. This is the Washington Post progressive web app, but this is what you get if you visit on the “wrong” device. In this case I’m visiting on a desktop browser, and I’m being told to come back with a mobile browser. Oh, how the tables have turned! It was not that long ago when we were being turned away on our mobile devices, and now we’re turning people away on desktops.

This was a solved problem. We did this with responsive web design. The idea of having a separate site for your progressive web app - no, no, no. We’re going back to the days of m.sites and the “real” website. No. No. I feel this is the wrong direction.

I worry that maybe this progressive web app terminology might be hurting it, and so might the way Google are pushing this app shell model. Because anything on the web can be a progressive web app.

I mean, I’ve got things that I’ve turned into progressive web apps, and some of them might qualify. Okay, maybe you’d consider this site, Huffduffer, an app. I don’t know what a web app is, but people tell me it might be one. But I’ve also got a community website, and it fulfils all the criteria, so I guess it’s a progressive web app. My personal site is a blog, but technically it’s a progressive web app. I put a book online; a book is an app now, because it fulfils all the criteria. Even a single page collecting design principles is technically a progressive web app.

I worry about the phrasing, potentially limiting people when they come to evaluate the technology. “Oh, progressive web app, well, that’s not for me because I’m not building apps. I’m building some other kind of site.” I think that would be a real shame because literally every site on the web can benefit from those technologies, which brings me to the next question when we’re evaluating technology. Who benefits from the technology?

Who benefits?

Broadly speaking, I would say there are two camps when it comes to who benefits from a particular technology on the web: does the technology benefit the developer, or does the technology benefit the user? It’s much like what Chris was showing yesterday with the Tetris blocks, placing technologies on a scale from those that benefit users to those that benefit developers.

Now I would say that nine times out of ten there is no conflict. Nine times out of ten a piece of technology is beneficial to the developer and beneficial to the user. You could argue that any technology that benefits the developer is de facto a benefit to the user because the developer is working better, working faster, therefore they can get the website out, and that’s good for the user.

Let’s talk about technologies that directly impact users versus technologies that directly impact developers. Personally, I’m generally going to fall down on the side of technologies that benefit users over technologies that benefit developers. Look at something like service workers: there isn’t actually a benefit to developers. If anything, there’s a tax, because you’ve got to get your head around service workers. You’ve got a new thing to learn; you’ve got to get your head down, learn how it works, write the code. It’s actually not beneficial for developers, but the end result (offline pages, faster performance) is hugely beneficial for users. I’ll fall down on that side.

Remember when I told you I was a nerd for design principles? Well, I actually have a favourite design principle, and it’s from the HTML design principles. It’s the one that Chris mentioned yesterday morning. It’s known as the priority of constituencies:

In case of conflict, consider users over authors over specifiers over theoretical purity.

That’s pretty much the way I evaluate technology too. I think of the users first. And the authors, that’s us, we have quite a strong voice in that list, but it is second to users.

Now when we’re considering the tools and evaluating who benefits from a tool, whether it’s developers or users or both, I think we need to stop and make a distinction about the kinds of tools we work with. I’m trying to work out how to phrase this distinction, and I think of it as inward-facing tools and outward-facing tools: inward-facing tools that developers use; outward-facing tools that directly touch end users.

I’ll show you what I mean. These are like the inward-facing tools, in that you put them on your computer. They sit on your computer, but the end output is still going to be HTML, CSS, and JavaScript. These are tools to make you work faster: task runners, version control, build tools, all that kind of stuff.

Now when it comes to evaluating these technologies, my attitude is: whatever works for you. We can have arguments and say, “Oh, I prefer this tool over that tool”, but it really doesn’t matter. What matters is: does it work for you? Does it make you work faster? Does it make your team work faster? That’s really the only criterion, because none of these directly touch the end user.

That criterion of “hey, what works for me” is a good one to apply to these inward-facing tools, but I think we need to apply different criteria to the outward-facing tools, the tools that directly affect the end user. Because, yes, we developers get benefit from these frameworks and libraries written in CSS and JavaScript, but the user pays a tax: the user pays in the download of these things when they run on the client. It’s interesting to see how the pendulum has shifted with a lot of these JavaScript frameworks. It used to be that the user had to pay a tax if you wanted to use React or Angular or Ember, but the pendulum is swinging back, and we can get the best of both worlds: you can use these tools as inward-facing tools, use them on the server, and still get the benefit without the user having to pay that tax.

I think we need to evaluate inward-facing tools and outward-facing tools with different criteria. Now when it comes to evaluating tools, especially tools that directly affect the end user (CSS frameworks, JavaScript libraries, things like that), there’s a whole bunch of questions to ask, questions like: what’s the browser support like? What browsers does this tool not work in? What’s the community like? Am I going to get a response to my questions? How big is the file size? How much of a tax is the user going to have to download? All of these are good questions, but they are not the most important question.

The most important question—I’d say this is true of evaluating any technology—is, what are the assumptions?

What are the assumptions?

What are the assumptions that have been baked into the tool you’re about to use, because I guarantee you there are assumptions baked into those tools. I know that because those tools were created by humans. And we humans, we have biases. We have assumptions, and we can’t help but encode those biases and assumptions into what we make. It’s true of anything we make. It’s particularly true of software.

We talk about opinionated software. But in a way, all software is opinionated. You just have to realise where the opinions lie. This is why you can get into this situation where we’re talking about frameworks and libraries, and one person is saying, “Oh, this library rocks”, and the other person is saying, “No, this library sucks!” They’re both right and they’re both wrong because it entirely depends on how well the philosophy of that tool matches your own philosophy.

If you’re using a tool that’s meant to extend your capabilities and that tool matches your own philosophy, you will work with the tool, and you will work faster and better. But if the philosophy of the tool has a mismatch with your own philosophy, you’re going to fight that tool every step of the way. That’s why people can be right and wrong about these frameworks. What works for one person doesn’t work for another. All software is opinionated.

That makes it really hard to create un-opinionated software. At Clearleft we’ve got a tool, now an open source project called Fractal, for building and working with pattern libraries. The fundamental principle behind it was that it should be as agnostic as possible: completely agnostic to build tools, completely agnostic to templating languages, able to work just about anywhere. It turns out it’s really, really hard to make agnostic software, because you keep having to make decisions that favour one thing over another at every step.

Whether it’s writing the documentation or showing an example, you have to show the example in some templating language; you have to choose a winner in the documentation to demonstrate something. It’s really hard to write agnostic software. Every default you add to a piece of software reveals your assumptions, because those defaults matter.

But I don’t want to make it sound like these tools have one way of working and there’s no changing it; that the assumptions are baked in and there’s nothing you can do about them; that you can’t fight against those assumptions. There are examples throughout the history of technology of tools being used for purposes other than the ones their creators intended. When Alexander Graham Bell created the telephone, he thought people would use it to listen to concerts that were happening far away. When Edison created the gramophone, he thought people would record their voices so they could have conversations at a distance. Those two technologies ended up being used for the exact opposite purposes to the ones their inventors intended.

Hedy Lamarr

Here’s an example from the history of technology: Hedy Lamarr, a star of the silver screen in the first half of the 20th century here in Europe. She ended up married to an Austrian industrialist and arms manufacturer. After the Anschluss, she would sit in on his business meetings taking notes. Nobody paid much attention to her, but she was paying attention to the technical details.

She managed to get out of Nazi-occupied Europe, which was a whole adventure in itself, and made her way to America. She wanted to do something for the war effort, particularly after an incident where a refugee ship was sunk by a torpedo and a whole bunch of children lost their lives. She wanted to make it easier to get the U-boats, so she worked on a system for torpedoes: a guidance system for radio-controlled torpedoes.

The problem is, if you’re using one radio frequency to control the torpedo and guide it towards its target, and the enemy figures out what that frequency is, they can jam the signal, and now you can no longer control the torpedo. Together with the composer George Antheil, Hedy Lamarr came up with a system for constantly switching the frequency: both the torpedo and the person controlling it hop to the same new frequency at the same time, which makes the transmission much, much harder to jam.

Okay. But what’s that got to do with us, some technology for guided missiles in World War II? In this room, I’m guessing you’ve got devices that have WiFi and Bluetooth and GPS, and all of those technologies depend on frequency hopping. That wasn’t the use for which it was created, but that’s the use we got out of it.

We can bend technology to our will, and yet a lot of the time there seems to be this inevitability to technology. I mean on the front end, where it’s like, “I guess I have to learn this JavaScript framework,” because it seems inevitable that everyone must learn this JavaScript framework. Does anyone else feel disempowered by that feeling of, “Ugh, I guess I have to learn that technology because it’s inevitable”?

I get that out in the real world as well: “I guess this technology is coming,” you know, with self-driving cars, machine learning, whatever it happens to be. I guess we’ve just got to accept it. There’s even this idea of technological determinism: that technology is the driving force of human history, and we’re just along for the ride. It’s the future. Take it.

The ultimate extreme of this attitude of technological determinism is the idea of the technological singularity, kind of like the rapture for nerds. It’s an idea borrowed from cosmology, where you have a singularity at the heart of a black hole: a star collapses to be as dense as possible and creates a singularity, from which nothing can escape, not even light.

The point is there’s an event horizon around a black hole, and it’s impossible from outside the event horizon to get any information from what’s happening beyond the event horizon. With a technological singularity, the idea is that technology will advance so quickly and so rapidly there will be an event horizon, and it’s literally impossible for us to imagine what’s beyond that event horizon. That’s the technological singularity. It makes me uncomfortable.

But looking back over the history of technology and the history of civilisation, I think we’ve had singularities already. I think the Agricultural Revolution was a singularity because, if you tried to describe to nomadic human beings before the Agricultural Revolution what life would be like when you settle down and work on farms, it would be impossible to imagine. The Industrial Revolution was kind of a singularity because it was such a huge change from agriculture. And we’re probably living through a third singularity now, an information age singularity.

But the interesting thing is, looking back at those previous singularities, they didn’t wipe away what came before. Those ways of life continue alongside one another. We still have agriculture at the same time as having industry. We still have nomadic peoples. It’s not like everything gets wiped out by what comes after.

In fact, Kevin Kelly, who is a very interesting character, writes about technology. In one of his books he wrote that no technology has ever gone extinct, which sounds like a pretty crazy claim, but try to disprove it. He doesn’t mean a technology sitting in a museum somewhere; he means that somewhere in the world, somebody is still using that piece of technology: some ancient piece of farming equipment, some ancient piece of computer equipment.

He writes these very provocative sorts of books, with titles like What Technology Wants and The Inevitable, which make it sound like he’s on the side of technological determinism, but his point is actually more subtle. He points out that there is an inevitability to what’s coming down the pipe with these technologies, but we shouldn’t confuse that with not being able to control them and not being able to steer their direction.

Like I was saying, something like the World Wide Web was inevitable, but the World Wide Web we got was not. I think it’s true of any technology. We can steer it. We can choose how we use the technologies.

Looking at Kevin Kelly and his impressive facial hair, you might be forgiven for thinking that he’s Amish. He isn’t Amish, but he would describe himself as Amish-ish in that he’s lived with the Amish, and he thinks we can learn a lot from the Amish.

It turns out they get a very bad reputation. People think that the Amish reject technology. It’s not true. What they do is they take their time.

The Amish are steadily adopting technology at their pace. They are slow geeks.

I think we could all be slow geeks. We could all be a bit more Amish-ish. I don’t mean in our dress sense or facial hair. I mean in the way that we are slow geeks and we ask questions of our technology. We ask questions like, “How well does it work?” but also, “How well does it fail?” That we ask, “Who benefits from this technology?” And perhaps most importantly that we ask, “What are the assumptions of those technologies?”

Because when I look back at the history of human civilisation and the history of technology, I don’t see technology as the driving force; that it was inevitable that we got to where we are today. What I see as the driving force are people, remarkable people, it’s true, but people nonetheless.

Rosalind Franklin Margaret Hamilton Grace Hopper Hedy Lamarr

And you know who else is remarkable? You’re remarkable. And your attitude shouldn’t be, “It’s the future. Take it.” It should be, “It’s the future. Make it.” And I’m looking forward to seeing the future you make. Thank you.

Monday, November 14th, 2016

SmashingConf Barcelona 2016 - Jeremy Keith on Resilience on Vimeo

Here’s the video of the talk I gave at Smashing Conference in Barcelona last month—one of its last outings.

Saturday, April 11th, 2015

SmashingConf Oxford 2015: Richard Rutter on Don’t Give Them What They Want, Give Them What They Need

A great case study from Richard, walking through the process of redesigning the website for the Royal Borough of Kensington and Chelsea.

Thursday, October 16th, 2014

Patty Toland — Design Consistency For The Responsive Web (Smashing Conference Freiburg 2014) on Vimeo

Patty’s excellent talk on responsive design and progressive enhancement. Stick around for the question-and-answer session at the end, wherein I attempt to play hardball but can’t actually conceal my admiration, or the fact that I agree with every single word she said.

Saturday, November 2nd, 2013

Smashing Conference closing keynote

The final talk at the Smashing Conference held in Freiburg in September 2013.

I had a presentation. I had a slide deck for you, but I’ve thrown it away and I’m not going to use any slides today.

I was quite impressed by Elliot’s opening and the way he put himself out there, put his old work in front of you, because like Vitaly said, the whole idea here was just going to be about stories and showing failures, so I thought, I’m going to do that as well. I’m going to put my old, embarrassing websites up there in front of you, and I figure maybe you guys will get a kick out of that too. I think you’re the only people who have a word for getting joy from the misfortune of others. I think Schadenfreude is a particularly German concept, and I think you’re going to get a lot of Schadenfreude out of this.

So, I used to live here in Freiburg, back in the 90s. Saying I moved here probably gives the wrong impression, because it makes it sound like I had a fixed abode, I had somewhere to live, which I didn’t. I had no fixed abode. I was in Freiburg, but I was playing music on the streets of Freiburg.

I’d like to direct your attention to the stained glass window at the back of the room, the very back. You see that one back there? See the long haired hippie playing the musical instrument? That was basically me. On the Kaiser Joseph Straße, playing my bouzouki, earning my money that way. Playing music between…(handclap)…yeah, buskers! Straßenmusiker. I was playing music between the hours of eleven and twelve in the mornings and half past four and six in the afternoons, because those were the official busking hours in Freiburg. I don’t know if that’s still the case.

See, I was a long-haired hippie like Brad, playing music on the streets. Some days I was playing Irish music. I’m from Ireland, and it’s only after I left Ireland that I really grew to appreciate the music of Ireland, traditional music. Later on, when I did have somewhere to actually live, I’d occasionally come across other Irish musicians on the streets of Freiburg, and I’d say, “Hey, come on, let’s play music together; you can crash with me.” I remember there was this couple, Kerry and Christian from Northern Ireland, playing guitar and fiddle, playing tunes, and I’d be like, “Yeah, come over, you can crash at my place, we’ll play some tunes together.”

But then after a while I got a proper job. I started working in a bakery. So if you’ll direct your attention to this stained glass window over here, this would be my guild when I was living in Freiburg working in the bakery, baking …well I wasn’t actually baking. I was just selling the bread. It was a bakery called Bäckerei Bueb. It doesn’t exist any more. But it was good. It was really good bread. I miss German bread. It’s the best bread. And pretzels, obviously. Over there on the Münster, there’s a stained glass window which contains the very first pictorial representation of the pretzel.

So I’m working in the bakery, selling bread, and the web is starting to come along. The first time I was online, I think, was in America with my girlfriend, now wife, Jessica. It was a text-only browser, and I thought it was pretty cool, these things connected: you go online looking for one thing and, before you know it, you’re looking at something completely different, going down a rabbit hole.

Then about a year later, I remember Jessica taking me into an internet café. It used to be in the Rombach; is that still there, the bookshop on the corner? Who here’s from Freiburg? Okay, it’s called Rombach. In the basement of Rombach there was a little internet café. I remember the old Bondi blue iMacs, the classic original iMacs, and Jessica was showing me the graphical web, how it had come a long way. So now we were on Netscape 3 and Internet Explorer 3, and I distinctly remember I couldn’t get the difference between them: there was Netscape, Internet Explorer, Yahoo! and Alta Vista, and to me they were all four versions of the same kind of thing. I didn’t grasp the difference between a search engine and a browser; they were all something with a text field where you typed in what you wanted to look for, right? And matters weren’t helped by the fact that you’d open up Netscape, the browser, and its default homepage was netscape.com. It was very confusing for me; something I bear in mind when we laugh at newbie users, because I was that newbie user.

This whole time I was working in the bakery, I was still playing music. Not on the street this time, but with some bands. I was playing bass guitar in a surf-rock band called Leopold Kraus Wellenkapelle. Die gibt’s noch! (They still exist!) Who’s heard of Leopold Kraus? All right, cool! Excellent, really good; you can get their music online. They’re good now, because I’m not playing with them any more. They got a lot better after I left.

I was also playing in another band, called Beam, with my girlfriend and some other guys. Beam: a terrible name for a band. We’re playing music and someone says, we should really have one of those new-fangled websites; everyone’s got a website. We had an email address; that was something. Hotmail. But we needed a website. So I said, I’ll do it, I’ll make us a website. My first ever website was the website for the band Beam, and I’ve still got the files for it.

I’ll just open it …so we’ve got a splash page. I don’t think it looks that bad, really. It was 1998 when I launched it.

It has this really smug thing down here. This was the time when everybody had those buttons on their sites saying “best viewed in Internet Explorer” or “best viewed in Netscape Navigator”. I’ve got this thing down at the bottom: “this website looks best on any browser set at any width.”

Smug! Insufferably smug really.

Once you get inside, it kind of looked …I don’t know what it was in the 90s; every website had to have coloured bars down the side. It didn’t matter whether they were needed or not, you had to have coloured bars. It’s kinda nuts.

So this is my first website. Hand-crafted, and I enjoyed it. I had a lot of fun, and after it launched, people liked it, and someone who was in another band, the brother of a friend living in Vienna, said, “Would you make the website for our band? We’ll pay you.”

That was my first paid job, my first client: the Salonorchester Alhambra. I don’t know if they still exist or not. This was the website. Again, there’s a kind of splash page, but this time it had a purpose, which was that the site existed in English and in German. I like this: when you hovered over it, you got the album cover showing up. I will point out that this was a nice, lightweight image, because there were literally only two colours in it, the picture of Louise Brooks.

So English or Deutsch?

Lauter…

Okay… English.

So it looked like this. God! Pretty awful, but I will point something out. So this is 1998, 99. Liquid! Some of us were doing it right from the beginning.

It’s using tables for layout and I don’t know if I even want to view source on this, shall I view source? This is going to be really, really embarrassing but …oh yeah, that’s nasty. Look at that. All the upper case tags and the tables and the bgcolors and …phew …is this bringing back memories for anybody?

Okay, so my first paid gig. Now of course, now that I’m going down the road of being a professional web designer, I of course had to have my own website. You may have seen my website, adactio.com. You may have visited it. But you haven’t seen the first version of adactio.com. Ladies and gentlemen, this is the first version of adactio.com.

Notice that this is the nineties, so anytime you had a letter “a”, you had to turn it into the “@” symbol. That showed you were modern. That showed it was the future, the information superhighway.

Deutsch oder Englisch?

Deutsch. Also gut.

Wait for it… whoa! So, Elliot was showing all the Flash stuff he was doing when he started. This is me faking Flash stuff, because I couldn’t do Flash, so I was using what we called DHTML, which is just this nasty, nasty marketing term for horrible JavaScript and CSS, and the CSS was literally just positioning stuff. It’s like, watch this …watch this… Pschewww… see that? So talking about faking Flash …pschewww… pschewww… see that? So I’m doing that with three different images and I’m swapping them out really quickly, one after the other, to give the impression of some kind of effect. Here we go …weyhey, look at that! That is actually me. I used to look like that. I was young once.

You know… scrolling …faking Flash, I’m using DHTML. It was awful, trying to program this stuff.
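For anyone curious what that looked like under the hood, here’s a minimal sketch of the image-swap trick; the element name and frame files are made up for illustration, this isn’t my actual nineties code:

    // Fake a Flash-style effect by swapping images in quick succession.
    var frames = ['effect1.gif', 'effect2.gif', 'effect3.gif'];
    var current = 0;
    function swapFrame() {
      document.images['effect'].src = frames[current];
      current = current + 1;
      if (current < frames.length) {
        setTimeout(swapFrame, 50); // swap quickly to suggest motion
      }
    }
    swapFrame();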

(Am I cutting out a lot? Should I switch over or shall I carry on with… I’ll carry on.)

You had to fork your code for both browsers, because there were only two browsers back then obviously, Netscape and Internet Explorer, and you had to fork your JavaScript to make it work.
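The forking hinged on sniffing for each browser’s proprietary DOM. A hypothetical reconstruction, in the spirit of the era rather than the actual code:

    // Netscape 4 exposed document.layers; Internet Explorer 4 exposed document.all.
    // Everything got written twice, once for each proprietary DOM.
    function moveBox(x, y) {
      if (document.layers) {
        // the Netscape way
        document.layers['box'].moveTo(x, y);
      } else if (document.all) {
        // the Internet Explorer way
        document.all['box'].style.left = x + 'px';
        document.all['box'].style.top = y + 'px';
      }
    }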

Nasty stuff. Tables, a frameset, DHTML… I was ticking all the boxes.

I also made a website for my girlfriend, now my wife, Jessica. Got the domain name, lostintranslation.com, because this was a while ago. And again it has a splash page, but it serves a purpose, which is choosing Englisch oder Deutsch.

Englisch oder Deutsch?

English. There we go.

So she’s a translator and this is the website for her work. Here’s the interesting thing. This is actually still online. If you go to lostintranslation.com, this is what you’ll see. So this was made in the nineties, and I know it looks very dated and some things are showing their age, but it’s not that bad.

Oh, and I will point out: pixel fonts. Pixel fonts; Elliot had them. I’ve got ‘em. Gotta have your pixel fonts.

This is long overdue for a redesign, and I will get round to it soon. I promise. But it’s been online for fourteen, fifteen years. It’s still kinda working.

I want to tell you a story about someone else who left Ireland for foreign shores. Francis O’Neill was born in 1848. The mid 1840s was kind of a bad time for Ireland in general, when we had a famine. Millions died, and millions more left the country. In fact the emigration from Ireland continued for well over a century. Into the twentieth century, Ireland was a country with a decreasing population. As you know, there are Irish people all over the world, because there were no prospects at home. A lot of them ended up in America, and Francis O’Neill, he had lots of adventures. He got shipwrecked at one point, and he ended up in America, travelled across the West, ended up in Chicago. Ended up becoming a police officer in Chicago. Ended up working his way up to become Chief of Police in Chicago.

Now Francis had always loved Irish music as well. I think maybe like me, once he had left Ireland, he really missed the culture. Maybe he didn’t appreciate it at the time, but once you leave, you’re really going to miss it. So he started making sure that if you played an Irish instrument, you were pretty much guaranteed a job in the Chicago Police Force. If you played fiddle or pipes or anything like that, he’d sort you out with a job. So that’s why when you watch the old movies, it’s always the cop on the beat who’s “begob and begorrah; top of the morning to you”, the Irish cliché cop. There was some truth to that. And he’d get them to play tunes and he would transcribe the tunes. And he got enough tunes together that he ended up making a whole book. It was called O’Neill’s One Thousand and One Melodies. Jigs, reels, hornpipes.

And it turned out what had happened was the music back home was in danger of dying out, because so many people were leaving Ireland. The diaspora was so big that in a way, O’Neill saved the music, by noting it down, and it ended up travelling back to Ireland, and becoming kind of the bible of Irish music.

When I was learning to play Irish music on the mandolin and the bouzouki, you just referred to it as The Book. If someone played a tune, you’d say, oh is that tune in The Book? And that meant, was it in O’Neill’s One Thousand and One.

Sheet music, staff notation, was the format he used. Now there’s another interesting way of notating music that started in 1991. This is about the time when we had the internet. We didn’t really have the web yet. We had email…

(It’s cutting out a lot, isn’t it? Shall I switch over? I should switch over.)

Okay… so if you were trying to send a piece of music over the internet at the time, bandwidth was at a premium: you couldn’t send an image of sheet music; an image was far too high bandwidth. You certainly couldn’t send a sound file; that would’ve been crazy. You had 14.4k modems.

So Chris Walshaw came up with this format called ABC, which you can see is kind of JSON-like to begin with, with the metadata indicating information about the tune, and then the actual tune itself is notated just using the Western alphabet: A, B, C, D, E, F, G.

I suppose if it was in Germany, it would be A, H, C, D, E, F, G. Yeah. You guys need to fix that. That’s just confusing.

But it’s really, really lightweight; it’s just text, so you could send it in an email or on a bulletin board, and then people had software to convert in both directions. So somebody could take this text file and convert it into an image or a sound file, or some people just read straight from the ABC file like this. A really nice little format.
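To give you a flavour, here’s a made-up example of an ABC file. The header fields are genuine ABC (X is an index, T the title, R the rhythm, M the metre, L the default note length, K the key), but the tune itself is invented for illustration:

    X:1
    T:An Example Jig
    R:jig
    M:6/8
    L:1/8
    K:D
    ABA AFD|ABA ABd|ABA AFD|FED E2D:|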

And so I’m in Freiburg, it’s getting towards the end of the nineties, and I know I want to do something related to Irish music; I want to make a website about Irish music. And some people had just started putting tune collections online, so I thought I would do something like that. So I made a site called The Session. It didn’t even have its own domain. And this time it has a splash page for no reason whatsoever! There’s no choice here other than to enter. And also, I’ve got a meta element at the top of this page, so if you actually just leave it and don’t press Enter, after seventeen seconds, it enters anyway!

(laughter and applause)

It totally looked like I timed that, didn’t it? That was pure coincidence!
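In case you’re wondering, there’s no JavaScript trickery in that: it’s just a meta refresh in the head of the splash page, something like this (the URL is illustrative):

    <!-- After seventeen seconds, enter the site anyway. -->
    <meta http-equiv="refresh" content="17; url=enter.html">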

So this was the original version of The Session. Kinda dated; it still looks okay. Tables for layout, all that stuff. But the idea here was, every week I was putting a tune online in ABC format, and then also maybe as a gif, and maybe a midi file. Hey, midi files! And also, I learned enough Perl to write a cgi-bin script so that people could sign up to an email list.

And so people started… the email list started to grow as people were coming back every week: “Hey, what’s the new tune this week?” So this was working really well, it actually started to take off, but there was a fatal flaw, which was: I only know so many tunes. The tunes are going to run out at some point. And sure enough, they did, so I needed to change the site, it needed to become more of a user generated content site where people could submit tunes.

So I sat down and I learned PHP and I learned MySQL so that I could build a proper website. That was The Session version 2, which looked like this. Bit of an improvement. The idea here is that people—anybody—can submit a tune, and you submit the tune in ABC format, as I described. But then I have to convert that tune into sheet music, into a gif, into a midi file or whatever, so I’m a bit of a blocker here; no matter how many tunes get submitted, I have to go through the backlog and convert them and manually upload them. But it worked pretty well, and I added new sections to the site about events and sessions, and importantly, a discussions area where people could just hang out and chat. And so the site really took off; I was really proud of this …for a while.

For the first few years, it was great. But it didn’t really change that much. In fact, after ten years it was pretty much unchanged. I think this version launched in 2001 and my pride was high, but over time, my shame got higher. My shame at the missed opportunities here.

I’ve got events and sessions, but I’m not doing anything with the geolocation; I’m not making the site as useful as it could be. It could be improved so many ways.

I wanted to relaunch The Session; I wanted to make it better. But now the task had become so big, because now it’s this big sprawling site with a big database that I had done everything wrong with when I launched in 2001.

It was my New Year’s Resolution two years running: update The Session; fix The Session; launch a new version of The Session. And I never got round to it.

I remember I was in Sydney, I was coming back from speaking at Webstock in New Zealand two years ago, and a geek event had been organised in a bar by John Allsopp, one of the geeks down there. I’m chatting away to this geek, we’re talking about web stuff and we’re talking for a few minutes and there’s kind of a lull in the conversation. He says, “You know, actually we’ve met before.” I said, “Oh really? Where was that?” I’m thinking was it some other geek conference, was it South by Southwest? I meet a lot of people. I’m like, “Give me some context.”

And he says, “Freiburg.” Wait. What? I wasn’t… I’ve got him in the geek category, and now suddenly he’s saying Freiburg? And then it suddenly clicked. This was Christian, the guy who was playing music on the street that I said, “Hey, come stay at my place.” And this was like, fifteen years later, more! And he’s doing web stuff as well! And he’d come out because he’d seen it was me and of course he’s a member of The Session. And so we get talking about that, and he’s like, “When are you going to update that site?” “Oh God! I know, I know, but it’s such a big task.” But he offered to help, he was like, “I’m a geek and I play Irish music: I can help.”

And I realised that I could reach out to others: this monumental task could be broken down into smaller chunks, and maybe some of those could be farmed out to other people. So I got him doing geolocation stuff on all this information I’ve got in the database. And I started working weekends and evenings, really hacking on this, and I actually enjoyed it; I really got into it. Coming home from work and not watching TV or anything like that, but actually just hacking, hacking, hacking.

So the current version of The Session looks like this.

This is The Session. You’ve got the tunes section, recordings, discussions …not a huge visual change really from what was there before. You don’t want to scare people too much by changing too dramatically, but I’m much happier with the way it works and there’s a lot more features on this.

Now one of the things when I was making this: I knew this time things would be a little different, and I knew I wanted to go mobile first. I wanted to make sure it was working on small screens before I started worrying about the big screens, because times have changed. Now, luckily, I work at Clearleft and we’ve got this Device Lab, so I had access to all these small mobile devices.

The way the Device Lab came about; this isn’t a new idea. I remember a couple of years ago at the Mobilism Conference in Amsterdam, the first Mobilism conference, we were talking about this idea, it’d be great to have this communal Device Lab, and Jason Grigsby from Portland was like, “Yeah, I’m gonna do it; I’m gonna have the world’s first Open Device Lab in Portland.” And then I’d see him occasionally, I’d say, “Hey, how’s that mobile Device Lab thing going?” He said “Great! I’ve done the paperwork to get it set up as a non-profit and the next step is filling out these forms and applying for this and that…” I just thought, “It’s never going to actually launch, is it?”

So at Clearleft, I’d started gathering one or two devices together, maybe a handful. And I decided, aw, screw it. And I wrote a blog post and I wrote a tweet and I said, “Hey, if anybody in Brighton wants to come round and just use our devices, feel free. It’s an Open Device Lab.” I didn’t do anything with liability or insurance or any paperwork or anything like that, I just did it. I thought, if something goes horribly wrong, I’ll deal with the problem then rather than worrying about everything beforehand.

And nothing’s gone wrong. In fact, within hours of me posting this, I had other people in Brighton, people like Remy Sharp and other designers and developers saying, “Hey, you know what? I’ve got this device that’s just sitting here on my desk or it’s in my drawer. Why don’t I drop that round to your office so that all the devices are together?” I was like, “Yes please, that’d be great!” Within twenty four hours, we had doubled the devices. Now we’ve got forty, fifty devices, and none of them are mine really. People have donated them, which is wonderful.

So if you feel like you don’t have enough devices to test on, and we should all probably feel like we don’t have enough devices to test on, an Open Device Lab is a really great idea. You can go to opendevicelab.com and see if there’s an Open Device Lab near you, and if there isn’t, you can get information on starting one up. And actually, you can go to Nuremberg on 25th October and you can attend an event called Border None. So you can go to border-none.net. And there’s going to be a bunch of us there talking about Open Device Labs and stuff like that, and actually, if you’re interested in that in general, you need to talk to Joschi. Where’s Joschi? Where are you? There’s Joschi. So talk to him about everything related to Open Device Labs and this new event.

So anyway, I’ve got access to devices. Great, I’m going to do the mobile first thing. You’ve all read Mobile First by Luke Wroblewski, right? Who’s read Mobile First? Good, because it’s great, it’s really good, and one of the things he talks about is the constraints, how mobile forces you to really, really figure out what’s important in your content. That’s what Natalie was talking about earlier as well, really getting down to it. So I started doing that.

Mobile first is kind of a way of saying content first. It’s kind of like saying, figure out what’s the most important content. What’s the most important thing at this URL? And when I say content, I don’t mean copy or images. I mean tasks. What’s the most important task that needs to be accomplished at this particular URL?

There’s also a corollary to this “content first” idea, something that Luke did on his start-up Bagcheck: content first, navigation second. I’ve done that literally on The Session, where the navigation is at the bottom of the screen.

So if you’re looking at this on a small screen, you hit this arrow to bring up the navigation. What it’s actually doing, it’s just a hyperlink to an internal anchor, which is where the navigation is. And then using the magic of media queries, once it gets large enough I can shove the navigation up the top, using whatever …display table, absolute positioning, any of the CSS positioning tools we have at our disposal. So content first, navigation second.
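As a rough sketch, with hypothetical markup rather than The Session’s actual code, the pattern looks like this:

    <!-- Content first, navigation second. -->
    <a href="#nav">Menu</a>
    <div id="content">
      ...the most important content for this URL...
    </div>
    <ul id="nav">
      <li><a href="/tunes/">Tunes</a></li>
      <li><a href="/discussions/">Discussions</a></li>
    </ul>
    <style>
    /* Once the viewport is large enough, shove the navigation up top. */
    @media screen and (min-width: 30em) {
      #nav {
        position: absolute;
        top: 0;
      }
    }
    </style>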

And one of the other ways, kinda doing the content first thing, is getting my content in order. Much like what Natalie was talking about with her pattern portfolio, this idea of a pattern primer, which is something we do at Clearleft all the time. A lot of our systems thinking around things like pattern portfolios, pattern primers, is very much influenced by Natalie, because she used to work at Clearleft, and she’s probably the best Front End Developer I know. She’s amazing.

So this idea of a pattern primer is that you’re breaking things down into their constituent parts, and I wrote this little script, so you’ve got: here’s the mark-up, the output; here’s the source. Makes for a really nice deliverable. But I found that I was doing it even on a personal project like this. This isn’t for a client: this is for me. It’s still really useful because it kinda acts like a unit test case for your CSS. As you fiddle with the CSS, you just keep coming back to this page and hitting refresh to make sure you haven’t screwed anything up. Very handy.
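The script really can be little. Here’s a hypothetical sketch of the idea in Node, not the actual script: read a folder of snippets, then output each one rendered and as escaped source.

    // A pattern primer sketch: assumes a ./patterns folder of HTML snippets.
    var fs = require('fs');
    var path = require('path');

    var page = '';
    fs.readdirSync('./patterns').forEach(function (file) {
      var markup = fs.readFileSync(path.join('./patterns', file), 'utf8');
      page += markup; // the rendered output
      // the escaped source, so you can read the mark-up itself
      page += '<pre><code>' + markup.replace(/</g, '&lt;') + '</code></pre>';
    });
    fs.writeFileSync('primer.html', page);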

Back to The Session. Something I very much had in mind is something that’s come up a lot over the past two days, which I’m pleased to see, and that’s this whole idea of progressive enhancement. And there’s been a lot of talk about progressive enhancement lately. I’ve been building websites for a while now, as you’ve seen, going way back, and pretty early on I got the progressive enhancement bug, even before we were calling it that. The idea was pretty straightforward, that with the web, we have these different stacks; stacks of technology, stacks of experience as well, and you want to make sure that whatever your core content is, whatever the core task is, that everybody should be able to get to that content or accomplish that task. I mean, everybody, no matter what the device, no matter what the capabilities. But you don’t stop there. You then enhance for whatever capabilities that device, that browser has.

I think there’s a bit of a myth about progressive enhancement, that it means designing for the lowest common denominator, but it doesn’t; it means starting from the lowest common denominator. But there’s absolutely no limit to where you can go. It’s basically just making sensible use of the technologies we have.

Now we can use progressive enhancement really easily in HTML and CSS. Every time we use something new from HTML5 that doesn’t work in all the browsers yet, that’s kind of progressive enhancement, so, using new input types like input type="search". Now, not all the older browsers will know what that is, but that’s fine, they’ll just treat it as input type="text". It won’t break, and this is really, really important. The error handling model of HTML is to be very forgiving, so if a browser sees something it doesn’t understand, it just ignores it. In the case of input types, it just treats it as input type="text", so you’re free to use all these new input types, even if they’re not supported everywhere.
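In practice that means you can safely write something like this today; older browsers simply render a plain text field:

    <!-- Older browsers don't recognise type="search", so they treat it as type="text". -->
    <input type="search" name="q">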

Likewise in CSS, that’s how we can have new selectors. We can have new properties, new values. Because when a browser is parsing a CSS file, if it sees a property or value or selector it doesn’t understand, it just ignores it and moves on to the next one. And that’s enormously powerful; it means we can keep expanding the standards, keep adding new stuff, and more importantly, we can start using this stuff even if it isn’t universally supported, because the browser won’t break; it won’t throw an error message, it won’t refuse to render that stuff.
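That forgiving error handling is also what makes fallback declarations work. A small illustration, with an invented class name:

    .header {
      background-color: #369; /* understood by every browser */
      background-color: rgba(51, 102, 153, 0.8); /* older browsers ignore this; newer ones override the line above */
    }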

Now JavaScript is a little trickier. With JavaScript, if you use some property or method that the browser doesn’t understand, it will throw an error, so you have to be sure to use feature detection. And this is why it’s really important not to rely on JavaScript. I’m not saying don’t use JavaScript. I love JavaScript; I’ve written books about JavaScript. But you shouldn’t rely on JavaScript for your core content or for your core tasks. Because that’s kind of what the discussions around progressive enhancement lately have been about.
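Feature detection just means asking before using. A minimal sketch:

    // Test for the feature before using it, so older browsers never hit an error.
    if ('geolocation' in navigator) {
      navigator.geolocation.getCurrentPosition(function (position) {
        // enhance the page with location-aware extras here
      });
    }
    // Without the feature, the core content still works; it just isn't enhanced.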

It’s exactly what Andy Hume was talking about. It’s about being robust. He said, “Progressive enhancement is more about dealing with technology failing than technology not being supported.” I think that’s very true. It’s about planning for the unexpected.

I’ll give you an example: the website to download Google Chrome was useless for two hours. There was one day when nobody could download Google Chrome. It was because the hyperlink that was the download link, saying “Download Chrome”, wasn’t a proper hyperlink, in the sense that it didn’t have what I would call a valid href attribute. It was href="javascript…" and then some JavaScript, and then in the JavaScript file there was one error. One misplaced semicolon, something like that. But it was enough that all the JavaScript was basically screwed up, and for two hours, nobody in the world could download Google Chrome. That could have been avoided using progressive enhancement, and that’s exactly the kind of situation that progressive enhancement is all about. It’s the kind of situation you could never foresee. You can’t plan for that kind of stuff; that’s why it makes sense to use progressive enhancement.

I kind of think of progressive enhancement like electricity. You can use it to enhance stuff, but you shouldn’t rely on it. Buildings, if they have smart doors or smart elevators, you’ve got to make sure they have safe defaults, and that’s certainly the case with the web technologies, I think. We’ve got to think about safe defaults. But use electricity to enhance stuff.

There’s the old joke that escalators can never break: they just become stairs. So I guess an escalator is stairs with progressive enhancement. Or those walkways in airports that make you walk really fast: I guess that’s just a floor that’s been progressively enhanced. An electric toothbrush is a progressively enhanced toothbrush.

So I’m using progressive enhancement at all the levels here; I’ve got CSS properties that aren’t universally supported. That’s fine. HTML attributes and elements that aren’t universally supported. That’s fine too. And in the way that the display of sheet music works, I’m kind of using progressive enhancement as well.

By default, this is a form with a button and it’s just pointing off to a script on the server. Actually, that’s a third party server that will generate an image, a gif in this case. But if your browser has enough capabilities—if it cuts the mustard, as Andy was talking about, which means that you support the JavaScript required and you support SVG—then it will load in that JavaScript and convert music to SVG. Because as per Atwood’s Law—which is that anything that can be done in JavaScript will be done in JavaScript—some genius has written JavaScript that converts ABC to sheet music in SVG! It’s brilliant, and because it’s SVG, it just stays super-crisp, no matter what the size, way better than those old gifs.
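A sketch of what a mustard-cutting test like that could look like; the script URL is hypothetical, and the SVG check is a common one from that era rather than necessarily the exact test The Session uses:

    // Only load the enhancement if the browser can handle it.
    if (document.querySelector &&
        window.addEventListener &&
        document.implementation.hasFeature('http://www.w3.org/TR/SVG11/feature#BasicStructure', '1.1')) {
      var script = document.createElement('script');
      script.src = '/js/abc-to-svg.js'; // hypothetical filename
      document.body.appendChild(script);
    }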

I love SVGs. SVGs are awesome. In fact, let me show you another example.

Here’s another site I built a few years ago, called Huffduffer. Does anybody use Huffduffer? A few people. It’s weirdly popular in Germany; I’m not sure why. Got a lot of German people Huffduffing.

It’s basically like Instapaper but for audio; if you want to listen to something later, you put it on Huffduffer. And I am using a local version, but it seems to be taking its own sweet time. Let me jump straight to my profile page. One of the things I have on the profile pages—you have Huffduffed this much stuff; you’ve saved this many files—is that I’ve got sparklines. I love sparklines (we’ll see one in a moment if this thing ever loads).

Sparklines are something created by Edward Tufte, and he calls them “a small, intense, simple, word-sized graphic with typographic resolution.” So this is the sparkline right there. And this is actually using Google’s Charts API, which is really handy; you can just say img src= and then Google’s servers, and you pass in this string saying how you want it to look. It’s a really, really useful API. So Google are shutting it down.
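From memory, the markup was along these lines; the parameters are approximate, not gospel:

    <!-- A sparkline from the image charts API: chart type, size, then the data points. -->
    <img src="https://chart.googleapis.com/chart?cht=ls&chs=80x20&chd=t:5,10,8,14,9,12" alt="Sparkline of activity">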

So on The Session, I didn’t want to rely on Google or any other third party APIs frankly, because when you’ve been making websites as long as I have, you learn to trust no one! Especially when it comes to third party APIs.

I was looking into generating these sparklines on the fly myself, and I searched to see if somebody else had done this, and actually lots of people had done this using JavaScript, and all of them, all of them, used jQuery.

Now, nothing against jQuery, but when you just want to do one thing, loading in an entire library kinda seems like overkill. It’s kinda like the spam of Stack Overflow at this stage. It’s like, “How do I do this thing in JavaScript?” “Step 1: here’s jQuery. Here’s your jQuery plug-in.” What if I don’t want to use jQuery? “Oh, just use jQuery for that!”

jQuery’s great, but as it turns out, on The Session, I’m not using jQuery; I don’t need to. So I ended up writing a little script that generated sparklines using the canvas element. And I put it on GitHub, and I blogged about it and I said, “Hey, there’s this thing that generates sparklines in canvas.” But I also said, because I had this feeling… I don’t think canvas is the right element for this. It’s not a dynamic image; it should probably be SVG. But I couldn’t figure out how to get that to work.
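For the record, a sparkline needs surprisingly little code. A minimal sketch of the canvas approach, not the exact script I put on GitHub:

    // Draw a simple sparkline onto a canvas element from an array of numbers.
    function sparkline(canvas, values) {
      var context = canvas.getContext('2d');
      var max = Math.max.apply(null, values);
      context.beginPath();
      for (var i = 0; i < values.length; i++) {
        var x = (i / (values.length - 1)) * canvas.width;
        var y = canvas.height - (values[i] / max) * canvas.height;
        if (i === 0) {
          context.moveTo(x, y);
        } else {
          context.lineTo(x, y);
        }
      }
      context.stroke();
    }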

And oh, I love the web! I swear to God, within like an hour or two of blogging this, Stuart Langridge, he sends me a link saying, “There you go, I’ve done it for you, here’s the SVG version.”

And then my friend Tom took that code and turned it into a service, so it basically works like Google Charts but better, because it’s SVG and it’s scalable!

So these sparklines on the member profiles are done with SVGs, so they’re lovely and small. Now, SVG: I love this format, because yes, it’s an image format, but it’s also a text format. You can view source on an SVG and it’s an XML file. So all those sparklines are actually pointing to the same file; they’re all pointing to just this one file that has a script element in it, and when you point to the file, you use the query string and you pass in your numbers.
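Here’s a hypothetical sketch of how a self-drawing SVG like that can work. One caveat: scripts only run when the SVG is embedded as a document, via object or iframe, not via an img element:

    <!-- sparkline.svg, referenced something like this:
         <object data="sparkline.svg?5,10,8,14,9" type="image/svg+xml"></object> -->
    <svg xmlns="http://www.w3.org/2000/svg" width="80" height="20">
      <script>
        // Read the numbers from the query string and draw a polyline from them.
        var values = location.search.substring(1).split(',').map(Number);
        var max = Math.max.apply(null, values);
        var points = values.map(function (value, i) {
          return (i / (values.length - 1)) * 80 + ',' + (20 - (value / max) * 20);
        }).join(' ');
        var line = document.createElementNS('http://www.w3.org/2000/svg', 'polyline');
        line.setAttribute('points', points);
        line.setAttribute('fill', 'none');
        line.setAttribute('stroke', 'black');
        document.documentElement.appendChild(line);
      </script>
    </svg>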

The script is inside the SVG, so there’s a script element inside the image element, which is like Inception! SVG: it’s an image, but it’s also a text file. It’s also a document. You can put scripts in there. You can put CSS in there! So you could make responsive logos. In fact, on The Session, that’s kinda what happens. On the large screen, the logo looks like this, but at smaller sizes, it’s more like that. It’s a responsive logo.

Actually, I’m not using SVG for that. That’s a piece of text that says, “The Session”, and has got CSS applied to it. Nice and scalable too. That works.

Progressive enhancement, sparklines, SVG: awesome.

One example of progressive enhancement that I don’t see talked about much—not enough in my opinion, certainly when it comes to responsive design—is this idea of conditional loading. Andy was talking about this with The Guardian: they’re doing fantastic work in this way where they’ve really figured out what’s the core content, and you send that to everyone. Everyone, no matter what kind of browser, no matter what kind of device. But then, after the page load, after everyone’s got the content, then you can start to do stuff with JavaScript, because now you know more information. You know how wide the screen is; you know what kind of JavaScript capabilities are there.

On this homepage here, these pictures from Flickr are loaded in after the page load, and only if the page is wide enough to display them. So everyone can get to these pictures, because there’s a link to this group on Flickr, but on a small screen, if I was loading that page, you just don’t get the pictures. You get them on the large screen, using conditional loading. And they’re not going to hold up performance or anything like that, because it’s happening after the main page load.
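A sketch of the idea, with a hypothetical breakpoint and filename:

    // Conditional loading: after the page has loaded, and only if the viewport
    // is wide enough, go and fetch the extra pictures.
    window.addEventListener('load', function () {
      if (window.matchMedia && window.matchMedia('(min-width: 40em)').matches) {
        var script = document.createElement('script');
        script.src = '/js/flickr-photos.js'; // hypothetical enhancement script
        document.body.appendChild(script);
      }
    });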

Performance: you’ve heard it multiple times today, and it’s so important. And with The Session, I was really tweaking every last thing. Obviously, I’m gzipping my files; I’m minifying my JavaScript; I’m squeezing my CSS down and minifying that. Basically, avoiding images wherever possible, the little logo being an example of that. Avoiding every unnecessary HTTP request.

And as an example of where you should be using conditional loading, I said I’m doing it with Flickr, and that’s certainly a good idea, because if Flickr goes down, I don’t want my performance to depend on Flickr. So any third party content, I think you should be conditionally loading. Particularly those widgets that say “Like” or “Tweet this” or “+1”. They make you look needy and they could potentially affect the performance of your site. Because most of them, the way they work is they say, “Oh just insert this script element into the middle of your document.” A blocking script element from a third party into the middle of your document! And you think, well, you know, those servers are really robust, they’re not going to go down. And then somebody in China tries to look at your website, but you’ve got a script element from Twitter, which is blocked in China, and your page never finishes loading.
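If you must have those widgets, the safer pattern is to inject their script after your own page has loaded, and asynchronously; a general sketch with an illustrative URL:

    // Don't let a third-party script block your content.
    window.addEventListener('load', function () {
      var script = document.createElement('script');
      script.src = 'https://widgets.example.com/share.js'; // illustrative third-party URL
      script.async = true;
      document.body.appendChild(script);
    });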

There’s a bunch of little performance things I won’t go into, because it’s kinda like Andy was talking about, it’s not really about technology, it’s about the mindset.

But this is the interesting thing; when I was coming to redesign The Session, obviously I’m thinking in large timescales because it’s been online for so long, so I’m thinking actually in terms of decades. It’s already been up for over ten years. I’m thinking about, how will it work for the next ten years? And partly that involves placing some bets on technologies; what’s still going to work?

It’s also kind of deciding what approaches are going to stand the test of time, and progressive enhancement is absolutely an approach that has stood the test of time. Throughout the years, it doesn’t matter what new technologies come along; progressive enhancement just keeps working as a philosophy, I guess. So when Ajax came along, it made total sense not to rely on Ajax, but to use Ajax with progressive enhancement. Same with all the great technologies that have come along in the meantime.

So I’m thinking long term, and I think it’s important that we do that. Mandy Brown said, “We should not measure our work in months or years, but in decades.” I think there’s something to that.

And the other timescale that interests me—apart from decades and the really long term; I feel like I’m part of a longer tradition that stretches back literally hundreds of years—the other timescale is microseconds, which is the performance stuff. I’m really, really interested in the long term stuff and I’m really, really interested in the very, very short term stuff as in, “How fast can I make this site?”, because performance is so, so important, for all the reasons that Tim pointed out.

But the timescales in between, I kinda don’t care about as much. And yet those are the timescales we tend to focus on in our work. Weeks and months. Deadlines: when will this project be finished? It’s usually weeks, it’s usually months. And I think we do need to think a bit more long term, and not be so focused on what’s in right now. Even though I know you’re all itching to get out so you can see the keynote to find out what’s the new thing from Apple? What’s the new shiny?

I would like you to evaluate technologies for their long-term effects, not just the short-term ones. It’s a weird thing: the best way to be future friendly is to be backwards compatible. We’re kinda lucky that we do use technologies that have stood the test of time, like HTML.

Håkon Wium Lie was at CERN when the web was first being invented, and he placed a bet that HTML would still be around in fifty years, which is a crazy idea, because no file format lasts that long. But now, I think it’s a pretty safe bet. HTML has evolved, and that’s really important. If it were to stand still, it would atrophy, like any tradition, like the Irish music tradition. But it evolves and it gets better and it goes forward, and those websites I showed you that are fifteen years old still work today in a modern browser. More importantly, you could find a web browser from fifteen years ago and visit The Session with it today, and it would still work. It’s not going to look the same. It’s not going to behave exactly the same, but the core content, the core tasks, are available. That’s pretty amazing, right? And it’s not by accident that HTML is so robust.

In fact, Ian Hickson, who’s the editor of the HTML spec at the WHATWG, he said this about it; this is in an email to a mailing list from years ago. He said:

The original reason I got involved in this work is that I realised the human race has written literally billions of electronic documents but without ever actually saying how they should be processed. I decided that for the sake of our future generations, we should document exactly how to process today’s documents so that when they look back, they can still re-implement HTML browsers and get our data back.

I think it’s really important. We’ve already seen how easily we lose stuff. Elliot couldn’t find a lot of the stuff he wanted to show us; it was just gone, it’s just not out there.

Here’s something from Owen Briggs back in 2001. Does anybody remember a site called The Noodle Incident? Okay, one or two.

Now he was talking about HTML. He was actually writing a post about validation, why it’s important that your code validates, and he said—he was talking about HTML and how it’s changed, but it’s amazing:

The code has to expand its capabilities as we do, yet never in a way that blocks out our earlier documents. Everything we put down now will continue to be readable, as long as it was recorded using mark-up valid in its time.

That’s amazing. Already, we can’t read CD-ROMs. They’re outdated. The earliest iPhone apps are already becoming pretty much pointless; they’re kind of like the CD-ROMs of today. These isolated islands that can’t link to each other, that can’t be part of this broader thing, this amazing web, the ocean of the web.

We have all these new technologies coming along, like, every day: CoffeeScript, Yeoman, Grunt, Sass, Less, Git, Node, Backbone, Ember. Do you ever feel like, “I can’t keep up with all these technologies”? I feel that way all the time, and I think it’s okay. Obviously we should use these technologies, but we should evaluate them not just for how they help us today, but for how they’re going to help us in the future.

You see that building out there? The Münster, Freiburger Münster? Beautiful building; an amazing achievement. It’s made of sandstone mostly, which isn’t the most durable material. That’s why there’s always scaffolding on the Münster. There’s always some part of it being kept up. It’s kinda like the web. It’s this beautiful thing in aggregate. The sheer scale of it is amazing, but it actually requires upkeep to keep it going. If it weren’t for the Münsterbauhütte, it would just start to fall apart.

They’re thinking in generations, keeping this thing going, and I think we need to think about that with the web. And we don’t. In fact, quite the opposite. With the web, we’re constantly hearing “the internet never forgets.” Quatsch! Where’s the data to support that statement? We hear it all the time; we believe it all the time. “Oh, be careful what you put online because you know the internet never forgets.” “Oh that’s right yeah. Google never forgets. Facebook never forgets.” Bollocks!

If you’re not careful, you will lose that data. Don’t rely on somebody else to keep it, right? We need to take care of what we put online. We need to take care of our URLs, make sure that the cool URLs don’t change.

I’ll finish with this; we could take some questions if there’s time. This thing about the Münster reminded me of a story, of a medieval traveller coming across a construction site where there are builders with stone blocks, and the traveller says to the first builder, “What are you doing?” He says, “I’m piling blocks of stone one on top of the other.” And he says to the second builder, “What are you doing?” He says, “I’m building a wall.” And he says to the third builder, “What are you doing?” He says, “I’m building a cathedral.” Now, they’re all doing the same thing, but they’re viewing it in different ways. They’re viewing it through a different timescale, and I wish we’d do that more with the web. We’re so lucky that we have these standards to work with, and standards are important.

Remember I said I worked in a bakery? I ate pretzels, Brötchen, bread, all that good stuff. There have been bakers in Freiburg for hundreds of years. There’s been a market outside the Münster for hundreds of years. If you go over to the Münster, and I encourage you to do this when we’re finished here, go round to the front entrance, and you’ll see these shapes scrawled in there. They’re standards. They’re standards of measurement for blocks of stone, but there are also standards like this. These are bread standards. So at the market, in this particular year, this was the size of a loaf of bread. This was the size of a Brötchen, or a long bread. Standards evolve; they change with time, but they’re what allow us as a society to keep going, and I think we’re pretty lucky on the web.

So have a think about what your work is and how you want to view it, and whether it’s important. Elliot talked about being defined potentially by your personal projects, and I hope that’s true. I can’t think of any of the work I’ve done at Clearleft for clients or anything like that, that I’d like to define me, but The Session, this community website that people have contributed to, that now contains ten times as many tunes as Francis O’Neill collected …I’m incredibly proud of that, despite being very ashamed of the site for a long time. And I hope that that defines me.

Thank you.

Bread standards

Saturday, October 26th, 2013

Medieval times

I just got back from Nürnberg where I gave the closing talk at the cheap’n’cheerful border:none event. It was my first time in Nürnberg and I wish I could’ve stayed longer in such a beautiful place. I would’ve liked to stick around for today’s Open Device Lab admin meetup, but alas I had to get up at the crack of dawn to start making my way back to Brighton.

I was in Germany last month too. That time I was in Freiburg, where I was giving the closing talk at Smashing Conference. That was a lot of fun:

So I threw away my slidedeck and went Keynote commando.

The video from that slideless talk is up on Vimeo now for your viewing and/or downloading pleasure.

If you watch it through to the end, then you’ll know why I could be found immediately afterwards showing people some centuries-old carvings on Freiburg’s cathedral.

Jeremy playing tour guide
Bread standards

Update: I’ve published a transcript of the talk.

Thursday, September 12th, 2013

Smashing

It was a crazy time in Brighton last week: Reasons To Be Creative followed by Improving Reality followed by dConstruct followed by Maker Faire and Indie Web Camp. After getting some hacking done, I had to duck out of Indie Web Camp before the demos so that I could hop on a plane to Germany for Smashing Conference—the geek party was relocating from Brighton to Freiburg.

I was there to deliver the closing keynote and I had planned to reprise a talk that I had already given once or twice. But then Vitaly opened up proceedings by declaring that the event should be full of stories …and not just stories of success either; stories of failure. Then Elliot opened the show by showing some of his embarrassing early Flash websites. I decided that, in the spirit of Vitaly’s entreaty, I would try something similar. After all, I didn’t have anything quite as embarrassing as Atomic Kitten or Hilary Duff e-cards in my closet.

So I threw away my slidedeck and went Keynote commando. My laptop was connected to the projector but I only used it to bring up a browser to show embarrassing old sites like the first version of adactio.com complete with frames, tables for layout, and gratuitous DHTML animation. But I spent most of the time just talking, telling the story of how I first started making websites back when I used to live in Freiburg, and describing the evolution of The Session—a long-term project that’s given me a lot of perspective on how we often approach our work from too short a timescale.

It was fun. It was nice to be able to ditch the safety net of slides and talk off-the-cuff to a group of fellow geeks in the intimate surroundings of Freiburg’s medieval merchant’s hall.

Preparing to speak
Leaving Smashingconf

I finished by encouraging people to look out the window of the merchant’s hall across to the splendid cathedral. The Freiburger Münster is a beautiful, magnificent creation …just like the web. But it’s made of sandstone and so it requires constant upkeep …just like the web. The Münsterbauverein are responsible for repairing and maintaining the building. They can only ever work on small parts at a time, but the overall result—over many generations—is a monument that’s protected for the future.

I hope that when we work on the web, we are also contributing to a magnificent treasure for the future.

Münster

Friday, December 7th, 2012

The Smashing Conference: Exclusive Videos And Interviews | Smashing Magazine

Here’s an interview I did during the Smashing conference in Freiburg.

Friday, November 30th, 2012

The Spirit Of The Web on Vimeo

This is my opening talk from Smashing conference a few months back in Freiburg, where I used to live.

Friday, September 21st, 2012

14islands: Smashing Conference take-aways

A nice round-up of some of the themes that emerged at Smashing Conference. As with An Event Apart, there was a definite focus on process.

Return to Freiburg

I was in southern Germany this week to speak at the inaugural Smashing Conference. It was a really good event, packed with in-depth talks and workshops for web developers. Its practical nature contrasted nicely with the more inspirational value of dConstruct. I always say it’s good to have a balanced conference diet: too much code and I start itching for big-picture thinking; too much big-picture thinking and I start jonesing for some code.

That said, I have to admit that I missed out on quite a few of the talks. That’s because I was outside exploring Freiburg. Or should I say, I was outside rediscovering Freiburg.

I used to live there. I lived there for about six years, all told, during the ’90s. That’s where I met Jessica.

To start with, I was playing music on the streets of Freiburg. Later, I got a job in a bakery, selling bread, pretzels and all manner of excellent baked goods. Meanwhile, I was playing in a band (two bands actually: for a while I was the bassist in Leopold Kraus, the finest surf band in the Black Forest). At some point, the band decided we needed a website. I said I’d give it a go. That’s when this whole web thing started for me. I started freelancing on the side. Before too long, I was able to give up the bread-selling day job.

But after six years, Jessica and I decided that we were done with Freiburg. We moved to Brighton, where we’ve lived for twelve years now.

So it was with some excitement and a certain amount of nervous anticipation that we returned to Freiburg for the Smashing Conference. What would Freiburg be like now? Would it feel weird to be back there?

Well, Jessica has written all about what it was like to go back. I highly recommend that you read what she’s written because she puts it far better than I ever could.

Jessica has been publishing online at wordridden.com since we lived in Germany. Reading back through her posts from way back then about life in Freiburg makes me wish that I had started writing on adactio.com sooner. I don’t have much evidence of my time there: a box of cassettes (cassettes!) that the band recorded; a handful of photographs.

On this trip, I took quite a few photographs. In three days, I recorded an order of magnitude more data than I had done in six years of living in Freiburg.

Monday, September 17th, 2012

The Spirit of the Web – Jeremy Keith at Smashing Conference | Brad Frost Web

Brad’s notes from my opening talk at the Smashing Conference in Freiburg.