Tags: culture

The World-Wide Work

I’ve been to a lot of events and I’ve seen a lot of talks. I find that, even after all this time, I always get something out of every presentation I see. Kudos to anyone who’s got the guts to get up on stage and share their thoughts.

But there are some talks that are genuinely special. When they come along, it’s a real privilege to be in the room. Wilson’s talk, When We Build, was one of those moments. There are some others that weren’t recorded, but will always stay with me.

Earlier this year, I had the great honour of opening the New Adventures conference in Nottingham. I definitely felt a lot of pressure, and I did my utmost to set the scene for the day. The final talk of the day was delivered by my good friend Ethan. He took it to another level.

Like I said at the time:

Look, I could gush over how good Ethan’s talk was, or try to summarise it, but there’s really no point. I’ll just say that I felt the same sense of being present at something genuinely important that I felt when I was in the room for his original responsive web design talk at An Event Apart back in 2010. When the video is released, you really must watch it.

Well, the video has been released and you really must watch it. Don’t multitask. Don’t fast forward. Set aside some time and space, and then take it all in.

The subject matter, the narrative structure, the delivery, and the message come together in a unique way.

If, having watched the presentation, you want to dive deeper into any of Ethan’s references, check out the reading list that accompanies the talk.

I mentioned that I felt under pressure to deliver a good opener for New Adventures. I know that Ethan was really feeling the pressure too. He needn’t have worried. He delivered one of the best conference talks I’ve ever seen.

Thank you, Ethan.

Design perception

Last week I wrote a post called Dev perception:

I have a suspicion that there’s a silent majority of developers who are working with “boring” technologies on “boring” products in “boring” industries …you know, healthcare, government, education, and other facets of everyday life that any other industry would value more highly than Uber for dogs.

The sentiment I expressed resonated with a lot of people. Like, a lot of people.

I was talking specifically about web development and technology choices, but I think the broader point applies to other disciplines too.

Last month I had the great pleasure of moderating two panels on design leadership at an event in London (I love moderating panels, and I think I’m pretty darn good at it too). I noticed that the panels comprised representatives from two different kinds of companies.

There were the digital-first companies like Spotify, Deliveroo, and Bulb—companies forged in the fires of start-up culture. Then there were the older companies that had to make the move to digital (transform, if you will). I decided to get a show of hands from the audience to see which kind of company most people were from. The overwhelming majority of attendees were from more old-school companies.

Just as most of the ink spilled in the web development world goes towards the newest frameworks and toolchains, I feel like the majority of coverage in the design world is spent on the latest outputs from digital-first companies like AirBnB, Uber, Slack, etc.

The end result is the same. A typical developer or designer is left feeling that they—and their company—are behind the curve. It’s like they’re only seeing the Instagram version of their industry, all airbrushed and filtered, and they’re comparing that to their day-to-day work. That can’t be healthy.

Personally, I’d love to hear stories from the trenches of more representative, traditional companies. I also think that would help get an important message to people working in similar companies:

You are not alone!

Unsolved Problems by Beth Dean

An Event Apart in Seattle continues. It’s the afternoon of day two and Beth Dean is here to give a talk called Unsolved Problems:

Technology products are being adopted faster than ever. We’ve spent a lot of time adopting new technology, but not as much time considering the social impact of doing so. This talk looks at large-scale system designs in the offline world, and takes lessons from them for our online work. You’ll learn how to expand your design approach from self-contained products, to considering the broader systems in which they exist.

Fun fact: An Event Apart was the first conference that Beth attended over ten years ago.

Who recognises this guy on screen? It’s Robert Stack, the creepy host of Unsolved Mysteries. It was kind of like the X-Files. The X-Files taught Beth to be a sceptic. Imagine Beth’s surprise when her job at Facebook led her to actual conspiracies. It’s been a hard year, what with Cambridge Analytica and all.

Beth’s team is focused on how people experience ads, while the whole rest of the company is focused on ads from the opposite end. She’s the Fox Mulder of the company.

Technology today has incredible reach. In recent years, we’ve seen 1:1 harm. That’s when a product negatively affects someone directly. In their book, Eric and Sara point out that Facebook is often the first company to solve these problems.

1:many harm is another use of technology. Designing in isolation isn’t new to tech. We’ve seen 1:many harm in urban planning. Brasilia is a beautiful city that nobody wants to live in. You need messy, mixed-use spaces, not a space designed for cars. Niemeyer planned for efficiency, not reality.

Eichler buildings were supposed to be egalitarian. But everything that makes these single-story homes great places to live also makes them great targets for criminals. Isolation by intentional design leads to a less safe place to live.

One of Frank Gehry’s buildings turned into a deathtrap when it was covered with snow. And in summer, the reflective material makes it impossible to sit on one side of it. His Facebook office building has some “interesting” restroom allocation, which was planned last.

Ohio had a deer overpopulation problem. So the solution they settled on was to introduce coyotes. Now there’s a coyote problem. When coyotes breed with stray dogs, they start to get aggressive and they hunt in packs. This is the cobra effect: when the solution to your problem makes the problem worse. The British government offered a bounty for cobras in India. So people bred snakes for the bounty. So they got rid of the bounty …and then all those snakes were released into the wild.

So-called “ride sharing” apps are about getting one person from point A to point B. They’re not about making getting around easier in general.

Google traffic directions don’t factor in the effect of Google giving everyone the same traffic directions.

AirBnB drives up rent …even though it started out as a way to help people who couldn’t make rent. Sounds like cobra farming.

Automating Inequality by Virginia Eubanks is an excellent book about being dropped by health insurance. An algorithm did it. By taking broken systems and automating them, we accelerate disenfranchisement.

Then there’s Facebook. Psychological warfare is not new. Radio and television have influenced elections long before the internet. Politicians changed their language to fit the medium of radio.

The internet has removed all friction that helps us behave cooperatively. Removing friction was once our goal, but it turns out that friction is sometimes useful. The internet has turned into an outrage machine.

Solving problems in the isolation of our own products ignores the broader context of society.

The Waze map reflects cities as they are, not the way someone wishes them to be.

—Noam Bardin, CEO of Waze

From bulletin boards to today’s web, the internet has always been toxic because human nature is toxic. Maybe that’s the bigger problem to solve.

We can look to other industries…

Ideo redesigned the hospital experience. People were introduced to their entire care staff on their first visit. Sloan Kettering took a similar approach. Artwork serves as wayfinding. Every room has its own bathroom. A Chicago hospital included gardens because they improve recovery.

These hospital examples all:

  • Designed for an intended outcome.
  • Met people where they were.
  • Strengthened existing support networks.

We’ve seen some bad examples from urban planning, but there are success stories too.

“A person on a $30 bicycle is as important as someone in a $30,000 car,” said Enrique Peñalosa.

Copenhagen once faced awful traffic congestion. Now people cycle everywhere. It’s the fastest way to get around. The city is designed for bicycles first. People rode more when it felt safer. It’s no coincidence that Copenhagen ranks as one of the most livable cities in the world.

Scandinavian prisons use a concept called restorative justice. The staff plays badminton with the inmates. They cook together. Treat people like dirt and they will act like dirt. Treat people like people and they will act like people. Recidivism rates in Norway are now very low.

  • Design for dignity and cooperation.
  • Solve for everyone in a system.
  • Policy should reflect intended outcomes.

The de Havilland Comet was made of metal. After a few blew apart at the seams, manufacturers moved away from riveted construction. Airlines today develop a culture of crew resource management that encourages people to speak up.

  • Plan for every point of failure.
  • Empower everyone on a team to solve problems.
  • Adapt.

What can we do?

  • Policies affect design. We need to work more closely with policy makers.
  • Question access. Are all opinions equal? Where are computers making decisions that should involve people?
  • Forget neutrality. Technology is not neutral. Neutrality allows us to abdicate responsibility.
  • Stay a little bit paranoid. Think about what the worst case scenario might be.

Make people better curators. How might we allow people to assess the veracity of information for themselves? What if we gave people better tools to affect their overall experience, not just small customisations?

We can use what we know about people to bring out their best behaviours. We can empower people to take action instead of just outrage.

What if we designed for the good of the community instead of the success of individuals? Like the Vauban in Freiburg! It was squatted, and the city gave control to the squatters to create an eco neighbourhood with affordable housing.

We need to think about what kind of worlds we want to create. What if we made the web less like a mall and more like a public park?

These are hard problems. But we solve hard technology problems every day. We could be the first generation of builders to solve technology’s hard problems.

Audio I listened to in 2018

I wrapped up last year with a list of some of the best audio I listened to in 2017. This year I huffduffed about 260 pieces of audio, so I could do a similar end-of-year list for 2018. But I thought I’d do something a little different this time.

It seems like podcasting is going from strength to strength with each passing year. Some friends of mine started new podcasts in 2018. Matt started Hobby Horse, where he talks to people about their tangential obsessions. Meanwhile Khoi started Wireframe, a jolly good podcast about design.

Apart from the trend of everyone having their own podcast these days, there’s also been a trend for quite short and manageable “seasons” of podcasts. See, for example, Horizon Line by Atlas Obscura, which is just four episodes long. Given the cherry-picking nature of my usual audio consumption (the very reason I made Huffduffer in the first place), this trend suits me quite well. There have been a few podcast runs in 2018 that I can recommend in their entirety.

The Secret History Of The Future is a collaboration between Seth Stevenson and Tom Standage, one of my favourite non-fiction authors. They look at modern technology stories through the lens of the past, much like Standage has done in books like The Victorian Internet. There are annoying sponsor blurbs to skip past, but apart from that, it’s a top-notch podcast.

I discovered Settling The Score this year. It’s a podcast all about film scores. The two hosts have spent the year counting down the top 25 scores in the American Film Institute’s list of (supposedly) greatest scores in American cinema history. It’s a pleasure to listen to them take a deep dive into each film and its score, analysing what works and what doesn’t. It will also make you want to rewatch the movie in question.

By far my favourite podcast listening experience this year was with Stephen Fry’s Great Leap Years. It’s just six episodes long, but it manages to tell the sweep of human history and technology in an entertaining and fascinating way. I’ll admit I’m biased because it dwells on many of my hobby horses: the printing press, the telegraph, Claude Shannon and information theory. There are no annoying sponsorship interruptions, and best of all, you’ve got the wonderful voice of Stephen Fry in your earholes the whole time. Highly recommended!

So there you have it: three podcasts from 2018 that are worth subscribing to in their entirety.

Browsers

Microsoft’s Edge browser is going to switch its rendering engine over to Chromium.

I am deflated and disappointed.

There’s just no sugar-coating this. I’m sure the decision makes sound business sense for Microsoft, but it’s not good for the health of the web.

Very soon, the vast majority of browsers will have an engine that’s either Blink or its cousin, WebKit. That may seem like good news for developers when it comes to testing, but trust me, it’s a sucky situation for innovation and agreement. Instead of a diverse browser ecosystem, we’re going to end up with incest and inbreeding.

There’s one shining exception though. Firefox. That browser was originally created to combat the seemingly unstoppable monopolistic power of Internet Explorer. Now that Microsoft are no longer in the rendering engine game, Firefox is once again the only thing standing in the way of a complete monopoly.

I’ve been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it’s just about personal choice—it feels part of something bigger; it’s about the shape of the web we want.

Jeffrey wrote that browser diversity starts with us:

The health of Firefox is critical now that Chromium will be the web’s de facto rendering engine.

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Andy Bell also writes about browser diversity:

I’ll say it bluntly: we must support Firefox. We can’t, as a community, allow this browser engine monopoly. We must use Firefox as our main dev browsers; we must encourage our friends and families to use it, too.

Yes, it’s not perfect, nor are Mozilla, but we can help them to develop and grow by using Firefox and reporting issues that we find. If we just use and build for Chromium, which is looking likely (cough Internet Explorer monopoly cough), then Firefox will fall away and we will then have just one major engine left. I don’t ever want to see that.

Uncle Dave says:

If the idea of a Google-driven Web is of concern to you, then I’d encourage you to use Firefox. And don’t be a passive consumer; blog, tweet, and speak about its killer features. I’ll start: Firefox’s CSS Grid, Flexbox, and Variable Font tools are the best in the business.

Mozilla themselves came out all guns blazing when they said Goodbye, EdgeHTML:

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

Tim describes the situation as risking a homogeneous web:

I don’t think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It’s one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it’s critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don’t benefit anyone.

Andre Alves Garzia writes that while we Blink, we lose the web:

Losing engines is like losing languages. People may wish that everyone spoke the same language, they may claim it leads to easier understanding, but what people fail to consider is that this leads to losing all the culture and way of thought that that language produced. If you are a Web developer smiling and happy that Microsoft might be adopting Chrome, and this will make your work easier because it will be one less browser to test, don’t be! You’re trading convenience for diversity.

I like that analogy with language death. If you prefer biological analogies, it’s worth revisiting this fantastic post by Rachel back in August—before any of us knew about Microsoft’s decision—all about the ecological impact of browser diversity:

Let me be clear: an Internet that runs only on Chrome’s engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

That post is a great history lesson, documenting how things can change, and how decisions can have far-reaching unintended consequences.

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

Document

A little while back, I showed Paul what I was working on with The Gęsiówka Story. I value his opinion and I really like the Bradshaw’s Guide project that he’s been working on. We’re both in complete agreement with Russell Davies’ call for an internet of unmonetisable enthusiasms. Call them side projects if you like, but for me, these are the things that the World Wide Web excels at.

These unmonetisable enthusiasms/side projects are what got me hooked on the web in the first place. Fray.com—back when it was a website for personal stories—was what really made the web click for me. I had seen brochure sites, I had seen e-commerce sites, but it was seeing something built purely for the love of it that caused that lightbulb moment for me.

I told Paul about another site I remembered from that time (we’re talking about the mid-to-late nineties here). It was called Private Art. It was the work of one family, the children of Private Art Pranger, who served in World War Two and wrote letters from the front. Without any expectations, I did a quick search, and amazingly, the site is still up!

Yes, it’s got tiled background images, and the framesetted content is in a pop-up window, but it works. The site hasn’t been updated for fifteen years but it works perfectly in a web browser today. That’s kind of amazing. We really shouldn’t take the longevity of our materials for granted. Could you imagine trying to open a word processing document from the late nineties on your computer today? You’d have a bad time.

Working on The Gęsiówka Story helped to remind me of some of the things that made me fall in love with the web in the first place. What I wrote about it is equally true of Private Art:

When we talk about documents on the web, we usually use the word “document” as a noun. But working on The Gęsiówka Story, I came to think of the word “document” as a verb.

The World Wide Web is a medium that works for quick, short-term, lightweight bits of fun and also for long-term, deeper, slower, thoughtful archives of our collective culture.

The web is a many-splendoured thing.

Choosing tools for scaling design

Tools and processes are intertwined. A company or a department or an individual has a way of doing things—that’s the process. They also have software to carry out the process—those are the tools.

Ideally, they should be loosely coupled. You should be able to change your tools without necessarily changing your process. So swapping out, say, one framework or library for another shouldn’t involve fundamentally changing the way you work. Likewise, trying a new way of working shouldn’t require you to use unfamiliar tools.

When it comes to scaling design within organisations, the challenges are almost always around switching processes (well, really it’s about trying to change culture, but that starts with changing processes—any sufficiently advanced process is indistinguishable from culture). All too often, though, I see people getting hung up on the tools.

We need to get more efficient in how we deliver designs …so let’s switch over to this particular design tool.

We should have a design system …so let’s get everyone using this particular JavaScript framework.

I understand this desire to shortcut the work of figuring out processes and jump straight to production solutions. For one thing, it allows you to create an easy list of requirements when it comes to recruiting talent: “Join our company—you must demonstrate experience and proficiency in this tool or that library.”

But when tools and processes become tightly coupled like this, there’s a real danger of stagnation. If a process can be defined as “the way we do things around here”, that’s not something you want to tie to any particular tool or technology. Otherwise, before you know it, you’re in the frustrating situation of using outdated tools, but you can’t swap them out for newer or better-suited technologies without disrupting everyone’s work.

This is technical debt (although it applies just as much to design). You’re paying a penalty in the present because of a decision that somebody made in the past. The problem isn’t so much with the decision itself, but with the longevity of its effects.

I think it’s important to remember what a tool is: it’s a piece of technology that enables you to work faster or better. You should enjoy using your tools, but you shouldn’t be utterly dependent on any particular one. Otherwise, the tail starts wagging the dog—you are now in service to the tool, instead of the other way around.

Treat your tools like cattle, not pets. Don’t get too attached to any one technology to the detriment of missing out on others.

Mind you, if you constantly tried every single new tool or technology out there, you’d never settle on anything—I’m pretty sure that three new JavaScript frameworks have been released since you started reading this paragraph.

The tools you choose at any particular time should be suited to what you’re trying to accomplish at that time. In other words, you’ve got to figure out what you’re trying to accomplish first (the vision), then figure out how you’re going to accomplish it (the process), and only then figure out which tools are the best fit. If you jump straight to choosing tools, you could end up trying to tighten a screw with a hammer.

Alas, I’ve seen plenty of consultants who conflate strategy with tooling. They’re brought in to solve process problems and, surprise, surprise, the solution always seems to involve purchasing the software that their company sells. I’ve been guilty of this myself: I see an organisation struggling to systemise their design patterns, and I think “Oh, they should use Fractal!” …but that’s jumping the gun. They might be better served with something simpler, or something more complex (I mean, Fractal is very, very flexible but it’s still just one option—there are plenty of other pattern library tools out there).

Once you separate out the tools from the process, there’s an added benefit. Making the right technology choice is no longer a life-or-death decision. You can suck it and see. Try out the technology and see if it works. If it’s working, great! Carry on using it. If it’s not working, that’s okay too. Try something different.

I realise I’m oversimplifying things, but I honestly believe that the real challenge is not choosing the right tools, but figuring out the right process for your team.

Process and culture

Cameron has a bone to pick. Why, oh, why, he wonders, are we so quick to create processes when what we really need is a good strong culture?

Strong culture = less process

To stop people breaking stuff: make a process for it. Want to make people act responsibly? Make a process for it. Tired of telling people about something? Make a process for it.

For any single scenario you can name it’ll be easier to create a process for it than build a culture that handles it automatically. But each process is a tiny cut away from the freedom that you want your team to enjoy.

I take his point, but I also think that some processes are not only inevitable, but downright positive. There should be a process for handling payroll. There should be a process for handling promotions. Leaving that to culture might sound nice and nimble, but it could also lead to unintentional bias and unfairness.

But let’s leave those kind of operational processes aside and focus on process and culture when it comes to design and engineering. Cameron’s point is well taken here. Surely you want people to just know the way things are done? Surely you want people to just get on with doing the work without putting hurdles in their way?

On the face of it, yes. If you’re trying to scale design at your organisation, then every extra bit of process is going to slow down your progress.

But what if speed isn’t the most important metric of success when it comes to scaling design? You’ve got to make sure you’re scaling the right things.

Mark writes:

This is a post in defence of process. Yes, I know what you’re thinking: ‘urgh, process is a thing put in place to make up for mediocre teams’; or ‘prioritise discussion over documentation’; or ‘I get enough red tape in other parts of my life’.

The example he gives is undeniably a process that will slow things down …deliberately.

Whenever someone asks me to do something that I think seems ill-conceived in some way, I ask them to write it down. That’s it. Because writing is high effort. Making sentences is the easy bit, it’s the thinking I want them to do. By considering their request it slows them down. Maybe 30% of the time or something, they come back and say ‘oh, that thing I asked you to do, I’ve had a think and it’s fine, we don’t need to do it’.

I’ve seen this same tactic employed in standards bodies. Somebody bursts into a group and says “I’ve got a great idea—we should make this a thing!” The response, no matter what the idea is, is to say “Document use-cases.” It’s a stumbling block, and also a bit of a test—if they do come back with use-cases, the idea can be taken seriously; the initial enthusiasm needs to be backed up with hard graft.

(On a personal level, I sometimes use a little trick when it comes to email. If someone sends me a short email that would require a long response from me, I’ll quickly fire back a clarifying question: “Quick question: did you mean X or Y?” Now the ball is back in their court. If they respond swiftly with an answer to my question, then they’ve demonstrated their commitment and I honour their initial request.)

Anyway, it sounds like Cameron is saying that process is bad, and Mark is saying process can be good. Cody Cowan from Postlight thinks they’re both right:

To put it bluntly: people, not process, are the problem.

Even so, he acknowledges Cameron’s concern:

One of the biggest fears that people have about process is that something new is going to disrupt their work, only to be replaced by yet another rule or technique.

I think we can all agree that pointlessly cumbersome processes are bad. The disagreement is about whether all processes are inherently bad, or whether some processes are not only necessary, but sometimes even beneficial.

When Cameron talks about the importance of company culture, he knows whereof he speaks. He’s been part of Canva’s journey from a handful of people to hundreds of people. They’ve managed to scale their (excellent) culture along the way. That’s quite an achievement—scaling culture is really, really challenging. Scaling design is hard. Scaling culture is even harder.

But you know what’s even more challenging than scaling culture? Changing culture.

What if your company didn’t start with a great culture to begin with? What if you’re not Canva? What if you’re not AirBnB? What are your options then?

You can’t create a time travel machine to go back to the founding of the company and ensure a good culture from the outset.

You can’t shut down your existing company and create a new company from scratch, this time with a better culture.

You’ve got to work with what you’ve got. That doesn’t mean you can’t change your company culture, but it’s not going to be easy. Culture is pretty far down the stack of pace layers—it’s slow to change. But you can influence culture by changing something that’s less slow to change. I would argue the perfect medium for this is …process.

Once you know what values you’re trying to embed into your culture, create processes that amplify and reward those values. I totally understand the worry that these processes will reduce autonomy and freedom, but I think that only applies if the company already has a strong culture of autonomy and freedom. If you’re trying to create a culture of autonomy and freedom, then—as counter-intuitive as it may seem—you can start by putting processes in place.

Then, over time, those processes can seep into the day-to-day understanding of how things are done. Process dissolves into culture. It’s a long game to play, but as Cameron points out, that’s the nature of culture change:

Where culture pays off is in the long run. It’s hard work: defining the culture, hiring for the culture and communicating the culture again, and again, and again. But if you want to make a company where people are empowered, passionate, and champions of your organisation then it’s the only path forward.

Audio I listened to in 2017

I huffduffed 290 pieces of audio in 2017. I’ve still got a bit of a backlog of items I haven’t listened to yet, but I thought I’d share some of my favourite items from the past year. Here are twelve pieces of audio, one for each month of 2017…

Donald Hoffman’s TED talk, Do we see reality as it really is? TED talks are supposed to blow your mind, right? (22:15)

How to Become Batman on Invisibilia. Alix Spiegel and Lulu Miller challenge you to think of blindness as social construct. Hear ‘em out. (58:02)

Where to find what’s disappeared online, and a whole lot more: the Internet Archive on Public Radio International. I just love hearing Brewster Kahle’s enthusiasm and excitement. (42:43)

Every Tuesday At Nine on Irish Music Stories. I’ve been really enjoying Shannon Heaton’s podcast this year. This one digs into that certain something that happens at an Irish music session. (40:50)

Adam Buxton talks to Brian Eno (part two is here). A fun and interesting chat about Brian Eno’s life and work. (53:10 and 46:35)

Nick Cave and Warren Ellis on Kreative Kontrol. This was far more revealing than I expected: genuine and unpretentious. (57:07)

Paul Lloyd at Patterns Day. All the talks at Patterns Day were brilliant. Paul’s really stuck with me. (28:21)

James Gleick on Time Travel at The Long Now. There were so many great talks from The Long Now’s seminars on long-term thinking. Nicky Case and Jennifer Pahlka were standouts too. (1:20:31)

Long Distance on Reply All. It all starts with a simple phone call. (47:27)

The King of Tears on Revisionist History. Malcolm Gladwell’s style suits podcasting very well. I liked this episode about country songwriter Bobby Braddock. Related: Jon’s Troika episode on tearjerkers. (42:14)

Feet on the Ground, Eyes on the Stars: The True Story of a Real Rocket Man with G.A. “Jim” Ogle. This was easily my favourite podcast episode of 2017. It’s on the User Defenders podcast but it’s not about UX. Instead, host Jason Ogle interviews his father, a rocket scientist who worked on everything from Apollo to every space shuttle mission. His story is fascinating. (2:38:21)

R.E.M. on Song Exploder. Breaking down the song Try Not To Breathe from Automatic For The People. (16:15)

I’ve gone back and added the tag “2017roundup” to each of these items. So if you’d like to subscribe to a podcast of just these episodes, you can use that tag’s feed on Huffduffer.

Nosediving

Nosedive is the first episode of season three of Black Mirror.

It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.

Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).

“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.

The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.

Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.

Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:

It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.

Lenders need not reveal their credit score threshold, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.

Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.

How about this for a nightmare scenario:

Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.

“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”

Putting on a conference

It’s been a few weeks now since Patterns Day and I’m still buzzing from it. I might be biased, but I think it was a great success all ‘round—for attendees, for speakers, and for us at Clearleft organising the event.

I first had the idea for Patterns Day quite a while back. To turn the idea into reality meant running some numbers. Patterns Day wouldn’t have been possible without Alis. She did all the logistical work—the hard stuff—which freed me up to concentrate on the line-up. I started to think about who I could invite to speak, and at the same time, started looking for a venue.

I knew from the start that I wanted it to be one-day single-track conference in Brighton, much like Responsive Day Out. I knew I wouldn’t be able to use the Corn Exchange again—there’s extensive rebuilding going on there this year. I put together a shortlist of Brighton venues and Alis investigated their capacities and costs, but to be honest, I knew that I wanted to have it in the Duke Of York’s. I love that place, and I knew from attending FFconf that it makes for an excellent conference venue.

The seating capacity of the Duke Of York’s is quite a bit less than the Corn Exchange, so I knew the ticket price would have to be higher than that of Responsive Day Out. The Duke Of York’s isn’t cheap to rent for the day either (but worth every penny).

To calculate the ticket price, I had to figure out the overall costs:

  • Venue hire,
  • A/V hire,
  • Printing costs (for name badges, or in this case, stickers),
  • Payment provider commission—we use Stripe through the excellent Ti.to,
  • Speakers’ travel,
  • Speakers’ accommodation,
  • Speakers’ dinner the evening before the event,
  • Speakers’ payment.

Some conference organisers think they can skimp on that last part. Those conference organisers are wrong. A conference is nothing without its speakers. They are literally the reason why people buy tickets.

Because the speakers make or break a conference, there’s a real temptation to play it safe and only book people who are veterans. But then you’re missing out on a chance to boost someone when they’re just starting out with public speaking. I remember taking a chance on Alla a few years back for Responsive Day Out 3—she had never given a conference talk before. She, of course, gave a superb talk. Now she’s speaking at events all over the world, and I have to admit, it gives me a warm glow inside. When it came time for Patterns Day, Alla had migrated into the “safe bet” category—I knew she’d deliver the perfect closing keynote.

I understand why conference organisers feel like they need to play it safe. From their perspective, they’re already taking on a lot of risk in putting on a conference in the first place. It’s easy to think of yourself as being in a position of vulnerability—”If I don’t sell enough tickets, I’m screwed!” But I think it’s important to realise that you’re also in a position of power, whether you like it or not. If you’re in charge of putting together the line-up of a conference, that’s a big responsibility, not just to the attendees on the day, but to the community as a whole. It’s like that quote by Eliel Saarinen:

Always design a thing by considering it in its next larger context. A chair in a room, a room in a house, a house in an environment, an environment in a city plan.

Part of that responsibility to the wider community is representation. That’s why I fundamentally disagree with ppk when he says:

The other view would be that there should be 50% woman speakers. Although that sounds great I personally never believed in this argument. It’s based on the general population instead of the population of web developers, and if we’d extend that argument to its logical conclusion then 99.9% of the web development conference speakers should know nothing about web development, since that’s the rough ratio in the general population.

That makes it sound like a conference’s job is to represent the status quo. By that logic, the line-up should include plenty of bad speakers—after all, the majority of web developers aren’t necessarily good speakers. But of course that’s not how conferences work. They don’t represent typical ideas—quite the opposite. What’s the point of having an event that simply reinforces the general consensus? This isn’t Harrison Bergeron. You want a line-up that’s exceptional.

I don’t think conference organisers can shirk this issue and say “It’s out of my hands; I’m just reflecting the way things are.” The whole point of having a conference in the first place is to trigger some kind of change. If you’re not happy with the current make-up of the web community (and I most definitely am not), then a conference is the perfect opportunity to try to demonstrate an alternative. We do it with the subject matter of the talks—”Our code/process/tooling doesn’t have to be this way!”—and I think we should also apply that to the wider context: “Our culture doesn’t have to be this way!”

Passing up that chance isn’t just a missed opportunity, I think it’s also an abdication of responsibility. Believe me, I know that organising a conference is a lot of work, but that’s not a reason to cop out. On the contrary, it’s all the more reason to step up to the plate and try your damnedest to make a difference. Otherwise, why even have a conference?

Whenever the issue of diversity at conferences comes up, there is inevitably someone who says “All I care about is having the best speakers.” But if that were true, shouldn’t your conference (and every other conference) have exactly the same line-up every year?

The truth is that there are all sorts of factors that play into the choice of speakers. I think representation should be a factor, but that’s all it is—one factor of many. Is the subject matter relevant? That’s a factor. Do we already have someone on the line-up covering similar subject matter? That’s a factor. How much will it cost to get this speaker? That’s a factor. Is the speaker travelling from very far away? That’s a factor.

In the case of Patterns Day, I had to factor in the range of topics. I wanted a mixture of big-picture talks as well as hands-on nitty-gritty case studies. I also didn’t want it to be too developer-focused or too design-focused. I was aiming for a good mix of both.

In the end, I must admit that I am guilty of doing exactly what I’ve been railing against. I played it safe. I put together a line-up of speakers that I wanted to see, and that I knew with absolute certainty would deliver great presentations. There were plenty of potential issues for me to get stressed about in the run-up to the event, but the quality of the talks wasn’t one of them. On the one hand, I wish I had taken more chances with the line-up, but honestly, if I could do it over again, I wouldn’t change a thing.

Because I was trying to keep the ticket price as low as possible—and the venue hire was already a significant cost—I set myself the constraint of only having speakers from within the UK (Jina was the exception—she was going to come anyway as an attendee, so of course I asked her to speak). Knowing that the speakers’ travel costs would be low, I could plug the numbers into an algebraic formula for figuring out the ticket price:

costs ÷ seats = price

Add up all the costs and divide that total by the number of available seats to get the minimum ticket price.

In practice, you probably don’t want to have to sell absolutely every single ticket just to break even, so you set the price for a sales figure lower than 100%—maybe 80%, or 50% if you’re out to make a tidy profit (although if you’re out to make a tidy profit, I don’t think conferences are the right business to be in—ask any conference organiser).

Some conferences factor in money for sponsorship to make the event happen. I prefer to have sponsors literally sponsoring additions to the conference. In the case of Patterns Day, the coffee and pastries were sponsored by Deliveroo, and the videos were sponsored by Amazon. But sponsorship didn’t affect the pricing formula.

The Duke Of York’s has around 280 seats. I factored in about 30 seats for speakers, Clearlefties, and other staff. That left 250 seats available for attendees. But that’s not the number I plugged into the pricing formula. Instead, I chose to put 210 tickets on sale and figured out the ticket price accordingly.
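
To make that formula concrete, here’s a minimal sketch of the arithmetic in Python. The cost figures are entirely made up for illustration (the actual Patterns Day budget isn’t spelled out here); only the figure of 210 tickets on sale comes from the numbers above, and the 80% sales assumption is just one example threshold.

```python
# A rough sketch of the break-even calculation described above.
# All cost figures are invented for illustration; only the number of
# tickets on sale (210) comes from the post.

def ticket_price(costs, tickets_on_sale, expected_sales=0.8):
    """Minimum ticket price needed to cover costs, assuming only a
    fraction of the tickets on sale actually sell."""
    total = sum(costs.values())
    return total / (tickets_on_sale * expected_sales)

costs = {
    "venue hire": 5000,
    "a/v hire": 1500,
    "printing (stickers)": 200,
    "payment provider commission": 600,
    "speakers' travel": 800,
    "speakers' accommodation": 1200,
    "speakers' dinner": 400,
    "speakers' payment": 6000,
}

price = ticket_price(costs, tickets_on_sale=210)
print(f"Charge at least £{price:.2f} per ticket")  # about £93.45 with these made-up numbers
```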

What happened to the remaining 40 seats? The majority of them went to Codebar students and organisers. So if you bought a ticket for Patterns Day, you directly subsidised the opportunity for people under-represented in technology to attend. Thank you.

Speaking personally, I found that having the Codebar crew in attendance really made my day. They’re my heroes, and it meant the world to me that they were able to be there.

Photos from Patterns Day: Zara, Alice, and Amber; Anwen, Zara, Alice, Dot, and Amber; Eden, Zara, Alice, and Chloe.

The magical and the mundane

The iPhone—and by extension, the smartphone—is a decade old. Ian Bogost has written an interesting piece in The Atlantic charting our changing relationship with the technology.

First, it was like a toy dog:

A device that could be cared for, and conspicuously so.

Then, it was like a cigarette:

A nervous tic, facilitated by a handheld apparatus that releases relief when operated.

Later, it was like a rosary:

Its toy-dog quirks having been tamed, its compulsive nature having been accepted, the iPhone became the magic wand by which all worldly actions could be performed, all possible information acquired.

Finally, it simply becomes …a rectangle.

Abstract, as a shape. Flat, as a surface. But suggestive of so much. A table for community. A door for entry, or for exit. A window for looking out of, or a picture for looking into. A movie screen for distraction, or a cradle for comfort, or a bed for seduction.

Design dissolves in behaviour. This is something that Ben wrote about recently in his excellent Slapdashery series: “Everything’s amazing and nobody’s happy.”

Technology tweaks our desire for novelty; but as soon as we get it we’re usually bored. There are no technologies that I can think of that haven’t become mundane.

This is something I touched on in my talk last year at An Event Apart. There’s a thread throughout the talk about Arthur C. Clarke, and of course I quote his third law:

Any sufficiently advanced technology is indistinguishable from magic.

I propose an addendum to that:

Any sufficiently advanced technology is indistinguishable from magic at first.

The magical quickly becomes the mundane. That’s exactly the point that Louis CK is making in the piece that Ben references.

Seven years ago Frank wrote his wonderful essay There Is A Horse In The Apple Store:

I have a term called a “tiny pony.” It is a thing that is exceptional that no one, for whatever reason, notices. Or, conversely, it is an exceptional thing that everyone notices, but quickly grows acclimated to despite the brilliance of it all.

We are surrounded by magical tiny ponies. I mean, just think: right now you are reading some words at a URL on the World Wide Web. Even more magically, I just published some words at my own URL on the World Wide Web. That still blows my mind! I hope I never lose that feeling.

Forgetting again

In an article entitled The future of loneliness Olivia Laing writes about the promises and disappointments provided by the internet as a means of sharing and communicating. This isn’t particularly new ground and she readily acknowledges the work of Sherry Turkle in this area. The article is the vanguard of a forthcoming book called The Lonely City. I’m hopeful that the book won’t be just another baseless luddite reactionary moral panic as exemplified by the likes of Andrew Keen and Susan Greenfield.

But there’s one section of the article where Laing stops providing any data (or even anecdotal evidence) and presents a supposition as though it were unquestionably fact:

With this has come the slowly dawning realisation that our digital traces will long outlive us.

Citation needed.

I recently wrote a short list of three things that are not true, but are constantly presented as if they were beyond question:

  1. Personal publishing is dead.
  2. JavaScript is ubiquitous.
  3. Privacy is dead.

But I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on Friendfeed. Everything that was ever posted to Friendfeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

And worst of all, by propagating the myth that the internet never forgets, we are encouraging people to focus in exactly the wrong area. Nobody worries about preserving what they put online. Why should they? They’re constantly being told that it will be there forever. The result is that their history is taken from them:

If we lose the past, we will live in an Orwellian world of the perpetual present, where anybody that controls what’s currently being put out there will be able to say what is true and what is not. This is a dreadful world. We don’t want to live in this world.

Brewster Kahle

Normal

Here in the UK, there’s a “newspaper”—and I use the term advisedly—called The Sun. In longstanding tradition, page 3 of The Sun always features a photograph of a topless woman.

To anyone outside the UK, this is absolutely bizarre. Frankly, it’s pretty bizarre to most people in the UK as well. Hence the No More Page 3 campaign which seeks to put pressure on the editor of The Sun to ditch their vestigial ’70s sexism and get with the 21st Century.

Note that the campaign is not attempting to make the publication of topless models in a daily newspaper illegal. Note that the campaign is not calling for top-down censorship from press regulators. Instead the campaign asks only that the people responsible reassess their thinking and recognise the effects of having topless women displayed in what is supposedly a family newspaper.

Laura Bates of the Everyday Sexism project has gathered together just some examples of the destructive effects of The Sun’s page 3. And sure, in this age of instant access to porn via the internet, an image of a pair of breasts might seem harmless and innocuous, but it’s the setting for that image that wreaks the damage:

Being in a national newspaper lends these images public presence and, more harmfully for young people, the perception of mainstream cultural approval. Our society, through Page 3, tells both girls and boys ‘that’s what women are’.

Simply put, having this kind of objectification in a freely-available national newspaper normalises it. When it’s socially acceptable to have a publication like The Sun in a workplace, then it’s socially acceptable for that same workplace to have the accompanying air of sexism.

That same kind of normalisation happens in online communities. When bad behaviour is tolerated, bad behaviour is normalised.

There are obvious examples of online communities where bad behaviour is tolerated, or even encouraged: 4Chan, Something Awful. But as long as I can remember, there have also been online communities that normalise abhorrent attitudes, and yet still get a free pass (usually because the site in question would deliver bucketloads of traffic …as though that were the only metric that mattered).

It used to be Slashdot. Then it was Digg. Now it’s Reddit and Hacker News.

In each case, the defence of the bad behaviour was always explained by the sheer size of the community. “Hey, that’s just the way it is. There’s nothing can be done about it.” To put it another way …it’s normal.

But normality isn’t an external phenomenon that exists in isolation. Normality is created. If something is perceived as normal—whether that’s topless women in a national newspaper or threatening remarks in an online forum—that perception is fueled by what we collectively accept to be “normal”.

Last year, Relly wrote about her experience at a conference:

Then there was the one comment I saw in a live irc style backchannel at an event, just after I came off stage. I wish I’d had the forethought to screenshot it or something but I was so shocked, I dropped my laptop on the table and immediately went and called home, to check on my kids.

Why?

Because the comment said (paraphrasing) “This talk was so pointless. After she mentioned her kids at the beginning I started thinking of ways to hunt them down and punish her for wasting my time here.”

That’s a horrible thing for anyone to say. But I can understand how someone would think nothing of making a remark like that …if they began their day by reading Reddit or Hacker News. If you make a remark like that there, nobody bats an eyelid. It’s normal.

So what do we do about that? Do we simply accept it? Do we shrug our shoulders and say “Oh, well”? Do we treat it like some kind of unchangeable immovable force of nature; that once you have a large online community, bad behaviour should be accepted as the default mode of discourse?

No.

It’s hard work. I get that. Heck, I run an online community myself and I know just how hard it is to maintain civility (and I’ve done a pretty terrible job of it in the past). But it’s not impossible. Metafilter is a testament to that.

The other defence of sites like Reddit and Hacker News is that it’s unfair to judge the whole entity based purely on their worst episodes. I don’t buy that. The economic well-being of a country shouldn’t be based on the wealth of its richest citizens—or even the wealth of its average citizens—but its poorest.

That was precisely how Rebecca Watson was shouted down when she tried to address Reddit’s problems on a panel at South by Southwest last year:

Does the good, no matter if it’s a fundraiser for a kid with cancer or a Secret Santa gift exchange, negate the bigotry?

Like I said, running an online community is hard (Derek’s book was waaaay ahead of its time) but it’s not impossible. If we treat awful behaviour as some kind of unstoppable force that can’t be dealt with, then what’s the point in trying to have any kind of community at all?

Just as with the No More Page 3 campaign, I’m not advocating legal action or legislative control. Instead, I just want some awareness that what we think of as normal is what we collectively decide is normal.

I try not to be a judgemental person. But if I see someone in public with a copy of The Sun, I’m going to judge them. And no, it’s not a class thing: I just don’t consider misogyny to be socially acceptable. And if you participate in Reddit or Hacker News …well, I’m afraid I’m going to judge you too. I don’t consider it socially acceptable.

Of course my judgemental opinion of someone doesn’t make a blind bit of difference to anybody. But if enough of us made our feelings clear, then maybe slowly but surely, there might be a shift in feeling. There might just be a small movement of the needle that calibrates what we think of as normal in our online communities.

A map to build by

The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.

It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.

There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in taking care of the data that they’re sucking up from their users.

Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.

I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.

Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.

I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.

I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.

So where do we start? How do we even begin to make a better web without getting overwhelmed by the enormity of the task?

Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:

How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?

He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.

I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.

Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.

Battle for the planet of the APIs

Back in 2006, I gave a talk at dConstruct called The Joy Of API. It basically involved me geeking out for 45 minutes about how much fun you could have with APIs. This was the era of the mashup—taking data from different sources and scrunching them together to make something new and interesting. It was a good time to be a geek.

Anil Dash did an excellent job of describing that time period in his post The Web We Lost. It’s well worth a read—and his talk at The Berkman Institute is well worth a listen. He described what the situation was like with APIs:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

Times have changed. These days, instead of seeing themselves as part of a wider web, online services see themselves as standalone entities.

So what happened?

Facebook happened.

I don’t mean that Facebook is the root of all evil. If anything, Facebook—a service that started out being based on exclusivity—has become more open over time. That’s the cause of many of its scandals: the mismatch between the mental models Facebook users have built up about how their data will be used and Facebook’s plans to make that data more available.

No, I’m talking about Facebook as a role model; the template upon which new startups shape themselves.

In the web’s early days, AOL offered an alternative. “You don’t need that wild, chaotic lawless web”, it proclaimed. “We’ve got everything you need right here within our walled garden.”

Of course it didn’t work out for AOL. That proposition just didn’t scale, just like Yahoo’s initial model of maintaining a directory of websites just didn’t scale. The web grew so fast (and was so damn interesting) that no single company could possibly hope to compete with it. So companies stopped trying to compete with it. Instead they, quite rightly, saw themselves as being part of the web. That meant that they didn’t try to do everything. Instead, you built a service that did one thing really well—sharing photos, managing links, blogging—and if you needed to provide your users with some extra functionality, you used the best service available for that, usually through someone else’s API …just as you provided your API to them.

Then Facebook began to grow and grow. I remember the first time someone was showing me Facebook—it was Tantek of all people—I remember asking “But what is it for?” After all, Flickr was for photos, Delicious was for links, Dopplr was for travel. Facebook was for …everything …and nothing.

I just didn’t get it. It seemed crazy that a social network could grow so big just by offering …well, a big social network.

But it did grow. And grow. And grow. And suddenly the AOL business model didn’t seem so crazy anymore. It seemed ahead of its time.

Once Facebook had proven that it was possible to be the one-stop shop for your users’ every need, that became the model to emulate. Startups stopped seeing themselves as just one part of a bigger web. Now they wanted to be the only service that their users would ever need …just like Facebook.

Seen from that perspective, the open flow of information via APIs—allowing data to flow porously between services—no longer seemed like such a good idea.

Not only have APIs been shut down—see, for example, Google’s shutdown of their Social Graph API—but even the simplest forms of representing structured data have been slashed and burned.

Twitter and Flickr used to mark up their user profile pages with microformats. Your profile page would be marked up with hCard, and if you had a link back to your own site, it would include a rel="me" attribute. Not any more.
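
To make that a bit more concrete, here’s a minimal sketch (my own illustrative Python, not Twitter’s, Flickr’s, or Google’s actual code; the function names and example URLs are made up) of the kind of thing that markup enabled: any page could be scanned for the links its owner claims with rel="me", which is roughly the raw material that services like Google’s Social Graph API indexed.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class RelMeParser(HTMLParser):
    """Collect href values from <a> elements that carry rel="me"."""

    def __init__(self):
        super().__init__()
        self.me_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if tag == "a" and "me" in (attrs.get("rel") or "").split():
            self.me_links.append(attrs.get("href"))


def find_rel_me(url):
    """Fetch a profile page and return every URL it claims as 'me'."""
    parser = RelMeParser()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    return parser.me_links


# e.g. find_rel_me("https://example.com/yourname") might return
# ["https://yourname.example", "https://flickr.com/people/yourname"]
```

No API keys, no business-development deal: just a link with an agreed-upon rel value.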

Then there’s RSS.

During the Q&A of that 2006 dConstruct talk, somebody asked me about where they should start with providing an API; what’s the baseline? I pointed out that if they were already providing RSS feeds, they already had a kind of simple, read-only API.

Because there’s a standardised format—a list of items, each with a timestamp, a title, a description (maybe), and a link—once you can parse one RSS feed, you can parse them all. It’s kind of remarkable how many mashups can be created simply by using RSS. I remember at the first London Hackday, one of my favourite mashups simply took an RSS feed of the weather forecast for London and combined it with the RSS feed of upcoming ISS flypasts. The result: a Twitter bot that only tweeted when the International Space Station was overhead and the sky was clear. Brilliant!
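
To give a flavour of how little code a mashup like that needs, here’s a rough sketch of a generic RSS consumer (this isn’t the actual Hackday code; the feed URLs are placeholders and it assumes plain RSS 2.0 with <item> elements):

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen


def fetch_items(feed_url):
    """Yield (title, description, link) tuples from an RSS 2.0 feed."""
    tree = ET.parse(urlopen(feed_url))
    for item in tree.iterfind(".//item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("description", default=""),
            item.findtext("link", default=""),
        )


# The same function handles both feeds; that is the whole point.
weather = list(fetch_items("https://example.com/london-forecast.rss"))
flypasts = list(fetch_items("https://example.com/iss-flypasts.rss"))

if weather and flypasts and "clear" in weather[0][1].lower():
    print("Look up: the ISS is due overhead and the sky should be clear.")
```

Everything after the parsing is ordinary glue code, which is exactly why RSS made mashups so approachable.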

Back then, anywhere you found a web page that listed a series of items, you’d expect to find a corresponding RSS feed: blog posts, uploaded photos, status updates, anything really.

That has changed.

Twitter used to provide an RSS feed that corresponded to my HTML timeline. Then they changed the URL of the RSS feed to make it part of the API (and therefore subject to the terms of use of the API). Then they removed RSS feeds entirely.

On the Salter Cane site, I want to display our band’s latest tweets. I used to be able to do that by just grabbing the corresponding RSS feed. Now I’d have to use the API, which is a lot more complex, involving all sorts of authentication gubbins. Even then, according to the terms of use, I wouldn’t be able to display my tweets the way I want to. Yes, how I want to display my own data on my own site is now dictated by Twitter.

Thanks to Jo Brodie I found an alternative service called Twitter RSS that gives me the RSS feed I need, though it’s probably only a matter of time before it gets shut down by Twitter.

Jo’s feelings about Twitter’s anti-RSS policy mirror my own:

I feel a pang of disappointment at the fact that it was really quite easy to use if you knew little about coding, and now it might be a bit harder to do what you easily did before.

That’s the thing. It’s not like RSS is a great format—it isn’t. But it’s just good enough and just versatile enough to enable non-programmers to make something cool. In that respect, it’s kind of like HTML.

The official line from Twitter is that RSS is “infrequently used today.” That’s the same justification that Google has given for shutting down Google Reader. It reminds me of the joke about the shopkeeper responding to a request for something with “Oh, we don’t stock that—there’s no call for it. It’s funny though, you’re the fifth person to ask today.”

RSS is used a lot …but much of the usage is invisible:

RSS is plumbing. It’s used all over the place but you don’t notice it.

That’s from Brent Simmons, who penned a love letter to RSS:

If you subscribe to any podcasts, you use RSS. Flipboard and Twitter are RSS readers, even if it’s not obvious and they do other things besides.

He points out the many strengths of RSS, including its decentralisation:

It’s anti-monopolist. By design it creates a level playing field.

How foolish of us, therefore, that we ended up using Google Reader exclusively to power all our RSS consumption. We took something that was inherently decentralised and we locked it up into one provider. And now that provider is going to screw us over.

I hope we won’t make that mistake again. Because, believe me, RSS is far from dead just because Google and Twitter are threatened by it.

In a post called The True Web, Robin Sloan reiterates the strength of RSS:

It will dip and diminish, but will RSS ever go away? Nah. One of RSS’s weaknesses in its early days—its chaotic decentralized weirdness—has become, in its dotage, a surprising strength. RSS doesn’t route through a single leviathan’s servers. It lacks a kill switch.

I can understand why that power could be seen as a threat if what you are trying to do is force your users to consume their own data only the way that you see fit (and all in the name of “user experience”, I’m sure).

Returning to Anil’s description of the web we lost:

We get a generation of entrepreneurs encouraged to make more narrow-minded, web-hostile products like these because it continues to make a small number of wealthy people even more wealthy, instead of letting lots of people build innovative new opportunities for themselves on top of the web itself.

I think that the presence or absence of an RSS feed (whether I actually use it or not) is a good litmus test for how a service treats my data.

It might be that RSS is the canary in the coal mine for my data on the web.

If those services don’t trust me enough to give me an RSS feed, why should I trust them with my data?

Slow glass

The day that Opera announced that it was changing its browser to use the WebKit rendering engine, I was contacted by .net magazine for my opinion on the move. My response was:

I have no opinion on this right now.

Frankly, I’m always quite amazed at how others can form opinions so quickly. Sometimes opinions are formed and set on technologies before they’re even out and about in the world: little printers, Apple watches, Google glasses…

The case against Google Glass seemed to be a done deal after Mark Hurst published The Google Glass feature no one is talking about:

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them.

It’s a very persuasive piece of writing and it certainly gave me food for thought. Then Eric wrote Glasshouse:

Our youngest tends to wake up fairly early in the morning, at least as compared to his sisters, and since I need less sleep than Kat I’m usually the one who gets up with him. This morning, he put away a box he’d just emptied of toys and I told him, “Well done!” He turned to me, stuck his hand up in the air, and said with glee, “Hive!”

I gave him the requested high-five, of course, and then another for being proactive. It was the first time he’d ever asked for one. He could not have looked more pleased with himself.

And I suddenly realized that I wanted to be able to say to my glasses, “Okay, dump the last 30 seconds of livestream to permanent storage.”

Now I’ve got another interesting, persuasive perspective on the yet-to-be-released product.

Just as we can be very quick to label websites and social networks as dead (see Flickr), I worry that we’re often too quick to look for the worst aspects in any new technology.

Natalia has written a great piece called No, let’s not stop the cyborgs in reaction to the over-the-top Luddism of the Stop The Cyborgs movement:

Healthy criticism and skepticism towards technologies and their impact on society is necessary, but framing it in a way that discredits all people with body and sense enhancing technologies is othering.

Now we get into the question of whether technology can be inherently “good” or “bad.” Kevin Kelly avoids such loaded terms, but he does ascribe some kind of biased trajectory to our tools in his book What Technology Wants.

Natalia writes:

It’s also important to remember that technologies themselves aren’t always ethically questionable. It’s what we do with them that can be positive or contribute to suffering and misery. Sometimes the same technology can be used to help people and to simultaneously ruin lives for profit.

A fair point, but one that is most commonly used by the pro-gun lobby—proponents of a technology that I personally find very hard to view as neutral.

But the point remains: we seem to have a natural impulse to immediately think of the worst that could happen with any new technology (though I’m just as impatient with techno-utopians as I am with techno-dystopians). I really enjoy watching Black Mirror but its central question grows wearisome after a while. That question is “What’s the worst that could happen?”

I am, once more, reminded of the danger of self-fulfilling prophecies when it comes to seeing the worst in technologies like Google Glass. As Matt Webb’s algorithm puts it:

It’s not the end of privacy because it’s all newly visible, it’s the end of privacy because it looks like it’s the end of privacy because it’s all newly visible.

I was chatting with fellow sci-fi fan Jon Tan about Kim Stanley Robinson, whose work I (shamefully) haven’t dived into yet. Jon told me that a good starting point would be the Three Californias trilogy. It consists of one utopia, one dystopia, and one apocalypse. I like the sound of that.

Those who take an anti-technology stance, or at least an overly-negative stance on technology, are often compared to the Amish. But as Stewart Brand is quick to point out, the Amish don’t reject technology—instead, they take their time in deciding whether a new technology will, on balance, be better or worse for their society in the long term:

The Amish seek to master technology rather than become its slave.

I think that techno-utopians and -dystopians alike can appreciate that.

To CERN with love

I went to Switzerland yesterday. More specifically, Geneva. More specifically, CERN. More specifically, ATLAS. Tireless Communications Officer Claudia Marcelloni went out of her way to make sure that I had a truly grand tour of life at CERN.


CERN is the ultimate area of overlap in the Venn diagram of geek interests: the place where the World Wide Web was invented while people were working on cracking the secrets of the universe.

I saw the world’s first web server—Tim Berners-Lee’s NeXT machine. I saw the original proposal for the World Wide Web, complete with the note scribbled across the top “vague but exciting.”


But I understand what James meant when he described the whole web thing as a sideshow to the main event:

Because, you know the web is cool and all, but when you’re trying to understand the fundamental building blocks of the universe and constructing the single greatest scientific instrument of ours and perhaps any civilisation, the whole modern internet is a happy side effect, it is a nice to have.

The highlight of my day was listening to Christoph Rembser geek out about his work: hunting for signs of elusive dark matter by measuring missing momentum when smashing particles together near the speed of light in a massive 27-kilometre ring 100 metres beneath France and Switzerland, resulting in incredible amounts of data being captured and stored within an unimaginably short timescale. Awesome. Literally, awesome.


But what really surprised me at CERN wasn’t learning about the creation of the web or learning about the incredible scientific work being done there. As a true-blooded web/science nerd, I had already read plenty about both. No, what really took me by surprise was the social structure at CERN.

According to most established social and economic theory, nothing should ever get done at CERN. It’s a collection of thousands of physics nerds—a mixture of theorists (the ones with blackboards) and experimentalists (the ones with computers). When someone wants to get something done, they present their ideas and ask for help from anyone with specific fields of expertise. Those people, if they like the sound of the idea, say “Okay” and a new collaboration is born.

That’s it. That’s how stuff gets done. It’s like a massive multiplayer hackday. It’s like the ultimate open source project (and yes, everything, absolutely everything, done at CERN is released publicly). It is the cathedral and it is the bazaar. It is also the tower of Babel: people from everywhere in the world come to this place and collaborate, communicating any way they can. In the canteen, where Nobel prize winners sit with students, you can hear a multitude of accents and languages.

CERN is an amazing place. These thousands of people might be working on completely different projects, but there’s a shared understanding and a shared ethos amongst every one of them. That might sound like a flimsy basis for any undertaking, but it works. It works really, really well. And this isn’t just any old undertaking—they’re not making apps or shipping consumer products—they’re working on the most important questions that humans have ever attempted to answer. And they’re doing it all within a framework that, according to conventional wisdom, just shouldn’t work. But it does work. And that, in its own way, is also literally awesome.

Christoph described what it was like for him to come to CERN from Bonn, the then-capital of West Germany. It was 1989, a momentous year (and not just because Tim Berners-Lee wrote Information Management: A Proposal). Students were demonstrating and dying in Tiananmen Square. The Berlin wall was coming down (only later did I realise that my visit to CERN took place on October 3rd, Tag der Deutschen Einheit). At CERN, Christoph met Chinese students, Russian scientists, people from all over the world transcending their political differences to collaborate on truly fundamental questions. And he said that when people returned to their own countries, they surely carried with them some of that spirit that they had experienced together at CERN.

Compared to the actual work going on at CERN, that idea is a small one. It may not be literally awesome …but it really resonated with me.

I think I understand a little better now where the web comes from.


OurSpace

It’s hard to believe that it’s been half a decade since The Show from Ze Frank graced our tubes with its daily updates. Five years ago to the day, he recorded the greatest three minutes of speech ever committed to video.

In the midst of his challenge to find the ugliest MySpace page ever, he received this comment:

Having an ugly Myspace contest is like having a contest to see who can eat the most cheeseburgers in 24 hours… You’re mocking people who, for the most part, have no taste or artistic training.

Ze’s response is a manifesto for the democratic, transformative, disruptive power of the web. It is magnificent.

In Myspace, millions of people have opted out of pre-made templates that “work” in exchange for ugly. Ugly when compared to pre-existing notions of taste is a bummer. But ugly as a representation of mass experimentation and learning is pretty damn cool.

Regardless of what you might think, the actions you take to make your Myspace page ugly are pretty sophisticated. Over time as consumer-created media engulfs the other kind, it’s possible that completely new norms develop around the notions of talent and artistic ability.

Spot on.

That’s one of the reasons why I dread the inevitable GeoCities-style shutdown of MySpace. Let’s face it, it’s only a matter of time. And when it does get shut down, we will forever lose a treasure trove of self-expression on a scale never seen before in the history of the planet. That’s so much more important than whether it’s ugly or not. As Phil wrote about the ugly and neglected fragments of Geocities:

GeoCities is an awful, ugly, decrepit mess. And this is why it will be sorely missed. It’s not only a fine example of the amateur web vernacular but much of it is an increasingly rare example of a period web vernacular. GeoCities sites show what normal, non-designer, people will create if given the tools available around the turn of the millennium.

Substitute MySpace for GeoCities and you get an idea of the loss we are facing.

Let’s not make the same mistake twice.

Voice of the Beeb hive

Ian Hunter at the BBC has written a follow-up post to his initial announcement of the plans to axe 172 websites. The post is intended to clarify and reassure. It certainly clarifies, but it is anything but reassuring.

He clarifies that, yes, these websites will be taken offline. But, he reassures us, they will be stored …offline. Not on the web. Without URLs. Basically, they’ll be put in a hole in the ground. But it’s okay; it’s a hole in the ground operated by the BBC, so that’s alright then.

The most important question in all of this is why the sites are being removed at all. As I said, the BBC’s online mothballing policy has—up till now—been superb. Well, now we have an answer. Here it is:

But there still may come a time when people interested in the site are better served by careful offline storage.

There may be a parallel universe where that sentence makes sense, but it would have to be one in which the English language is used very differently.

As an aside, the use of language in the “explanation” is quite fascinating. The post is filled with the kind of mealy-mouthed filler words intended to appease those of us who are concerned that this is a terrible mistake. For example, the phrase “we need to explore a range of options including offline storage” can be read as “the sites are going offline; live with it.”

That’s one of the most heartbreaking aspects of all of this: the way that it is being presented as a fait accompli: these sites are going to be ripped from the fabric of the network to be tossed into a single offline point of failure and there’s nothing that we—the license-payers—can do about it.

I know that there are many people within the BBC who do not share this vision. I’ve received some emails from people who worked on some of the sites scheduled for deletion and needless to say, they’re not happy. I was contacted by an archivist at the BBC, for whom this plan was unwelcome news that he first heard about here on adactio.com. The subsequent reaction was:

It was OK to put a videotape on a shelf, but putting web pages offline isn’t OK.

I hope that those within the BBC who disagree with the planned destruction will make their voices heard. For those of us outside the BBC, it isn’t clear how we can best voice our concerns. You could make a complaint to the BBC, though that seems to be intended more for complaints about programme content.

In the meantime, you can download all or some of the 172 sites and plop them elsewhere on the web. That’s not an ideal solution—ideally, the BBC shouldn’t be practising a deliberate policy of link rot—but it allows us to prepare for the worst.

I hope that whoever at the BBC has responsibility for this decision will listen to reason. Failing that, I hope that we can get a genuine explanation as to why this is happening, because what’s currently being offered up simply doesn’t cut it. Perhaps the truth behind this decision lies not so much with the BBC, but with their technology partner, Siemens, who have a notorious track record for shafting the BBC, charging ludicrous amounts of money to execute the most trivial of technical changes.

If this decision is being taken for political reasons, I would hope that someone at the BBC would have the honesty to say so rather than simply churning out more mealy-mouthed blog posts devoid of any genuine explanation.