Journal tags: head

Alt writing

I made the website for this year’s UX London by hand.

Well, that’s not entirely true. There’s exactly one build tool involved. I’m using Sergey to include global elements—the header and footer—something that’s still not possible in HTML.

So it’s minimum viable static site generation rather than actual static files. It’s still very hands-on though, and I enjoy that a lot: editing HTML and CSS directly without intermediary tools.

When I update the site, it’s usually to add a new speaker to the line-up (well, not any more now that the line-up is complete). That involves marking up their bio and talk description. I also create a couple of different sized versions of their headshot to use with srcset. And of course I write an alt attribute to accompany that image.

By the way, Jake has an excellent article on writing alt text that uses the specific example of a conference site. It raises some very thought-provoking questions.

I enjoy writing alt text. I recently described how I updated my posting interface here on my own site to put a textarea for alt text front and centre for my notes with photos. Since then I’ve been enjoying the creative challenge of writing useful—but also evocative—alt text.

But when I was writing the alt text for the headshots on the UX London site, I started to feel a little disheartened. The more speakers were added to the line-up, the more I felt like I was repeating myself with the alt text. After a while they all seemed to be some variation on “This person looking at the camera, smiling” with maybe some detail on their hair or clothing.

  • Videha Sharma
    The beaming bearded face of Videha standing in front of the beautiful landscape of a riverbank.
  • Candi Williams
    Candi working on her laptop, looking at the camera with a smile.
  • Emma Parnell
    Emma smiling against a yellow background. She’s wearing glasses and has long straight hair.
  • John Bevan
    A monochrome portrait of John with a wry smile on his face, wearing a black turtleneck in the clichéd design tradition.
  • Laura Yarrow
    Laura smiling, wearing a chartreuse coloured top.
  • Adekunle Oduye
    A profile shot of Adekunle wearing a jacket and baseball cap standing outside.

The more speakers were added to the line-up, the harder I found it not to repeat myself. I wondered if this was all going to sound very same-y to anyone hearing them read aloud.

But then I realised, “Wait …these are kind of same-y images.”

By the very nature of the images—headshots of speakers—there wasn’t ever going to be that much visual variation. The experience of a sighted person looking at a page full of speakers is that after a while the images kind of blend together. So if the alt text also starts to sound a bit repetitive after a while, maybe that’s not such a bad thing. A screen reader user would be getting an equivalent experience.

That doesn’t mean it’s okay to have the same alt text for each image—they are all still different. But after I had that realisation I stopped being too hard on myself if I couldn’t come up with a completely new and original way to write the alt text.

And, I remind myself, writing alt text is like any other kind of writing. The more you do it, the better you get.

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
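
If you want to check the numbers yourself, the calculation behind those contrast checkers is straightforward. Here’s a minimal sketch of the WCAG formula in JavaScript (4.5:1 is the AA threshold for normal-sized text):

// Convert an sRGB channel (0-255) to its linear value.
function channel(c) {
    c /= 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// WCAG relative luminance of an [r, g, b] colour.
function luminance([r, g, b]) {
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colours, always 1:1 to 21:1.
function contrastRatio(foreground, background) {
    const l1 = luminance(foreground);
    const l2 = luminance(background);
    return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

contrastRatio([68, 68, 68], [255, 255, 255]); // roughly 9.7 — passes AA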

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
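
To give you an idea, here’s a rough sketch of the kind of check that can be automated, something you could paste into a browser console. Dedicated tools like axe-core are far more thorough; this is just illustrative:

// Flag images with no alt attribute at all.
document.querySelectorAll('img:not([alt])').forEach(img => {
    console.warn('Image missing alt text:', img.src);
});

// Flag form fields with no associated label or ARIA labelling.
document.querySelectorAll('input:not([type="hidden"]), select, textarea').forEach(field => {
    const labelled = (field.labels && field.labels.length > 0) ||
        field.hasAttribute('aria-label') ||
        field.hasAttribute('aria-labelledby');
    if (!labelled) {
        console.warn('Form field missing a label:', field);
    }
});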

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.
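
To give a flavour of what’s involved, here’s a minimal sketch of about the simplest bespoke widget there is: a disclosure button wired up with ARIA (the class names are hypothetical). Even something this small has behaviour that only testing with real screen reader users will properly verify:

const button = document.querySelector('.menu-toggle'); // hypothetical
const menu = document.querySelector('.menu'); // hypothetical

// Communicate the initial collapsed state to assistive technology.
button.setAttribute('aria-expanded', 'false');
menu.hidden = true;

button.addEventListener('click', () => {
    const expanded = button.getAttribute('aria-expanded') === 'true';
    button.setAttribute('aria-expanded', String(!expanded));
    menu.hidden = expanded;
});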

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Server Timing

Harry wrote a really good article all about the performance measurement Time To First Byte. Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I was able to put start/stop points in my code for measuring how long some operations were taking. Then I could output the results of these measurements into Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good for displaying Server Timing, so I used that while I was conducting this experiment).
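
Drew’s article has the PHP but the idea is easy to sketch in any language. Here’s roughly the same thing in Node; this is an illustration rather than the actual code from The Session, and runDatabaseQueries and renderPage are hypothetical stand-ins:

const http = require('http');
const { performance } = require('perf_hooks');

http.createServer(async (request, response) => {
    // Start point.
    const start = performance.now();
    const data = await runDatabaseQueries(); // hypothetical stand-in
    // Stop point.
    const duration = performance.now() - start;

    // One comma-separated entry per metric: name, duration, description.
    response.setHeader('Server-Timing',
        `db;dur=${duration.toFixed(1)};desc="Database"`);
    response.end(renderPage(data)); // hypothetical stand-in
}).listen(8080);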

I started with overall database requests. Sure enough, that was where most of the time to first byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Detecting image requests in service workers

In Going Offline, I dive into the many different ways you can use a service worker to handle requests. You can filter by the URL, for example; treating requests for pages under /blog or /articles differently from other requests. Or you can filter by file type. That way, you can treat requests for, say, images very differently to requests for HTML pages.
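
For example, filtering by URL inside a fetch event handler might look something like this (a minimal sketch, not the exact code from the book):

addEventListener('fetch', event => {
    const url = new URL(event.request.url);
    if (url.pathname.startsWith('/blog') || url.pathname.startsWith('/articles')) {
        // Handle your page requests here.
    }
});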

One of the ways to check what kind of request you’re dealing with is to see what’s in the accept header. Here’s how I show the test for HTML pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Handle your page requests here.
}

So, logically enough, I show the same technique for detecting image requests:

if (request.headers.get('Accept').includes('image')) {
    // Handle your image requests here.
}

That should catch any files that have image in the request’s accept header, like image/png or image/jpeg or image/svg+xml and so on.

But there’s a problem. Both Safari and Firefox now use a much broader accept header: */*

My if statement evaluates to false in those browsers. Sebastian Eberlein wrote about his workaround for this issue, which involves looking at file extensions instead:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    // Handle your image requests here.
}

So consider this post a patch for chapter five of Going Offline (page 68 specifically). Wherever you see:

if (request.headers.get('Accept').includes('image'))

Swap it out for:

if (request.url.match(/\.(jpe?g|png|gif|svg)$/))

And feel free to add any other image file extensions (like webp) in there too.
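
With webp added, the check becomes:

if (request.url.match(/\.(jpe?g|png|gif|svg|webp)$/)) {
    // Handle your image requests here.
}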

Sticky headers

I made a little tweak to The Session today. The navigation bar across the top is “sticky” now—it doesn’t scroll with the rest of the content.

I made sure that the stickiness only kicks in if the screen is both wide and tall enough to warrant it. Vertical media queries are your friend!

But it’s not enough to just put some position: fixed CSS inside a media query. There are some knock-on effects that I needed to mitigate.

I use the space bar to paginate through long pages. It drives me nuts when sites with sticky headers don’t accommodate this. I made use of Tim Murtaugh’s sticky pagination fixer. It makes sure that page-jumping with the keyboard (using the space bar or page down) still works. I remember when I linked to this script two years ago, thinking “I bet this will come in handy one day.” Past me was right!

The other “gotcha!” with having a sticky header is making sure that in-page anchors still work. Nicolas Gallagher covers the options for this in a post called Jump links and viewport positioning. Here’s the CSS I ended up using:

:target:before {
    content: '';
    display: block;
    /* Match the height of the sticky header so that in-page
       links land below it rather than underneath it. */
    height: 3em;
    margin: -3em 0 0;
}

I also needed to check any of my existing JavaScript to see if I was using scrollTo anywhere, and adjust the calculations to account for the newly-sticky header.
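
If you need to do the same, the adjustment boils down to subtracting the height of the header. Something like this sketch would do it (the selector is hypothetical):

const header = document.querySelector('.sticky-header'); // hypothetical

// Scroll an element into view, leaving room for the sticky header.
function scrollToElement(element) {
    const offset = header ? header.offsetHeight : 0;
    const top = element.getBoundingClientRect().top + window.pageYOffset - offset;
    window.scrollTo(0, top);
}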

Anyway, just a few things to consider if you’re going to make a navigational element “sticky”:

  1. Use min-height in your media query,
  2. Take care of keyboard-initiated page scrolling,
  3. Adjust the positioning of in-page links.

Homebrew header hardening

I’m at Homebrew Website Club. I figured I’d use this time to document some tweaking I’ve been doing to the back end of my website.

securityheaders.io is a handy site for testing whether your website’s server is sending sensible headers. Think of it as being like SSL Test, but for a few other nitty-gritty details.

adactio.com was initially scoring very low, but the accompanying guide to hardening your HTTP headers meant I was able to increase my ranking to an acceptable level.

My site is running on an Apache server on an Ubuntu virtual machine on Digital Ocean. If you’ve got a similar set-up, this might be useful…

I ssh’d into my server and went to this folder in the Apache directory:

cd /etc/apache2/sites-available

There’s a file called default-ssl.conf that I need to edit (my site is being served up over HTTPS; if your site isn’t, you should edit 000-default.conf instead). I type:

nano default-ssl.conf

Depending on your permissions, you might need to type:

sudo nano default-ssl.conf

Now I’m inside nano. It’s like any other text editor you might be used to using, if you imagined what it would be like to remove all the useful features from it.

Within the <Directory /var/www/> block, I add a few new lines:

<IfModule mod_headers.c>
  Header always set X-Xss-Protection "1; mode=block"
  Header always set X-Frame-Options "SAMEORIGIN"
  Header always set X-Content-Type-Options "nosniff"
</IfModule>

Those are all no-brainers:

  • Enable protection against cross-site-scripting.
  • Don’t allow your site to be put inside a frame.
  • Don’t allow anyone to change the content-type headers of your files after they’ve been sent from the server.

If you’re serving your site over HTTPS, and you’re confident that you don’t have any mixed content (a mixture of HTTPS and HTTP), you can add this line as well:

Header always set Content-Security-Policy "default-src https: data: 'unsafe-inline' 'unsafe-eval'"

To really up your paranoia (and let’s face it, that’s what security is all about: justified paranoia), you can throw this in too:

Header unset Server
Header unset X-Powered-By

That means that your server will no longer broadcast its intimate details. Of course, I’ve completely reversed that benefit by revealing to you in this blog post that my site is running on Apache on Ubuntu.

I’ll tell you something else too: it’s powered by PHP. There’s some editing I did there too. But before I get to that, let’s just finish up that .conf file…

Hit ctrl and o, then press enter. That writes out the file you’ve edited. Now you can leave nano: press ctrl and x.

You’ll need to restart Apache for those changes to take effect. Type:

service apache2 restart

Or, if permission is denied:

sudo service apache2 restart

Now, about that PHP thing. Head over to a different directory:

cd /etc/php5/fpm

Time to edit the php.ini file. Type:

nano php.ini

Or, if you need more permissions:

sudo nano php.ini

It’s a long file, but you’re really only interested in one line. A shortcut to finding that line is to hit ctrl and w (for “where is?”), type expose, and hit enter. That will take you to the right paragraph. If you see a line that says:

expose_php = On

Change it to:

expose_php = Off

Save the file (ctrl and o, enter) then exit nano (ctrl and x).

Restart Apache:

service apache2 restart

Again, you might need to preface that with sudo.

Alright, head on back to securityheaders.io and see how your site is doing now. You should be seeing a much better score.

There’s one more thing I should be doing that’s preventing me from getting a perfect score. That’s Public Key Pinning. It sounds a bit too scary for a mere mortal like me to attempt. Or rather, the consequences of getting it wrong (which I probably would) sound too scary.

What do I know?

On our way back from New Zealand, Jessica and I stopped off in Sydney for a day. That same evening, the “What Do You Know?” event was going on—a series of five minute lightning talks from Sydney’s finest web geeks.

Maxine asked me if I could do a turn, so I put together a quick spiel called Five Things I Learned from the Internet. Those five things are:

  1. How to wrap headphone cables in a tangle-free way.
  2. How to fold a T-shirt in seconds.
  3. How to tie shoelaces correctly (thanks, Adam).
  4. How to eat a cupcake (thanks, Tara).
  5. How to peel a banana (thanks, Kyle) with a bonus lesson on the bananus.

At least one of those things will blow your mind. Pwshoo!

Ending September

September was quite a month. There were plenty of events that I attended right here in Brighton.

In the middle of all that, I went to Tennessee for Breaking Development and Mobilewood.

I finished the month with a trip to Italy for the inaugural From The Front conference. It was a great little grassroots affair. It was basically a free event—there was an ostensible cover charge of ten euros just to ensure that people didn’t sign up without showing up. That’s why I waived my usual speaking fee (as an aside, if you’re a conference organiser and you’re thinking about asking me to speak for free at an event that charges hundreds of dollars/pounds/euros to attendees …don’t).

I have to admit that the location of the event did make a difference. I jumped at the chance to return to Bologna. Jessica and I even managed to squeeze in a trip down to Florence. Pictures were taken.

The evening before travelling to Italy, before I packed my bag, I had a chat with Jen for her podcast, The Web Ahead.

5by5 | The Web Ahead #3: Jeremy Keith on Everything Web on Huffduffer

We talked about a lot of stuff from the nitty-gritty of responsive web design workflows and processes to being future friendly in the face of the mobile browser landscape. We also discussed long-term digital preservation and the web’s role as a storage medium for our collective culture. It sounds like a random grab-bag of topics, but in my mind all of this is connected.

I somehow managed to avoid even once mentioning a space elevator.

</head>

The <head> conference—a title designed to screw up a thousand CMSs—has just wrapped up. It spanned three days and as many continents. It was a preposterously ambitious undertaking and, incredibly, it worked!

While there were some meatspace hubs, the majority of the action took place in cyberspace. That means the carbon footprint of the attendees is considerably less than that amassed by travelling to a “regular” conference. It also means that the logistics involved were an order of magnitude greater. That Aral was able to organise it all is a testament to his dedication, enthusiasm and sheer bloody-mindedness.

Ironically for a virtual conference, the London hub of <head> was one of the best IRL geek gatherings I’ve been to. It was held in the salubrious surroundings of The Magic Circle. While there was no prestidigitation, Aral did manage to conjure up a great day.

I kicked the day off with a short talk called The Long Web. It covered similar ground to my keynote from Accessibility 2.0—one passage was lifted verbatim—but the emphasis this time was very much on digital preservation and long-term thinking. The audio and video should be available before too long.

After my talk, I had a very pleasant chat with Aral on the sofa on the stage. That was the template for the rest of the day: fifteen minute presentations followed by five minute follow-up questions. I took on the role of interviewer for some of the presenters, which was a real pleasure (I’ve made no secret of my enjoyment of this role).

Not every slot followed the presentation+chat format. Steph and Ann had a slideless chat on the sofa, Smily Raymaker sang a song, and Tim O’Reilly finished off the day with a great informal chat with Aral. In between, there was a whole range of talks covering a wide spread of topics: web security, Flash, digital identity, and tracking energy consumption. Though the mood of the day was always light-hearted and fun, there was an emergent consensus in the content of the talks of big-picture, long-term thinking. There was an echo of Jonathan Harris’s rallying cry for the web community to put away childish things and attempt to tackle the challenges facing our species.

It was a thought-provoking and enjoyable day out in London. And, from what I caught of the rest of the event, the whole conference had a very high standard indeed. Quite an achievement.

Aral, my hat is off to you, my friend; I offer my heartfelt congratulations on a job well done.

Sound and vision

Every creation of Tony Wilson’s Factory Records was labelled with the letters FAC followed by a number. The first poster was FAC1. The Haçienda nightclub was FAC51.

The Joy Division album Unknown Pleasures was FACT10. The album artwork was designed by Peter Saville. The words “Unknown Pleasures” don’t appear on the cover. Neither do the words “Joy Division”. Instead, the cover contains a series of 100 lines representing pulses from the pulsar CP 1919. It was a groundbreaking piece of graphic design. Its beauty lies in its simplicity: a two-dimensional representation of raw data.

That was almost thirty years ago. This week Radiohead released the video for the song House of Cards from the album In Rainbows …except it isn’t really a video at all. It wasn’t shot on film or video. It is a three-dimensional representation of raw data.

You can play with the data visualisation, altering it while the song plays. You can even download the raw data. You are not just allowed to play around with the data, you are encouraged to do so. There’s a YouTube group for aggregating the results.

Suddenly every other music video seems very flat and passive. I’m reminded of a prescient passage from Douglas Adams’s essay How to Stop Worrying and Learn to Love the Internet:

I expect that history will show “normal” mainstream twentieth century media to be the aberration in all this.

Please, miss, you mean they could only just sit there and watch? They couldn’t do anything? Didn’t everybody feel terribly isolated or alienated or ignored?

Yes, child, that’s why they all went mad. Before the Restoration.

What was the Restoration again, please, miss?

The end of the twentieth century, child. When we started to get interactivity back.