WebAIM: Screen Reader User Survey Results
The results of the second screen reader survey from WebAIM are, once again, required reading.
I'm hungry.
Some web geeks recommend some movies. I am one of the web geeks.
You can now store (and scale) MySQL databases with Amazon. Handy.
There's some lovely Buran porn here.
Here lies what we could salvage from the ashes of GeoCities.
I’ve been doing a fair bit of yakking lately, all recorded for posterity.
First off, I had a chat with Tim from Design Critique on Ajax design considerations, mostly recapping what I talked about at UI13 last year.
After that, I had a natter with Ross from Web Axe, this time focusing on practical web accessibility.
Then Andy, Rich and I paid a visit to the Boagworld crew out in the back of beyond where we had a free-for-all five-way chat about Clearleft and Headscape.
Lastly, I had a video chat with Ryan Taylor for his series, Please Start From The Beginning.
Add them all up and you’ve got a veritable aural onslaught. If you manage to make it through all of those, then you will almost certainly be very weary of listening to my voice.
A beautifully designed location-based web magazine.
The V&A has an API. Who knew? Looks very nice indeed.
When I first heard that Yahoo were planning to bulldoze Geocities, I was livid. After I blogged in anger, I was taken to task for jumping the gun. Give ‘em a chance, I was told. They may yet do something to save all that history.
They did fuck all. They told Archive.org what URLs to spider and left it up to them to do the best they could with preserving internet history. Meanwhile, Jason Scott continued his crusade to save as much as he could:
This is fifteen years and decades of man-hours of work that you’re destroying, blowing away because it looks better on the bottom line.
We are losing a piece of internet history. We are losing the destinations of millions of inbound links. But most importantly we are losing people’s dreams and memories.
Geocities dies today. This is a bad day for the internet. This is a bad day for our collective culture. In my opinion, this is also a bad day for Yahoo. I, for one, will find it a lot harder to trust a company that finds this to be acceptable behaviour …despite the very cool and powerful APIs produced by the very smart and passionate developers within the same company.
I hope that my friends who work at Yahoo understand that when I pour vitriol upon their company, I am not aiming at them. Yahoo has no shortage of clever people. But clearly they are down in the trenches doing development, not in the upper echelons making the decision to butcher Geocities. It’s those people, the decision makers, that I refer to as twunts. Fuckwits. Cockbadgers. Pisstards.
Beautiful artwork in a fun puzzle game.
In local news: Area man receives messages from chalkboard.
It has been said before but I’ll say it again: copy is interface. Josh sums it up nicely in his post Writing Microcopy:
The fastest way to improve your interface is to improve your copy-writing.
The canonical online example is Moo.com with its adorably anthropomorphised Little Moo robot personality. An oft-cited offline paragon is Innocent Smoothies with their cheeky little packaging easter egg delighters.
My favourite meatspace exemplar is right here in Brighton. The Earth and Stars pub has an outside chalkboard with a distinct personality. Over the past two years, I’ve been chronicling its announcements on Flickr.
Some samples:
And my favourite:
Walking past the chalkboard this week, I was pleased to see that it had been updated. Taking out my camera, I read the latest message:
I’m being cyberstalked by a paranoid existentialist chalkboard.
Made me smile.
Taking shopping lists and setting them in a more typographically pleasing way.
Derek Powazek gave up smoking recently so any outward signs of irritability should be forgiven. That said, the anger in two of his recent posts is completely understandable: Spammers, Evildoers, and Opportunists and the follow-up, SEO FAQ.
His basic premise is that money spent on hiring someone who labels themselves as an SEO expert would be better spent on producing well marked-up relevant content. I think he’s right. In the comments, the more reasonable remarks are based on semantics. Good SEO, they argue, is all about producing well marked-up relevant content.
Fair enough. But does it really need its own separate label? Personally, I would always suggest hiring a good content strategist or copywriter over hiring an SEO consultant any day. Here’s why:
Google—or at least the search arm of the company—is dedicated to a simple goal: giving people the most relevant content for their search. Google search is facilitated by ‘bots and algorithms, but it is fundamentally very human-centric.
Search Engine Optimisation is an industry based around optimising for the ‘bots and algorithms at Google.
But if those searchbots are dedicated to finding the best content for humans, why not cut out the middleman and go straight to optimising for humans?
If you optimise for people, which usually involves producing well marked-up relevant content, then you will get the approval of the ‘bots and algorithms by default …because that’s exactly the kind of content that they are trying to find and rank. This is the approach taken by Aarron Walter in his excellent book Building Findable Websites.
On Twitter, Mike Migurski said:
I think SEO is just user-centered design for robots.
…which would make it robot-centred design. But that’s only half the story. SEO is really robot-centred design for robots that are practising user-centred design.
Ask yourself this: do you think Wikipedia ever hired an SEO consultant in order to get its high rankings on Google?
Test cases for font-linking.
Steve Souders does the research and reveals the sad truth about the effect font-linking has on performance.
Zombies are disturbing. Teletubbies are disturbing. Zombie teletubbies are doubleplus disturbing.
"I purchased this product 4.47 Billion Years ago and when I opened it today, it was half empty."
A few months ago, Jakob Nielsen wrote about passwords. Specifically, he wrote about the standard practice of the contents of password fields being masked by default. In his typical black/white, on/off, right/wrong Boolean worldview, Father Jakob called for this practice to be abolished completely.
Meanwhile, back in the real world, Apple take a more empathetic approach, acknowledging that there are often very good reasons for masking passwords. But that doesn’t mean you can’t offer the user the option to disable password masking if they choose.
This pattern came up in a conversation at Clearleft recently. We were discussing a sign-up process, trying to avoid the nasty pattern of asking users to input the same value twice. We were all in agreement that Apple’s solution to password masking was pretty elegant.
I’d like to use this pattern on the sign up form for Huffduffer but I can’t see a way of easily integrating it with the Mad Libs approach. But I have implemented this option on the log-in form.
Here’s what’s happening under the hood:
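In outline, the script replaces the masked password input with an ordinary text input (copying its value across) when the user asks to see their password, and swaps it back again afterwards. Here’s a minimal sketch of that approach in JavaScript — not the actual Huffduffer script; the element IDs and the “show password” checkbox are assumptions for the sake of illustration:

```javascript
// Sketch: swap a password input for a text input (and back) when a
// "show password" checkbox is toggled. Replacing the element and
// copying its value across avoids changing the type attribute
// directly, which some browsers forbid. The IDs are hypothetical.
document.getElementById('showpassword').onchange = function () {
  var oldField = document.getElementById('password');
  var newField = document.createElement('input');
  newField.type = this.checked ? 'text' : 'password';
  newField.id = oldField.id;
  newField.name = oldField.name;
  newField.value = oldField.value;
  oldField.parentNode.replaceChild(newField, oldField);
};
```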
It would have been a lot simpler to just use JavaScript to toggle the type attribute of one field between “password” and “text”. But, in a certain browser that shall remain nameless, you can’t do that …for very sound security reasons, no doubt.
So the script isn’t as elegant as I’d wish but it gets the job done. Feel free to view source on the JavaScript.
Update: Jonathan Holst points me to a post by Jeff Atwood on this subject. It’s worth reading just to boggle at the insanity of Lotus Notes’ security features. From the comments there, I found a bookmarklet to reveal password characters.
A handy RESTful interface for retrieving favicons as images.
A 2004 paper on huffduffing.
Glenn has taken Google's Social Graph API, YQL and various parsers, and he's wrapped it all up in one JavaScript library. The demos are mind-bogglingly impressive.
A quick, slick primer on font linking.
A CSS gallery with a difference. This one highlights sites with good print stylesheets.
My new favourite single serving site.
A nice collection of design tools and methodologies.
The Web Magazine for Young Designers and Developers. Very nicely done, and all in HTML5 too.
Hixie has been making changes to microdata in HTML5 based, not on opinion or theory, but on the results of user testing.
I’ve already described how machine tags on Huffduffer trigger a number of third-party API calls. Tagging something with music:artist=..., book:author=..., film:title=... or any number of similar machine tags will fire off calls to places like Amazon, The New York Times, or Last.fm.
For a while now, I’ve wanted to include Flickr in that list of third-party services but I couldn’t think of an easy way of associating audio files with photos. Then I realised that a mechanism already exists, and it’s another machine tag. Anything on Flickr that’s been tagged with lastfm:event=... will probably be a picture of a musical artist.
So if anything is tagged on Huffduffer with music:artist=..., all I need to do is fire off a call to Last.fm to get a list of that artist’s events using the method artist.getEvents. Once I have the event IDs I can search Flickr for photos that have been machine tagged with those IDs.
There’s just one problem. Last.fm’s API only returns future events for an artist. There’s no method for past events.
Undeterred, I found a RESTful interface that provides the past events of an artist on Last.fm. The format returned isn’t JSON or XML. It’s HTML. It turns out that past events are freely available in the profile for any artist on Last.fm with the identifier last.fm/music/{artist}/+events/{year}. Here, for example, are Salter Cane gigs in 2009: last.fm/music/Salter+Cane/+events/2009.
If only those events were structured in hCalendar! As it is, I have to run through all the links in the document to find the hrefs beginning with the string http://www.last.fm/event/ and then extract the event ID that immediately follows that string.
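In modern JavaScript terms, a rough sketch of that scraping step — not Huffduffer’s actual code, just the approach described above — might look something like this: fetch the events page and pattern-match the event URLs.

```javascript
// Sketch: gather past event IDs for an artist from their Last.fm
// events page by matching links that point at last.fm/event/{id}.
// Illustrative only; the parsing here is a simple pattern match.
async function getPastEventIds(artist, year) {
  var url = 'http://www.last.fm/music/' +
    encodeURIComponent(artist).replace(/%20/g, '+') +
    '/+events/' + year;
  var html = await (await fetch(url)).text();
  var ids = [];
  var pattern = /href="http:\/\/www\.last\.fm\/event\/(\d+)/g;
  var match;
  while ((match = pattern.exec(html)) !== null) {
    if (ids.indexOf(match[1]) === -1) {
      ids.push(match[1]);
    }
  }
  return ids;
}
```

Calling something like getPastEventIds('Salter Cane', 2009) would pull the IDs out of the Salter Cane page linked above.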
Once I’ve extracted the event IDs for an artist, I can fire off a search on Flickr using the flickr.photos.search method with a machine_tags parameter (as well as passing the artist name in the text parameter just to be sure).
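Again as a sketch of that step (with a placeholder API key, and parameter names as I understand Flickr’s API), the search could be put together like this:

```javascript
// Sketch: search Flickr for photos machine tagged with any of the
// Last.fm event IDs, passing the artist name as free text too.
// 'YOUR_API_KEY' is a placeholder, not a real key.
async function findEventPhotos(artist, eventIds) {
  var params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: 'YOUR_API_KEY',
    machine_tags: eventIds.map(function (id) {
      return 'lastfm:event=' + id;
    }).join(','),
    machine_tag_mode: 'any',
    text: artist,
    format: 'json',
    nojsoncallback: '1'
  });
  var response = await fetch('https://api.flickr.com/services/rest/?' + params);
  var data = await response.json();
  return data.photos.photo;
}
```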
Here’s an example result in the sidebar on Huffduffer: huffduffer.com/tags/music:artist=Bat+for+Lashes
It’s messy but it works. I guess that’s the dictionary definition of a hack.
A good look at choosing fonts for font linking.
A few weeks back, I saw that Fanfarlo were going to be playing at The Hanbury Ballroom in Brighton this Wednesday. I figured I’d probably end up going to the gig so I marked myself as “maybe attending” on the event page on Last.fm.
Fast forward to last week and I’m browsing through the list of upcoming events in Brighton on Last.fm. I see that The Fiery Furnaces will be playing in The Prince Albert on October 7th. When I click through to the event page, this is what I see:
Don’t forget you might be going to Fanfarlo at The Hanbury Club on the same date.
That’s nice. That’s really nice. It’s a small touch but it’s the combination of all those small things that adds up to a pleasant experience. This felt …thoughtful.
Of course, it still doesn’t change the fact that I have to choose between Fanfarlo and The Fiery Furnaces.