
Ortigian alley.
A handy little script from Aaron to improve the form validation experience.
The opening paragraphs of this article should be a mantra recited by every web developer before they begin their working day:
Things on the web can break — the odds are stacked against us. Lots can go wrong: a network request fails, a third-party library breaks, a JavaScript feature is unsupported (assuming JavaScript is even available), a CDN goes down, a user behaves unexpectedly (they double-click a submit button), the list goes on.
Fortunately, we as engineers can avoid, or at least mitigate the impact of breakages in the web apps we build. This however requires a conscious effort and mindset shift towards thinking about unhappy scenarios just as much as happy ones.
I love, love, love the emphasis on reducing assumptions:
Taking a more defensive approach when writing code helps reduce programmer errors arising from making assumptions. Pessimism over optimism favours resilience.
Hell, yeah!
Accepting the fragility of the web is a necessary step towards building resilient systems. A more reliable user experience is synonymous with happy customers. Being equipped for the worst (proactive) is better than putting out fires (reactive) from a business, customer, and developer standpoint (less bugs!).
I think I’ve found some more strange service worker behaviour in Chrome.
It all started when I was checking out the very nice new redesign of WebPageTest. I figured while I was there, I’d run some of my sites through it. I passed in a URL from The Session. When the test finished, I noticed that the “screenshot” tab said that something was being logged to the console. That’s odd! And the file doing the logging was the service worker script.
I fired up Chrome (which isn’t my usual browser), and started navigating around The Session with dev tools open to see what appeared in the console. Sure enough, there was a failed fetch attempt being logged. The only time my service worker script logs anything is in the catch clause of fetching pages from the network. So Chrome was trying to fetch a web page, failing, and logging this error:
The service worker navigation preload request failed with a network error.
But all my pages were loading just fine. So where was the error coming from?
After a lot of spelunking and debugging, I think I’ve figured out what’s happening…
First of all, I’m making use of navigation preloads in my service worker. That’s all fine.
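For anyone unfamiliar with the feature, switching it on happens in the service worker’s activate handler, and generally looks something like this (a simplified sketch, not the exact code from my service worker):

addEventListener('activate', event => {
  event.waitUntil(
    (async () => {
      if (self.registration.navigationPreload) {
        // Let the browser start the network request in parallel with the service worker starting up
        await self.registration.navigationPreload.enable();
      }
    })()
  );
});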
Secondly, the website is a progressive web app. It has a manifest file that specifies some metadata, including start_url. If someone adds the site to their home screen, this is the URL that will open.
Thirdly, Google recently announced that they’re tightening up the criteria for displaying install prompts for progressive web apps. If there’s no network connection, the site still needs to return a 200 OK response: either a cached copy of the URL or a custom offline page.
So here’s what I think is happening. When I navigate to a page on the site in Chrome, the service worker handles the navigation just fine. It also parses the manifest file I’ve linked to and checks to see if that start URL would load if there were no network connection. And that’s when the error gets logged.
I only noticed this behaviour because I had specified a query string on my start URL in the manifest file. Instead of a start_url value of /, I’ve set a start_url value of /?homescreen. And when the error shows up in the console, the URL being fetched is /?homescreen.
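For reference, the relevant part of the manifest file looks something like this (the name and display values here are illustrative placeholders rather than the exact values from the site):

{
  "name": "The Session",
  "start_url": "/?homescreen",
  "display": "standalone"
}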
Crucially, I’m not seeing a warning in the console saying “Site cannot be installed: Page does not work offline.” So I think this is all fine. If I were actually offline, there would indeed be an error logged to the console and that start_url request would respond with my custom offline page. It’s just a bit confusing that the error is being logged when I’m online.
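If you’re curious, the pattern of falling back to a custom offline page for navigations looks roughly like this (a simplified sketch in which /offline stands in for whatever URL the offline page is cached under):

addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      fetch(event.request)
      .then( fetchResponse => {
        return fetchResponse;
      })
      .catch( fetchError => {
        console.error(fetchError);
        // Fall back to the previously-cached offline page
        return caches.match('/offline');
      })
    );
  }
});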
I thought I’d share this just in case anyone else is logging errors to the console in the catch clause of fetches and is seeing an error even when everything appears to be working fine. I think there’s nothing to worry about.
Update: Jake confirmed my diagnosis and agreed that the error is a bit confusing. The good news is that it’s changing. In Chrome Canary the error message has already been updated to:
DOMException: The service worker navigation preload request failed due to a network error. This may have been an actual network error, or caused by the browser simulating offline to see if the page works offline: see https://w3c.github.io/manifest/#installability-signals
Much better!
Another five pieces of sweet, sweet low-hanging fruit:
- Always label your inputs.
- Highlight input element on focus.
- Break long forms into smaller sections.
- Provide error messages.
- Avoid horizontal layout forms unless necessary.
This is a great walkthrough of making a common form pattern accessible. No complex code here: some HTML is all that’s needed.
How design fiction was co-opted. A piece by Tim Maughan with soundbites from Julian Bleecker, Anab Jain, and Scott Smith.
Bayesian analysis vs. statistical significance, clearly explained.
It seems that some code that I wrote in Going Offline is haunted. It’s the trimCache function.
First, there was the issue of a typo. Or maybe it’s more of a brainfart than a typo, but either way, there’s a mistake in the syntax that was published in the book.
Now it turns out that there’s also a problem with my logic.
To recap, this is a function that takes two arguments: the name of a cache, and the maximum number of items that cache should hold.
function trimCache(cacheName, maxItems) {
First, we open up the cache:
caches.open(cacheName)
.then( cache => {
Then, we get the items (keys) in that cache:
cache.keys()
.then(keys => {
Now we compare the number of items (keys.length) to the maximum number of items allowed:
if (keys.length > maxItems) {
If there are too many items, delete the first item in the cache—that should be the oldest item:
cache.delete(keys[0])
And then run the function again:
.then(
trimCache(cacheName, maxItems)
);
A-ha! See the problem?
Neither did I.
It turns out that, even though I’m using then, the function will be invoked immediately, instead of waiting until the first item has been deleted.
Trys helped me understand what was going on by making a useful analogy. You know when you use setTimeout, you can’t put a function—complete with parentheses—as the first argument?
window.setTimeout(doSomething(someValue), 1000);
In that example, doSomething(someValue) will be invoked immediately—not after 1000 milliseconds. Instead, you need to create an anonymous function like this:
window.setTimeout( function() {
doSomething(someValue)
}, 1000);
Well, it’s the same in my trimCache function. Instead of this:
cache.delete(keys[0])
.then(
trimCache(cacheName, maxItems)
);
I need to do this:
cache.delete(keys[0])
.then( function() {
trimCache(cacheName, maxItems)
});
Or, if you prefer the more modern arrow function syntax:
cache.delete(keys[0])
.then( () => {
trimCache(cacheName, maxItems)
});
Either way, I have to wrap the recursive function call in an anonymous function.
Here’s a gist with the updated trimCache function.
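Putting all those pieces together, the updated function looks like this:

function trimCache(cacheName, maxItems) {
  caches.open(cacheName)
  .then( cache => {
    cache.keys()
    .then( keys => {
      if (keys.length > maxItems) {
        cache.delete(keys[0])
        .then( () => {
          trimCache(cacheName, maxItems)
        });
      }
    });
  });
}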
What’s annoying is that this mistake wasn’t throwing an error. Instead, it was causing a performance problem. I’m using this pattern right here on my own site, and whenever my cache of pages or images gets too big, the trimCache function would get called …and then wouldn’t stop running.
I’m very glad that—with the help of Trys at last week’s Homebrew Website Club Brighton—I was finally able to get to the bottom of this. If you’re using the trimCache function in your service worker, please update the code accordingly.
Management regrets the error.
This might just be the most nerdily specific book I’ve read and enjoyed. Even if you’re not planning to build a web browser any time soon, it’s kind of fascinating to see how HTML is parsed—and how much of an achievement the HTML spec is, for specifying consistent error-handling, if nothing else.
The last few chapters are still in progress, but you can read the whole thing online or buy an ePub version.
Over on the Failed Architecture site, there’s a piece about Kevin Lynch’s 1960 book The Image Of The City. It’s kind of fun to look back at a work like that, from today’s vantage point of ubiquitous GPS and smartphones with maps that bestow God-like wayfinding. How much did Lynch—or any other futurist from the past—get right about our present?
Quite a bit, as it turns out.
Lynch invented the term ‘imageability’ to describe the degree to which the urban environment can be perceived as a clear and coherent mental image. Reshaping the city is one way to increase imageability. But what if the cognitive map were complemented by some external device? Lynch proposed that this too could strengthen the mental image and effectively support navigation.
Past visions of the future can be a lot of fun. Matt Novak’s Paleofuture blog is testament to that. Present visions of the future are rarely as enjoyable. But every so often, one comes along…
Kevin Kelly has a new piece in Wired magazine about Augmented Reality. He suggests we don’t call it AR. Sounds good to me. Instead, he proposes we use David Gelernter’s term “the mirrorworld”.
I like it! I feel like the term won’t age well, but that’s not the point. The term “cyberspace” hasn’t aged well either—it sounds positively retro now—but Gibson’s term served its purpose in prompting discussion and spurring excitement. I feel like Kelly’s “mirrorworld” could do the same.
Incidentally, the mirrorworld has already made an appearance in the William Gibson book Spook Country in the form of locative art:
Locative art, a melding of global positioning technology to virtual reality, is the new wrinkle in Gibson’s matrix. One locative artist, for example, plants a virtual image of F. Scott Fitzgerald dying at the very spot where, in fact, he had his Hollywood heart attack, and does the same for River Phoenix and his fatal overdose.
Yup, that sounds like the mirrorworld:
Time is a dimension in the mirrorworld that can be adjusted. Unlike the real world, but very much like the world of software apps, you will be able to scroll back.
Now look, normally I’m wary to the point of cynicism when it comes to breathless evocations of fantastical futures extrapolated from a barely functioning technology of today, but damn, if Kevin Kelly’s enthusiasm isn’t infectious! He invokes Borges. He acknowledges the challenges. But mostly he pumps up the excitement by baldly stating possible outcomes as though they are inevitabilities:
We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.
When he really gets going, we enter into some next-level science-fictional domains:
The mirrorworld will be a world governed by light rays zipping around, coming into cameras, leaving displays, entering eyes, a never-ending stream of photons painting forms that we walk through and visible ghosts that we touch. The laws of light will govern what is possible.
And then we get sentences like this:
History will be a verb.
I kind of love it. I mean, I’m sure we’ll look back on it one day and laugh, shaking our heads at its naivety, but for right now, it’s kind of refreshing to read something so unabashedly hopeful and so wildly optimistic.
Whenever I create a fetch event inside a service worker, my code roughly follows the same pattern. There’s a then clause which gets executed if the fetch is successful, and a catch clause in case anything goes wrong:
fetch( request)
.then( fetchResponse => {
// Yay! It worked.
})
.catch( fetchError => {
// Boo! It failed.
});
In my book—Going Offline—I’m at pains to point out that those arguments being passed into each clause are yours to name. In this example I’ve called them fetchResponse and fetchError but you can call them anything you want.
I always do something with the fetchResponse inside the then clause—either I want to return the response or put it in a cache.
But I rarely do anything with fetchError. Because of that, I’ve sometimes made the mistake of leaving it out completely:
fetch( request)
.then( fetchResponse => {
// Yay! It worked.
})
.catch( () => {
// Boo! It failed.
});
Don’t do that. I think there’s some talk of making the error argument optional, but for now, some browsers will get upset if it’s not there.
So always include that argument, whether you call it fetchError or anything else. And seeing as it’s an error, this might be a legitimate case for outputting it to the browser’s console, even in production code.
And yes, you can output to the console from a service worker. Even though a service worker can’t access anything relating to the document object (or, indeed, the window object), you can still make use of the console object available in the worker’s global scope.
My muscle memory when it comes to sending something to the console is to use console.log:
fetch( request)
.then( fetchResponse => {
return fetchResponse;
})
.catch( fetchError => {
console.log(fetchError);
});
But in this case, the console.error method is more appropriate:
fetch( request)
.then( fetchResponse => {
return fetchResponse;
})
.catch( fetchError => {
console.error(fetchError);
});
Now when there’s a connectivity problem, anyone with a console window open will see the error displayed bold and red.
If that seems a bit strident to you, there’s always console.warn which will still make the output stand out, but without being quite so alarmist:
fetch( request)
.then( fetchResponse => {
return fetchResponse;
})
.catch( fetchError => {
console.warn(fetchError);
});
That said, in this case, console.error feels like the right choice. After all, it is technically an error.
Paul Yabsley wrote to let me know about an error in Going Offline. It’s rather embarrassing because it’s code that I’m using in the service worker for adactio.com but for some reason I messed it up in the book.
It’s the trimCache function in Chapter 7: Tidying Up. That’s the reusable piece of code that recursively reduces the number of items in a specified cache (cacheName) to a specified amount (maxItems). On pages 95 and 96 I describe the process of creating the function which, in the book, ends up like this:
function trimCache(cacheName, maxItems) {
cacheName.open( cache => {
cache.keys()
.then( items => {
if (items.length > maxItems) {
cache.delete(items[0])
.then(
trimCache(cacheName, maxItems)
); // end delete then
} // end if
}); // end keys then
}); // end open
} // end function
See the problem? It’s right there at the start when I try to open the cache like this:
cacheName.open( cache => {
That won’t work. The open method only works on the caches object—I should be passing the name of the cache into the caches.open method. So the code should look like this:
caches.open( cacheName )
.then( cache => {
Everything else remains the same. The corrected trimCache function is here:
function trimCache(cacheName, maxItems) {
caches.open(cacheName)
.then( cache => {
cache.keys()
.then(items => {
if (items.length > maxItems) {
cache.delete(items[0])
.then(
trimCache(cacheName, maxItems)
); // end delete then
} // end if
}); // end keys then
}); // end open then
} // end function
Sorry about that! I must’ve had some kind of brainfart when I was writing (and describing) that one line of code.
You may want to deface your copy of Going Offline by taking a pen to that code example. Normally I consider the practice of writing in books to be barbarism, but in this case …go for it.
Update: There was another error in the code for trimCache! Here’s the fix.
There was a time, circa 2009, when no home design story could do without a reference to Mad Men. There is a time, circa 2018, when no personal tech story should do without a Black Mirror reference.
Black Mirror Home. It’s all fun and games until the screaming starts.
When these products go haywire—as they inevitably do—the Black Mirror tweets won’t seem so funny, just as Mad Men curdled, eventually, from ha-ha how far we’ve come to, oh-no we haven’t come far enough.
Well now, this is a clever bit of hardware hacking.
Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.
Rebuttals to the most oft-asked requests for browsers to change the way they handle CSS.
Ethan points out the tension between net neutrality and AMP:
The more I’ve thought about it, I think there’s a strong, clear line between ISPs choosing specific kinds of content to prioritize, and projects like Google’s Accelerated Mobile Project. And apparently, so does the FCC chair: companies like Google, Facebook, or Apple are choosing which URLs get delivered as quickly as possible. But rather than subsidizing that access through paid sponsorships, these companies are prioritizing pages republished through their proprietary channels, using their proprietary document formats.
Nosedive is the first episode of season three of Black Mirror.
It’s fairly light-hearted by the standards of Black Mirror, but all the more chilling for that. It depicts a dystopia where people rate one another for points that unlock preferential treatment. It’s like a twisted version of the whuffie from Cory Doctorow’s Down And Out In The Magic Kingdom. Cory himself points out that reputation economies are a terrible idea.
Nosedive has become a handy shortcut for pointing to the dangers of social media (in the same way that Minority Report was a handy shortcut for gestural interfaces and Her is a handy shortcut for voice interfaces).
“Social media is bad, m’kay?” is an understandable but, I think, fairly shallow reading of Nosedive. The problem isn’t with the apps, it’s with the system. A world in which we desperately need to keep our score up if we want to have any hope of advancing? That’s a nightmare scenario.
The thing is …that system exists today. Credit scores are literally a means of applying a numeric value to human beings.
Nosedive depicts a world where your score determines which seats you get in a restaurant, or which model of car you can rent. Meanwhile, in our world, your score determines whether or not you can get a mortgage.
Nosedive depicts a world in which you know your own score. Meanwhile, in our world, good luck with that:
It is very difficult for a consumer to know in advance whether they have a high enough credit score to be accepted for credit with a given lender. This situation is due to the complexity and structure of credit scoring, which differs from one lender to another.
Lenders need not reveal their credit score head, nor need they reveal the minimum credit score required for the applicant to be accepted. Owing only to this lack of information to the consumer, it is impossible for him or her to know in advance if they will pass a lender’s credit scoring requirements.
Black Mirror has a good track record of exposing what’s unsavoury about our current time and place. On the surface, Nosedive seems to be an exposé on the dangers of going too far with the presentation of self in everyday life. Scratch a little deeper though, and it reveals an even more uncomfortable truth: that we’re living in a world driven by systems even worse than what’s depicted in this dystopia.
How about this for a nightmare scenario:
Two years ago Douglas Rushkoff had an unpleasant encounter outside his Brooklyn home. Taking out the rubbish on Christmas Eve, he was mugged — held at knife-point by an assailant who took his money, his phone and his bank cards. Shaken, he went back indoors and sent an email to his local residents’ group to warn them about what had happened.
“I got two emails back within the hour,” he says. “Not from people asking if I was OK, but complaining that I’d posted the exact spot where the mugging had taken place — because it might adversely affect their property values.”
Dave uses just a smidgen of JavaScript to whip HTML5’s native form validation into shape.
Instead of being prescriptive about error messaging, we use what the browser natively gives us.
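To give a flavour of the approach (this is my own rough sketch rather than Dave’s actual code), you can switch off the browser’s default validation bubbles but still lean on the wording it provides for each invalid field:

const form = document.querySelector('form');

// Suppress the browser's default validation bubbles...
form.setAttribute('novalidate', '');

form.addEventListener('submit', event => {
  if (!form.checkValidity()) {
    event.preventDefault();
    // ...but reuse the browser's own error messages for each invalid field
    form.querySelectorAll(':invalid').forEach( field => {
      console.log(field.name, field.validationMessage);
    });
  }
});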
What an excellent example of a responsive calendar!