Tags: robust


Tuesday, January 2nd, 2018

Robust Client-Side JavaScript – A Developer’s Guide · molily

This is a terrific resource on writing client-side JavaScript without making too many assumptions.

It starts by covering some of the same topics as Resilient Web Design—fault tolerance, Postel’s law, progressive enhancement—but then goes deep, deep, deep into the specifics of applying that to JavaScript.

And the whole thing is available here for free under a Creative Commons licence!

Tuesday, May 17th, 2016

Design systems and Postel’s law | Journal | The Personal Disquiet of Mark Boulton

Marvellous insights from Mark on how the robustness principle can and should be applied to styleguides and pattern libraries (’sfunny—I was talking about Postel’s Law just this morning at An Event Apart in Boston).

Being liberal in accepting things into the system, and being liberal about how you go about that, ensures you don’t police the system. You collaborate on it.

So, what about the output? Remember: be ‘conservative in what you do’. For a design system, this means your output of the system – guidelines, principles, design patterns, code, etc etc. – needs to be clear, unambiguous, and understandable.

Thursday, July 9th, 2015

Designing with Progressive Enhancement — sixtwothree.org

The full text of Jason’s talk at this year’s CSS Summit. It’s a great read, clearing up many of the misunderstandings around progressive enhancement and showing some practical examples of it working at each level of the web’s technology stack.

Friday, April 24th, 2015

Everyone has JavaScript, right?

And that’s why you always use progressive enhancement!

[Screenshot from the TV series Arrested Development, showing a character whose catchphrase began ‘And that’s why…’]

Sunday, April 12th, 2015

Keeping it simple: coding a carousel by Christian Heilmann

I like this nice straightforward approach. Instead of jumping into the complexities of the final interactive component, Chris starts with the basics and layers on the complexity one step at a time, thereby creating a more robust solution.

If I had one small change to suggest, maybe aria-label might work better than offscreen text for the controls …as documented by Heydon.
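To illustrate that suggestion, here’s a rough sketch of my own—not code from Chris’s article, and the class name and arrow characters are invented:

```javascript
// Hypothetical sketch: carousel controls labelled with aria-label
// instead of visually-hidden offscreen text.
var prevButton = document.createElement('button');
prevButton.setAttribute('aria-label', 'Previous slide');
prevButton.textContent = '←'; // decorative; the label carries the meaning

var nextButton = document.createElement('button');
nextButton.setAttribute('aria-label', 'Next slide');
nextButton.textContent = '→';

// '.carousel' is an assumed container, not from the original article
var carousel = document.querySelector('.carousel');
carousel.appendChild(prevButton);
carousel.appendChild(nextButton);
```

Creating the controls in JavaScript also fits the layered approach: controls that only work with JavaScript only appear when JavaScript runs.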

Monday, May 30th, 2011

Hashcloud

Hashbangs. Yes, again. This is important, dammit!

When the topic first surfaced, prompted by Mike’s post on the subject, there was a lot of discussion. For a great impartial round-up, I highly recommend two posts by James Aylett.

There seems to be a general consensus that hashbang URLs are bad. Even those defending the practice portray them as a necessary evil. That is, once a better solution is viable—like the HTML5 History API—there will no longer be any need for #! in URLs. I’m certain that it’s a matter of when, not if, Twitter switches over.

But even then, that won’t be the end of the story.

Dan Webb has written a superb long-zoom view on the danger that hashbangs pose to the web:

There’s no such thing as a temporary fix when it comes to URLs. If you introduce a change to your URL scheme you are stuck with it for the foreseeable future. You may internally change your links to fit your new URL scheme but you have no control over the rest of the web that links to your content.

Therein lies the rub. Even if—nay when—Twitter switch over to proper URLs, there will still be many, many blog posts and other documents linking to individual tweets …and each of those links will contain #!. That means that Twitter must make sure that their home page maintains a client-side routing mechanism for inbound hashbang links (remember, the server sees nothing after the # character—the only way to maintain these redirects is with JavaScript).
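To make that concrete, here’s a rough sketch of the kind of redirect shim such a home page must carry indefinitely—a hypothetical illustration, not Twitter’s actual code:

```javascript
// Hypothetical sketch: honouring inbound hashbang links on the client.
// The server never sees anything after the # character, so this
// redirect can only ever happen in JavaScript.
// e.g. http://example.com/#!/jack/status/20
//   -> http://example.com/jack/status/20
if (location.hash.indexOf('#!') === 0) {
  var path = location.hash.slice(2).replace(/^\//, '');
  location.replace(location.protocol + '//' + location.host + '/' + path);
}
```

And if that script fails to load—or the visitor has JavaScript disabled—the old links are simply broken.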

As Paul put it in such a wonderfully pictorial way, the web is agreement. Hacks like hashbang URLs—and URL shorteners—weaken that agreement.

Sunday, May 29th, 2011

danwebb.net - It’s About The Hashbangs

A superb post by Dan on the bigger picture of what’s wrong with hashbang URLs. Well written and well reasoned.

Thursday, February 10th, 2011

ongoing by Tim Bray · Broken Links

Tim Bray calmly explains why hash-bang URLs are a very bad idea.

This is what we call “tight coupling” and I thought that anyone with a Computer Science degree ought to have been taught to avoid it.

Wednesday, February 9th, 2011

Going Postel

I wrote a little while back about my feelings on hash-bang URLs:

I feel so disappointed and sad when I see previously-robust URLs swapped out for the fragile #! fragment identifiers. I find it hard to articulate my sadness…

Fortunately, Mike Davies is more articulate than I. He’s written a detailed account of breaking the web with hash-bangs.

It would appear that hash-bang usage is on the rise, despite the fact that it was never intended as a long-term solution. Instead, the pattern (or anti-pattern) was intended as a last resort for crawling Ajax-obfuscated content:

So the #! URL syntax was especially geared for sites that got the fundamental web development best practices horribly wrong, and gave them a lifeline to getting their content seen by Googlebot.
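For context, that lifeline worked by translating the fragment—which servers never receive—into a query parameter that they do. This is my own sketch of the documented _escaped_fragment_ mapping, not Google’s code:

```javascript
// Sketch of Google's Ajax-crawling scheme: the crawler rewrites
// the #! fragment into an _escaped_fragment_ query parameter,
// which the server can actually see and respond to.
function toCrawlableUrl(url) {
  var parts = url.split('#!');
  if (parts.length < 2) return url; // no hashbang, nothing to do
  var base = parts[0];
  var separator = base.indexOf('?') === -1 ? '?' : '&';
  return base + separator + '_escaped_fragment_=' + encodeURIComponent(parts[1]);
}

// toCrawlableUrl('http://example.com/#!/photos/1')
// -> 'http://example.com/?_escaped_fragment_=%2Fphotos%2F1'
```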

Mike goes into detail on the Gawker outage that was a direct result of its “sites” being little more than single pages that require JavaScript to access anything.

I’m always surprised when I come across a site that deliberately chooses to make its content harder to access.

Though it may not seem like it at times, we’re actually in a pretty great position when it comes to front-end development on the web. As long as we use progressive enhancement, the front-end stack of HTML, CSS, and JavaScript is remarkably resilient. Remove JavaScript and some behavioural enhancements will no longer function, but everything will still be addressable and accessible. Remove CSS and your lovely visual design will evaporate, but your content will still be addressable and accessible. There aren’t many other platforms that can offer such a robust level of fault tolerance.
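Here’s a minimal sketch of that layering—the class name and markup are invented for illustration:

```javascript
// Sketch: a plain link does the work; JavaScript merely enhances it.
// 'a.show-comments' and '#comments' are hypothetical.
var link = document.querySelector('a.show-comments');
if (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', link.href); // the same URL the link already points to
    xhr.onload = function () {
      document.querySelector('#comments').innerHTML = xhr.responseText;
    };
    xhr.send();
  });
}
// Remove JavaScript and the href still works: the content
// remains addressable and accessible.
```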

This is no accident. The web stack is rooted in Postel’s law. If you serve an HTML document to a browser, and that document contains some tags or attributes that the browser doesn’t understand, the browser will simply ignore them and render the document as best it can. If you supply a style sheet that contains a selector or rule that the browser doesn’t recognise, it will simply pass it over and continue rendering.

In fact, the most brittle part of the stack is JavaScript. While it’s far looser and more forgiving than many other programming languages, it’s still a programming language, and that means that a single typo can cause an entire script to fail in a browser.
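A contrived sketch of the contrast, safe to try in a browser console:

```javascript
// Unknown HTML is handled gracefully: the text still renders.
document.body.innerHTML = '<made-up-element>still readable</made-up-element>';

// A single JavaScript typo, by contrast, throws and halts the script.
try {
  documnet.title = 'oops'; // misspelling of 'document' -> ReferenceError
  console.log('this line never runs');
} catch (error) {
  // Without the try/catch, everything after the typo would be skipped.
  console.log('the script would have died here:', error.message);
}
```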

That’s why I’m so surprised that any front-end engineer would knowingly choose to swap out a solid declarative foundation like HTML for a more brittle scripting language. Or, as Simon put it:

Gizmodo launches redesign, is no longer a website (try visiting with JS disabled): http://gizmodo.com/

Read Mike’s article, re-read this article on URL design, and listen to what John Resig has to say in this interview.