Going up the country …for Hack Farm.
Saturday, January 31st, 2015
Friday, January 30th, 2015
This is a talk I gave at An Event Apart about eighteen months ago, all about Irish music, the web, long-term thinking, and yes, you guessed it—progressive enhancement.
I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous.
There are broadly two ways that they could potentially be used:
- Web Components are used by developers to incrementally add more powerful elements to their websites. This evolutionary approach feels very much in line with the thinking behind the extensible web manifesto. Or:
- Web Components are used by developers as a monolithic platform, much like Angular or Ember is used today. The end user either gets everything or they get nothing.
The second scenario is a much more revolutionary approach—sweep aside the web that has come before, and usher in a new golden age of Web Components. Personally, I’m not comfortable with that kind of year-zero thinking. I prefer evolution over revolution:
Revolutions sometimes change the world to the better. Most often, however, it is better to evolve an existing design rather than throwing it away. This way, authors don’t have to learn new models and content will live longer. Specifically, this means that one should prefer to design features so that old content can take advantage of new features without having to make unrelated changes. And implementations should be able to add new features to existing code, rather than having to develop whole separate modes.
The evolutionary model is exemplified by the design of HTML 5.
The revolutionary model is exemplified by the design of XHTML 2.
I really hope that the Web Components model goes down the first route.
Up until recently, my inner Web Components pendulum was swinging towards the hopeful end of my spectrum of anticipation. That was mainly driven by the ability of custom elements to extend existing HTML elements.
So, for example, instead of creating a brand new element from scratch, you can piggyback off the existing semantics of the button element.
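In markup, the two approaches might look something like this (taco-button here is a hypothetical example for illustration, not an element from any real library):

```html
<!-- A brand new element: non-supporting browsers render nothing useful -->
<taco-button>Make me a taco</taco-button>

<!-- Extending the existing element: non-supporting browsers fall back
     to a perfectly functional button -->
<button is="taco-button">Make me a taco</button>
```

The second form is what makes progressive enhancement possible: the baseline behaviour, keyboard handling, and accessibility of the button element come for free.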
For a real-world example, see Github’s use of extended time elements.
I wrote about creating responsible Web Components:
That means we can use web components as a form of progressive enhancement, turbo-charging pre-existing elements instead of creating brand new elements from scratch. That way, we can easily provide fallback content for non-supporting browsers.
I’d like to propose that a fundamental principle of good web component design should be: “Where possible, extend an existing HTML element instead of creating a new element from scratch.”
Peter Gasston also has a great post on best practice for creating custom elements:
It’s my opinion that, for as long as there is a dependence on JS for custom elements, we should extend existing elements when writing custom elements. It makes sense for developers, because new elements have access to properties and methods that have been defined and tested for many years; and it makes sense for users, as they have fallback in case of JS failure, and baked-in accessibility fundamentals.
But now it looks like this superpower of custom elements is being nipped in the bud:
It also does not address subclassing normal elements. Again, while that seems desirable the current ideas are not attractive long term solutions. Punting on it in order to ship a v1 available everywhere seems preferable.
Now, I’m not particularly wedded to the syntax of using the is="" attribute to extend existing elements …but I do think that the ability to extend existing elements declaratively is vital. I’m not alone, although I may very well be in the minority.
Bruce has outlined some use cases and Steve Faulkner has enumerated the benefits of declarative extensibility:
I think being able to extend existing elements has potential value to developers far beyond accessibility (it just so happens that accessibility is helped a lot by re-use of existing HTML features.)
Like Steve, I’ve no particular affection (or enmity) towards the is="" syntax.
I also have a niggling worry that this may affect the uptake of web components.
I think he’s absolutely right. I think there are many developers out there in a similar position to me, uncertain exactly what to make of this new technology. I was looking forward to getting really stuck into Web Components and figuring out ways of creating powerful little extensions that I could start using now. But if Web Components turn out to be an all-or-nothing technology—a “platform”, if you will—then I will not only not be using them, I’ll be actively arguing against their use.
I sneezed while I was brushing my teeth.
That was messy.
Thursday, January 29th, 2015
A scholarship fund for women students at the Flatiron School, in memory of Chloe.
Spotify has named the program the Chloe Weil Scholarship as a memorial to Chloe Weil, an inspiring designer and engineer who took a strong interest in creating opportunities for women in technology.
Smart thinking on optimising the perceived performance of loading web fonts: if you prioritise the most widely-used weight and style (usually the regular roman), and load other weights and styles subsequently, then it appears as though the font is ready sooner.
Rushing doesn’t improve things, it might even slow you down. Focusing on a few things and doing them well is worthwhile. Sharing what you learn—even while you’re still figuring things out—is even better.
We recently hired Charlotte, our first junior developer at Clearleft, and my job has taken on more of a teaching role. I’m really enjoying it, but I have no idea what I’m doing, and I worry that I’m doing all the wrong things.
This article looks like it has some good, sensible advice …although I should probably check to see if Charlotte agrees.
I really like the self-examination that Ian and his team at Lonely Planet are doing here. Instead of creating a framework for creating a living style guide and calling it done, they’re constantly looking at what could be done better, and revisiting earlier decisions.
I’m intrigued by the way they’ve decided to reorganise their files by component rather than by filetype.
A short documentary on the wonderful Grace Hopper.
Wednesday, January 28th, 2015
The Guardian have hit the big red button and made their responsive site the default. Great stuff!
(top tip: don’t read the comments)
Dammit! I’ll be out of the UK when this is on:
What an opportunity!
29 years ago today.
Luke continues to tilt against the windmills of the security theatre inertia that still has us hiding passwords by default. As ever, he’s got the data to back up his findings.
Tuesday, January 27th, 2015
Everyone who calls for WebKit in Internet Explorer is exactly the same kind of developer who would have coded to Internet Explorer 15 years ago (and probably happily displayed the best viewed in badge).
It’s happening again, and every petulant, lazy developer who calls for a WebKit-only world is responsible.
From the ashes of Opera, a new browser is born. Download the tech preview and take it for a spin—it’s quite nice.
Anna documents the most interesting bit (for me) of her new wearable/watch/wrist-device/whatever — the web browser.
This Eno-esque deck of cards by Scott could prove very useful for a lot of Clearleft projects.
I love Lyza’s comment on the par-for-the-course user-agent string of Microsoft’s brand new Spartan browser:
There must be an entire field emerging: UA archaeologist and lore historian. It’s starting to read like the “begats” in the bible. All browsers must connect their lineage to Konqueror or face a lack-of-legitimacy crisis!
I’ve been thinking about this a lot lately; alternate ways of paginating through the past e.g. by day instead of by arbitrary amount.
“You have to mark those promoted tweets as spam—it’s the equivalent to pressing the dog’s nose into the poop.” —me, just now, to @wordridden
Follows link to read an article:
Muscle memory dismisses pop-up overlay.
Muscle memory dismisses cookie y/n.
You can’t beat the simultaneously reviving and relaxing properties of a good cup of tea.
A question of timing
I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.
I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.
Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).
That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.
Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:
Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.
Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done on the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.
I contend that spinners or “loading indicators” should become a rarity
I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?
I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:
- Listen for any clicks on the page,
- If a “previous” or “next” button is clicked, then:
- Display a loading indicator,
- Request the new data from the server, and
- Update the page with the new data.
In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite of its intended purpose—it draws attention to the fact that there’s a round-trip to the server.
“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”
The updated flow now looks like this:
- Listen for any clicks on the page,
- If a “previous” or “next” button is clicked, then:
- Start a timer, and
- Request the new data from the server.
- If the timer reaches an upper limit, show a loading indicator.
- When the server sends a response, cancel the timer and
- Update the page with the new data.
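Sketched as JavaScript, the updated flow might look something like this. This is an illustration of the pattern, not the actual code from The Session; the callback names (showSpinner, hideSpinner, updatePage) are placeholders:

```javascript
// Only show a loading indicator if the server takes too long to respond.
const SPINNER_DELAY = 200; // milliseconds; much over ~250ms starts to feel sluggish

function withDelayedSpinner(request, { showSpinner, hideSpinner, updatePage }) {
  let spinnerVisible = false;

  // Start a timer: the spinner only appears if the response is slow.
  const timer = setTimeout(() => {
    spinnerVisible = true;
    showSpinner();
  }, SPINNER_DELAY);

  return request.then((html) => {
    // The response arrived: cancel the timer, so a fast response
    // never shows the spinner at all.
    clearTimeout(timer);
    if (spinnerVisible) hideSpinner();
    updatePage(html);
  });
}
```

In a real page, request would be the Ajax call for the next chunk of HTML, something like withDelayedSpinner(fetch('/next-page').then(r => r.text()), callbacks).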
Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:
- I click on a button,
- I briefly see a loading indicator,
- I see the new data.
Now your experience is:
- I click on a button,
- I see the new data.
…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.
The question is: how long is too long? How long do I wait before showing the loading indicator?
The Nielsen Norman group offers this bit of research:
0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)
You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.
There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.
Now to see what else I can learn from the rest of those design principles.
Sunday, January 25th, 2015
Saturday, January 24th, 2015
For people of a certain age, this will bring back memories of a classic screensaver.
If you had told me back then that the screensaver could one day be recreated in CSS, I’m not sure I would’ve believed it.
In case you missed it earlier:
It was awesome.
Y’know, you should really pay more attention in future.
Friday, January 23rd, 2015
That’s Netscape 1.0n, released in December of 1994, running inside Windows 3.11, released in August of 1993, running inside of Google Chrome 39.0.2171.99 m, released about a week ago, on a Windows 7 PC, released in 2009.
But when it comes to trying to navigate the web with that set-up, things get a bit depressing.
Seeing @ElonMusk reference a Culture GCU ship Mind makes me all gooey inside.
I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.
The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.
That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).
In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:
My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.
Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.
Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point as PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.
On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds so I’d better explain what I mean by that.
Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.
So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.
There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.
That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.
If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.
It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!
The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.
The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.
I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.
The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”
If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.
Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.
We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).
Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.
Whether any piece of software will work for you comes down to a simple question: do the opinions baked into that software match your own? If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.
That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.
(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)
So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.
But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.
Hyperlinks subvert hierarchy.
Thursday, January 22nd, 2015
First, the browsers competed on having proprietary crap. Then, the browsers competed on standards support. Now, finally, the browsers are competing on what they can offer their users.
Wednesday, January 21st, 2015
Remember Aaron’s dConstruct talk? Well, the Atlantic has more details of his work at the Cooper Hewitt museum in this wide-ranging piece that investigates the role of museums, the value of APIs, and the importance of permanent URLs.
As I was leaving, Cope recounted how, early on, a curator had asked him why the collections website and API existed. Why are you doing this?
His retrospective answer wasn’t about scholarship or data-mining or huge interactive exhibits. It was about the web.
I find this incredibly inspiring.
Tuesday, January 20th, 2015
Lining up Responsive Day Out 3
I’ve been scheming away for a little while now on the third and final Responsive Day Out, and things have been working out better than I could have hoped—my dream line-up is becoming a reality.
Two thirds of the line-up is assembled and ready to go:
- Zoe Mickley Gillenwater
- Jake Archibald
- Alice Bartlett
- Peter Gasston
- Rachel Shillcock
- Ruth John
- Heydon Pickering
- Alla Kholmatova
See? It’s looking pretty darn good, if you ask me.
You can expect plenty of meaty front-end development topics around the latest in CSS and browser APIs, but also plenty of talk on process, accessibility, performance, and the design challenges of responsive design.
My plan is to go out with a bang for this last Responsive Day Out and, the way things are looking, that’s on the cards.
I’ll let you know when tickets will be available. It’ll probably be sometime in early March. They will, as with previous years, be ludicrously good value.
Oh, and to get you in the mood, this might be a good time to revisit the audio recordings from the first two years.
(and that’s just 2/3rds of the speakers for http://responsiveconf.com/ …more to come)
Rob Larsen has published a book with O’Reilly called “The Uncertain Web: Web Development in a Changing Landscape”. I like it:
A refreshingly honest look at the chaotic, wonderful world of web development, with handy, practical advice for making future-friendly, backward-compatible websites.
A profile of the wonderful Internet Archive.
No one believes any longer, if anyone ever did, that “if it’s on the Web it must be true,” but a lot of people do believe that if it’s on the Web it will stay on the Web. Chances are, though, that it actually won’t.
Brewster Kahle is my hero.
Kahle is a digital utopian attempting to stave off a digital dystopia. He views the Web as a giant library, and doesn’t think it ought to belong to a corporation, or that anyone should have to go through a portal owned by a corporation in order to read it. “We are building a library that is us,” he says, “and it is ours.”
Monday, January 19th, 2015
I don’t agree with the conclusion of this post:
But I think the author definitely taps into a real issue:
The real problem is the perception that any code running in the browser is front-end code.
This is something we’re running into at Clearleft: we’ve never done backend programming (by choice), but it gets confusing if a client wants us to create something in Angular or Ember, “because that’s front end code, right?”
The difference between back-enders and front-enders is that the first work in only one environment, while the second have to work with a myriad of environments that may hold unpleasant surprises.
Saturday, January 17th, 2015
Designing primarily in a laptop web browser and testing with a mouse rather than fingers may come to look very out of date soon.
Friday, January 16th, 2015
This is quite amazing!
I remember getting up on Christmas day 2003 (I was in Arizona), hoping to get news of Beagle 2’s successful landing. Alas, the news never came.
For something that size to be discovered now …that’s quite something.
Thursday, January 15th, 2015
I have doubts about Angular 1.x’s suitability for modern web development. If one is uncharitably inclined, one could describe it as a front-end framework by non-front-enders for non-front-enders.
Wednesday, January 14th, 2015
So myself and @WordRidden are just chilling in our Kyoto hotel room …when it starts shaking a bit.
It’s our first (little) earthquake!
Laid low in Kyoto.
Tuesday, January 13th, 2015
Monday, January 12th, 2015
Sunday, January 11th, 2015
Japan, I am in you.
Saturday, January 10th, 2015
Going to Japan. brb
Friday, January 9th, 2015
I have to admit, my initial reaction to the idea of providing free access to some websites for people in developing countries was “well, it’s better than no access at all, right?” …but the more I think about it, the more I realise how short-sighted that is. The power of the internet stems from being a stupid network and anything that compromises that—even with the best of intentions—is an attack on its fundamental principles.
On the surface, it sounds great for carriers to exempt popular apps from data charges. But it’s anti-competitive, patronizing, and counter-productive.
Dropping our films down the memory hole. Welcome to the digital dark age.
I’m not a new developer, but I can definitely relate to this. In fact, when I’ve spoken to any developer about this, it turns out that everyone feels overwhelmed by how much we’re expected to know. That’s not good. We should open up and talk about this more (like Charlotte is doing here).
As someone entering their mid 40s, I find this research into “the U-curve” immensely reassuring.
I’ve spoken at quite a few events over the last few years (2014 was a particularly busy year). Many—in fact, most—of those events were overseas. Quite a few were across the Atlantic Ocean, so I’ve partaken of quite a few transatlantic flights.
Most of the time, I’d fly British Airways. They generally have direct flights to most of the US destinations where those speaking engagements were happening. This means that I racked up quite a lot of frequent-flyer miles, or as British Airways labels them, “avios.”
Frequent-flyer miles were doing gamification before gamification was even a thing. You’re lured into racking up your count, even though it’s basically a meaningless number. With BA, for example, after I’d accumulated a hefty balance of avios points, I figured I’d try to use them to pay for an upcoming flight. No dice. You can increase your avios score all you like; when it actually comes to spending them, computer says “no.”
So my frequent-flyer miles were basically like bitcoins—in one sense, I had accumulated all this wealth, but in another sense, it was utterly worthless.
(I’m well aware of just how first-world-problemy this sounds: “Oh, it’s simply frightful how inconvenient it is for one to spend one’s air miles these days!”)
Early in 2014, I decided to flip it on its head. Instead of waiting until I needed to fly somewhere and then trying to spend my miles to get there (which never worked), I instead looked at where I could possibly get to, given my stash of avios points. The BA website was able to tell me, “hey, you can fly to Japan and back …if you travel in the off-season …in about eight months’ time.”
Alrighty, then. Let’s do that.
Now, even if you can book a flight using avios points, you still have to pay all the taxes and surcharges for the flight (death and taxes remain the only certainties). The taxes for two people to fly from London to Tokyo and back are not inconsiderable.
But here’s the interesting bit: the taxes are a fixed charge; they don’t vary according to what class you’re travelling. So when I was booking the flight, I was basically presented with the option to spend X amount of unspendable imaginary currency to fly economy, or more of unspendable imaginary currency to fly business class, or even more of the same unspendable imaginary currency to fly—get this—first class!
Hmmm …well, let me think about that decision for almost no discernible length of time. Of course I’m going to use as many of those avios points as I can! After all, what’s the point of holding on to them—it’s not like they’re of any use.
The end result is that tomorrow, myself and Jessica are going to fly from Heathrow to Narita …and we’re going to travel in the first class cabin! Squee!
Not only that, but it turns out that there are other things you can spend your avios points on after all. One of those things is hotel rooms. So we’ve managed to spend even more of the remaining meaningless balance of imaginary currency on some really nice hotels in Tokyo.
We’ll be in Japan for just over a week. We’ll start in Tokyo, head down to Kyoto, do a day trip to Mount Kōya, and then end up back in Tokyo.
We are both ridiculously excited about this trip. I’m actually going somewhere overseas that doesn’t involve speaking at a conference—imagine that!
There’s so much to look forward to—Sushi! Ramen! Yakitori!
And all it cost us was a depletion of an arbitrary number of points in a made-up scoring mechanism.
Trying out the lovely scarf that @Wordridden made for me.
As always, systems thinking makes a lot of sense for analysing problems, even if—or, especially if—it’s a social issue.
There’s more than a whiff of Indie Web thinking in this sequel to the Cluetrain Manifesto from Doc Searls and Dave Weinberger.
The Net’s super-power is connection without permission. Its almighty power is that we can make of it whatever we want.
It’s quite lawn-off-getty …but I also happen to agree with pretty much all of it.
Although it’s kind of weird that it’s published on somebody else’s website.
A cheap’n’cheerful way of monitoring uptime for domains.
Smart thinking here on the eternal dilemma with loading web fonts. Filament Group have thought about how the initial experience of the first page load could be quite different to subsequent page loads.
Thursday, January 8th, 2015
Brad’s writing a book.
Insert take-my-money.gif here.
An important clarification from Stephen:
You don’t actually design in the browser
When I speak of designing in the browser, I mean creating browser-based design mockups/comps (I use the terms interchangeably), as opposed to static comps (like the PSDs we’re all used to). So it’s not the design. It’s the visualization of the design—the one you present to stakeholders.
Personally, I think it’s as crazy to start in the browser as it is to start with Photoshop—both have worldviews and constraints that will affect your thinking. Start with paper.
After a morning of dark torrential downpour, the sun has come out, shining on the gleaming rain-washed streets to create a powerful glare.
Wednesday, January 7th, 2015
There are some good points here comparing HTTP2 and SPDY, but I’m mostly linking to this because of the three wonderful opening paragraphs:
A very long time ago—in 1989—Ronald Reagan was president, albeit only for the final 19½ days of his term. And before 1989 was over Taylor Swift had been born, and Andrei Sakharov and Samuel Beckett had died.
In the long run, the most memorable event of 1989 will probably be that Tim Berners-Lee hacked up the HTTP protocol and named the result the “World Wide Web.” (One remarkable property of this name is that the abbreviation “WWW” has twice as many syllables and takes longer to pronounce.)
Tim’s HTTP protocol ran on 10Mbit/s, Ethernet, and coax cables, and his computer was a NeXT Cube with a 25-MHz clock frequency. Twenty-six years later, my laptop CPU is a hundred times faster and has a thousand times as much RAM as Tim’s machine had, but the HTTP protocol is still the same.
People working in open-plan offices need to develop the ability—à la City And The City—to not see what is on their co-workers’ screens.
Events in 2015
Quite a significant chunk of my time last year was spent organising dConstruct 2014. The final result was worth it, but it really took it out of me. It got kind of stressful there for a while: ticket sales weren’t going as well as previous years, so I had to dip my toes into the world of… (shudder) marketing.
That was my third year organising dConstruct, and I’m immensely proud of all three events. dConstruct 2012—also known as “the one with James Burke”—remains a highlight of my life. But—especially after the particularly draining 2014 event—I’m going to pass on organising it this year.
To be honest, I think that dConstruct 2014, the tenth one, could stand as a perfectly fine final event. It’s not like it needs to run forever, right?
Andy has been pondering this very question, but he’s up for giving dConstruct at least one more go in 2015:
As we prepare for our tenth anniversary, we’ve also been asking whether it should be our last—at least for a while. The jury is still out, and we probably won’t make any decisions till after the event.
Y’know, it could turn out that dConstruct in 2015 might reinvigorate my energy, but for now, I’m just too burned out to contemplate taking it on myself. Anyway, I know that the other Clearlefties are more than capable of putting together a fantastic event.
But dConstruct wasn’t the only event I organised last year. 2014’s Responsive Day Out was a wonderful event, and much less stressful to organise. That’s mostly because it’s a very different beast to dConstruct; much looser, smaller, and easy-going, with fewer expectations. That makes for a fun day out all ‘round.
I wasn’t even sure if there was going to be a second Responsive Day Out, but I’m really glad we did it. In fact, I think there’s room for one last go.
I’ve already started putting a line-up together (and I’m squeeing with excitement about it already!), and this will definitely be the last Responsive Day Out, but keep your calendar clear on Friday, June 19th for…
I’m always surprised to find that working web developers often don’t know (or care) about basic protocol-level stuff like when to use GET and when to use POST.
My point is that a lot of web developers today are completely ignorant of the protocol that is the basis for their job. A core understanding of HTTP should be a base requirement for working in this business.
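To make the distinction concrete, the simplest place it shows up is the method attribute on a form: GET for safe, repeatable retrieval; POST for anything that changes state on the server. A minimal sketch (the URLs are made up for illustration):

```html
<!-- GET: safe and idempotent. Fetching results can be repeated,
     bookmarked, cached, and shared without side effects. -->
<form action="/search" method="get">
  <input type="search" name="q">
  <button>Search</button>
</form>

<!-- POST: changes state on the server, so it shouldn't be
     repeated or cached blindly. -->
<form action="/comments" method="post">
  <textarea name="comment"></textarea>
  <button>Post comment</button>
</form>
```

That’s why resubmitting a POSTed form triggers a browser warning, while refreshing a search results page doesn’t.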
But as people spend more time on their mobile devices and in their apps, their Internet has taken a step backward, becoming more isolated, more disorganized and ultimately harder to use — more like the web before search engines.
Dan has started writing up what he did on his Summer hols …on a container ship travelling to China.
It is, of course, in the form of an email newsletter because that’s what all the cool kids are doing these days.
This is a great step-by-step walkthrough from Rey on setting up a remote version of Internet Explorer for testing on a Mac.
Tuesday, January 6th, 2015
Tim Berners-Lee is quite rightly worried about linkrot:
The disappearance of web material and the rotting of links is itself a major problem.
He brings up an interesting point that I hadn’t fully considered: as more and more sites migrate from HTTP to HTTPS (A Good Thing), and the W3C encourages this move, isn’t there a danger of creating even more linkrot?
…perhaps doing more damage to the web than any other change in its history.
I think that may be a bit overstated. As many others point out, almost all sites making the switch are conscientious about maintaining redirects with a 301 status code.
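Done properly, the switch is a one-line rule at the server level. Here’s a minimal sketch, assuming nginx (the hostname is a placeholder):

```nginx
# Redirect all plain-HTTP requests to their HTTPS equivalents
# with a permanent (301) status, preserving the path and query string.
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
```

With that in place, old HTTP links keep working; they just get bounced to the secure version.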
Anyway, the discussion does bring up some interesting points. Transport Layer Security is something that’s handled between the browser and the server—does it really need to be visible in the protocol portion of the URL? Or is that visibility a positive attribute that makes it clear that the URL is “good”?
And as more sites move to HTTPS, should browsers change their default behaviour? Right now, typing “example.com” into a browser’s address bar will cause it to automatically expand to http://example.com …shouldn’t browsers look for https://example.com first?
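Sketched as code, that https-first behaviour might look something like this. This is a toy illustration, not how any browser actually implements it:

```python
def candidate_urls(host: str) -> list[str]:
    """Expand a bare hostname the way a secure-by-default browser
    might: try the HTTPS URL first, then fall back to plain HTTP."""
    # If a scheme is already present, leave the URL alone.
    if "://" in host:
        return [host]
    return [f"https://{host}", f"http://{host}"]
```

A browser following this logic would only ever show the user the insecure version if the secure one failed to respond.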
All good food for thought.
There’s a Google Doc out there with some advice for migrating to HTTPS. Unfortunately, the trickiest part—getting and installing certificates—is currently an owl-drawing tutorial, but hopefully it will get expanded.
If you’re looking for even more reasons why enabling TLS for your site is a good idea, look no further than the latest shenanigans from ISPs in the UK (we lost the battle for net neutrality in this country some time ago).
BT just inserted a popup into someone’s site, encouraging me to switch on content filtering. That is Very Not Cool. pic.twitter.com/QMnLRawsNW — David Thompson (@fatbusinessman), December 30, 2014
They can’t do that to pages served over HTTPS.
Scrub, scrub, scrub.
Scrub is in the air.
Will you still scrub me tomorrow?
You’ve lost that scrubbing feeling.
This time, it’s not so much the launch …it’s the landing!
Monday, January 5th, 2015
That’s how I roll.
You can now read Aaron’s excellent book online. I highly recommend reading the first chapter for one of the best descriptions of progressive enhancement that I’ve ever read.
Sunday, January 4th, 2015
Like an Enid Blyton adventure for the 21st century, James goes out into the country and explores the networks of microwave transmitters enabling high-frequency trading.
If you think that London’s skyscraper boom is impressive – the Shard, the Walkie-Talkie, the Cheesegrater, the Gherkin – go to Slough. It is not height that matters, but bandwidth.
Matt wrote a great article called Ten Years of Podcasting: Fighting Human Nature (although I’m not entirely sure why he put it on Ev’s site instead of—or in addition to—his own). It’s a look back at the history of podcasting, and how it has grown out of its nerdy origins to become more of a mainstream activity. In it, he kindly gives a shout-out to Huffduffer:
…a way to make piecemeal meta-podcasts on the fly built up from random shows (here’s my feed).
If you use the iOS app Workflow, there’s a nifty tutorial for extracting the audio from YouTube videos, posting the audio to Dropbox, and subscribing in Huffduffer. I’m letting the side down somewhat though: Huffduffer’s API is currently read-only, but it would be so much more powerful if you could post from other apps. I need to wrap my head around OAuth to do this. I was hoping to do an OwnYourGram-style API with IndieAuth and micropub (once your Huffduffer profile has your website URL, and that URL has rel="me" links to OAuth providers like Twitter, Flickr, or Github, all the pieces should be in place), but alas IndieAuth only works on a domain or subdomain basis, so /username URLs are out.
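For what it’s worth, the rel="me" part is just a matter of marking up the links on your own site. Something along these lines (the usernames are placeholders):

```html
<!-- On your home page: links to profiles that link back here,
     forming the two-way verification that IndieAuth relies on. -->
<a href="https://twitter.com/username" rel="me">Twitter</a>
<a href="https://www.flickr.com/people/username" rel="me">Flickr</a>
<a href="https://github.com/username" rel="me">GitHub</a>
```

The missing piece, as described above, is that the verified identity needs to be a whole domain rather than a path on someone else’s.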
Anyway, back to Matt’s article about podcasting. He writes:
Personally, I like it when new podcasts use Soundcloud for their hosting, because on a desktop computer it means I can easily dip into their archives and play random episodes, scrub to certain segments and get a feel for the show before I subscribe.
It’s true that if you’re sitting in front of a desktop computer, Soundcloud is a great way to listen to an audio file there and then. But it’s a lousy way to host a podcast.
The whole point of podcasting is that it’s time-shifted. You get to listen to the audio you want, when you want. The whole point of Soundcloud is that you listen to audio then and there. That’s great if you’re a musician, looking to make sure that people can’t make copies of your music, but it’s terrible if you’re a podcaster.
To be fair, Soundcloud’s primary audience is still musicians, rather than podcasters, so it makes sense for them to prioritise that use-case. But still, they really go out of their way to obfuscate the actual audio file. Even if the publisher has checked the right box to allow users to download the audio file, the result is a very literal interpretation of that: you can download the file, but you can’t copy the URL and paste it into, say, an app for listening later (and you certainly can’t huffduff it).
Case in point: Matt finishes his article with:
If you don’t have time to read the above, it’s available as a 14min audio file…
That audio file is hosted on Soundcloud. You can listen to it there, or you can listen to it through the embedded player on the article itself. But that’s it. You can’t take it with you. You can’t listen to it later. You can’t, for example, listen to it in your car, even though as Matt says:
…for most Americans, killing time listening to podcasts in a car is a great place.
If you can figure out a way to get at Matt’s audio file (and maybe even huffduff it), I’d be much obliged.
Like Merlin says:
If the episode page for your podcast doesn’t have a clear link to a downloadable audio file? That ain’t no podcast. [mic drop] — Merlin Mann (@hotdogsladies), December 23, 2014
Saturday, January 3rd, 2015
Friday, January 2nd, 2015
Good advice from Chris, particularly if you’re the one who has to live with the CSS you write.
As Obi-Wan Kenobi once said, “You must do what you feel is right, of course.”
A short profile of Michael Moorcock’s Elric series (though, for me, Jerry Cornelius is the champion that remains eternal in my memory).
Kenneth has isolated Chrome’s dev tools into its own app. This is a big step towards this goal:
Why are DevTools still bundled with the browsers? What if clicking “inspect element” simply started an external DevTools app?
With DevTools separated from one specific browser, a natural next step would be making the DevTools app work with other browsers.
How to get Yosemite to display five-digit years. It’s a bit of a hack, but we’ve got another 7,985 years to figure out a better solution.
Thursday, January 1st, 2015
Happy new year!
Happy new day!