Saturday, April 12th, 2014
Many people are—quite rightly, in my opinion—upset about the prospect of DRM landing in the W3C HTML specification at the behest of media companies like Netflix and the MPAA.
This would mean that a web browser would have to include support for the plugin-like architecture of Encrypted Media Extensions if it wants to claim standards compliance.
A common rebuttal to any concerns about this is that any such concerns are hypocritical. After all, we’re quite happy to use other technologies—Apple TV, Silverlight, etc.—that have DRM baked in.
I think that this rebuttal is a crock of shit.
It is precisely because other technologies are locked down that it’s important to keep the web open.
I own an Apple TV. I use it to watch Netflix. So I’m using DRM-encumbered technologies all the time. But I will fight tooth and nail to keep DRM out of web browsers. That’s not hypocrisy. That’s a quarantine measure.
Stuart summarises the current situation nicely:
From what I’ve seen, this is a discussion of pragmatism: given that DRM exists and movies use it and people want movies, is it a good idea to integrate DRM movie playback more tightly with the web?
His conclusion perfectly encapsulates why I watch Netflix on my Apple TV and I don’t want DRM on the web:
The argument has been made that if the web doesn’t embrace this stuff, people won’t stop watching videos: they’ll just go somewhere other than the web to get them, and that is a correct argument. But what is the point in bringing people to the web to watch their videos, if in order to do so the web becomes platform-specific and unopen and balkanised?
As an addendum, I heard a similar “you’re being a hypocrite” argument when I raised security concerns about EME at the last TAG meetup in London:
I tried to steer things away from the ethical questions and back to the technical side of things by voicing my concerns with the security model of EME. Reading the excellent description by Henri, sentences like this should give you the heebie-jeebies:
Alex told me that my phone already runs code that I cannot inspect and does things that I have no control over. So hey, what does it matter if my web browser does the same thing, right?
I’m reminded of something that Anne wrote four years ago when a vulnerability was discovered that affected Flash, Java, and web browsers:
We have higher standards for browsers.
Monday, April 7th, 2014
Flickr Commons is a wonderful thing. That’s why I’m concerned:
Y’know, I’m worried about what will happen to my own photos when Flickr inevitably goes down the tubes (there are still some good people there fighting the good fight, but they’re in the minority and they’re battling against the douchiest of Silicon Valley managerial types who have been brought in to increase “engagement” by stripping away everything that makes Flickr special) …but what really worries me is what’s going to happen to Flickr Commons. It’s an unbelievably important and valuable resource.
The Brooklyn Museum is taking pre-emptive measures:
As of today, we have left Flickr (including The Commons).
Unfortunately, they didn’t just leave their Flickr collection; they razed it to the ground. All those links, all those comments, and all those annotations have been wiped out.
They’ve moved their images over to Wikimedia Commons …for now. It turns out that they have a very cavalier attitude towards online storage (a worrying trait for a museum). They’re jumping out of the frying pan of Flickr and into the fire of Tumblr:
In the past few months, we’ve been testing Tumblr and it’s been a much better channel for this type of content.
Audio and video are being moved around to where the eyeballs and earholes currently are:
We have left iTunesU in favor of sharing content via YouTube and SoundCloud.
I find this quite disturbing. A museum should be exactly the kind of institution that should be taking a thoughtful, considered approach to how it stores content online. Digital preservation should be at the heart of its activities. Instead, it takes a back seat to chasing the fleeting thrill of “engagement.”
Leaving Flickr Commons could have been the perfect opportunity to invest in long-term self-hosting. Instead they’re abandoning the Titanic by hitching a ride on the Hindenburg.
There’ll be another Connections event this month, following on from the excellent inaugural humdinger. Save the date: Wednesday, April 23rd at 7pm in the delightful surroundings of 68 Middle Street.
There’s one obvious connection between the two speakers this time ’round: their first names are homophones.
We’ve got Leigh Taylor of Medium and Gravita fame. He’ll be talking about this holacracy stuff that people have been banging on about lately, and what it takes to actually make a creative company work in a decentralised way.
We’ve also got Lee Bryant, an ol’ pal of mine from way back who recently launched POST*SHIFT. He too will be talking about flexible organisational structures.
Should be good brain-tickling fun. You can secure your place at the event now. It’s free. But the usual warning applies: if you can’t make it, be sure to cancel your ticket—if you book a place and then don’t show up, you will be persona non grata for any future Connections.
See you in two weeks’ time.
Friday, March 28th, 2014
All the videos from last year’s dConstruct have been posted on Vimeo (with a backup on the Internet Archive). If you were there, you can re-live the fun all over again. And if you weren’t there, you can see just what you missed:
- Amber Case
- Luke Wroblewski
- Nicole Sullivan
- Simone Rebaudengo
- Sarah Angliss
- Keren Elazari
- Maciej Cegłowski
- Dan Williams
- Adam Buxton
Don’t forget the audio is also available for your listening pleasure. Slap the RSS feed into the podcasting application of your choosing.
Revisiting the brilliance of last year’s dConstruct should get you in the mood for this year’s event. Put the date in your calendar: Friday, September 5th. Last year was all about Communicating With Machines. This year will be all about Living With The Network.
More details will be unveiled soon (he said, hoping to cultivate a feeling of mystery and invoke a sense of anticipation).
Thursday, March 27th, 2014
Cennydd wrote a really good post recently called Why don’t designers take Android seriously?
I completely agree with his assessment that far too many developers are ignoring or dismissing Android for two distasteful reasons:
- Android is difficult
- User behaviours are different:
Put uncharitably, the root issue is “Android users are poor”.
But before that, Cennydd compares the future trajectories of other platforms and finds them wanting in comparison to Android: Windows, iOS, …the web.
On that last comparison, I (unsurprisingly) disagree. But it’s not because I think the web is a superior platform; it’s because I don’t think the web is a platform at all.
I wrote about this last month:
The web is not a platform. It’s a continuum.
I think it’s a category error to compare the web to Android or Windows or iOS. It’s like comparing Coca-Cola, Pepsi, and liquid. The web is something that permeates the platforms. From one point of view, this appears to make the web less than the operating system that someone happens to be using to access it. But in the same way that a chicken is an egg’s way of reproducing and a scientist is the universe’s way of observing itself, an operating system is the web’s way of providing access to itself.
Wait a minute, though …Cennydd didn’t actually compare Android to the web. He compared Android to the web browser. Like I’ve said before:
We talk about “the browser” when we should be talking about the browsers. I’m guilty of this. I’ll use phrases like “designing in the browser” or talk about “what we can do in the browser”, when really I should be talking about designing in the browsers and what we can do in the browsers.
But Cennydd’s comparison does raise an interesting question: what is a web browser exactly? Answering that question probably requires an answer to the question: what is the web?
(At this point you might be thinking, “Ah, this is just semantics!” and you’d be right. Abandon ship here if you feel that way. But to describe something as “just semantics” is like pointing at all the written works in every library and saying “but they’re just words”, or taking in the entire trajectory of human civilisation and saying “but those are just ideas”. So yeah, this is “just” semantics.)
But to be honest, I don’t think that the Hypertext Transfer Protocol is the important part of the web; it’s the URLs that really matter. It’s the addressability of the files that’s the killer app of the web in my opinion.
I was re-reading Weaving The Web and in that book, Tim Berners-Lee describes his surprise when people started using HTML to mark up their content. He expected HTML to be used for indices that would point to the URLs of the actual content, which could be in any file format (PDF, word-processing documents, or whatever). It turned out that HTML had just enough expressiveness and grokability to be used instead of those other formats.
Perhaps then, a web browser is something that can access URLs. Certainly in pretty much every example of a web browser throughout the web’s history, the URL has been front and centre: if the web were a platform, the URL bar would be its command line.
But, like the rise of HTML, the visibility of the URL in a web browser is an accident of history. It was added almost as an afterthought as a power-user feature: why would most people care what the URL of the content happens to be? It’s the content itself that matters, and you’d get to that content not by typing URLs, but by following hyperlinks.
There’s an argument to be made that, with the rise of search engines, the visibility of URLs has become less important. See, for example, the way that every advertisement for a website on the Tokyo subway doesn’t show a URL; it shows what to type into a search engine instead (and I’ve started seeing this in some TV adverts here in the UK too).
So a web browser that doesn’t expose the URLs of what it’s rendering is still a web browser.
Instagram’s native app is a web browser.
Facebook’s native app is a web browser.
Twitter’s native app is a web browser.
Like Paul said:
Monolithic browsers are not the only User Agent.
I was initially confused when Anna tweeted:
Reading the responses to @Cennydd’s tweet about designers needing to pay attention to Android. The web is fragmented. That’s our job.
I understood Cennydd’s point to be about native apps, not the web. But if, as I’ve just said, many native apps are in fact web browsers, does that mean that making native apps is a form of web development?
I don’t think so. I think making a native app has much more in common with making a web browser than it does with making a web site/app/thang. Certainly the work that Clearleft has done in this area felt that way: the Channel 4 News app is a browser for Channel 4 News; the Evo iPad app is a browser for Evo.
So if your job involves making browsers like those, then yes, you absolutely should be paying more attention to Android, for all the reasons that Cennydd suggests.
But if, like me, you have zero interest in making browsers—whether it’s a browser for Android, iOS, OS X, Windows, Blackberry, Linux, or NeXT—you should still be paying attention to Android because it’s just one of the many ways that people will be accessing the web.
It’s all too easy for us to fall into the trap of thinking that people will only be using traditional monolithic web browsers to access what we build. The truth is that our work will be accessed on the desktop, on mobile, and on tablets, but also on watches, on televisions, sure, even on fridges, and on platforms that may not even have screens.
It’s certainly worth remembering that what you make will be viewed in the context of an artisanal browser. Like Jen says:
The “native apps are better” argument ignores the fact one of the most popular things to do in apps is read the web.
But just because we know that our work will be accessed on a whole range of devices and platforms doesn’t mean that we should optimise for those specific devices and platforms. That just won’t scale. The only sane future-friendly approach is to take a device-agnostic, platform-agnostic approach and deliver something that’s robust enough to work in this stunningly-wide range of browsers and user-agents (hint: progressive enhancement is your friend).
I completely agree with Cennydd: I think that ignoring Android is narrow-minded, blinkered and foolish …but I feel the same way about ignoring Windows, Blackberry, Nokia, or the Playstation. I also think it would be foolish to focus on any one of those platforms at the expense of others.
I love the fact that the web can be accessed on so many platforms and devices by so many different kinds of browsers. I only wish there were more: more operating systems, more kinds of devices, more browsers. Any platform that allows more people to access the web is good with me. That’s why I, like Cennydd, welcome the rise of Android.
Stop seeing fragmentation. Start seeing diversity.
Monday, March 24th, 2014
Tickets for Responsive Day Out 2 go on sale at noon tomorrow. Like I said, it was extremely popular last year and sold out very quickly. I don’t know if that’s going to happen again this year, but if you’re thinking about grabbing a ticket, I wouldn’t dawdle too much if I were you.
There’s a new addition to the line-up: Yaili is going to talk about the ongoing responsive work going on at Ubuntu.com—I’m really looking forward to hearing about that.
Then again, I’m really looking forward to hearing from all the speakers. It’s going to be like Christmas is coming early; a responsive, jam-packed Christmas.
Here’s the ticket page if you want to get in there the moment tickets go on sale. It’s not live yet, but at the stroke of midday you can secure your place.
Sunday, March 23rd, 2014
I went up to London for the Edge Conference on Friday. It’s not your typical conference. Instead of talks, there are panels, but not the crap kind, where nobody says anything of interest: these panels are ruthlessly curated and prepared. There’s lots of audience interaction too, but again, not the crap kind, where one or two people dominate the discussion with their own pet topics: questions are submitted ahead of time, and then you are called upon to ask yours at the right moment. It’s like Question Time for the web.
The first panel was on that hottest of topics: Web Components. Peter Gasston kicked it off with a superb introduction to the subject. Have a read of his equally-excellent article in Smashing Magazine to get the gist.
Needless to say, this panel covered similar ground to the TAG meetup I attended a little while back, and left me with similar feelings: I’m equal parts excited and nervous; optimistic and worried. If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once. And I think that’ll be (mostly) okay.
I butted into the discussion when the topic of accessibility came up. I was a little worried about what I was hearing, which was mainly, “Oh, ARIA takes care of the accessibility.” I felt like Web Components were passing the buck to ARIA, which would be fine if it weren’t for the fact that ARIA can’t cover all the possible use-cases of Web Components.
I chatted about this with Derek and Nicole during the break, but I’m not sure if I was articulating my thoughts very well, so I’ll have another stab at it here:
Let me set the scene for Web Components…
Historically, HTML has had a limited vocabulary for expressing interface widgets—mostly a bunch of specialised form fields like, say, the select element. The plus side is that there’s a consensus of understanding among the browsers, so you don’t have to explain what a select element does; the browsers already know. The downside is that whenever we want to add a new interface element like input type="range", it takes time to get into browsers and through the standards process. Web Components allow you to conjure up interface elements, and you don’t have to lobby browser makers or standards groups in order to make browsers understand your newly-minted element: you provide all the behavioural and styling instructions in one bundle.

But there’s a gap: assistive technology. A screen reader can interpret a select element because the browser knows what it is and can expose that knowledge to the assistive technology. If we’re going to start making up our own interface elements, we now have to take on the responsibility of providing that information to assistive technology.
That’s not a criticism of ARIA: that’s the way it was designed. It’s a reactionary technology, designed to plug the gaps where the native semantics of HTML just don’t cut it. The vocabulary of ARIA was created by looking at the kinds of interface elements people are making—tabs, sliders, and so on. That’s fine, but it can’t scale to keep pace with Web Components.
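To make that concrete, here’s a hypothetical hand-rolled tab widget (the element name and IDs are invented for illustration); it only works for assistive technology because “tab” happens to be in ARIA’s predefined vocabulary:

```html
<!-- A made-up element: the browser has no built-in understanding of it... -->
<tab-strip>
  <!-- ...so ARIA’s predefined roles have to supply the semantics -->
  <ul role="tablist">
    <li role="tab" aria-selected="true" id="tab-one">First</li>
    <li role="tab" aria-selected="false" id="tab-two">Second</li>
  </ul>
  <section role="tabpanel" aria-labelledby="tab-one">…</section>
</tab-strip>
```

A widget that ARIA’s authors never anticipated has no role to reach for: that’s the scaling problem.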
The problem that Web Components solve—the fact that it currently takes too long to get a new interface element into browsers—doesn’t have a corresponding solution when it comes to accessibility hooks. Just adding more and more predefined ARIA roles won’t cut it—we need some kind of extensible accessibility that matches the expressive power of Web Components. We don’t need a bigger vocabulary in ARIA, we need a way to define our own vocabulary—an extensible ARIA, if you will.
Hmmm… I’m still not sure I’m explaining myself very well.
Anyway, I just want to make sure that accessibility doesn’t get left behind (again!) in our rush to create a new solution to our current problems. With Web Components still in their infancy, this feels like the right time to raise these concerns.
That highlights another issue, one that Nicole picked up on. It’s really important that the extensible web community and the accessibility community talk to each other.
Frankly, the accessibility community can be its own worst enemy sometimes. So don’t get me wrong: I’m not bringing up my concerns about the accessibility of Web Components in order to cry “fail!”—I just want to make sure that it’s on the table (and I’m glad that Alex is one of the people driving Web Components—his history with Dojo reassures me that we can push the boundaries of interface widgets on the web without leaving accessibility behind).
Anyway …that’s enough about that. I haven’t mentioned all the other great discussions that took place at Edge Conference.
The Web Components panel was followed by a panel on developer tools. This was dominated by representatives from different browsers, each touting their own set of in-browser tools. But the person who I really wanted to rally behind was Kenneth Auchenberg. He quite rightly asks why our developer tools and our text editors are two different apps. And rather than try to put text editors into developer tools, what we really want is to pull developer tools into our text editors …all the developer tools from all the browsers, not just one set of developer tools from one specific browser.
If you haven’t seen Kenneth’s presentation from Full Frontal, I urge you to watch it or listen to it.
I had my hand up to jump into the discussion towards the end, but time ran out so I didn’t get a chance. Paul came over afterwards and asked what I was going to say. Here’s what I told him…
I’m fascinated by the social dynamics around how browsers get made. This is an area where different companies are simultaneously collaborating and competing.
Broadly speaking, the feature set of a web browser can be divided into two buckets:

In one bucket, you’ve got the stuff that browsers collaborate on: support for web standards like HTML, CSS, and JavaScript APIs, where interoperability is the whole point.

In the other bucket, you’ve got all the stuff that browsers compete against each other with: speed, security, the user interface, etc. A lot of this takes place behind closed doors, and that’s fine. There’s no real need for browser makers to collaborate on this stuff, and it could even hurt their competitive advantage if they did collaborate.
But here’s the problem: developer tools seem to be coming out of that second bucket instead of the first. There doesn’t seem to be much communication between the browser makers on developer tools. That’s fine if you see developer tools as an opportunity for competition, but it’s lousy if you see developer tools as an opportunity for interoperability.
This is why Kenneth’s work is so important. He’s crying out for more interoperability between browsers when it comes to developer tools. Why can’t they all use the same low-level APIs under the hood? Then they can still compete on how pretty their dev tools look, without making life miserable for developers who want to move quickly between browsers.
As painful as it might be, I think that browser makers should get together in some semi-formalised way to standardise this stuff. I don’t think that the W3C or the WHATWG are necessarily the right places for this kind of standardisation, but any kind of official cooperation would be good.
The panel on build processes for front-end development kicked off with Gareth saying a few words. Some of those words included the sentence:
Make is probably older than you.
Cue glares from me and Scott.
Gareth also said that making websites means making software. We’re all making software—live with it.
This made me nervous. I’ve always felt that one of the great strengths of the web has been its low barrier to entry. The idea of a web that can only be made by qualified software developers doesn’t sound like a good thing to me.
Fortunately, things got cleared up later on. Somebody else asked a question about whether the barrier to entry was being raised by the complexity of tools like preprocessors, compilers, and transpilers. The consensus of the panel was that these are power tools for power users. So if someone were learning to make a website from scratch, you wouldn’t start them off with, say, Sass before they’d learned CSS.
It was a fun panel, made particularly enjoyable by the presence of Kyle Simpson. I like the cut of his jib. Alas, I didn’t get the chance to tell him that in person. I had to duck out of the afternoon’s panels to get back to Brighton due to unforeseen family circumstances. But I did manage to catch some of the later panels on the live stream.
A common thread I noticed amongst many of the panels was a strong bias for decentralisation, rather than collaboration. That was most evident with Web Components—the whole point is that you can make up your own particular solution rather than waiting for a standards body. But it was also evident in the Developer Tools line-up, where each browser maker is reinventing the same wheels. And when it came to Build Process, it struck me that everyone is scratching their own itch instead of getting together to work on a common solution.
There’s nothing wrong with that kind of Darwinian approach to solving our problems, but it does seem a bit wasteful. Mairead Buchan was at Edge Conference too and she noticed the same trend. Sounds like she’s going to do something about it too.
Monday, March 17th, 2014
The World Wide Web turned 25 last week. Happy birthday!
As is so often the case when web history is being discussed, there is much conflating of “the web” and “the internet” in some mainstream media outlets. The internet—the network of networks that allows computers to talk to each other across the globe—is older than 25 years. The web—a messy collection of HTML files linked via URLs and delivered with the Hypertext Transfer Protocol (HTTP)—is just one of the many types of information services that uses the pipes of the internet (yes, pipes …or tubes, if you prefer—anything but “cloud”).
Now, some will counter that although the internet and the web are technically different things, for most people they are practically the same, because the web is by far the most common use-case for the internet in everyday life. But I’m not so sure that’s true. Email is a massive part of the everyday life of many people—for some poor souls, email usage outweighs web usage. Then there’s streaming video services like Netflix, and voice-over-IP services like Skype. These sorts of proprietary protocols make up an enormous chunk of the internet’s traffic.
The reason I’m making this pedantic distinction is that there’s been a lot of talk in the past year about keeping the web open. I’m certainly in agreement on that front. But if you dig deeper, it turns out that most of the attack vectors are at the level of the internet, not the web.
Net neutrality is hugely important for the web …but it’s hugely important for every other kind of traffic on the internet too.
The Snowden revelations have shown just how shockingly damaging the activities of the NSA and GCHQ are …to the internet. But most of the communication protocols they’re intercepting are not web-based. The big exception is SSL, and the fact that they even thought it would be desirable to attempt to break it shows just how badly they need to be stopped—that’s the mindset of a criminal organisation, pure and simple.
So, yes, we are under attack, but let’s be clear about where those attacks are targeted. The internet is under attack, not the web. Not that that’s a very comforting thought; without a free and open internet, there can be no World Wide Web.
But by and large, the web trundles along, making incremental improvements to itself: expanding the vocabulary of HTML, updating the capabilities of HTTP, clarifying the documentation of URLs. Forgive my anthropomorphism. The web, of course, does nothing to itself; people are improving the web. But the web always has been—and always will be—people.
For some time now, my primary concern for the web has centred around what I see as its killer feature—the potential for long-term storage of knowledge. Yes, the web can be (and is) used for real-time planet-spanning communication, but there are plenty of other internet technologies that can do that. But the ability to place a resource at a URL and then to access that same resource at that same URL after many years have passed …that’s astounding!
Using any web browser on any internet-enabled device, you can instantly reach the first web page ever published. 23 years on, it’s still accessible. That really is something special. Digital information is not usually so long-lived.
On the 25th anniversary of the web, I was up in London with the rest of the Clearleft gang. Some of us were lucky enough to get a behind-the-scenes peek at the digital preservation work being done at the British Library:
In a small, unassuming office, entire hard drives, CD-ROMs and floppy disks are archived, with each item meticulously photographed to ensure any handwritten notes are retained. The wonderfully named ‘ancestral computing’ corner of the office contains an array of different computer drives, including 8-inch, 5 1⁄4-inch, and 3 1⁄2-inch floppy disks.
Most of the data that they’re dealing with isn’t much older than the web, but it’s an order of magnitude more difficult to access; trapped in old proprietary word-processing formats, stuck on dying storage media, readable only by specialised hardware.
Standing there looking at how much work it takes to rescue our cultural heritage from its proprietary digital shackles, I was struck once again by the potential power of the web. With such simple components—HTML, HTTP, and URLs—we have the opportunity to take full advantage of the planet-spanning reach of the internet, without sacrificing long-term access.
As long as we don’t screw it up.
Right now, we’re screwing it up all the time. The simplest way that we screw it up is by taking it for granted. Every time we mindlessly repeat the fallacy that “the internet never forgets,” we are screwing it up. Every time we trust some profit-motivated third-party service to be custodian of our writings, our images, our hopes, our fears, our dreams, we are screwing it up.
The evening after the 25th birthday of the web, I was up in London again. I managed to briefly make it along to the 100th edition of Pub Standards. It was a long time coming. In fact, there was a listing on Upcoming.org for the event. The listing was posted on February 5th, 2007.
Of course, you can’t see the original URL of that listing. Upcoming.org was “sunsetted” by Yahoo, the same company that “sunsetted” Geocities in much the same way that the Enola Gay sunsetted Hiroshima. But here’s a copy of that listing.
Fittingly, there was an auction held at Pub Standards 100 in aid of the Internet Archive. The schwag of many a “sunsetted” startup was sold off to the highest bidder. I threw some of my old T-shirts into the ring and managed to raise around £80 for Brewster Kahle’s excellent endeavour. My old Twitter shirt went for a pretty penny.
I was originally planning to bring my old Pownce T-shirt along too. But at the last minute, I decided I couldn’t part with it. The pain is still too fresh. Also, it serves as a nice reminder for me. Trusting any third-party service—even one as lovely as Pownce—inevitably leads to destruction and disappointment.
That’s another killer feature of the web: you don’t need anyone else. You can publish to this world-changing creation without asking anyone for permission. I wish it were easier for people to do this: entrusting your heritage to the Yahoos and Pownces of the world is seductively simple …but only in the short term.
In 25 years time, I want to be able to access these words at this URL. I’m going to work to make that happen.
Tuesday, March 11th, 2014
When we decided to put on last year’s Responsive Day Out, it was a fairly haphazard, spur-of-the-moment affair. Well, when I say “spur of the moment”, I mean there were just three short months between announcing the event and actually doing it. In event-organising terms, that’s flying by the seat of your pants.
The Responsive Day Out was a huge success—just ask anyone who was there. Despite the lack of any of the usual conference comforts (we didn’t even have badges), everyone really enjoyed the whizz-bang, lickety-split format: four blocks of three back-to-back quickfire 20 minute talks, with each block wrapped up with a short discussion. And the talks were superb …really superb.
It was always intended as a one-off event. But I was surprised by how often people asked when the next one would be, either because they were there and loved it, or because they missed out on getting a ticket but heard how great it was. For a while, I was waving off those questions, saying that we had no plans for another Responsive Day Out. I figured that we had covered quite a lot in that one day, and now we should just be getting on with building the responsive web, right?
But then I started to notice how many companies were only beginning to make the switch to working responsively within the past year. It’s like the floodgates have opened. I’ve been going into companies and doing workshops where I’ve found myself thinking time and time again that these people could really benefit from an event like the Responsive Day Out.
Slowly but surely, the thought of having another Responsive Day Out grew and grew in my mind.
So let’s do this.
On Friday, June 27th, come on down to Brighton for Responsive Day Out 2: Elastic Bugaloo. It will be bloody brilliant.
The format will be mostly the same as last year, with one big change: one of the day’s slots won’t feature three quick back-to-back talks. Instead it will be a keynote presentation by none other than the Responitor himself, Duke Ethan of Marcotte.
There are some other differences from last year. Whereas last year’s speakers all came from within the borders of the UK, this year I’ve invited some supremely talented people from other parts of Europe. You can expect mind-expanding knowledge bombs on workflow, process, front-end technologies, and some case studies.
If you check out the website, you’ll see just some of the speakers I’ve got lined up for you: Stephanie, Rachel, Stephen …but that’s not the full line-up. I’m still gathering together the last few pieces of the day’s puzzle. But I’ve got to say, I’m already ridiculously excited to hear what everyone has to say.
The expanded scope of the line-up means that the ticket price is a bit more this year—last year’s event was laughably cheap—but it’s still a ridiculously low price: just £80 plus VAT, bringing it to a grand total of just £96 all in. That’s unheard of for a line-up of this calibre.
I’m planning to put tickets on sale two weeks from today, on March 25th. Last year’s Responsive Day Out was insanely popular and sold out almost immediately. Make sure you grab your ticket straight away.
To get in the mood, you might want to listen to the podcast or watch the videos from last year.
See you in Brighton on June 27th. This is going to be fun!
(By the way, if your company fancies dropping a few grand to sponsor an after-party for Responsive Day Out 2: The Revenge, let me know. The low-cost, no-frills approach means that right now, there’s no after-party planned, but if your company threw one, they would earn the undying gratitude of hundreds of geeks.)
Friday, March 7th, 2014
When I was talking about Async, Ajax, and animation, I mentioned the little trick I’ve used of generating a progress element to indicate to the user that an Ajax request is underway.
I sometimes use the same technique even if Ajax isn’t involved. When a form is being submitted, I find it’s often good to provide explicit, immediate feedback that the submission is underway. Sure, the browser will do its own thing but a browser doesn’t differentiate between showing that a regular link has been clicked, and showing that all those important details you just entered into a form are on their way.
When the form is submitted, a progress element is inserted at the end of the form …which is usually right by the submit button that the user will have just pressed.

While I’m at it, I also set a variable to indicate that a POST submission is underway. So even if the user clicks on that submit button multiple times, only one request is sent.
You’ll notice that I’m attaching an event to each form element, rather than using event delegation to listen for a click event on the parent document and then figuring out whether that click event was triggered by a submit button. Usually I’m a big fan of event delegation but in this case, it’s important that the event I’m listening to is the submit event. A form won’t fire that event unless the data is truly winging its way to the server. That means you can do all the client-side validation you want—making good use of the required attribute where appropriate—safe in the knowledge that the progress element won’t be generated until the form has passed its validation checks.
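Here’s a minimal sketch of the pattern (the function name and structure are my own simplification, not the exact original code):

```javascript
// Listen for each form's submit event, generate a progress element as
// immediate feedback, and use a flag so that repeated clicks on the
// submit button only ever send one request.
function enhanceForm(form, doc) {
  var submitting = false;
  form.addEventListener('submit', function (event) {
    if (submitting) {
      // A submission is already underway: swallow the repeat.
      event.preventDefault();
      return;
    }
    submitting = true;
    // Insert the progress element at the end of the form,
    // right by the submit button the user just pressed.
    var progress = doc.createElement('progress');
    progress.textContent = 'Sending…'; // fallback text for old browsers
    form.appendChild(progress);
  });
}

// In a browser, enhance every form on the page.
if (typeof document !== 'undefined') {
  var forms = document.querySelectorAll('form');
  for (var i = 0; i < forms.length; i++) {
    enhanceForm(forms[i], document);
  }
}
```

Because the listener is on the submit event rather than a click event, the progress element only appears once client-side validation has passed and the request is genuinely on its way.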
If you like this particular pattern, feel free to use the code. Better yet, improve upon it.