
Peeling and eating shrimp.
Silhouette
Clouds, reflected.
Dune
A fascinating tale of mistaken identity with one of Lyza’s photos.
This is hilarious …for about two dozen people.
For everyone else, it’s as opaque as the rest of the standardisation process.
Sea oats
Standing in the surf.
Silver light
Tantek shares a fascinating history lesson from Tim Berners-Lee on how the IETF had him change his original nomenclature of UDI—Universal Document Identifier—to what we now use today: URL—Uniform Resource Locator.
On the beach
Avocado margarita.
The cool side of the temple.
Bye, bye, Epcot.
Geodesic
Buckyball
Dystopia
Stay tuned …we’re building your future.
It’s the future. Take it.
Portal
Drinking around the world.
Travelling to Buckminster Fuller’s world of tomorrow.
Spaceship Earth!
I just saw my first ever rocket launch (from a distance).
It was awesome.
“MVC stands for Maybe Viewable Content.” — @ScottJehl
Listening to @ScottJehl talk about web thangs at An Event Apart.
I approve.
45 years ago, 01969-10-29T22:30, the first ARPANET message was sent.
The message: “lo”
(supposed to be “login” but the system crashed)
I don’t tend to be a “magic pill” kind of believer, but I can honestly say that embracing progressive enhancement can radically change your business for the better. And I’m glad to see Google agrees with me.
HTML5 is now a W3C recommendation. Here’s what a bunch of people—myself included—have to say about that.
The Redshirt Wedding.
Luau, luau, oh yeah, me gotta go.
Raising a glass to HTML 5 becoming a W3C recommendation.
The first flavour of markup to do so in 15 years.
“Mickey, don’t divorce Minnie just because she’s a little crazy.”
“I didn’t say she was a little crazy; I said she was fucking Goofy!”
Cartography porn.
It’s tiki time!
A self-describing list of cursors available through CSS.
Google has updated its advice to people making websites, who might want to have those sites indexed by Google. There are two simple bits of advice: optimise for performance, and use progressive enhancement.
Just like modern browsers, our rendering engine might not support all of the technologies a page uses. Make sure your web design adheres to the principles of progressive enhancement as this helps our systems (and a wider range of browsers) see usable content and basic functionality when certain web design features are not yet supported.
Jaimee is ready to rock Disney World.
This was a fun podcast—myself and Cyd from Code For America talk to Karen and Ethan about how we worked together. Good times.
The audio is available for your huffduffing pleasure.
I’m at Disney World for a special edition of An Event Apart, so this lightning talk from Dan Williams seems appropriate to revisit.
Tomorrowland
This ride’s for @bradfrost.
Spirehead
Liz survived Space Mountain!
Survivors of Space Mountain.
Scott and bershon Ethan.
Setting out to explore an area of Earth that was terraformed during the 20th century (near one of that century’s main spaceports).
Florida, terraformed.
Beer flight
Hello, Magic Kingdom.
Going to Disney World. brb
Having lunch in the BA lounge at Gatwick, admiring @KyleBean’s work on the front of “Business Life” magazine.
Really enjoyed playing music with @SalterCane this afternoon.
The best way to deal with a rainy day.
A warm-hearted short story about a moonshot. By Tom Hanks.
Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development no different to any other kind of software development.
The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.
Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:
Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.
While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it, “the Web is messy”:
And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.
Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.
Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.
Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:
Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.
It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.
Stuart finishes on a stirring note:
The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.
However he wraps up by saying that…
…the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.
In a post called Missed Connections, Aaron pushes back against that last point:
The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.
While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.
Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).
I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.
But here’s the problem with progressively enhancing from server functionality to a rich client:
A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.
Good point.
Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:
It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.
Ah, there’s the rub!
When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?
Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.
But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.
Here’s the harsh truth: building websites with progressive enhancement is not convenient.
Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.
The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.
Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.
Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.
But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?
This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.
For me, this is the most exciting aspect of Node.js:
With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.
Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.
Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.
“We don’t have any non-JavaScript users” No, all your users are non-JS while they’re downloading your JS
— Jake Archibald (@jaffathecake) May 28, 2012
Here’s the ideal situation:
1. A user requests a URL.
2. The server retrieves the relevant data.
3. The server renders that data as HTML and sends the complete page to the browser.
4. From then on, JavaScript in the browser intercepts interactions, fetches data, and renders the updates on the client.
With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?
Well, partly it’s back to that question of controlling the server environment.
@sil @adactio @dracos That architecture is a hard choice to make because it ties you to a small set of runtimes on the server.
— Dan Webb (@danwrong) September 22, 2014
@sil @adactio @dracos plus, I think there’s something positive about hard separation of client and server code. Gets you thinking right.
— Dan Webb (@danwrong) September 22, 2014
This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.
In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.
But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.
Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.
Now …who wants to do the same thing for progressive enhancement?
Four different techniques for vertical centring in CSS, courtesy of Jake.
internet.org might more accurately be called very-small-piece-of-internet.org
Hi,
I’m really sorry it’s taken me so long to write back to you (over a month!)—I’m really crap at email.
I’m writing to you hoping you can help me make my colleagues take html5 “seriously”. They have read your book, they know it’s the “right” thing to do, but still they write !doctype HTML and then div, div, div, div, div…
Now, if you could provide me with some answers to their “why bother?” questions, it would be really appreciated.
I have to be honest, I don’t think it’s worth spending lots of time agonising over what’s the right element to use for marking up a particular piece of content.
That said, I also think it’s lazy to just use divs and spans for everything, if a more appropriate element is available.
Paragraphs, lists, figures …these are all pretty straightforward and require almost no thought.
Deciding whether something is a section or an article, though …that’s another story. It’s not so clear. And I’m not sure it’s worth the effort. Frankly, a div might be just fine in most cases.
For example, can one assume that in the future we will be pulling content directly from websites and therefore it would be smart to tell this technology which content is the article, what are the navigation and so on?
There are some third-party tools (like Readability) that pay attention to the semantics of the elements you use, but the most important use-case is assistive technology. For tools such as screen readers, there’s a massive benefit to marking up headings, lists, and other straightforward elements, as well as some of the newer additions like nav and main.
But for many situations, a div is just fine. If you’re just grouping some stuff together that doesn’t have a thematic relation (for instance, you might be grouping them together to apply a particular style), then div works perfectly well. And if you’re marking up a piece of inline text and you’re not emphasising it, or otherwise differentiating it semantically, then a span is the right element to use.
So for most situations, I don’t think it’s worth overthinking the choice of HTML elements. A moment or two should be enough to decide which element is right. Any longer than that, and you might as well just use a div or span, and move on to other decisions.
But there’s one area where I think it’s worth spending a bit longer to decide on the right element, and that’s with forms.
When you’re marking up forms, it’s really worth making sure that you’re using the right element. Never use a span or a div if you’re just going to add style and behaviour to make it look and act like a button: use an actual button instead (not only is it the correct element to use, it’s going to save you a lot of work).
Likewise, if a piece of text is labelling a form control, don’t just use a span; use the label element. Again, this is not only the most meaningful element, but it will provide plenty of practical benefit, not only to screen readers, but to all browsers.
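For example—a minimal sketch, with placeholder field names—this is the difference in practice:
<!-- A real label, associated with its control, and a real button: -->
<!-- focusable, keyboard-operable, and announced properly by assistive technology. -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">
<button type="submit">Sign up</button>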
So when it comes to forms, it’s worth sweating the details of the markup. I think it’s also worth making sure that the major chunks of your pages are correctly marked up: navigation, headings. But beyond that, don’t spend too much brain energy deciding questions like “Is this a definition list? Or a regular list?” or perhaps “Is this an aside? Or is it a footer?” Choose something that works well enough (even if that’s a div) and move on.
But if your entire document is nothing but divs and spans then you’re probably going to end up making more work for yourself when it comes to the CSS and JavaScript that you apply.
There’s a bit of a contradiction to what I’m saying here.
On the one hand, I’m saying you should usually choose the most appropriate element available because it will save you work. In other words, it’s the lazy option. Be lazy!
On the other hand, I’m saying that it’s worth taking a little time to choose the most appropriate element instead of always using a div or a span. Don’t be lazy!
I guess what I’m saying is: find a good balance. Don’t agonise over choosing appropriate HTML elements, but don’t just use divs and spans either.
Hope that helps.
Hmmm… you know, I think I might publish this response on my blog.
Cheers,
Jeremy
Do you want to know what the truth is about shrimps? They’re the idiots of the sea! One time I saw a shrimp just swim right into a rock.
My ears are ringing after an evening with The Sadies—the hardest working band in rock’n’roll.
Pie
I’m quite intrigued by the thinking behind this CSS selector of Heydon’s.
* + * {
margin-top: 1.5em;
}
I should try it out and see how it feels.
I was back in Nürnberg last week for the second border:none. Joschi tried an interesting format for this year’s event. The first day was a small conference-like gathering with an interesting mix of speakers, but the second day was much more collaborative, with people working together in “creator units”—part workshop, part round-table discussion.
I teamed up with Aaron to lead the session on all things indie web. It turned out to be a lot of fun. Throughout the day, we introduced the little building blocks, one by one. By the end of the day, it was amazing to see how much progress people made by taking this layered approach of small pieces, loosely stacked.
The first step is: do you have a domain name?
Okay, next step: are you linking from that domain to other profiles of you on the web? Twitter, Instagram, Github, Dribbble, whatever. If so, here’s the first bit of hands-on work: add rel="me" to those links.
<a rel="me" href="https://twitter.com/adactio">Twitter</a>
<a rel="me" href="https://github.com/adactio">Github</a>
<a rel="me" href="https://www.flickr.com/people/adactio">Flickr</a>
If you don’t have any profiles on other sites, you can still mark up your telephone number or email address with rel="me". You might want to do this in a link element in the head of your HTML.
<link rel="me" href="mailto:jeremy@adactio.com" />
<link rel="me" href="sms:+447792069292" />
As soon as you’ve done that, you can make use of IndieAuth. This is a technique that demonstrates a recurring theme in indie web building blocks: take advantage of the strengths of existing third-party sites. In this case, IndieAuth piggybacks on top of the fact that many third-party sites have some kind of authentication mechanism, usually through OAuth. The fact that you’re “claiming” a profile on a third-party site using rel="me"—and the third-party profile in turn links back to your site—means that we can use all the smart work that went into their authentication flow.
You can see IndieAuth in action by logging into the Indie Web Camp wiki. It’s pretty nifty.
If you’ve used rel="me" to link to a profile on something like Twitter, Github, or Flickr, you can authenticate with their OAuth flow. If you’ve used rel="me" for your email address or phone number, you can authenticate by email or SMS.
Next question: are you publishing stuff on your site? If so, mark it up using h-entry. This involves adding a few classes to your existing markup.
<article class="h-entry">
<div class="e-content">
<p>Having fun with @aaronpk, helping @border_none attendees mark up their sites with rel="me" links, h-entry classes, and webmention endpoints.</p>
</div>
<time class="dt-published" datetime="2014-10-18 08:42:37">8:42am</time>
</article>
Now, the reason for doing this isn’t for some theoretical benefit from search engines, or browsers, but simply to make the content you’re publishing machine-parsable (which will come in handy in the next steps).
Aaron published a note on his website, inviting everyone to leave a comment. The trick is, though, that to leave a comment on Aaron’s site, you need to publish it on your own site.
Here’s my response to Aaron’s post. As well as being published on my own site, it also shows up on Aaron’s. That’s because I sent a webmention to Aaron.
Webmention is basically a reimplementation of pingback, but without any of the XML silliness; it’s just a POST request with two values—the URL of the origin post, and the URL of the response.
My site doesn’t automatically send webmentions to any links I reference in my posts—I should really fix that—but that’s okay; Aaron—like me—has a form under each of his posts where you can paste in the URL of your response.
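That kind of form doesn’t need to be anything fancy. Here’s a rough sketch—the URLs are just placeholders—showing the two values that end up in the POST request: the source (your response) and the target (the post being responded to):
<form action="https://example.com/webmention" method="post">
<!-- source: the URL of the response on your own site -->
<label for="source">URL of your response</label>
<input type="url" id="source" name="source">
<!-- target: the URL of the post being responded to -->
<input type="hidden" name="target" value="https://example.com/original-post">
<button type="submit">Send webmention</button>
</form>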
This is where those h-entry classes come in. If your post is marked up with h-entry, then it can be parsed to figure out which bit of your post is the body, which bit is the author, and so on. If your response isn’t marked up as h-entry, Aaron just displays a link back to your post. But if it is marked up in h-entry, Aaron can show the whole post on his site.
Okay. By this point, we’ve already come really far, and all people had to do was edit their HTML to add some rel attributes and class values.
For true site-to-site communication, you’ll need to have a webmention endpoint. That’s a bit trickier to add to your own site; it requires some programming. Here’s my minimum viable webmention that I wrote in PHP. But there are plenty of existing implementations you can use, like this webmention plug-in for WordPress.
Or you could request an account on webmention.io, which is basically webmention-as-a-service. Handy!
Once you have a webmention endpoint, you can point to it from the head of your HTML using a link element:
<link rel="mention" href="https://adactio.com/webmention" />
Now you can receive responses to your posts.
Here’s the really cool bit: if you sign up for Bridgy, you can start receiving responses from third-party sites like Twitter, Facebook, etc. Bridgy just needs to know who you are on those networks, looks at your website, and figures everything out from there. And it automatically turns the responses from those networks into h-entry. It feels like magic!
Here are responses from Twitter to my posts, as captured by Bridgy.
That was mostly what Aaron and I covered in our one-day introduction to the indie web. I think that’s pretty good going.
The next step would be implementing the idea of POSSE: Publish on your Own Site, Syndicate Elsewhere.
You could do this using something as simple as If This, Then That e.g. every time something crops up in your RSS feed, post it to Twitter, or Facebook, or both. If you don’t have an RSS feed, don’t worry: because you’re already marking your HTML up in h-entry, it can be converted to RSS easily.
I’m doing my own POSSEing to Twitter, which I’ve written about already. Since then, I’ve also started publishing photos here, which I sometimes POSSE to Twitter, and always POSSE to Flickr. Here’s my code for posting to Flickr.
I’d really like to POSSE my photos to Instagram, but that’s impossible. Instagram is a data roach-motel. The API provides no method for posting photos. The only way to post a picture to Instagram is with the Instagram app.
My only option is to do the opposite of POSSEing, which is PESOS: Publish Elsewhere, and Syndicate to your Own Site. To do that, I need to have an endpoint on my own site that can receive posts.
Working side by side with Aaron at border:none inspired me to finally implement one more indie web building block I needed: micropub.
Having a micropub endpoint here on my own site means that I can publish from third-party sites …or even from native apps. The reason why I didn’t have one already was that I thought it would be really complicated to implement. But it turns out that, once again, the trick is to let other services do all the hard work.
First of all, I need to have something to manage authentication. Well, I already have that with IndieAuth. I got that for free just by adding rel="me" to my links to other profiles. So now I can declare indieauth.com as my authorization endpoint in the head of my HTML:
<link rel="authorization_endpoint" href="https://indieauth.com/auth" />
Now I need some way of creating and issuing authentication tokens. See what I mean about it sounding like hard work? Creating a token endpoint seems complicated.
But once again, someone else has done the hard work so I don’t have to. Tokens-as-a-service:
<link rel="token_endpoint" href="https://tokens.indieauth.com/token" />
The last piece of the puzzle is to point to my own micropub endpoint:
<link rel="micropub" href="https://adactio.com/micropub" />
That URL is where I will receive posts from third-party sites and apps (sent through a POST request with an access token in the header). It’s up to me to verify that the post is authenticated properly with a valid access token. Here’s the PHP code I’m using.
It wasn’t nearly as complicated as I thought it would be. By the time a post and a token hits the micropub endpoint, most of the hard work has already been done (authenticating, issuing a token, etc.). But there are still a few steps that I have to do: verify that the access token is valid, parse the incoming post, and then publish it here on my site.
I tested my micropub endpoint using Quill, a nice little posting interface that Aaron built. It comes with great documentation, including a guide to creating a micropub endpoint.
It worked.
Here’s another example: Ben Roberts has a posting interface that publishes to micropub, which means I can authenticate myself and post to my site from his interface.
Finally, there’s OwnYourGram, a service that monitors your Instagram account and posts to your micropub endpoint whenever there’s a new photo.
That worked too. And I can also hook up Bridgy to my Instagram account so that any activity on my Instagram photos also gets sent to my webmention endpoint.
Each one of these building blocks unlocks greater and greater power: rel="me" links get you IndieAuth; h-entry markup means your responses can be shown in full on other people’s sites; a webmention endpoint lets you receive responses to your posts; and a micropub endpoint lets you publish from third-party interfaces and apps.
But it’s worth remembering that these are just implementation details. What really matters is that you’re publishing your stuff on your website. If you want to use different formats and protocols to do that, that’s absolutely fine. The whole point is that this is the independent web—you can do whatever you please on your own website.
Still, if you decide to start using these tools and technologies, you’ll get the benefit of all the other people who are working on this stuff. If you have the chance to attend an Indie Web Camp, you should definitely take it: I’m always amazed by how much is accomplished in one weekend.
Some people have started referring to the indie web movement. I understand where they’re coming from; it certainly looks like a “movement” from the outside, and if you attend an Indie Web Camp, there’s a great spirit of sharing. But my underlying motivations are entirely selfish. In the same way that I don’t really care about particular formats or protocols, I don’t really care about being part of any kind of “movement.” I care about my website.
As it happens, my selfish motivations align perfectly with the principles of an indie web.
Japanese lunch
Posting to https://adactio.com/ from https://ben.thatmustbe.me/new through the magic of micropub.
It Just Works.®™
Petra has always been the strong one. She was the best friend that Chloe could have possibly had. Little wonder then that Chloe’s death continues to hit her so hard.
I still can’t fully comprehend it all nor do I have any idea how to learn to move on. All I know is that ever since the day I found out, I’ve been on an emotional rollercoaster. I go from being in shock, to being sad and angry, or completely numb.
Petra is getting help now. That’s good. She’s also writing about what she has been going through. That’s brave. Very brave.
She is one of the best human beings I know.
Hey @AListApart, how about getting @AntonPeck to illustrate some articles?
For he is a great illustrator and you publish great articles.
Revived with green tea and karaage.
Hirsch.
Bottles.
These whiskies.
Entering Nürnberg’s old town.
Having fun with @aaronpk, helping @border_none attendees mark up their sites with rel=”me” links, h-entry classes, and webmention endpoints.
Following on from that, I’m showing my posting interface at the border:none “creator unit.”
Pork on pumpkin.
Feldsalat.
Getting schooled in DOM Clobbering by @0x6D6172696F.
The short answer: not much.
The UK Web Archive at The British Library outlines its process for determining just how bad the linkrot is after just one decade.
Looking forward to a really good mix of talks at today’s @Border_None event.
On the anniversary of Jon Postel’s death, it’s worth re-reading @vgcerf’s RFC 2468.
Kaffee und Kuchen.
Food photography.
Jessica is pleased with her plate of sausages and sauerkraut.
A great technique from Heydon for styling radio buttons however you want.
Nürnberger Rostbratwürste mit Sauerkraut. Lecker!
Patty’s excellent talk on responsive design and progressive enhancement. Stick around for the question-and-answer session at the end, wherein I attempt to play hardball, but actually can’t conceal my admiration and the fact that I agree with every single word she said.
Jessica.
Nürnberg.
Over 3,000 idiots and counting.
This is the intersection of Hanlon’s Razor with Clarke’s third law: any sufficiently advanced incompetence is indistinguishable from malice.
This would be funny if it weren’t, in a very literal sense, evil.
Come to tomorrow’s @Border_None in Nürnberg on a pay-what-you-like ticket, thanks to @Clearleft:
Going to Nürnberg. brb
I went to the States to speak at the Artifact conference in Providence (which was great). I extended the trip so that I could make it to Science Hack Day in San Francisco (which was also great). Then I made my way back with a stopover in New York for the fifth and final Brooklyn Beta (which was, you guessed it, great).
The last day of Brooklyn Beta was a big friendly affair with close to a thousand people descending on a hangar-like building in Brooklyn’s naval yard. But it was the preceding two days in the much cosier environs of The Invisible Dog that really captured the spirit of the event.
The talks were great—John Maeda! David Lowery!—but the real reason for going all the way to Brooklyn for this event was to hang out with the people. Old friends, new friends; just nice people all ‘round.
But it felt strange this year, and not just because it was the last time.
At the end of the second day, people were encouraged to spontaneously get up on stage, introduce themselves, and then introduce someone that they think is a great person, working on something interesting (that twist was Sam’s idea).
I didn’t get up on stage. The person I would’ve introduced wasn’t there. I wish she had been. Mind you, she would’ve absolutely hated being called out like that.
Chloe wasn’t there. Of course she wasn’t there. How could she be there?
But there was this stupid, stupid part of my brain that kept expecting to see her walk into the room. That stupid, stupid part of my brain that still expected that I’d spend Brooklyn Beta sitting next to Chloe because, after all, we always ended up sitting together.
(I think it must be the same stupid part of my brain that still expects to see her name pop up in my chat client every morning.)
By the time the third day rolled around in the bigger venue, I thought it wouldn’t be so bad, what with it not being in the same location. But that stupid, stupid part of my brain just wouldn’t give up. Every time I looked around the room and caught a glimpse of someone in the distance who had the same length hair as Chloe, or dressed like her, or just had a bag slung over her hip just so …that stupid, stupid part of my brain would trigger a jolt of recognition, and then I’d have that horrible sinking feeling (literally, like something inside of me was sinking down) when the rational part of my brain corrected the stupid, stupid part.
I think that deep down, there’s a part of me—a stupid, stupid part of me—that still doesn’t quite believe that she’s gone.
Elon Musk talks engineering, the Fermi paradox, and getting your ass to Mars.
Honestly, I wouldn’t mind this drab, overcast weather if it weren’t for the fact that it means I’m missing out on ISS flyovers.
Accessible Mini Click photo show in @68MiddleSt.
Wearing the lovely jumper that @wordridden made for me.
Stuart has written some wise words about making privacy the differentiator that can take on Facebook and Google.
He also talks about Aral’s ind.ie project; all the things they’re doing right, and all things they could do better:
The ind.ie project is to open source as Brewdog are to CAMRA.
Adding http://www.wikihouse.cc/guide to my collection of design principles: http://principles.adactio.com/
Cascading Style Sheets turned 20 years old this week. Happy birthtime, CeeSusS!
Bruce interviewed Håkon about the creation of CSS, and it makes for fascinating reading. If you want to dig even deeper, here’s Håkon’s 1994 thesis comparing competing approaches to style sheets.
CSS gets a tough rap. I remember talking to Douglas Crockford about CSS. I’ll paraphrase his stance as “Kill it with fire!” To be fair, he was mostly talking about the lack of a decent layout system in CSS—something that’s only really getting remedied now.
Most of the flak directed at CSS comes from smart programmers, decrying its lack of power. As a declarative language, it lacks even the most basic features of even the simplest procedural language. How are serious programmers supposed to write their serious programmes with such a primitive feature set?
But I think this mindset misses out a crucial facet of understanding CSS: it’s not about us. By us, I mean professional web developers. And when I say it’s not about us, I mean it’s not only about us.
The web is for everyone. That doesn’t just mean that it’s for everyone to use—the web is for everyone to create. That means that the core building blocks of the web need to be learnable by everyone, not just programmers.
I get nervous when I see web browsers gaining powerful features that can only be accessed via a JavaScript API. Geolocation is one example: it doesn’t have any declarative equivalent to its JavaScript implementation. Counter-examples would be video and audio: you can use the JavaScript API to get exactly the behaviour you want, if you’ve got that level of knowledge …or you can use the video and audio elements if you’re okay with letting web browsers handle the complexity of display and playback.
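For example—a minimal sketch, with a placeholder file name:
<!-- Browsers handle loading, controls, and playback; anything that -->
<!-- doesn’t understand video falls back to the content inside the element. -->
<video src="clip.mp4" controls>
<a href="clip.mp4">Download the clip</a>
</video>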
I think that CSS hits a nice sweet spot, balancing learnability and power. I love the fact that every bit of CSS ever written comes down to the same basic pattern:
selector {
property: value;
}
That’s it!
How amazing is it that one simple pattern can scale to encompass a whole wide world of visual design variety?
Think about the revolution that CSS has gone through in recent years: OOCSS, SMACSS, BEM …these are fundamentally new ways of approaching front-end development, and yet none of these approaches required any changes to be made to the CSS specification. The power and flexibility was already available within its simple selector-property-value pattern.
Mind you, that modularity was compromised when we got things like named animations; a pattern that breaks out of the encapsulation model of CSS. Variables in CSS also break out of the modularity pattern.
Personally, I don’t think there’s any reason to have variables in the CSS language; it’s enough to have them in pre-processing tools. Variables add enormous value for developers, and no value at all for end users. As long as developers can use variables—and they can, with Sass and LESS—I don’t think we need to further complicate CSS.
Bert Bos wrote an exhaustive list of design principles for web standards. There’s some crossover with Tim Berners-Lee’s principles of design, with ideas such as modularity and robustness. Personally, I think that Bert and Håkon did a pretty damn good job of balancing principles like learnability, extensibility, longevity, interoperability and a host of other factors while still producing something powerful enough to scale for the whole web.
There’s one important phrase I want to highlight in the abstract of the 20 year old CSS proposal:
The proposed scheme provides a simple mapping between HTML elements and presentation hints.
Hints.
Every line of CSS you write is a suggestion. You are not dictating how the HTML should be rendered; you are suggesting how the HTML should be rendered. I find that to be a very liberating and empowering idea.
My only regret is that—twenty years on from the birth of CSS—web browsers are killing the very idea of user stylesheets. Along with “view source”, this feature really drove home the idea that professional web developers are not the only ones who have a say in what gets rendered in web browsers …and that the web truly is for everyone.
Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.
Red book, green book. Thank you, @JasonSantaMaria, @Monteiro, @ABookApart.
Documenting common layout issues that can be solved with Flexbox. I like the fact that some of these can be used as enhancements e.g. sticky footer, input add-ons …the fallback in older browsers is perfectly acceptable.
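The sticky footer, for instance, needs only a couple of declarations. A rough sketch, assuming a wrapper element around the page with the footer as its last child—browsers without flexbox simply ignore it and show the footer after the content:
.page-wrapper {
display: flex;
flex-direction: column;
min-height: 100vh; /* at least as tall as the viewport */
}
.page-wrapper footer {
margin-top: auto; /* pushed to the bottom when the content is short */
}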
Chicharrones.
I have no idea what I’m doing.
Reading http://bradfrostweb.com/blog/post/job-title-its-complicated/ and updating my job title at http://clearleft.com/is/
Science Hack Day San Francisco was held in the Github offices last weekend. It was brilliant!
This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.
After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.
So here’s the background…
There are many possibilities for human habitats in space: Stanford tori, O’Neill cylinders, Bernal spheres. Another idea, explored in science fiction, is hollowing out asteroids (Larry Niven’s bubbleworlds). Kim Stanley Robinson explores this idea in depth in his book 2312, where he describes the process of building an asteroid terrarium. The website of the book has a delightful walkthrough of the engineering processes involved. It’s not entirely implausible.
I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?
The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.
Here’s the result…
Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).
By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.
You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
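The relevant physics, at least in sketch form (assuming, as the hack does, a spherical body of radius r): set the centripetal acceleration at the interior surface equal to one gee and solve for the spin rate.
ω²r = g, so ω = √(g/r), giving a rotation period of T = 2π/ω = 2π√(r/g)
A one-kilometre-radius asteroid, for example, would need to rotate roughly once every 63 seconds to provide one gee on the inside of its shell.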
At 13:37 on the second day, the demos began. Keri and I were first up.
Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.
Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.
@PlanetaryKeri So you’re not as investa’d in Ceres as the rest of the team? @motorbikematt @adactio #LordOfThePuns
— J.L. Galache (@JLGalache) October 5, 2014
@jlgalache @planetarykeri @motorbikematt @adactio Don’t Juno better than to make puns like that? @brianwolven
— lukemonster (@lukedones) October 5, 2014
@lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio Recruiting the rest of your astro Pallas to get in on the asteroid pun action?
— Brian Wolven (@brianwolven) October 5, 2014
@brianwolven @lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio Your puns give me the Hebe jeebies.
— brx0 (@brx0) October 5, 2014
@brx0 @brianwolven @jlgalache @planetarykeri @motorbikematt @adactio @paix120 At this point we’ve all been led Astraea :-(
— lukemonster (@lukedones) October 5, 2014
@brx0 @lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio Ida feeling that might happen eventually.
— Brian Wolven (@brianwolven) October 5, 2014
@brianwolven @lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio It’ like you’re Psyche or something.
— brx0 (@brx0) October 5, 2014
@lukedones @brx0 @JLGalache @PlanetaryKeri @motorbikematt @adactio @paix120 With this crew, at least the pun Themis pretty obvious.
— Brian Wolven (@brianwolven) October 5, 2014
@brianwolven @lukedones @brx0 @JLGalache @PlanetaryKeri @motorbikematt @adactio You guys sure know Alauda asteroid names.
— Renee (@paix120) October 5, 2014
@brianwolven @lukedones @brx0 @JLGalache @PlanetaryKeri @motorbikematt @adactio Europa creek if u’re Wikipedia’ing like me. Thisbe the end.
— Renee (@paix120) October 5, 2014
@paix120 @brianwolven @lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio The end? So this is the last Gaspra?
— brx0 (@brx0) October 5, 2014
@brx0 @paix120 @brianwolven @jlgalache @planetarykeri @motorbikematt @adactio We should probably stop at Gaspra before we reach Eros.
— lukemonster (@lukedones) October 5, 2014
@brianwolven @paix120 @lukedones @JLGalache @PlanetaryKeri @motorbikematt @adactio I Echo this sentiment.
— brx0 (@brx0) October 5, 2014
@lukedones @paix120 @brianwolven @JLGalache @PlanetaryKeri @motorbikematt @adactio Too late. Pandora’s box has been opened.
— brx0 (@brx0) October 5, 2014
@brx0 @lukedones @brianwolven @JLGalache @PlanetaryKeri @motorbikematt @adactio tho, someone might Summanus to the Psyche ward.
— Renee (@paix120) October 5, 2014
@paix120 @brx0 @brianwolven @jlgalache @planetarykeri @motorbikematt @adactio Have Merxia on us all!
— lukemonster (@lukedones) October 5, 2014
@paix120 @lukedones @brianwolven @JLGalache @PlanetaryKeri @motorbikematt @adactio Every other Twitter pun game Pales in comparison.
— brx0 (@brx0) October 5, 2014
They have nice asteroids though: @brianwolven, @lukedones, @paix120, @LGalache, @motorbikematt, @brx0.
Oh, and while Habitasteroids might be a silly little hack, WRANGLER just might work.
Back in Blighty, ahead of schedule.
Preparing for lift-off at one of New York’s nascent spaceports, looking forward to getting home.
I first met Jason at South By Southwest 2011, having admired his work from afar. Here’s a picture of me and Jason.
When I was giving a talk on digital preservation at Beyond Tellerrand in 2013 referencing Jason’s work, I used that picture.
Then Jason gave a talk at a conference where he referenced me referencing him. Here’s a picture of Jason presenting a picture of me presenting a picture of me and Jason.
At this year’s Decentralize Camp, I spoke about digital preservation again. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.
Then Jason gave another talk. Here’s a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.
I referenced that when I spoke in Riga a few months back. Here’s a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.
Finally, Jason gave a barnstorming talk at the final Brooklyn Beta yesterday. As we were both in the same room at the same time, Jason took the opportunity to draw a line under this back-and-forth with his final volley. Here’s a picture of me and Jason in front of a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of Jason presenting a picture of me presenting a picture of me and Jason.
Bidding farewell to Brooklyn, and indeed @BrooklynBeta.
Heartfelt thanks to @Shiflett, @FictiveCameron and crew for many inspiring years.
Inside Brooklyn Beta.
Outside Brooklyn Beta.
From first meeting @textfiles to drawing a line under our game of “Presentation Inception” at @BrooklynBeta yesterday: A Pictorial History…
Many thanks to @Libbyn, @DDemaree, @BenjaminWelch, @GregVeen, and all the lovely @Typekit people for an excellent Brooklyn Beta evening out.
Sam Greenspan from the brilliant 99% Invisible podcast has created a Huffduffer feed based on his “You Should Listen To Friday” Tumblr blog.
If you have a Huffduffer account, add this to your collective.
And definitely subscribe to this RSS feed in your podcast app of choice.
Shani gets schooled in shuffleboard.
fsck, newly inked on @fehler’s arm.
Over a decade on from sliding doors and the box-model hack: @stop and @t in Brooklyn.
I was such a fanboy meeting David Lowery yesterday.
“Key Lime Pie is the best album ever!” I may have blurted.
David Lowery.
Jenn and Jason.
Geri and Simon.
John Maeda.
This is a great summation of the origins of Science Hack Day from Ariel.
All the marvellous hacks from Science Hack Day San Francisco being demoed at the end of the event.
Mine is the first one up, five minutes in.
Landing.
Take off.
Wheels down at JFK.
At San Francisco’s nascent spaceport preparing for a coast-to-coast jump to New York.
Spinning up the FTL drive.
My name is Jeremy and I am a boring front-end developer.
I hope that many of you will watch me on this journey, and follow in my wagon tracks as I leave the walled cities and strike out for the wilderness ahead.
The Android vs. iOS debate is one that hinges around whether you think it makes more sense to target a (perceived) larger market, or target one that the technorati favor. But why choose? Building a good responsive web app has a series of benefits, the primary one being that you target users on every platform with one app. Every user. Every platform. All the time. Release whenever you want. A/B test with ease. Go, go go.
What a fantastic collection of creators!
In Toronado, eating a tamale from The Tamale Lady and discussing W3C specs over craft beers.
Doing San Francisco right.
My friends in San Francisco: let’s meet up for a drink and a chat at Toronado from 8pm.
Hope to see you there. Yes, you.
My phone has decided to stop working as a phone while I’m in the States—doesn’t connect to any network. It is effectively an iPod Touch now.
I feel that this is relevant to that discussion I had with Malarkey on his podcast about advertising.
A look back at how Twitter evolved over time, with examples of seemingly-trivial changes altering the nature of the discourse.
Kevin finishes with a timely warning for those of us building alternatives:
In the indieweb world we are just starting to connect sites together with webmentions, and we need to consider this history as we do.
Some thoughts on progressive enhancement, although I disagree with the characterisation of progressive enhancement as being the opposite choice to making “something flashy that pushes the web to its limits”—it’s entirely possible to make the flashiest, limit-pushing sites using progressive enhancement. After all…
it’s much more a mindset than a particular development technique.
Post-@ScienceHackDay relaxing with @NeilTyson’s Cosmos.
Well, that was fun
Dancing with dinosaurs and lasers!
Science!
A lovely hack from Science Hack Day San Francisco: get an idea of the size of CERN’s Large Hadron Collider by seeing it superimposed over your town.
Just demoed my @ScienceHackDay project: Habitasteroids!
Calculations.
My science advisors @PlanetaryKeri and Nathan help figure out centrifugal force calculations for asteroid terraria.
Space hacks on all the screens.
Morning entertainment for science nerds.
Putting the finishing touches to my @ScienceHackDay project.
Up and atom for day two of @ScienceHackDay.
It’s @ArielWaldman kicking off @ScienceHackDay San Francisco.
Arfon.
Breakfast time for science nerds.
Preparing for lift-off.
It’s impossible to predict the creations that will spring forth when people gather in the spirit of participation, collaboration, and benign anarchy at the next Science Hack Day, but the results are certain to be inspired, and inspiring.
Science Hack Day logos everywhere!
In the Oval Office.
Moon over the Bay Bridge.
Companies go out of business, get bought and change policies, so what if you had one place to originate all of your content then publish it out to those great social services? And hey, why not pull comments from those services back to your original post?
That’s the idea behind Indie Web Camp: have your own website be the canonical source of what you publish. But right now, getting all of the moving parts up and running requires a fair dollop of tech-savviness. That’s where Known comes in:
It’s similar to the WordPress model: you can create a blog on their servers, or you can download the software and host it on your own.
This post is a good run-down of what’s working well with Known, and what needs more work.
How the printing press led to the microscope, and chlorination transformed women’s fashion—Steven Johnson channels James Burke.
Beautiful visualisations of science and nature.
Made with love by a designer with a molecular biology degree.
Tim’s been running the numbers on how long it takes various browsers on various devices to parse JavaScript—in this case, jQuery. The time varies enormously depending on the device hardware.
Hanging out at the Mozilla office with @t.
Bow and bridge.
Breakfast at @t’s.
The Wheel of Avocado.
Cooking with @t: ingredients for triangle eggs.
Good morning, San Francisco.
You’re looking nice today, iconic San Francisco street corner.
Prepping for a coast-to-coast jump from Boston’s nascent spaceport to San Francisco for @ScienceHackDay.
Spinning up the FTL drive.
This is basically porn for me.
Bernal spheres, Stanford tori, and O’Neill cylinders, oh my!
I was chatting about polyfills recently with Bruce and Remy—who coined the term:
A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.
I mentioned that I think that one of the earliest examples of what we would today call a polyfill was the IE7 script by Dean Edwards.
Dean wrote this (amazing) piece of JavaScript back when Internet Explorer 6 was king of the hill and Microsoft had stopped development of their browser entirely. It was a pretty shitty time in browserland back then. While other browsers were steaming ahead with their standards support, Dean’s script pulled IE6 up by its bootstraps and made it understand CSS2.1 features. Crucially, you didn’t have to write your CSS any differently for the IE7 script to work—the classic hallmark of a polyfill.
Scott has a great post over on the Filament Group blog asking To Picturefill, or not to Picturefill?. Therein, he raises the larger issue of when to use polyfills of any kind. After all, every polyfill you use is a little bit of a tax that the end user must pay with a download.
Polyfills typically come at a cost to users as well, since they require users to download and execute JavaScript in order to work. Sometimes, frequently even, that cost outweighs the benefits that the polyfill would bring. For that reason, the question of whether or not to use any polyfill should be taken seriously.
Scott takes a very thoughtful approach to using any polyfill, and I try to do the same. I feel that it’s important to have an exit strategy for every polyfill you decide to use. After all, the whole point of a polyfill is that it’s a stop-gap measure until a particular feature is more widely supported.
And that’s where I run into one of the issues of working at an agency. At Clearleft, our time working with a client usually lasts a few months. At the end of that time, we’ll have delivered whatever the client needs: sometimes that’s design work; sometimes it’s design and a front-end pattern library.
Every now and then we get to revisit a project—like with Code for America—but that’s the exception rather than the rule. We’ve had to get very, very good at handover precisely because we won’t be the ones maintaining the code that we deliver (though we always try to budget in time to revisit the developers who are working with the code to answer any questions they might have).
That makes it very tricky to include a polyfill in our deliverables. We’d need to figure out a way of also including a timeline for revisiting that polyfill and evaluating when it’s time to drop it. That’s not an impossible task, but it’s much, much easier if you’re a developer working on a product (as opposed to a developer working at an agency). If you’re going to be the same person working on the code in the future—as well as working on it right now—it gets a lot easier to plan for evaluating polyfill usage further down the line. Set a recurring item in your calendar and you should be all set.
It’s a similar situation with vendor prefixes. Vendor prefixes were never intended to be a long-lasting part of any style sheet. Like polyfills, they’re supposed to be used with an exit strategy in mind: when the time is right, remove the prefixed styles, leaving only the unprefixed standardised CSS. Again, that’s a lot easier to do if you’re working on a product and you know that you’ll be the one revisiting the CSS later on. That’s harder to do at an agency where you’re handing over CSS to someone else.
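For example (the property and values here are hypothetical, but the shape of the exit strategy is the point—the prefixed declarations are the ones that eventually get deleted):
.box {
-webkit-transform: rotate(5deg); /* prefixed: remove once support is widespread */
-ms-transform: rotate(5deg); /* prefixed: remove once support is widespread */
transform: rotate(5deg); /* the unprefixed, standardised property stays */
}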
I’m quite reluctant to use any vendor prefixes at all—which is as it should be; vendor prefixes should not be used lightly. Sometimes they’re unavoidable, but that shouldn’t stop us thinking about how to remove them at a later date.
I’m mostly just thinking out loud here. I guess my point is that certain front-end development techniques and technologies feel like they’re better suited to product work rather than agency work. Although I’m sure there are plenty of counter-examples out there too of tools that really fit the agency model and are less useful for working on the same product over a long period.
But even though the agency world and the product world are very different in lots of ways, both of them require us to think about the future. How long will the code you’re writing today last? And do you have a plan for when it needs updating or replacing?
Turns out that Brian LeRoux and I gave the same answer to this question:
I think I just saved you a click.
Bostonians: what’s a good lunch spot somewhere near South Station?
Related: want to meet up for lunch?
Bidding farewell to Providence after a most-excellent @ArtifactConf.
This is what Scott Jenson has been working on—a first stab at just-in-time interactions by having physical devices broadcasting URLs.
Walk up and use anything
This is fascinating—it looks like there might be an entirely practical reason for Microsoft to skip having a version 9 of Windows …and it’s down to crappy pattern-matching code that’s supposed to target Windows 95 and 98.
This is exactly like the crappy user-agent sniffing that forced browsers to lie in their user-agent strings.
A rainy day in Providence.
Playing around with styling checkboxes.
Incredibly, you have to manually download and run this patch for Shellshock on OS X: it’s not being pushed as a security update.
But the new U2 album? That’s being pushed to everyone.
Tweaking my indieweb posting interface.
Mobile strategy: “Add to Home Screen”.