
Hello, hydrant.
Silent Running.
Accessible HAL: audio output with text fallback.
Building a planet-sized telescope suggests all sorts of practical difficulties.
Lobster tail on orzo with a Parmesan crisp.
A phalanx of swans.
We don’t take our other valuables with us when we travel—we leave the important stuff at home, or in a safe place. But Facebook and Google don’t give us similar control over our valuable data. With these online services, it’s all or nothing.
We need a ‘trip mode’ for social media sites that reduces our contact list and history to a minimal subset of what the site normally offers.
A nice look at the fallbacks that are built into CSS.
Hey @han, happy birthday!
Ziggy.
Brad and the beers.
Andy’s sushi. 🍣
Sushi sample.
Going to Pittsburgh. brb
Hey @Mathowie, we’re over halfway there: https://adactio.com/journal/11937
cc. @LongNow
It has been exactly six years to the day since I instantiated this prediction:
The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.
It is exactly five years to the day until the prediction condition resolves to a Boolean true or false.
If it resolves to true, The Bletchley Park Trust will receive $1000.
If it resolves to false, The Internet Archive will receive $1000.
Much as I would like Bletchley Park to get the cash, I’m hoping to lose this bet. I don’t want my pessimism about URL longevity to be rewarded.
So, to recap: the bet was placed on 02011-02-22. It is currently 02017-02-22. And the bet times out on 02022-02-22.
Rich has posted a sneak peek of one part of his book on Ev’s blog.
We have a tradition here at Clearleft of having the occasional lunchtime braindump. They’re somewhat sporadic, but it’s always a good day when there’s a “brown bag” gathering.
When Google’s AMP format came out and I had done some investigating, I led a brown bag playback on that. Recently Mark did one on Fractal so that everyone knew how work on that was progressing.
Today Richard gave us a quick brown bag talk on variable web fonts. He talked us through how these will work on the web and in operating systems. We got a good explanation of how these fonts would get designed—the type designer designs the “extreme” edges of size, weight, or whatever, and then the file format itself can interpolate all the in-between stages. So, in theory, one single font file can hold hundreds, thousands, or hundreds of thousands of potential variations. It feels like switching from bitmap images to SVG—there’s suddenly much greater flexibility.
A variable font is a single font file that behaves like multiple fonts.
There were a couple of interesting tidbits that Rich pointed out…
While this is a new file format, there isn’t going to be a new file extension. These will be .ttf files, and so, by extension, they can be .woff and .woff2 files too.
This isn’t some proposed theoretical standard: an unprecedented amount of co-operation has gone into the creation of this format. Adobe, Apple, Google, and Microsoft have all contributed. Agreement is the hardest part of any standards process. Once that’s taken care of, the technical solution follows quickly. So you can expect this to land very quickly and widely.
This technology is landing in web browsers before it lands in operating systems. It’s already available in the Safari Technology Preview. That means that for a while, the very best on-screen typography will be delivered not in eBook readers, but in web browsers. So if you want to deliver the absolute best reading experience, look to the web.
And here’s the part that I found fascinating…
We can currently use numbers for the font-weight property in CSS. Those number values increment in hundreds: 100, 200, 300, etc. Now with variable fonts, we can start using any integer in between: 321, 417, 183, etc. How fortuitous that we have 99 free slots between our current set of values!
Well, that’s no accident. The reason why the numbers were originally specced in increments of 100 back in 1996 was precisely so that some future sci-fi technology could make use of the ranges in between. That’s some future-friendly thinking! And as Håkon wrote:
One of the reasons we chose to use three-digit numbers was to support intermediate values in the future. And the future is now :)
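As a rough sketch of what that looks like in CSS (the font name and file here are hypothetical): the variable font is registered with a weight range in @font-face, and after that any integer in the range is a valid font-weight.

```css
/* A sketch, assuming a hypothetical variable font file. */
@font-face {
  font-family: "MyVariable";
  src: url("MyVariable.woff2") format("woff2");
  font-weight: 100 900; /* the range this variable font supports */
}

body {
  font-family: "MyVariable", sans-serif;
  font-weight: 417; /* any integer in the range, not just multiples of 100 */
}
```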
Needless to say, variable fonts will be covered in Richard’s forthcoming book.
Atomic.
(thanks, @brad_frost)
Mike lists five tool skills he looks for in a designer (not that every designer needs to have all five):
Swap the first one out for some markup and CSS skills, and I reckon you’ve got a pretty good list for developers too.
Tetris in your browser. Visit it once and it works offline (if your browser supports service workers) so go ahead and add it to your home screen.
This example of using background sync looks like it’s specific to Twilio, but the breakdown of steps is broad enough to apply to many situations:
On the page we need to:
- Register a Service Worker
- Intercept the “submit” event for our message form
- Place the message details into IndexedDB, an in browser database
- Register the Service Worker to receive a “sync” event
Then, in the Service Worker we need to:
- Listen for sync events
- When a sync event is received, retrieve the messages from IndexedDB
- For each message, send a request to our server to send the message
- If the message is sent successfully, then remove the message from IndexedDB
And that’s it.
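The flow above can be modelled in a few lines of plain JavaScript. This is only a sketch: in a real service worker the queue would live in IndexedDB and send() would be a fetch() to the server, but the queue, sync, and remove-on-success logic is the same shape.

```javascript
// A minimal in-memory model of the background sync "outbox" pattern.
// In a real service worker: messages -> IndexedDB, send -> fetch().
class Outbox {
  constructor(send) {
    this.send = send;   // async function that delivers one message
    this.messages = []; // stand-in for IndexedDB
  }
  queue(message) {
    this.messages.push(message); // place the message details into the store
  }
  async sync() {
    // on a "sync" event: retrieve the queued messages and try each one
    for (const message of [...this.messages]) {
      try {
        await this.send(message);
        // remove only the messages that were sent successfully
        this.messages = this.messages.filter((m) => m !== message);
      } catch (err) {
        // leave the message queued; the next sync event will retry it
      }
    }
  }
}
```

Anything that fails to send simply stays in the queue for the next sync event, which is what makes the pattern resilient to flaky connectivity.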
I’m crap at object-oriented programming (probably because I don’t get enough practice), but I’ve had a quick read through this and it looks like a nice clear primer. I shall return and peruse it in more depth next time I’m trying to remember how to do all this class-based stuff.
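For anyone else who’s rusty, the kind of class-based JavaScript the primer covers looks roughly like this (the Publication and BlogPost names are purely illustrative):

```javascript
// A minimal sketch of class-based JavaScript: a class, a subclass,
// inheritance, and method overriding.
class Publication {
  constructor(title) {
    this.title = title;
  }
  describe() {
    return `"${this.title}"`;
  }
}

class BlogPost extends Publication {
  constructor(title, tags) {
    super(title); // call the parent constructor
    this.tags = tags;
  }
  describe() {
    // extend the parent's behaviour rather than replacing it outright
    return `${super.describe()} tagged ${this.tags.join(", ")}`;
  }
}
```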
Tim Bray lists the options available to a technically-minded person thinking about their career path …but doesn’t mention the option of working at an agency.
Some good long-zoom observations in here:
The bad news is that it’s a lot of work. We’re a young profession and we’re still working out our best practices, so the ground keeps changing under you; it doesn’t get easier as the decades go by.
The good news is that it doesn’t get harder either. Once you learn to stop expecting your knowledge to stay fresh, the pace of innovation doesn’t feel to me like it’s much faster (or slower) now than it was in 1987 or 1997 or 2007. More good news: The technology gets better. Seriously, we are so much better at building software now than we used to be in any of those other years ending in 7.
According to this, the forthcoming Clearleft redesign will be totally on fleek.
When Aaron talks, I listen. This time he’s talking about digital (and analogue) preservation, and how that can clash with licensing rules.
It is time for the sector to pick a fight with artists, and artists’ estates, and even your donors. It is time for the sector to pick a fight with anyone that is preventing you from being allowed to have a greater — and I want to stress greater, not total — license of interpretation over the works which you are charged with nurturing and caring for.
It is time to pick a fight because, at least on bad days, I might even suggest that the sector has been played. We all want to outlast the present, and this is especially true of artists. Museums and libraries and archives are a pretty good bet if that’s your goal.
Images, videos, sounds, and 3D models are now available from the European Space Agency under a Creative Commons Attribution Share-alike license.
Reading Deep Sea and Foreign Going by Rose George.
What an excellent idea! A weekly round-up in audio form of indie web and homebrew website news. Nice and short.
I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.
Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”
CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.
I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.
I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).
I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing myself or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.
We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:
- I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
- I want to adhere to best practices.
- I want to learn more about what direction I want to go in, find a niche.
- I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.
My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.
Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”
I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.
I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?
Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.
The latest excellent missive from The History Of The Web—A Brief History of Hypertext—leads back to this great article by Alex Wright on Paul Otlet’s Mundaneum.
This sign made me hear Alan Partridge in my head.
(and, by extension, @PaulRobertLloyd)
Jake is absolutely spot-on here. There’s been a lot of excited talk about adding an h element to HTML, but it all seems to miss the question of why the currently-specced outline algorithm hasn’t been implemented.
This is a common mistake in standards discussion — a mistake I’ve made many times before. You cannot compare the current state of things, beholden to reality, with a utopian implementation of some currently non-existent thing.
If you’re proposing something almost identical to something that failed, you better know why your proposal will succeed where the other didn’t.
Jake rightly points out that the first step isn’t to propose a whole new element; it’s to ask “Why haven’t browsers implemented the outline for sectioned headings?”
(I added a small historical note in the comments pointing to the first occurrence of this proposal way back in 1991.)
Feeling pretty happy with these five days of teaching:
https://adactio.com/journal/tags/teaching%20porto%20masterclass
Whiteboard sketches.
Monday morning meeting.
Porto.
Bifana and beer.
Time for a pastel de nata. Again.
A podcast chat in which I ramble on about web stuff.
Changing our ways of thinking and doing isn’t easy. Sometimes it’s necessary though, and the first step on this journey is to let go. Let go of our imaginary feel of control. Forget the boundaries presented by our tools and ways of thinking. Break out of the silos we’ve created.
Harry clearly outlines the performance problems of Base64 encoding images in stylesheets. He’s got a follow-up post with sample data.
Seafood stew.
Pork.
Wagyu beef.
Cheese plate.
Dessert.
The view.
Beetroot.
Eel and veal.
Cod and cow.
Cuttlefish.
On the beach.
For the final day of the week-long masterclass, I had no agenda. This was a time for the students to work on their own projects, but I was there to answer any remaining questions they might have.
As I suspected, the people with the most interest and experience in development were the ones with plenty of questions. I was more than happy to answer them. With no specific schedule for the day, we were free to merrily go chasing down rabbit holes.
SVG? Sure, I’d be happy to talk about that. More JavaScript? My pleasure! Databases? Not really my area of expertise, but I’m more than willing to share what I know.
It was a fun day. The centrepiece was a most excellent lunch across the river at a really traditional seafood place.
At the very end of the day, after everyone else had gone, I sat down with Tiago to discuss how the week went. Overall, I was happy. I was nervous going into this masterclass—I had never done a whole week of teaching—but based on the feedback I got, I think I did okay. There were times when I got impatient, and I wish I could turn back the clock and erase those moments. I noticed that those moments tended to occur when it was time for hands-on-keyboards coding: “no, not like that—like this!” I need to get better at handling those situations. But when we were working on paper, or having stand-up discussions, or when I was just geeking out on a particular topic, everything felt quite positive.
All in all, this week has been a great experience. I know it sounds like a cliché, but I felt it was a real honour and a privilege to be involved with the New Digital School. I’ve enjoyed doing hands-on teaching, and I’d like to do more of it.
A useful tool to help you generate a manifest file, icons, and a service worker for your progressive web appsite.
Seafood rice.
Goose barnacles.
Top. Fish.
Carolyn is amazing. And you can support her.
Ben made a music video of the recent Clearleft outing to New York.
End of the working week in Porto.
Squid and shrimp.
A tower of seafood courtesy of @TiagoPedras.
Workshop artefacts.
Markup.
Ever wondered what the most commonly used HTML elements are?
Day one covered HTML (amongst other things), day two covered CSS, and day three covered JavaScript. Each one of those days involved a certain amount of hands-on coding, with the students getting their hands dirty with angle brackets, curly braces, and semi-colons.
Day four was a deliberate step away from all that. No more laptops, just paper. Whereas the previous days had focused on collaboratively working on a single document, today I wanted everyone to work on a separate site.
The sites were generated randomly. I made five cards with types of sites on them: news, social network, shopping, travel, and learning. Another five cards had subjects: books, music, food, pets, and cars. And another five cards had audiences: students, parents, the elderly, commuters, and teachers. Everyone was dealt a random card from each deck, resulting in briefs like “a travel site about food for the elderly” or “a social network about music for commuters.”
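The three decks could be sketched in a few lines of JavaScript (the deck contents are as listed above; the deal function itself is just illustrative, since we used real cards):

```javascript
// Deal one random brief: one card from each of the three decks.
const types = ["news", "social network", "shopping", "travel", "learning"];
const subjects = ["books", "music", "food", "pets", "cars"];
const audiences = ["students", "parents", "the elderly", "commuters", "teachers"];

function deal(random = Math.random) {
  // random is injectable so the dealing can be made deterministic
  const pick = (deck) => deck[Math.floor(random() * deck.length)];
  return `a ${pick(types)} site about ${pick(subjects)} for ${pick(audiences)}`;
}
```

Calling deal() with the default Math.random produces briefs like “a travel site about food for the elderly”.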
For a bit of fun, the first brainstorming exercise (run as a 6-up) was to come up with potential names for this service—4 minutes for 6 ideas. Then we went around the table, shared the ideas, got feedback, and settled on the names.
Now I asked everyone to come up with a one-sentence mission statement for their newly-named service. This was a good way of teasing out the most important verbs and nouns, which led nicely into the next task: answering the question “what is the core functionality?”
If that sounds familiar, it’s because it’s the first part of the three-step process I outlined in Resilient Web Design:
We did some URL design, figuring out what structures would make sense for straightforward GET requests, like:

/things
/things/ID
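As a sketch, a matcher for those two URL patterns might look like this (a hypothetical illustration; the exercise itself was all on paper):

```javascript
// Map a path onto the two URL structures above: a collection at
// /things and a single item at /things/ID.
function route(path) {
  if (/^\/things\/?$/.test(path)) {
    return { action: "list" };
  }
  const item = path.match(/^\/things\/([^/]+)$/);
  if (item) {
    return { action: "show", id: item[1] };
  }
  return { action: "not found" };
}
```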
Then, once it was clear what the primary “thing” was (a car, a book, etc.), I asked them to write down all the pieces that might appear on such a page; one post-it note per item e.g. “title”, “description”, “img”, “rating”, etc.
The next step involved prioritisation. They took those post-it notes and put them on the wall, but they had to put them in a vertical line from top to bottom in decreasing order of importance. This can be a challenge, but it’s better to solve these problems now rather than later.
Next, I asked them to “mark up” those vertical lists of post-it notes by writing HTML tag names next to each one. Doing this before any visual design meant they were thinking about the meaning of the content first.
After that, we did a good ol’ fashioned classic 6-up sketching exercise, followed by critique (including a “designated dissenter” for each round). At this point, I was encouraging them to go crazy with ideas—they already had the core functionality figured out (with plain ol’ client/server requests and responses) so they could add all the bells and whistles they wanted on top of that.
We finished up with a discussion of some of those bells and whistles, and how they could be used to improve the user experience: Ajax, geolocation, service workers, notifications, background sync …the sky’s the limit.
It was a whirlwind tour for just one day but I think it helped emphasise the importance of thinking about the fundamentals before adding enhancements.
This marked the end of the structured masterclass lessons. Tomorrow I’m around to answer any miscellaneous questions (if I can) and chat to the students individually while they work on their term projects.
Some proposed design principles for web developers:
- Focus on the User
- Focus on Quality
- Keep It Simple
- Think Long-Term (and Beware of Fads)
- Don’t Repeat Yourself (aka One Cannot Not Maintain)
- Code Responsibly
- Know Your Field
Sardines.
This sandwich was delicious and I have no idea what was in it. I speak no Portuguese and the café owner spoke no English.
Na Petiscos.
Much of our courage and support comes from the people we read and talk to and love online, often on the very networks that expose us—and our friends—to genuine enemies of freedom and peace. We have to keep connected, but we don’t have to play on their terms.
Churchill, as it turns out, had some pretty solid ideas on SETI.
Churchill was a science enthusiast and advocate, but he also contemplated important scientific questions in the context of human values. Particularly given today’s political landscape, elected leaders should heed Churchill’s example: appoint permanent science advisers and make good use of them.
Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.
Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.
When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.
Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.
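A minimal sketch of that approach (the class names here are hypothetical): default styles outside any media query give a single-column flow, and all the layout rules sit inside a media query, which combines a media type (all) with a query (min-width).

```css
/* Browsers that don't understand media queries
   simply get the default one-column flow. */
@media all and (min-width: 30em) {
  .content {
    width: 70%;
    float: left;
  }
  .sidebar {
    width: 30%;
    float: right;
  }
}
```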
I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass that will be taught at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.
By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.
The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.
I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:
When (some event happens), then (take this action).
We brainstormed some examples of this e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
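That “when (some event happens), then (take this action)” pattern can be sketched in plain JavaScript. A stand-in Target class is used here instead of a real DOM element so the example is self-contained; in a browser you’d call addEventListener on the form itself.

```javascript
// A minimal stand-in for a DOM event target.
class Target {
  constructor() {
    this.listeners = {};
  }
  addEventListener(type, handler) {
    (this.listeners[type] = this.listeners[type] || []).push(handler);
  }
  dispatchEvent(type, detail) {
    for (const handler of this.listeners[type] || []) handler(detail);
  }
}

const form = new Target();
const shown = [];

// When the user submits the form, then show an acknowledgement.
form.addEventListener("submit", (detail) => {
  shown.push(`Thanks, ${detail.name}!`);
});
```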
I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.
With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.
At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.
Porto at night.
A sweet CSS tutorial that Cassie put together for the Valentine’s Day Codebar.
The second day in this week-long masterclass was focused on CSS. But before we could get stuck into that, there were some diversions and tangents brought on by left-over questions from day one.
This was not a problem. Far from it! The questions were really good. Like, how does a web server know that someone has permission to carry out actions via a POST request? What a perfect opportunity to talk about state! Cue a little history lesson on the web’s beginning as a deliberately stateless medium, followed by the introduction of cookies …for good and ill.
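A sketch of the mechanics: the server sends a Set-Cookie header once, and the browser then returns a Cookie header with every subsequent request, which is how state rides on top of a stateless protocol. Parsing that header is straightforward (the cookie names here are hypothetical):

```javascript
// Parse an HTTP Cookie header like "sessionid=abc123; theme=dark"
// into a plain object of name/value pairs.
function parseCookies(header) {
  const cookies = {};
  for (const pair of header.split(/;\s*/)) {
    if (!pair) continue; // tolerate an empty header
    const [name, ...rest] = pair.split("=");
    cookies[name] = rest.join("="); // values may themselves contain "="
  }
  return cookies;
}
```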
We also had a digression about performance, file sizes, and loading times—something I’m always more than happy to discuss. But by mid-morning, we were back on track and ready to tackle CSS.
As with the first day, I wanted to take a “long zoom” look at design and the web. So instead of diving straight into stylesheets, we first looked at the history of visual design: cave paintings, hieroglyphs, illuminated manuscripts, the printing press, the Swiss school …all of them examples of media where the designer knows where the “edges” of the canvas lie. Not so with the web.
So to tackle visual design on the web, I suggested separating layout from all the other aspects of visual design: colour, typography, contrast, negative space, and so on.
At this point we were ready to start thinking in CSS. I started by pointing out that all CSS boils down to one pattern:
selector {
property: value;
}
The trick, then, is to convert what you want into that pattern. So “I want the body of the page to be off-white with dark grey text” in English is translated into the CSS:
body {
background-color: rgb(225,225,255);
color: rgb(51,51,51);
}
…and so on for type, contrast, hierarchy, and more.
We started applying styles to the content we had collectively marked up with post-it notes on day one. Then the students split into groups of two to create an HTML document each. Tomorrow they’ll be styling that document.
There were two important links that came up over the course of day two:
If all goes according to plan, we’ll be tackling the third layer of the web technology stack tomorrow: JavaScript.
In Porto’s Casa da Música for an orchestra-accompanied screening of the Peter Thiel biopic.
Port and pastel de nata.
Workshopping.
At the Lusitana restaurant in the Matosinhos fish market, you choose your fish from a market stall and they grill it right then and there. 🐟
Mackerel.
Little fish.
Big fish.
Today was the first day of the week long “masterclass” I’m leading here at The New Digital School in Porto.
When I was putting together my stab-in-the-dark attempt to provide an outline for the week, I labelled day one as “How the web works” and gave this synopsis:
The internet and the web; how browsers work; a history of visual design on the web; the evolution of HTML and CSS.
There ended up being less about the history of visual design and CSS (we’ll cover that tomorrow) and more about the infrastructure that the web sits upon. Before diving into the way the web works, I thought it would be good to talk about how the internet works, which led me back to the history of communication networks in general. So the day started from cave drawings and smoke signals, leading to trade networks, then the postal system, before getting to the telegraph, and then telephone networks, the ARPANET, and eventually the internet. By lunch time we had just arrived at the birth of the World Wide Web at CERN.
It wasn’t all talk though. To demonstrate a hub-and-spoke network architecture I had everyone write down someone else’s name on a post-it note, then stand in a circle around me, and pass me (the hub) those messages to relay to their intended receiver. Later we repeated this exercise but with a packet-switching model: everyone could pass a note to their left or to their right. The hub-and-spoke system took almost a minute to relay all six messages; the packet-switching version took less than 10 seconds.
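A loose model of that exercise in JavaScript (people numbered 0 to 5 around the circle; the step counts are illustrative, not measurements):

```javascript
// Hub-and-spoke: the hub relays one message at a time,
// and each message takes two hand-offs (sender -> hub -> receiver).
function hubSteps(messages) {
  return messages.length * 2;
}

// Packet-switching ring: everyone can pass notes simultaneously,
// so the elapsed time is just the longest single journey
// (each note taking the shorter way around the circle).
function ringSteps(messages, size) {
  return Math.max(
    ...messages.map(([from, to]) => {
      const clockwise = (to - from + size) % size;
      return Math.min(clockwise, size - clockwise);
    })
  );
}
```

With six messages between six people, the serial hub needs twelve hand-offs while the ring finishes in at most three simultaneous passes, which matches the minute-versus-seconds difference we saw in the room.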
Over the course of the day, three different laws came up that were relevant to the history of the internet and the web:

- Metcalfe’s Law: the value of a network is proportional to the square of the number of users.
- Postel’s Law: be conservative in what you send; be liberal in what you accept.
- Sturgeon’s Law: ninety percent of everything is crap.
There were also references to the giants of hypertext: Ted Nelson, Vannevar Bush, and Douglas Engelbart—for a while, I had the mother of all demos playing silently in the background.
After a most-excellent lunch in a nearby local restaurant (where I can highly recommend the tripe), we started on the building blocks of the web: HTTP, URLs, and HTML. I pulled up the first ever web page so that we could examine its markup and dive into the wonder of the A element. That led us to the first version of HTML, which gave us enough vocabulary to start marking up documents: p, h1-h6, ol, ul, li, and a few others. We went around the room looking at posters and other documents pinned to the wall, and started marking them up by slapping on post-it notes with opening and closing tags on them.
At this point we had covered the anatomy of an HTML element (opening tags, closing tags, attribute names and attribute values) as well as some of the history of HTML’s expanding vocabulary, including elements added in HTML5 like section, article, and nav. But so far everything was to do with marking up static content in a document. Stepping back a bit, we returned to HTTP, and talked about the difference between GET and POST requests. That led into ways of sending data to a server, which led to form fields and the many types of input at our disposal: text, password, radio, checkbox, email, url, tel, datetime, color, range, and more.
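As a sketch, a form pulling several of those input types together might look like this (the action URL and field names are hypothetical). Unrecognised types fall back to `type="text"`, which is progressive enhancement in miniature:

```html
<!-- Browsers that don't recognise a type render it as type="text". -->
<form method="post" action="/messages">
  <label>Name <input type="text" name="name"></label>
  <label>Email <input type="email" name="email"></label>
  <label>Website <input type="url" name="website"></label>
  <label>Favourite colour <input type="color" name="colour"></label>
  <label>Volume <input type="range" name="volume" min="0" max="11"></label>
  <button>Send</button>
</form>
```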
With that, the day drew to a close. I feel pretty good about what we covered. There was a lot of groundwork, and plenty of history, but also plenty of practical information about how browsers interpret HTML.
With the structural building blocks of the web in place, tomorrow is going to focus more on the design side of things.
In which I attempt to answer some questions raised in the reading of Resilient Web Design.
Tripas.
Lunch break.
Lulas.
Good morning, Porto.
When I heard about Universal JavaScript apps (a.k.a. isomorphic JavaScript), despite the “framework hotness”, I saw real value for accessibility and performance together. With this technique, a JavaScript app is rendered as a complete HTML payload from the server using Node.js, which is then upgraded as client resources download and execute. All of a sudden your Angular app could be usable a lot sooner, even without browser JS. Bells started going off in my head: “this could help accessible user experience, too!”
February is shaping up to be a busy travel month. I’ve just come back from spending a week in New York as part of a ten-strong Clearleft expedition to this year’s Interaction conference.
There were some really good talks at the event, but alas, the multi-track format made it difficult to see all of them. Continuous partial FOMO was the order of the day. Still, getting to see Christina Xu and Brenda Laurel made it all worthwhile.
To be honest, the conference was only part of the motivation for the trip. Spending a week in New York with a gaggle of Clearlefties was its own reward. We timed it pretty well, being there for the Superb Owl, and for a seasonal snowstorm. A winter trip to New York just wouldn’t be complete without a snowball fight in Central Park.
Funnily enough, I’m going to be back in New York in just three weeks’ time for AMP conf at the start of March. I’ve been invited along to be the voice of dissent on a panel—a brave move by the AMP team. I wonder if they know what they’re letting themselves in for.
Before that though, I’m off to Porto for a week. I’ll be teaching at the New Digital School, running a masterclass in progressive enhancement:
In this masterclass we’ll dive into progressive enhancement, a layered approach to building for the web that ensures access for all. Content, structure, presentation, and behaviour are each added in a careful, well-thought out way that makes the end result more resilient to the inherent variability of the web.
I must admit I’ve got a serious case of imposter syndrome about this. A full week of teaching—I mean, who am I to teach anything? I’m hoping that my worries and nervousness will fall by the wayside once I start geeking out with the students about all things web. I’ve sorta kinda got an outline of what I want to cover during the week, but for the most part, I’m winging it.
I’ll try to document the week as it progresses. And you can certainly expect plenty of pictures of seafood and port wine.
Going to Porto. brb
A new media query that will help prevent you making your users hurl.
A nice straightforward introduction to web development for anyone starting from scratch.
Rowena, Jon, and James.
It’s the day after the snowstorm and all the dogs in New York are wearing little booties and I am dead with the cuteness.
Watching the ProPublicans.
Here’s one of them new-fangled variable fonts that’re all the rage. And this one’s designed by David Berlow. And it’s free!
Tagliatelle bolognese.
Spotted Ice-T in a club last night and all I could think of was @icetsvu.
The transcript of a really great—and entertaining—talk on performance by Wilto. I may have laughed out loud at points.
Subway.
Ramen. 🍜
Gyoza.
Central Park.
Thataway.
The remainder of the @Clearleft expedition to Manhattan.
In retrospect, we may have resorted to cannibalism a little too quickly.
Breughelesque.
Taking the subway to Central Park with @RowenaKP to go build a snowman or whatever.
The texture here is shockingly realistic.
To any #ixd17 attendees stranded in New York because of the snow: the @Clearleft delegation has spare AirBnB beds.
Mi casa, su casa.
Snowfall.
What a great project! A newsletter that focuses on stories from the web’s history, each one adding to an ongoing timeline (a bit like John’s hypertext history).
Snowy day in Soho.
Just like many people develop with an average connection speed in mind, many people have a fixed view of who a user is. Maybe they think there are customers with a lot of money with fast connections and customers who won’t spend money on slow connections. That is, very roughly speaking, perhaps true on average, but sites don’t operate on average, they operate in particular domains.
The Django.
Cocktails and jazz at The Django.
Glad I could help @Cennydd successfully pronounce @baconmeteor’s namecheck in his talk.
Really good advice for anyone thinking of releasing a polyfill into the world.
Ursula!
I’ve recorded each chapter of Resilient Web Design as MP3 files that I’ve been releasing once a week. The final chapter is now recorded and released, so my audio work here is done.
If you want to subscribe to the podcast, pop this RSS feed into your podcast software of choice. Or use one of these links:
Or you can have it as one single MP3 file to listen to as an audio book. It’s two hours long.
So, for those keeping count, the book is now available as HTML, PDF, EPUB, MOBI, and MP3.
Really enjoyed @xuhulk’s talk at Interaction ’17.
Wrapped in cloud.
Here’s a nice little service from Remy that works sorta like Readability. Pass it a URL in a query string and it will generate a version without all the cruft around the content.
“I want this on a T-shirt!” says @JenSimmons.
Getting an impromptu masterclass in CSS Grid Layout from @JenSimmons. In a bar.
Cuneiform.
Bach.
Morse.
Philip Ball certainly has a way with words.
Reading The Separation by Christopher Priest.
Going to New York. brb
Testing https://resilientwebdesign.com
Are you an EU/EEA national living in the UK? Worried about your rights and options post-Brexit?
Alex has organised an event at 68 Middle Street for March 16th with an immigration advisor. The £5 ticket fee is refundable after the event, or you can donate it to charity.
The largest complaint by far is that the URLs for AMP links differ from the canonical URLs for the same content, making sharing difficult. The current URLs are a mess.
This is something that the Google gang are aware of, and they say they’re working on a fix. But this post points out some other misgivings with AMP, like its governance policy:
This keeps the AMP HTML specification squarely in the hands of Google, who will be able to take it in any direction that they see fit without input from the community at large. This guise of openness is perhaps even worse than the Apple News Format, which at the very least does not pretend to be an open standard.
Phil describes the process of implementing the holy grail of web architecture (which perhaps isn’t as difficult as everyone seems to think it is):
I have been experimenting with something that seemed obvious to me for a while. A web development model which gives a pre-rendered, ready-to-consume, straight-into-the-eyeballs web page at every URL of a site. One which, once loaded, then behaves like a client-side, single page app.
Now that’s resilient web design!
I like Mike’s “long zoom” view here where the glass is half full and half empty:
Several years from now, I want to be able to look back on this time the same way people look at other natural disasters. Without that terrible earthquake, we would have never improved our building codes. Without that terrible flood, we would have never built those levees. Without that terrible hurricane, we would have never rebuilt this amazing city. Without that terrible disease, we would have never developed antibodies against it.
It doesn’t require giving any credit to the disaster. The disaster will always be a complete fucking disaster. But it does involve using the disaster as an opportunity to take a hard look at what got us here and rededicate our energy towards things that will get us out.
Trying to sit in a quiet corner of this pub and read my book and not interrupt the people discussing @NealStephenson and @GreatDismal.
Having a pint in The Foundry, thinking of @bobbie and @annapickard.
It strikes me that Garrett’s site has become a valuable record of the human condition with its mix of two personal stories—one relating to his business and the other relating to his health—both of them communicated clearly through great writing.
Have a read back through the archive and I think you’ll share my admiration.
I’m heading to New York tomorrow for a week (to attend Interaction ’17).
New York friends: we should meet up for coffee and despair.
A weekly list of short, concrete actions to defend the weak, rebuild civic institutions, and fight right-wing extremism. For UK people.
Subscribed.
“Oh”, I observed in the @Clearleft office, “It’s the final teabag.”
And now that song is on the stereo.
A gorgeous visualisation of satellites in Earth orbit. Click around to grasp the scale of the network.