Replying to a tweet from @TejasKumar_
She’s smart and perceptive.
W00t!!!
Shout out to Requestmap from @Paul_Irish …something I’d love to see in dev tools:
You should demand your money back!
It was a real honour to be interviewed by @SaronYitbarek for the Command Line Heroes podcast live on stage at View Source—she’s so great!
Getting some great advice from @SoMelanieSaid at View Source for the dark mode styles I added to my site at Indie Web Camp Amsterdam yesterday.
Checked in at Poki
📸😁
This is nice.
Fun, fun, fun!
A thousand likes doesn’t look much bigger than one, and this becomes important when considering the form of negativity on social media.
There is no feature for displeasure on social media, so if a person wants to express that, they must write. Complaints get wrapped in language, and language is always specific.
Well, actually, this is more of a comment than a question…
See you soon!
Hello Amsterdam!
❤️❤️❤️
Obrigado, Tiago!
An excerpt from the book Rethinking Consciousness by Michael S. A. Graziano, which looks like an interesting companion piece to Peter Godfrey-Smith’s excellent Other Minds.
Also, can I just say how nice this reading experience is—the typography, the arresting image …I like it.
Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.
Reading Exhalation by Ted Chiang.
Going to Amsterdam. brb
Checked in at Taberna Los Castizos. Ham night! — with Jessica
Checked in at Lamiak. Gilda monster! — with Jessica
Sitting by the square.
Checked in at Cervecería San Andrés. Orejas a la plancha — with Jessica
Oh, wow—that’s great! Can’t wait to read this book!
The line-up for this Thursday’s Generate CSS conference in London looks really, really good …with the glaring exception of the closing keynote.
To make up for that, use the discount code JEREMY10 to get £25 off at GenerateConf.com
Checked in at La Casa del Bacalao. Tapas! — with Jessica
Checked in at Federal Café 2. Coffee — with Jessica
Checked in at Que Trabaje Rita!. Cerveza — with Jessica
Going to Madrid. brb
Thank you—that’s lovely feedback!
Sorry I couldn’t stick around longer to chat.
Thistle and crag.
Good catch—updated!
Checked in at The Waverley. Scottish session — with Jessica
There’s so much history in Edinburgh…
Somebody in this coffee shop is doing that @AlanCumming in Goldeneye thing with his pen; if and when I snap, I think it’s safe to say that no jury would convict me.
Out’n’about in Edinburgh.
Checked in at Sandy Bell’s. Listening to tunes — with Jessica
For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.
It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:
I realised I didn’t need to store anything. HTML is the API.
Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.
Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.
In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.
So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.
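A minimal sketch of what that can look like (the name and URL here are purely illustrative):
<p class="h-card">
  <span class="p-name">Jane Doe</span>
  <a class="u-url" href="https://example.com">example.com</a>
</p>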
Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
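For example, the skeleton of a blog post marked up with h-entry might look something like this (the surrounding structure is simplified, but the class names are the real microformat hooks):
<article class="h-entry">
  <h1 class="p-name">Title of the post</h1>
  <time class="dt-published" datetime="2019-09-27T12:00:00+01:00">September 27th, 2019</time>
  <div class="e-content">
    <p>The content of the post.</p>
  </div>
</article>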
When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.
The markup for my offline page looks like this:
<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>
I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.
I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.
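For a sense of the alternative, here’s a rough sketch (not the actual code) of how just the first couple of steps might look with chained thens:
caches.open('pages')
.then( cache => {
    return cache.keys()
    .then( keys => {
        // ...and every subsequent step would nest another level deeper
    });
});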
Async functions don’t have to have a name (anonymous async function expressions are allowed), but a name is handy for debugging. I’m calling this one listPages, just like Remy is doing. I’m making the listPages function execute immediately:
(async function listPages() {
...
})();
Now for the code to go inside that immediately-invoked function.
I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.
const browsingHistory = [];
I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.
const parser = new DOMParser();
Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).
const cache = await caches.open('pages');
Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!
const keys = await cache.keys();
Time to get looping. I’m getting each request in the list of keys using a for/of loop:
for (const request of keys) {
...
}
Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.
const response = await cache.match(request);
I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.
const html = await response.text();
Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.
const dom = parser.parseFromString(html, 'text/html');
Now I’ve got a DOM, which I have creatively stored in a variable called …dom.
I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:
if (dom.querySelector('.h-entry h1.p-name')) {
...
}
In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.
Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!
const data = new Object;
Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.
data.url = request.url;
I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.
data.published = dom.querySelector('.h-entry .dt-published').innerText;
The title of the h-entry is in the innerText of the element with a class of p-name.
data.title = dom.querySelector('.h-entry .p-name').innerText;
At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.
browsingHistory.push(data);
My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:
for (const request of keys) {
const response = await cache.match(request);
const html = await response.text();
const dom = parser.parseFromString(html, 'text/html');
if (dom.querySelector('.h-entry h1.p-name')) {
const data = new Object;
data.url = request.url;
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
data.published = dom.querySelector('.h-entry .dt-published').innerText;
data.title = dom.querySelector('.h-entry .p-name').innerText;
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
browsingHistory.push(data);
}
}
That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.
First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.
if (browsingHistory.length) {
...
}
Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.
I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:
browsingHistory.sort( (a,b) => {
return b.timestamp - a.timestamp;
});
Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).
let markup = '<p>But you still have something to read:</p>';
I’m going to add a chunk of markup for each item of data.
browsingHistory.forEach( data => {
markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});
With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.
document.getElementById('history').insertAdjacentHTML('beforeend', markup);
Here’s what my finished JavaScript looks like:
<script>
(async function listPages() {
const browsingHistory = [];
const parser = new DOMParser();
const cache = await caches.open('pages');
const keys = await cache.keys();
for (const request of keys) {
const response = await cache.match(request);
const html = await response.text();
const dom = parser.parseFromString(html, 'text/html');
if (dom.querySelector('.h-entry h1.p-name')) {
const data = new Object;
data.url = request.url;
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
data.published = dom.querySelector('.h-entry .dt-published').innerText;
data.title = dom.querySelector('.h-entry .p-name').innerText;
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
browsingHistory.push(data);
}
}
if (browsingHistory.length) {
browsingHistory.sort( (a,b) => {
return b.timestamp - a.timestamp;
});
let markup = '<p>But you still have something to read:</p>';
browsingHistory.forEach( data => {
markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});
document.getElementById('history').insertAdjacentHTML('beforeend', markup);
}
})();
</script>
I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.
If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.
If you don’t have an offline strategy for your website, there’s a book for that.
Going to Edinburgh. brb
Reading The Science of Storytelling by Will Storr.
There seems to be a tendency to repurpose existing solutions to other people’s problems. I propose that this is the main cause of the design sameness that we encounter on the web (and in apps) today. In our (un)conscious attempts to reduce the effort needed to do our work, we’ve become experts in choosing rather than in thinking.
A very thoughtful piece from Stephen.
When we use existing solutions or patterns, we use a different kind of thinking. Our focus is on finding which pattern will work for us. Too quickly, we turn our attention away from closely examining the problem.
Frank yearns for just-in-time computing:
With each year that goes by, it feels like less and less is happening on the device itself. And the longer our work maintains its current form (writing documents, updating spreadsheets, using web apps, responding to emails, monitoring chat, drawing rectangles), the more unnecessary high-end computing seems. Who needs multiple computers when I only need half of one?
When I liveblogged Jason’s talk at An Event Apart in Chicago, I included this bit of reporting:
Jason proceeds to relate a long and involved story about buying burritos online from Chipotle.
Well, here is that story. It’s a good one, with some practical takeaways (if you’ll pardon the pun):
- Use HTML5 input features
- Support autofill
- Make autofill part of your test plans
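A minimal sketch of what the first two points can look like in practice (the field names are illustrative; the type and autocomplete attributes are what unlock browser autofill):
<label for="email">Email</label>
<input type="email" id="email" name="email" autocomplete="email">
<label for="cc-number">Card number</label>
<input id="cc-number" name="cc-number" inputmode="numeric" autocomplete="cc-number">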
Having a very productive Homebrew Website Club, implementing @rem’s brilliant idea for offline pages:
https://remysharp.com/2019/09/05/offline-listings
h-entry + service worker = offline success!
A look at the ubiquitous computing work that Bret Victor has been doing over the past few years at Dynamicland.
A bit of a tangent, but I love this description of reading maps:
Map reading is a complex and uniquely human skill, not at all obvious to a young child. You float out of your body and into the sky, leaving behind the point of view you’ve been accustomed to all your life. Your imagination turns squiggly blue lines and green shading into creeks, mountains, and forests seen from above. Bringing it all together in your mind’s eye, you can picture the surroundings.
Looking forward to seeing you there, Chris!
❤️
It’s Homebrew Website Club Brighton this evening in the @Clearleft HQ at 6pm:
https://indieweb.org/events/2019-09-19-homebrew-website-club
Come and work on your website (or get some writing done).
Medice, cura te ipsum.
Yes.
Unkind.
Your remark was unkind.
Yes. Unkind.
That is unkind.
Checked in at Jolly Brewer. Fiddles at the ready 🎻 — with Jessica
The transcript of Andy’s talk from this year’s State Of The Browser conference.
I don’t think using scale as an excuse for over-engineering stuff—especially CSS—is acceptable, even for huge teams that work on huge products.
Reading on the beach. 📖 🌊
We choose whether our work stays alive on the internet. As long as we keep our hosting active, our site remains online. Compare that to social media platforms that go public one day and bankrupt the next, shutting down their app and your content along with it.
But the real truth is that as long as we’re putting our work in someone else’s hands, we forfeit our ownership over it. When we create our own website, we own it – at least to the extent that the internet, beautiful in its amorphous existence, can be owned.
When your only tool seems like a smartphone, everything looks like an app.
Amber writes on Ev’s blog about products that deliberately choose to be dependent on smartphone connectivity:
We read service outage stories like these seemingly every week, and have become numb to the fundamental reality: The idea of placing the safety of yourself, your child, or another loved one in the hands of an app dependent on a server you cannot touch, control, or know the status of, is utterly unacceptable.
See you there!
Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.
It was organised by Thomas Madsen-Mygdal. I hadn’t seen Thomas in years, but then, earlier this year, our paths crossed when I was back at CERN for the 30th anniversary of the web. He got a real kick out of the browser recreation project I was part of.
A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.
So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.
“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.
Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.
People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.
The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.
All of this is very useful fodder for a conference talk I’m putting together. This will be a joint talk with Remy at the Fronteers conference in Amsterdam in a couple of weeks. We’re calling the talk How We Built the World Wide Web in Five Days:
The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.
Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.
Click around the site a bit and you’ll find yourself tied to an endless string of hyperlinks, hopping from one page to the next, with no real rhyme or reason to tie them altogether. It is almost pure web id, unleashed structurally to engage your curiosity and make use of the web’s most primal feature: the link.
Drag this to your browser’s bookmark bar now!
Such a useful quick check for resilience—this bookmarklet shows you a side-by-side comparison of a site with JavaScript enabled and disabled.
This is great—thank you!
Yay! Will you be popping down, then?
This is such a great little web book from Chris Ferdinandi that you can read online for free.
I fully expect my personal website to outlive Twitter and as such have decided to take full ownership of the content I’ve posted there. In true IndieWeb fashion, I’m taking ownership of my data.
Given the stellar line-up of @FinchConf next week—@RachelAndrew! @HJ_Chen! @CassieCodes! @vlh! @SaraSoueidan! @LeonieWatson!—I’m amazed there are still tickets available, but there are!
Get in there: https://ti.to/finchconf/ffe
One last Chicago dog before I leave.
Reading The Raven Tower by Ann Leckie.
Suka’s being shy.
Checked in at Cobh / An Cóbh. Hometown — with Jessica
Oh, that is wonderful news!!! I am so happy for you!
Checked in at Kinsale Harbour. with Jessica
The Jevons Paradox in action:
Faster networks should fix our performance problems, but so far, they have had an interesting if unintentional impact on the web. This is because historically, faster network speed has enabled developers to deliver more code to users—in particular, more JavaScript code.
And because it’s JavaScript we’re talking about:
Even if folks are on a new fast network, they’re very likely choking on the code we’re sending, rendering the potential speed improvements of 5G moot.
The longer I spend in this field, the more convinced I am that web performance is not a technical problem; it’s a people problem.
Murphy’s
Going to Cork, like. brb
Checked in at Jolly Brewer. Session — with Jessica
You won’t want to miss the fantastic talk that @CassieCodes is giving at Generate CSS in London on September 26th—and you can get 10% off the ticket price with the code JEREMY10:
I had a very rewarding evening at @CodebarBrighton yesterday working with Claire:
https://twitter.com/wearno13/status/1171531695436075009
Making web pages is kinda awesome!
Aw, thank you, Claire! It was my pleasure!
I’ve seen @rem coding and I can attest that it is indeed livid.
The video of a talk in which Mark discusses pace layers, dogs, and design systems. He concludes:
It’s true many design systems are the blueprints for manufacturing and large scale application. But in almost every instance I can think of, once you move from design to manufacturing the horse has bolted. It’s very difficult to move back into design because the results of the system are in the wild. The more strict the system, the less able you are to change it. That’s why broad principles, just enough governance, and directional examples are far superior to locked-down cookie cutters.
Spent the afternoon over at @TheSkiff scheming with @Rem. I’m biased but I think our joint @FronteersConf talk is gonna be gooood.
https://fronteers.nl/congres/2019/speakers#jeremy-keith
Now onwards to @CogApp for @CodebarBrighton!
W00t! See you there!
The Request Map Generator is a terrific tool. It’s made by Simon Hearne and uses the WebPageTest API.
You pop in a URL, it fetches the page and maps out all the subsequent requests in a nifty interactive diagram of circles, showing how many requests third-party scripts are themselves generating. I’ve found it to be a very effective way of showing the impact of third-party scripts to people who aren’t interested in looking at waterfall diagrams.
I was wondering… Wouldn’t it be great if this were built into browsers?
We already have a “Network” tab in our developer tools. The purpose of this tab is to show requests coming in. The browser already has all the information it needs to make a diagram of requests in the same way that the request map generator does.
In Firefox, there’s a little clock icon in the bottom left corner of the “Network” tab. Clicking that shows a pie-chart view of requests. That’s useful, but I’d love it if there were the option to also see the connected circles that the request map generator shows.
Just a thought.
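In the meantime, you can get part of the way there with the Resource Timing API. Here’s a rough sketch that tallies a page’s requests by origin when run in the browser console (the grouping is deliberately crude, and it won’t see requests made inside cross-origin iframes):
// Tally this page's requests by origin: a crude first step towards a request map.
const entries = performance.getEntriesByType('resource');
const byOrigin = {};
for (const entry of entries) {
    const origin = new URL(entry.name).origin;
    byOrigin[origin] = (byOrigin[origin] || 0) + 1;
}
console.table(byOrigin);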
If you’ve ever had to fix just a few lines of CSS and it took two hours to get an ancient version of Gulp up and running, you know what I’m talking about.
I feel seen.
When everything works, it feels like magic. When something breaks, it’s hell.
I concur with Bastian’s advice:
I have a simple rule of thumb when it comes to programming:
less code === less potential issues
And this observation rings very true:
This dependency hell is also the reason why old projects are almost like sealed capsules. You can hardly let a project lie around for more than a year, because afterwards it’s probably broken.
Get an idea of how much your website is contributing to the climate crisis.
In total, the internet produces 2% of global carbon emissions, roughly the same as that bad boy of climate change, the aviation industry.
A handy tool for tweaking the animations in your SVGs.
We should work toward a universal linked information system, in which generality and portability are more important than fancy graphics techniques and complex extra facilities.
Checked in at Aamans Copenhagen Airport. Smørrebrød for breakfast
I’ve spent the past few days being a booth boy for Project Nexus.
Six UX lessons from game design:
- Story vs Narrative (Think in terms of story arcs)
- Games are fractal (Break up the journey from big to small to tiny)
- Learning loop (figure out your core mechanic)
- Affordances (Prompt for known loops)
- Hintiness (Move to new loops)
- Pacing (Be sure to start here)
What do you mean?
Progressive Web App propaganda.
Remnant.
Sky trap.
Ballardian.
An interesting proposal to allow websites to detect certain SMS messages. The UX implications are fascinating.
See how an Enigma machine works …and interact with it.
Letters to be encrypted enter at the boundary, move through the wire matrix, and exit.
I got an email recently from a young person looking to get into web development. They wanted to know what languages they should start with, whether they should get a Mac or a Windows PC, and what some good places to learn from would be.
I wrote back, saying this about languages:
For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).
And this is what I said about hardware and software:
It doesn’t matter whether you use a Mac or a Windows PC, as long as you’ve got an internet connection, some web browsers (Chrome, Firefox, for example) and a text editor. There are some very good free text editors available for Mac and PC:
For resources, I had a trawl through links I’ve tagged with “learning” and “html” and sent along some links to free online tutorials:
After sending that email, I figured that this list might be useful to anyone else looking to start out in web development. If you know of anyone in that situation, I hope this list might help.
Play around with this variable font available soon from Google Fonts in monospaced and sans-serif versions.
This is brilliant technique by Remy!
If you’ve got a custom offline page that lists previously-visited pages (like I do on my site), you don’t have to choose between localStorage or IndexedDB—you can read the metadata straight from the HTML of the cached pages instead!
This seems forehead-smackingly obvious in hindsight. I’m totally stealing this.
If you’re at Techfestival.co in Copenhagen, drop in to this shipping container where I’ll be demoing WorldWideWeb.cern.ch
Ooh, thank you for those recommendations!
Going to Copenhagen. brb
Eating toast with local Songhive Honey.
I know a number of people who blog as a way to express themselves, for expression’s sake, rather than for anyone else wanting to read it. It’s a great way to have a place to “scream into the void” and share your thoughts.
Checked in at Jolly Brewer. Tunes! — with Jessica
My dealer has me sorted for bees’n’whizz.
🐝
Anyone else going to View Source in Amsterdam coming to Indie Web Camp too?
https://indieweb.org/2019/Amsterdam
@ASpittel @TejasKumar @HJChen @DasSurma @LadyAdaKing @Rumyra @hdv @RachelAndrew @Torgo @MikeTaylr @SeaOtta ?
Doh! Thanks for that—fixed!
If you haven’t done so already, you should really switch to Firefox.
Then encourage your friends and family to switch to Firefox too.
It looks like (a more complex version of) fragmention might be coming to Chrome.
I see—and appreciate—what you’ve done with traintimes.org.uk (and I know what you mean when it feels like pissing in the wind). Keep fighting the good fight!
See also: fragmentions.
https://indieweb.org/fragmention
I’ve got this implemented on adactio.com and ResilientWebDesign.com
The new editorial project from David Byrne, as outlined in his recent Long Now talk.
Through stories of hope, rooted in evidence, Reasons to be Cheerful aims to inspire us all to be curious about how the world can be better, and to ask ourselves how we can be part of that change.
An interesting comparison between Facebook and tenements. Cram everybody together into one social network and the online equivalents of cholera and typhoid soon emerge.
The airless, lightless confines of these networks has a worrying tendency to amplify the most extreme content that takes root, namely that of racists, xenophobes, and conspiracists (which, ironically, includes anti-vaxxers.)
Making the case for moving your navigation to the bottom of the screen on mobile:
Phones are getting bigger, and some parts of the screen are easier to interact with than others. Having the hamburger menu at the top provides too big of an interaction cost, and we have a large number of amazing mobile app designs that utilize the bottom part of the screen. Maybe it’s time for the web design world to start using these ideas on websites as well?
The way you build web pages—using IntersectionObserver, for example—can have a direct effect on the climate emergency.
Webpages can be good citizens of battery life.
It’s important to measure the battery impact in Web Inspector and drive those costs down.
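It’s a reminder that APIs like IntersectionObserver let you stop doing work nobody can see. A minimal sketch, assuming a hypothetical .banner element whose CSS animation only runs while an is-animating class is present:
// Toggle the (hypothetical) is-animating class so the animation
// only runs while the element is actually in the viewport.
const observer = new IntersectionObserver( entries => {
    entries.forEach( entry => {
        entry.target.classList.toggle('is-animating', entry.isIntersecting);
    });
});
observer.observe(document.querySelector('.banner'));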
It would be great to see you there!
Spot the part of the month where I was at sea, cut off from the internet.
The best RSS reader ever is back. This makes me happy.
Ignore the clickbaity headline and have a read of Whitney Kimball’s obituaries of Friendster, MySpace, Bebo, OpenSocial, ConnectU, Tribe.net, Path, Yik Yak, Ello, Orkut, Google+, and Vine.
I’m sure your content on Facebook, Twitter, and Instagram is perfectly safe.
As a resident of Brighton—home to the most beautiful of bandstands—this bit of background to their history is fascinating.
An excellent suggestion!
The ellipsis is the new hamburger.
It’s disappointing that Apple, supposedly a leader in interface design, has resorted to such uninspiring, and I’ll dare say, lazy design in its icons. I don’t claim to be a usability expert, but it seems to me that icons should represent a clear intention, followed by a consistent action.