Tags: micro
Monday, September 26th, 2022
Fermented Code: Modelling the Microbial Through Miso - Serpentine Galleries
Y’know, I started reading this great piece by Claire L. Evans thinking about its connections to systems thinking, but I ended up thinking more about prototyping. And microbes.
Sunday, January 9th, 2022
Friendly Indie micro-publishers
From Patrick Tanguay:
A list of small micro-publishers — most of them run by one person — putting out great content through their websites, newsletters, and podcasts.
Wednesday, December 1st, 2021
Webrise
Prompted by my talk, The State Of The Web, Brian zooms out to get some perspective on how browser power is consolidated.
The web is made of clients and servers. There’s a huge amount of diversity in the server space but there’s very little diversity when it comes to clients because making a browser has become so complex and expensive.
But Brian hopes that this complexity and expense could be distributed amongst a large amount of smaller players.
10 companies agreeing to invest $10k apiece to advance and maintain some area of shared interest is every bit as useful as 1 agreeing to invest $100k generally. In fact, maybe it’s more representative.
We believe that there is a very long tail of increasingly smaller companies who could do something, if only they coordinated to fund it together. The further we stretch this out, the more sources we enable, the more its potential adds up.
Sunday, August 8th, 2021
Browsers
I mentioned recently that there might be quite a difference in tone between my links and my journal here on my website:
’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.
My journal entries have been even more specifically negative of late. I’ve been bitchin’ and moanin’ about web browsers. But at least I’m an equal-opportunities bitcher and moaner.
- Mozilla, I complained about your Facebook Container extension for Firefox.
- Apple, I complained about the ridiculous way Safari’s update cycle is tied to the operating system.
- Google, I complained about the way a breaking change was rolled out in Chrome (and the implications for future breaking changes).
- Microsoft, you got off lightly. But please consider any of my criticisms of Chrome to apply to Edge too, seeing as they’re basically the same now.
I wish my journal weren’t so negative, but my mithering behaviour has been encouraged. On more than one occasion, someone I know at a browser company has taken me aside to let me know that I should blog about any complaints I might have with their browser. It sounds counterintuitive, I know. But these blog posts can give engineers some ammunition to get those issues prioritised and fixed.
So my message to you is this: if there’s something about a web browser that you’re not happy with (or, indeed, if there’s something you’re really happy with), take the time to write it down and publish it.
Publish it on your website. You could post your gripes on Twitter but whinging on Jack’s website is just pissing in the wind. And I suspect you also might put a bit more thought into a blog post on your own site.
I know it’s a cliché to say that browser makers want to hear from developers—and I’m often cynical about it myself—but they really do want to know what we think. Share your thoughts. I’ll probably end up linking to what you write.
Wednesday, July 7th, 2021
Back to the Bad Old Days of the Web – Jorge Arango
We’ve enjoyed a relatively long period when we didn’t have to think about which browser to use. Alas, that period is ending: I must now keep Chrome running all the time, much like I needed that PC in the early 2000s.
Monday, June 28th, 2021
ReCoil
On the Coil developers site there’s a page proudly answering the question “who is web monetized?”
You’ll see some familiar sites in there: CSS Tricks, A List Apart, and even this humble website, adactio.com.
But lest you think that this social proof is in any way an endorsement, I should probably clarify what my experience with Coil has been like.
Coil itself is grand. You get an identifier and you add it to your website in a meta element, much like you would do with indie web endpoints for webmentions or micropub.
The problem is with how you then actually get hold of any money that is owed to you from micropayments. Coil doesn’t handle this directly. You have to set up a “wallet” with a third-party service and therein lies the problem.
They are all terrible.
I’m not talking about the hoops you have to jump through to set up an account. I get it. This is scary financial stuff so of course I’ll need to scan my passport and hand over loads of information (more than is needed to open an actual bank account with, say, Monzo).
No, the problem is the stench of crypto.
I tried Stronghold for a while. They really, really don’t want you to use boring old-fashioned currencies like the euro or the pound. There’s also Gatehub. Same. And there’s Uphold. Also a shell game.
I’ve been using Coil and Uphold for a while now, and I’ve amassed a grand total of £6.06 — woo-hoo! So I log into my account and attempt to transfer that sweet, sweet monetisation and …I can’t.
The amount needs to be greater than or equal to £11.53 GBP
But I can still exchange that £6.06 for magic beans like Bitcoin, XRP, and Ether.
The whole thing smells of grift and it feels icky to be in any way associated with it. I understand why Coil needs to partner with existing payment providers, but it would be nice if just one of them weren’t propping up Ponzi schemes. If anyone has found a way to get web monetisation to work without feeling like you need to take a shower afterwards, I’d love to hear about it.
Monday, December 14th, 2020
History of the Web - YouTube
I really enjoyed this trip down memory lane with Chris:
From the Web’s inception, an ancient to contemporary history of the Web.
Thursday, August 20th, 2020
Star Trek: The Motion Picture | Typeset In The Future
The latest edition in this wonderful series of science-fictional typography has some truly twisty turbolift tangents.
Tuesday, July 14th, 2020
Ariel Waldman: The colorful critter world of microbes in Antarctica | TED Talk
Ariel gave a TED talk and it’s mind-blowingly good!
Thursday, July 9th, 2020
Implementors
The latest newsletter from The History Of The Web is a good one: The Browser Engine That Could. It’s all about the history of browsers and more specifically, rendering engines.
Jay quotes from a 1992 email by Tim Berners-Lee when there was real concern about having too many different browsers. But as history played out, the concern shifted to having too few different browsers.
I wrote about this—back when Edge switched to using Chromium—in a post called Unity where I compared it to political parties:
If you have hundreds of different political parties, that’s not ideal. But if you only have one political party, that’s very bad indeed!
I talked about this some more with Brian and Stuart on the Igalia Chats podcast: Web Ecosystem Health (here’s the mp3 file).
In the discussion we dive deeper into the nuances of browser engine diversity; how it’s not the numbers that matter, but representation. The danger with one dominant rendering engine is that it would reflect one dominant set of priorities.
I think we’re starting to see this kind of battle between different sets of priorities playing out in the browser rendering engine landscape.
WebKit published a list of APIs they won’t be implementing in their current form because of security concerns around fingerprinting. Mozilla is taking the same stand. Google is much more gung-ho about implementing those APIs.
I think it’s safe to say that every implementor wants to ship powerful APIs and ensure security and privacy. The issue is with which gets priority. Using the language of principles and priorities, you could crudely encapsulate Apple and Mozilla’s position as:
Privacy, even over capability.
That design principle would pass the reversibility test. In fact, Google’s position might be represented as:
Capability, even over privacy.
I’m not saying Apple and Mozilla don’t value powerful APIs. I’m not saying Google doesn’t value privacy. I’m saying that Google’s priorities are different to Apple’s and Mozilla’s.
Alas, Alex is saying that Apple and Mozilla don’t value capability:
There is a contingent of browser vendors today who do not wish to expand the web platform to cover adjacent use-cases or meaningfully close the relevance gap that the shift to mobile has opened.
That’s very disappointing. It’s a cheap shot. As cheap as saying that, given Google’s business model, Chrome wouldn’t want to expand the web platform to provide better privacy and security.
Monday, June 15th, 2020
Igalia Chats: Web Ecosystem Health with Jeremy Keith and Stuart Langridge
Myself and Stuart had a chat with Brian about browser engine diversity.
Here’s the audio file if you’d like to huffduff it.
Friday, May 15th, 2020
New PDF Preview, Better Web Publishing, Improved Editing - iA Writer: The Focused Writing App
I think this one single feature is going to get me to switch to iA Writer:
For starters, we added Micropub support. This means you can publish to Micro.blog and other IndieWeb tools.
Monday, February 3rd, 2020
Old CSS, new CSS / fuzzy notepad
I absolutely love this in-depth history of the web, written in a snappy, snarky tone.
In the beginning, there was no CSS.
This was very bad.
Even if you—like me—lived through all this stuff, I guarantee there’ll still be something in here you didn’t know.
Tuesday, January 7th, 2020
Life Under The Ice
Here’s the latest wonderful project from Ariel—explore microscopic specimens from Antarctica:
The collected Antarctic microbes were found living within glaciers, under the sea ice, next to frozen lakes, and in subglacial ponds.
Beautiful!
Tuesday, October 22nd, 2019
The IndieWeb Movement: Owning Your Data and Being the Change You Want to See in the Web · Jamie Tanna
A great introduction to indie web building blocks from Jamie.
Wednesday, October 2nd, 2019
Same-Site Cookies By Default | text/plain
This is good news. I have third-party cookies disabled in my browser, and I’m very happy that it will become the default.
It’s hard to believe that we ever allowed third-party cookies and scripts in the first place. Between them, they’re responsible for the worst ills of the World Wide Web.
Saturday, September 21st, 2019
Going offline with microformats
For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.
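Just to make the contrast clear, here’s a rough sketch of that older pattern. The details are illustrative rather than my actual code: each page stashes its own metadata in localStorage when it loads, while the service worker separately takes care of caching the page itself.
// A rough sketch of the old approach (illustrative, not my actual code):
// the page stores its own metadata in localStorage when it loads,
// while the service worker separately caches the page with the Cache API.
window.addEventListener('load', () => {
  const descriptionMeta = document.querySelector('meta[name="description"]');
  const metadata = {
    title: document.title,
    description: descriptionMeta ? descriptionMeta.getAttribute('content') : ''
  };
  localStorage.setItem(window.location.href, JSON.stringify(metadata));
});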
It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:
I realised I didn’t need to store anything. HTML is the API.
Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.
Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.
In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.
So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.
Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.
The markup for my offline page looks like this:
<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>
I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.
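For context, here’s a much-simplified sketch of how a cache like that might get populated in a service worker. This isn’t the actual service worker script running on this site, just the general shape of a network-first strategy that stashes a copy of each HTML page as it’s fetched:
// A simplified sketch (not this site's actual service worker): fetch pages
// from the network and keep a copy of each one in the "pages" cache,
// falling back to that cache when the network isn't available.
addEventListener('fetch', fetchEvent => {
  const request = fetchEvent.request;
  if (request.method === 'GET' && (request.headers.get('Accept') || '').includes('text/html')) {
    fetchEvent.respondWith(
      fetch(request)
      .then(responseFromFetch => {
        const copy = responseFromFetch.clone();
        caches.open('pages')
        .then(pagesCache => pagesCache.put(request, copy));
        return responseFromFetch;
      })
      .catch(() => caches.match(request))
    );
  }
});
Anyway, back to the code for the offline page itself.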
I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.
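Just to illustrate what I mean, here’s roughly what those same steps would look like with chained thens instead. This is a sketch to show the nesting, not code I’m actually using:
// Roughly the same steps with chained promises instead of async/await
// (a sketch to show the nesting, not code I'm actually using).
caches.open('pages')
.then(cache => {
  cache.keys()
  .then(keys => {
    keys.forEach(request => {
      cache.match(request)
      .then(response => {
        response.text()
        .then(html => {
          // …and we're already this deeply indented before parsing the html string.
        });
      });
    });
  });
});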
Async functions don’t have to have a name, but I’m giving this one a name anyway: listPages, just like Remy is doing. I’m making the listPages function execute immediately:
(async function listPages() {
...
})();
Now for the code to go inside that immediately-invoked function.
I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.
const browsingHistory = [];
I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.
const parser = new DOMParser();
Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).
const cache = await caches.open('pages');
Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!
const keys = await cache.keys();
Time to get looping. I’m getting each request in the list of keys using a for/of loop:
for (const request of keys) {
...
}
Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.
const response = await cache.match(request);
I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.
const html = await response.text();
Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.
const dom = parser.parseFromString(html, 'text/html');
Now I’ve got a DOM, which I have creatively stored in a variable called …dom.
I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:
if (dom.querySelector('.h-entry h1.p-name')) {
...
}
In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.
Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!
const data = new Object;
Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.
data.url = request.url;
I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.
data.published = dom.querySelector('.h-entry .dt-published').innerText;
The title of the h-entry is in the innerText of the element with a class of p-name.
data.title = dom.querySelector('.h-entry .p-name').innerText;
At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.
browsingHistory.push(data);
My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:
for (const request of keys) {
const response = await cache.match(request);
const html = await response.text();
const dom = parser.parseFromString(html, 'text/html');
if (dom.querySelector('.h-entry h1.p-name')) {
const data = new Object;
data.url = request.url;
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
data.published = dom.querySelector('.h-entry .dt-published').innerText;
data.title = dom.querySelector('.h-entry .p-name').innerText;
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
browsingHistory.push(data);
}
}
That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.
First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.
if (browsingHistory.length) {
...
}
Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.
I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:
browsingHistory.sort( (a,b) => {
return b.timestamp - a.timestamp;
});
Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).
let markup = '<p>But you still have something to read:</p>';
I’m going to add a chunk of markup for each item of data.
browsingHistory.forEach( data => {
markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});
With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.
document.getElementById('history').insertAdjacentHTML('beforeend', markup);
Here’s what my finished JavaScript looks like:
<script>
(async function listPages() {
const browsingHistory = [];
const parser = new DOMParser();
const cache = await caches.open('pages');
const keys = await cache.keys();
for (const request of keys) {
const response = await cache.match(request);
const html = await response.text();
const dom = parser.parseFromString(html, 'text/html');
if (dom.querySelector('.h-entry h1.p-name')) {
const data = new Object;
data.url = request.url;
data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
data.published = dom.querySelector('.h-entry .dt-published').innerText;
data.title = dom.querySelector('.h-entry .p-name').innerText;
data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
browsingHistory.push(data);
}
}
if (browsingHistory.length) {
browsingHistory.sort( (a,b) => {
return b.timestamp - a.timestamp;
});
let markup = '<p>But you still have something to read:</p>';
browsingHistory.forEach( data => {
markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});
document.getElementById('history').insertAdjacentHTML('beforeend', markup);
}
})();
</script>
I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.
If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.
If you don’t have an offline strategy for your website, there’s a book for that.
Friday, July 19th, 2019
Micro Frontends
Chris succinctly describes the multiple-iframes-with-multiple-codebases approach to web development, AKA “micro frontends”:
The idea really is that you might build a React app and I build a Vue app and we’ll slap ‘em together on the same page. I definitely come from an era where we laughed-then-winced when we found sites that used multiple versions of jQuery on the same page, plus one thing that loaded all of MooTools and Prototype thrown on there seemingly by accident. We winced because that was a bucket full of JavaScript, mostly duplicated for no reason, causing bugs and slowing down the page. This doesn’t seem all that much different.
Tuesday, July 16th, 2019
How to Kill IE11 - What the Deaths of IE6 and IE8 Tell Us About Killing IE | Mike Sherov
An interesting look at the mortality causes for Internet Explorer 6 and Internet Explorer 8, and what they can tell us for the hoped-for death of Internet Explorer 11.
I disagree with the conclusion (that we should actively block IE11—barring any good security reasons, I don’t think that’s defensible), but I absolutely agree that we shouldn’t be shipping polyfills in production just for IE11. Give it your HTML. Give it your CSS. Withhold modern JavaScript. If you’re building with progressive enhancement (and you are, right?), then giving IE11 users a sub-par experience is absolutely fine …it’s certainly better than blocking them completely.
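One way to withhold modern JavaScript from Internet Explorer 11 is good old feature detection, AKA cutting the mustard: only browsers that pass the tests ever request your JavaScript. Here’s a sketch; the specific feature tests and the file name are illustrative, so adjust them to whatever your code actually relies on:
// A sketch of "cutting the mustard": only browsers that pass these feature
// tests (which IE11 won't) get the JavaScript enhancements. The tests and
// the file name are illustrative; swap in whatever your code needs.
if ('fetch' in window && 'Promise' in window && 'assign' in Object) {
  var script = document.createElement('script');
  script.src = '/js/enhancements.js';
  document.head.appendChild(script);
}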