Tags: server

Friday, July 31st, 2020

Pinboard is Eleven (Pinboard Blog)

I probably need to upgrade the Huffduffer server but Maciej nails why that’s an intimidating prospect:

Doing this on a live system is like performing kidney transplants on a playing mariachi band. The best case is that no one notices a change in the music; you chloroform the players one at a time and try to keep a steady hand while the band plays on. The worst case scenario is that the music stops and there is no way to unfix what you broke, just an angry mob. It is very scary.

Monday, July 27th, 2020

Is my host fast yet?

This is an interesting project to try to rank web hosts by performance:

Real-world server response (Time to First Byte) latencies, as experienced by real-world users navigating the web.

Saturday, July 18th, 2020

Your blog doesn’t need a JavaScript framework /// Iain Bean

If the browser needs to parse 296kb of JavaScript to show a list of blog posts, that’s not Progressive Enhancement, it’s using the wrong tool for the job.

A good explanation of the hydration problem in tools like Gatsby.

JavaScript is a powerful language that can do some incredible things, but it’s incredibly easy to jump to using it too early in development, when you could be using HTML and CSS instead.

Thursday, April 9th, 2020

Plumbing

On Monday, I linked to Tom’s latest video. It uses a clever trick whereby the title of the video is updated to match the number of views the video has had. But there’s a lot more to the video than that. Stick around and you’ll be treated to a meditation on the changing nature of APIs, from a shared open lake to a closed commercial drybed.
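
I don’t know exactly how Tom wired it up, but with the YouTube Data API the trick could look something like this. A sketch using the googleapis Node client, with the video ID, the OAuth setup, and the title format all standing in as placeholders:

    const { google } = require('googleapis');

    // A guess at the mechanics: read the current view count, then write
    // it back into the video's title. Updates have to send the whole
    // snippet back, so fetch it first.
    async function updateTitle(auth, videoId) {
      const youtube = google.youtube({ version: 'v3', auth });

      const { data } = await youtube.videos.list({
        id: [videoId],
        part: ['snippet', 'statistics'],
      });
      const video = data.items[0];
      video.snippet.title = `This Video Has ${video.statistics.viewCount} Views`;

      await youtube.videos.update({
        part: ['snippet'],
        requestBody: { id: videoId, snippet: video.snippet },
      });
    }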

It reminds me of (other) Tom’s post from a couple of years ago called Pouring one out for the Boxmakers, wherein he talks about Twitter’s crackdown on fun bots:

Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things… have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That’s it: back to single-serving websites for single-serving use cases.

A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it’s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.

Both Toms echo the sentiment in Anil’s The Web We Lost, written back in 2012:

Five years ago, if you wanted to show content from one site or app on your own site or app, you could use a simple, documented format to do so, without requiring a business-development deal or contractual agreement between the sites. Thus, user experiences weren’t subject to the vagaries of the political battles between different companies, but instead were consistently based on the extensible architecture of the web itself.

I know, I know. We’re a bunch of old men shouting at The Cloud. But really, Anil is right:

This isn’t our web today. We’ve lost key features that we used to rely on, and worse, we’ve abandoned core values that used to be fundamental to the web world. To the credit of today’s social networks, they’ve brought in hundreds of millions of new participants to these networks, and they’ve certainly made a small number of people rich.

But they haven’t shown the web itself the respect and care it deserves, as a medium which has enabled them to succeed. And they’ve now narrowed the possibilities of the web for an entire generation of users who don’t realize how much more innovative and meaningful their experience could be.

In his video, Tom mentions Yahoo Pipes as an example of a service that has been shut down for commercial and ideological reasons. In many ways, it was the epitome of what Anil was talking about—a sort of meta-API that allowed you to connect different services together. Kinda like IFTTT but with a visual interface that made it as empowering as something like the Scratch programming language.

There are services today that provide some of that functionality, but they’re more developer-focused. Trys pointed me to Pipedream, which looks good but you need to know how to write Node.js code and import npm packages. I’m sure it’s great if you’re into serverless Jamstack lambda thingamybobs but I don’t think it’s going to unlock the potential for non-coders to create cool stuff.

On the more visual pipes-esque Scratchy side, Cassie pointed me to Cables:

Cables is a tool for creating beautiful interactive content.

It isn’t about making mashups, but it does look like something that non-coders could potentially use to make something that looks cool. It reminds me a bit of Bret Victor and his classic talk on Inventing On Principle—always worth revisiting!

Monday, April 6th, 2020

Local-first software: You own your data, in spite of the cloud

The cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds?

We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.

This is a very in-depth look at the mindset and the challenges involved in building truly local-first software—something that Tantek has also been thinking about.

Sunday, March 8th, 2020

The 3 Laws of Serverless - Burke Holland

“Serverless”, is a buzzword. We can’t seem to agree on what it actaully means, so it ends up meaning nothing at all. Much like “cloud” or “dynamic” or “synergy”. You just wait for the right time in a meeting to drop it, walk to the board and draw a Venn Diagram, and then just sit back and wait for your well-deserved promotion.

That’s very true, and I do not like the term “serverless” for the rather obvious reason that it’s all about servers (someone else’s servers, that is). But these three principles are handy for figuring out if you’re building with in a serverlessy kind of way:

  1. You have no knowledge of the underlying system where your code runs.
  2. Scaling is an intrinsic attribute of the technology; so much so that it just happens automatically.
  3. You only pay for what you use.

Abstraction; scale; consumption.

Tuesday, March 3rd, 2020

Visitors, Developers, or Machines

Garrett’s observation is spot-on here:

I’ve been trying to understand the appeal of these frameworks by giving them an objective chance. I’ve expanded my knowledge of JavaScript and tried to give them the benefit of the doubt. They do have their places, but the only explanation I can come up with is that developers are taking a similar approach as Ruby and focusing on developer convenience and productivity. Only, instead of Ruby’s performance being tied to the CPU level, JavaScript frameworks push the performance burden to the client.

In both cases, the tradeoff happens in the name of developer happiness and productivity, but the strategies have entirely different consequences. With Ruby, the CPU is still (mostly) the responsibility of the development team, and it can be upgraded. With JavaScript, the page weight becomes an externality pushed onto visitors.

Monday, February 3rd, 2020

N26 and lack of JavaScript | Hugo Giraudel

JavaScript is fickle. It can fail to load. It can be disabled. It can be blocked. It can fail to run. It probably is fine most of the time, but when it fails, everything tends to go bad. And having such a hard point of failure is not ideal.

This is a very important point:

It’s important not to try making the no-JS experience work like the full one. The interface has to be revisited. Some features might even have to be removed, or dramatically reduced in scope. That’s also okay. As long as the main features are there and things work nicely, it should be fine that the experience is not as polished.
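
As a sketch of what that looks like in practice: a plain HTML form that works on its own, with JavaScript layered on top only when it’s available. The selector and the success message here are purely illustrative:

    // Enhance a plain HTML form that already works without JavaScript.
    // (Assumes a POST form; a failed enhancement falls back to the
    // regular full-page submission.)
    const form = document.querySelector('form.contact');
    if (form && 'fetch' in window) {
      form.addEventListener('submit', async (event) => {
        event.preventDefault();
        try {
          const response = await fetch(form.action, {
            method: form.method,
            body: new FormData(form),
          });
          if (!response.ok) throw new Error(response.statusText);
          form.insertAdjacentHTML('afterend', '<p>Thanks! Message sent.</p>');
        } catch {
          form.submit();
        }
      });
    }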

Thursday, December 12th, 2019

The Server Souvenir: Taking Home Remnants of Virtual Worlds | Platypus

When the game developer Blizzard Entertainment decommissioned some of their server blades to be auctioned off, they turned them into commemorative commodities, adding an etching onto the metal frame with the server’s name (e.g., “Proudmoore” or “Darkspear”), its dates of operation, and an inscription: “within the circuits and hard drive, a world of magic, adventure, and friendship thrived… this server was home to thousands of immersive experiences.” While stripped of their ability to store virtual memory or connect people to an online game world, these servers were valuable and meaningful as worlds and homes. They became repositories of social and spatial memory, souvenirs from WoW.

Monday, December 2nd, 2019

Just enough Internet | doteveryone

The carbon cost of collecting and storing data no one can use is already a moral issue.

So before you add another field, let alone make a new service, can you be sure it will make enough of a difference to legitimise its impact on the planet?

Tuesday, November 19th, 2019

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Tuesday, September 3rd, 2019

How Web Content Can Affect Power Usage | WebKit

The way you build web pages—using IntersectionObserver, for example—can have a direct effect on the climate emergency.

Webpages can be good citizens of battery life.

It’s important to measure the battery impact in Web Inspector and drive those costs down.
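
The crux of the IntersectionObserver advice is to let the browser tell you when something is on screen instead of constantly polling scroll position, so the CPU can stay idle in between. A small sketch, with invented class names:

    // Only animate sections while they're actually visible; pause the
    // work (and spare the battery) when they scroll out of view.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        entry.target.classList.toggle('is-animating', entry.isIntersecting);
      }
    });
    document.querySelectorAll('.animated-section').forEach((section) => {
      observer.observe(section);
    });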

Saturday, August 10th, 2019

Server Timing

Harry wrote a really good article all about the performance measurement Time To First Byte. Time To First Byte: What It Is and Why It Matters:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Time To First Byte has been the chink in my armour over at thesession.org, especially on the home page. Every time I ran Lighthouse, or some other performance testing tool, I’d get a high score …with some points deducted for taking too long to get that first byte from the server.
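
If you want to see that number for yourself, the browser will tell you. A quick check with the standard Navigation Timing API, pasted into the console:

    // responseStart: milliseconds from the start of navigation until
    // the first byte of the response arrived. That's your TTFB.
    const [navigation] = performance.getEntriesByType('navigation');
    if (navigation) {
      console.log(`TTFB: ${Math.round(navigation.responseStart)}ms`);
    }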

Harry’s proposed solution is to set up some Server Timing headers:

With a little bit of extra work spent implementing the Server Timing API, we can begin to measure and surface intricate timings to the front-end, allowing web developers to identify and debug potential bottlenecks previously obscured from view.

I remembered that Drew wrote an excellent article on Smashing Magazine last year called Measuring Performance With Server Timing:

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

He even provides some PHP code, which I was able to take wholesale and drop into the codebase for thesession.org. Then I was able to put start/stop points in my code for measuring how long some operations were taking. Then I could output the results of these measurements into Server Timing headers that I could inspect in the “Network” tab of a browser’s dev tools (Chrome is particularly good for displaying Server Timing, so I used that while I was conducting this experiment).
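
My actual implementation is Drew’s PHP, but the pattern itself is small enough to sketch in any language. Something like this in Node—the timing names and stubbed-out queries are placeholders, not my real code:

    // Stand-ins for the real database queries.
    const fetchRecentTunes = () => new Promise((done) => setTimeout(() => done([]), 50));
    const fetchRecentDiscussions = () => new Promise((done) => setTimeout(() => done([]), 30));

    const http = require('http');

    http.createServer(async (request, response) => {
      const timings = [];

      // Wrap an operation in start/stop points and record its duration.
      const measure = async (name, operation) => {
        const started = process.hrtime.bigint();
        const result = await operation();
        const ms = Number(process.hrtime.bigint() - started) / 1e6;
        timings.push(`${name};dur=${ms.toFixed(1)}`);
        return result;
      };

      const tunes = await measure('db-tunes', fetchRecentTunes);
      const discussions = await measure('db-discussions', fetchRecentDiscussions);

      // One header carries all the measurements; they show up in the
      // "Network" tab of the browser's dev tools.
      response.setHeader('Server-Timing', timings.join(', '));
      response.end(`<p>${tunes.length} tunes, ${discussions.length} discussions</p>`);
    }).listen(8080);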

I started with overall database requests. Sure enough, that was where most of the time in time-to-first-byte was being spent.

Then I got more granular. I put start/stop points around specific database calls. By doing this, I was able to zero in on which operations were particularly costly. Once I had done that, I had to figure out how to make the database calls go faster.

Spoiler: I did it by adding an extra index on one particular table. It’s almost always indexes, in my experience, that make the biggest difference to database performance.

I don’t know why it took me so long to get around to messing with Server Timing headers. It has paid off in spades. I wish I had done it sooner.

And now thesession.org is positively zipping along!

Friday, August 9th, 2019

Time to First Byte: What It Is and Why It Matters by Harry Roberts

Harry takes a deep dive into the performance metric of “time to first byte”, or TTFB if you’re using initialisms that take as long to say as the thing they’re abbreviating.

This makes a great companion piece to Drew’s article on server timing headers.

Thursday, August 1st, 2019

Ooops, I guess we’re full-stack developers now.

Chris broke both his arms just to avoid speaking at the JAMstack conference in London. Seems a bit extreme to me.

Anyway, to make up for not being there, he made a website of his talk. It’s good stuff, tackling the split.

It’s cool to see the tech around our job evolve to the point that we can reach our arms around the whole thing. It’s worthy of some concern when the complication of web technology feels like it’s raising the barrier to entry.

Sunday, June 16th, 2019

The Crushing Weight of the Facepile—zachleat.com

Using IntersectionObserver to lazy load images—very handy for webmention avatars.
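
The technique itself is pleasingly small: keep the real URL in a data attribute and only promote it to src when the image approaches the viewport. A sketch of the general idea, with made-up selectors:

    // Load each avatar once it scrolls into view, then stop watching it.
    const lazyLoader = new IntersectionObserver((entries, observer) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.dataset.src;
          observer.unobserve(entry.target);
        }
      }
    });
    document.querySelectorAll('img[data-src]').forEach((img) => {
      lazyLoader.observe(img);
    });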

Monday, June 10th, 2019

Making QR codes with cloud functions • tommorris.org

Tom makes an endpoint for generating QR codes so you don’t have to rely on the Google Charts API.

He also provides a good definition of “serverless”:

Now, serverless is a very silly buzzword dreamed up by someone from the consultant class who love coming up with terrible names, so I promise I won’t use it any further. Your code obviously runs on a server. It just means it runs on a server someone else manages.

Amazon call it a ‘Lambda Function’. Google call it a ‘Cloud Function’. Microsoft Azure call it simply a ‘Function’. But none of those are very descriptive, because, well, anyone who writes any kind of programming language generally writes functions pretty much all the time in much the same way as anyone who writes English writes paragraphs, and we don’t call our blogging software “Cloud Paragraphs”. (Someone will now, I’m guessing.)
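
For what it’s worth, such a function can be very small indeed. Here’s a sketch of a QR endpoint using the qrcode package from npm, shaped like an AWS Lambda handler—I don’t know what Tom’s actual code looks like, and the event format here is an assumption:

    const QRCode = require('qrcode');

    // GET /?text=hello -> a 300px PNG of a QR code for "hello".
    exports.handler = async (event) => {
      const text = event.queryStringParameters && event.queryStringParameters.text;
      if (!text) {
        return { statusCode: 400, body: 'Missing "text" parameter' };
      }
      const png = await QRCode.toBuffer(text, { type: 'png', width: 300 });
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'image/png' },
        body: png.toString('base64'),
        isBase64Encoded: true, // Lambda-speak for "this body is binary"
      };
    };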

Thursday, May 2nd, 2019

The Web Developer’s Guide to DNS | RJ Zaworski

At Codebar the other night, I was doing an intro chat with some beginners. At one point I touched on DNS. This explanation is great for detailing what’s going on under the hood.
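
If you’d rather poke at DNS from code than from diagrams, Node’s built-in dns module makes the moving parts visible (example.com chosen purely for illustration):

    const dns = require('dns').promises;

    async function inspect(domain) {
      const a = await dns.resolve4(domain);   // A records: IPv4 addresses
      const mx = await dns.resolveMx(domain); // MX records: mail servers
      const ns = await dns.resolveNs(domain); // NS records: name servers
      console.log({ a, mx, ns });
    }

    inspect('example.com');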

Friday, April 12th, 2019

Disenchantment - Tim Novis

I would urge front-end developers to take a step back, breathe, and reassess. Let’s stop over-engineering for the sake of it. Let’s think what we can do with the basic tools, progressive enhancement and a simpler approach to building websites. There are absolutely valid use cases for SPAs, React, et al. and I’ll continue to use these tools regularly and when it’s necessary, I’m just not sure that’s 100% of the time.