Tags: backend

Tuesday, November 19th, 2019

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust, scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Monday, October 21st, 2019

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.
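
Boiled down, the approach looks something like this (a simplified sketch rather than the actual script linked above; the h-geo markup shape, the CDN URLs, and the tile URL are all placeholders):

    // A simplified sketch, not the actual script linked above.
    // Assumes h-geo markup along these lines (the real markup may differ):
    //   <span class="h-geo">
    //     <data class="p-latitude" value="50.8225"></data>
    //     <data class="p-longitude" value="-0.1372"></data>
    //   </span>
    window.addEventListener('load', function () {
      const points = Array.from(document.querySelectorAll('.h-geo'), function (el) {
        return [
          parseFloat(el.querySelector('.p-latitude').getAttribute('value')),
          parseFloat(el.querySelector('.p-longitude').getAttribute('value'))
        ];
      });
      if (points.length < 3) return; // only draw a map if there's a journey to show

      // Lazy-load Leaflet's stylesheet and script only when they're needed.
      const style = document.createElement('link');
      style.rel = 'stylesheet';
      style.href = 'https://unpkg.com/leaflet/dist/leaflet.css';
      document.head.appendChild(style);

      const script = document.createElement('script');
      script.src = 'https://unpkg.com/leaflet/dist/leaflet.js';
      script.onload = function () {
        const container = document.createElement('div');
        container.style.height = '20em';
        document.body.appendChild(container); // in reality, inserted somewhere more sensible

        const map = L.map(container, { scrollWheelZoom: false });
        // Placeholder tile URL: swap in a real tile server (Stamen's watercolour tiles, say).
        L.tileLayer('https://tiles.example.com/watercolor/{z}/{x}/{y}.jpg', {
          attribution: 'Map tiles by Stamen Design'
        }).addTo(map);
        const line = L.polyline(points, { color: '#cc0000' }).addTo(map);
        map.fitBounds(line.getBounds(), { padding: [20, 20] });
      };
      document.head.appendChild(script);
    });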

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
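
For anyone curious, those eleven encoding steps boil down to something like this JavaScript sketch (purely illustrative; the code that actually worked for me was PHP):

    // A sketch of the Encoded Polyline Algorithm Format in JavaScript,
    // just to make the eleven steps concrete (illustrative, not production code).

    // Encode one scaled integer (a latitude or longitude delta already multiplied by 1e5).
    function encodeSigned(num) {
      // Left-shift one bit; invert the bits if the original value was negative.
      let value = num < 0 ? ~(num << 1) : (num << 1);
      let output = '';
      // Emit 5-bit chunks, least-significant first. OR each chunk with 0x20
      // while more chunks follow, then add 63 and convert to an ASCII character.
      while (value >= 0x20) {
        output += String.fromCharCode((0x20 | (value & 0x1f)) + 63);
        value >>= 5;
      }
      return output + String.fromCharCode(value + 63);
    }

    // Each point is stored as an offset from the previous point.
    function encodePolyline(points) { // points: array of [latitude, longitude]
      let prevLat = 0, prevLon = 0, encoded = '';
      for (const [lat, lon] of points) {
        const lat5 = Math.round(lat * 1e5);
        const lon5 = Math.round(lon * 1e5);
        encoded += encodeSigned(lat5 - prevLat) + encodeSigned(lon5 - prevLon);
        prevLat = lat5;
        prevLon = lon5;
      }
      return encoded;
    }

    // The worked example from the documentation:
    // encodePolyline([[38.5, -120.2], [40.7, -120.95], [43.252, -126.453]])
    // should produce "_p~iF~ps|U_ulLnnqC_mqNvxq`@"

That encoded string is what ends up, suitably URL-encoded, in the static map image URL.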

I’ve put these maps on all of the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Thursday, August 1st, 2019

Ooops, I guess we’re full-stack developers now.

Chris broke both his arms just to avoid speaking at the JAMstack conference in London. Seems a bit extreme to me.

Anyway, to make up for not being there, he made a website of his talk. It’s good stuff, tackling the split.

It’s cool to see the tech around our job evolve to the point that we can reach our arms around the whole thing. It’s worthy of some concern when the complication of web technology feels like it’s raising the barrier to entry.

Saturday, April 13th, 2019

Sergey | the little SSG

Trys has made YASSG—Yet Another Static Site Generator. It’s called Sergey (like SSG, see?) and it does just one thing: it allows you to include chunks of markup. It’s Apache Server Side Includes all over again!

Kick the tyres and see what you think.

Monday, March 25th, 2019

Stuffing the Front End

53% of mobile visits leave a page that takes longer than 3 seconds to load. That means that a large number of visitors probably abandoned these sites because they were staring at a blank screen for 3 seconds, said “fuck it,” and left. Approximately half. Way before the page showed up. The fact that the next page interaction would have been quicker—assuming all the JS files even downloaded correctly in the first attempt—doesn’t amount to much if they didn’t stick around for the first page to load. What was gained by putting the business logic in the front end in this scenario?

Wednesday, March 6th, 2019

The “Backendification” of Frontend Development – Hacker Noon

Are many of the modern frontend tools and practices just technical debt in disguise?

Ooh, good question!

Tuesday, May 29th, 2018

Web Push Notifications Demo | Microsoft Edge Demos

Push notifications explained using astrology. But don’t worry, there’s also some code, just in case you prefer your explanations to also include models that actually work.

Friday, May 18th, 2018

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated and worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bare that in mind when helping others find their own path through the knotted weeds of JavaScript.

Thursday, May 3rd, 2018

Building Progressive Web Apps | nearForm

It is very disheartening to read misinformation like this:

A progressive web app is an enhanced version of a Single Page App (SPA) with a native app feel.

To quote from The Last Jedi, “Impressive. Everything you just said in that sentence is wrong.”

But once you get over that bit of misinformation at the start, the rest of this article is a good run-down of planning and building a progressive web app using one possible architectural choice—the app shell model. Other choices are available.
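
For anyone unfamiliar, the app shell idea boils down to pre-caching the interface “shell” in a service worker and fetching the content separately. A bare-bones sketch (with made-up asset paths):

    // The gist of the app-shell model: pre-cache the UI shell on install,
    // then serve it from the cache. Paths here are illustrative.
    const SHELL_CACHE = 'shell-v1';
    const SHELL_ASSETS = ['/', '/app.css', '/app.js', '/offline.html'];

    self.addEventListener('install', function (event) {
      event.waitUntil(
        caches.open(SHELL_CACHE).then(function (cache) {
          return cache.addAll(SHELL_ASSETS);
        })
      );
    });

    self.addEventListener('fetch', function (event) {
      // Cached shell first, falling back to the network for everything else.
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request);
        })
      );
    });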

Tuesday, March 20th, 2018

Brendan Dawes - Post from Instagram to Kirby

Brendan shows how he uses IFTTT and a webhook to post to his own site from Instagram. I think I might set up something similar to post from Untappd to my own site.
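
The receiving end of a webhook like that can be pretty small. Here’s a rough sketch of the idea in Node (a generic illustration, not Brendan’s actual Kirby/PHP setup; the field names and secret header are assumptions):

    // A generic webhook receiver: IFTTT POSTs some JSON, we write a post to disk.
    const express = require('express');
    const fs = require('fs');

    const app = express();
    app.use(express.json());

    app.post('/webhook/instagram', function (req, res) {
      // A shared secret keeps strangers from posting to your site.
      if (req.get('X-Webhook-Token') !== process.env.WEBHOOK_TOKEN) {
        return res.sendStatus(403);
      }
      // Field names are assumptions: IFTTT lets you define the JSON body yourself.
      const { caption = '', url } = req.body;
      const slug = Date.now().toString();
      fs.writeFileSync(`content/photos/${slug}.md`, `# ${caption}\n\n![photo](${url})\n`);
      res.sendStatus(201);
    });

    app.listen(3000);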

Tuesday, February 27th, 2018

Let’s talk about usernames

This post goes into specifics on Django, but the broader points apply no matter what your tech stack. I’m relieved to find out that The Session is using the tripartite identity pattern (although Huffduffer, alas, isn’t):

What we really want in terms of identifying users is some combination of:

  1. System-level identifier, suitable for use as a target of foreign keys in our database
  2. Login identifier, suitable for use in performing a credential check
  3. Public identity, suitable for displaying to other users

Many systems ask the username to fulfill all three of these roles, which is probably wrong.
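
That separation is easy to picture as a data shape. A minimal sketch (the field names are illustrative, not from the post):

    // Three separate identifiers, each doing one job.
    const user = {
      id: 42,                      // 1. system-level identifier: immutable, used for foreign keys
      email: 'alice@example.com',  // 2. login identifier: used for the credential check
      displayName: 'Alice',        // 3. public identity: shown to other users, free to change
    };

    // Relationships point at the immutable id, so changing a login email or
    // display name never breaks anything elsewhere in the database.
    const comment = { authorId: user.id, text: 'Hello, world' };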

Friday, January 26th, 2018

The Power of Serverless

Chris has set up a whole site dedicated to someone-else’s-server sites with links to resources and services (APIs), along with ideas of what you could build in this way.

Here’s one way to think about it: you can take your front-end skills and do things that typically only a back-end can do. You can write a JavaScript function that you run and receive a response from by hitting a URL. That’s sometimes also called Cloud Functions or Functions as a Service, which are perhaps better names, but just a part of the whole serverless thing.
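
The shape of the thing, as a rough sketch (the handler signature here is in the AWS Lambda style; other providers differ in the details):

    // One JavaScript function, deployed behind a URL. Hit the URL, get a response.
    exports.handler = async function (event) {
      // This is where the "back-end" work would happen: calling an API,
      // talking to a database-as-a-service, sending an email...
      const name = (event.queryStringParameters && event.queryStringParameters.name) || 'world';
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ greeting: `Hello, ${name}!` })
      };
    };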

Tuesday, October 31st, 2017

Netflix functions without client-side React, and it’s a good thing - JakeArchibald.com

A great bucketload of common sense from Jake:

Rather than copying bad examples from the history of native apps, where everything is delivered in one big lump, we should be doing a little with a little, then getting a little more and doing a little more, repeating until complete. Think about the things users are going to do when they first arrive, and deliver that. Especially consider those most-likely to arrive with empty caches.

And here’s a good way of thinking about that:

I’m a fan of progressive enhancement as it puts you in this mindset. Continually do as much as you can with what you’ve got.

All too often, saying “use the right tool for the job” is interpreted as “don’t use that tool!” but as Jake reminds us, the sign of a really good tool is its ability to adapt instead of demanding rigid usage:

Netflix uses React on the client and server, but they identified that the client-side portion wasn’t needed for the first interaction, so they leaned on what the browser can already do, and deferred client-side React. The story isn’t that they’re abandoning React, it’s that they’re able to defer it on the client until it’s needed. React folks should be championing this as a feature.

Thursday, September 21st, 2017

Killing Old Service Workers for the Greater Good – Hackages Blog

Ooh, this is a tricky scenario. If you decide to redirect all URLs (from, say, a www subdomain to no subdomain) and you have a service worker running, you’re going to have a bad time. But there’s a solution here to get the service worker to remove itself.

The server-side specifics are for NGINX but this is also doable with Apache.
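
The gist of the trick, sketched out (this isn’t the article’s exact code): serve a replacement service worker from the old URL that clears its caches and unregisters itself.

    // A sketch of a "self-destructing" service worker, served from the same
    // URL as the old one so that browsers pick it up as an update.
    self.addEventListener('install', function (event) {
      self.skipWaiting(); // take over from the old worker straight away
    });

    self.addEventListener('activate', function (event) {
      event.waitUntil(
        caches.keys()
          .then(function (keys) {
            // Delete any caches the old worker left behind.
            return Promise.all(keys.map(function (key) {
              return caches.delete(key);
            }));
          })
          .then(function () {
            // Unregister this service worker entirely...
            return self.registration.unregister();
          })
          .then(function () {
            // ...and reload any open pages so they're no longer controlled by it.
            return self.clients.matchAll({ type: 'window' });
          })
          .then(function (clients) {
            clients.forEach(function (client) {
              client.navigate(client.url);
            });
          })
      );
    });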

Thursday, September 14th, 2017

Full-Stack Developers | Brad Frost

In my experience, “full-stack developers” always translates to “programmers who can do frontend code because they have to and it’s ‘easy’.” It’s never the other way around. The term “full-stack developer” implies that a developer is equally adept at both frontend code and backend code, but I’ve never in my personal experience witnessed anyone who truly fits that description.

Monday, September 11th, 2017

No space left on device – running out of Inodes – Ivan Kuznetsov

This blog post saved my ass—the Huffduffer server was b0rked and after much Duck-Duck-Going I found the answer here.

I’m filing this away for my future self because, as per Murphy’s Law, I’m pretty sure I’ll be needing this again at some point.

Thursday, March 23rd, 2017

Modern JavaScript for Ancient Web Developers

Speaking as an ancient web developer myself, this account by Gina of her journey into Node.js is really insightful. But I can’t help but get exhausted just contemplating the yak-shaving involved in the tooling set-up:

The sheer number of tools and plugins and packages and dependencies and editor setup and build configurations required to do it “the right way” is enough to stall you before you even get started.

Friday, March 17th, 2017

A Little Surprise Is Waiting For You Here — Meet The Next Smashing Magazine

An open beta of Smashing Magazine’s redesign, which looks like it could be a real poster child for progressive enhancement:

We do our best to ensure that content is accessible and enhanced progressively, with performance in mind. If JavaScript isn’t available or if the network is slow, then we deliver content via static fallbacks (for example, by linking directly to Google search), as well as a service worker that persistently stores CSS, JavaScripts, SVGs, font files and other assets in its cache.

Thursday, December 8th, 2016

Server Side React

Remy wants to be able to apply progressive enhancement to React: server-side and client-side rendering, sharing the same codebase. He succeeded, but…

In my opinion, an individual or a team starting out, or without the will, aren’t going to use progressive enhancement in their approach to applying server side rendering to a React app. I don’t believe this is by choice, I think it’s simply because React lends itself so strongly to client-side, that I can see how it’s easy to oversee how you could start on the server and progressive enhance upwards to a rich client side experience.

I’m hopeful that future iterations of React will make this a smoother option.
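
For anyone who hasn’t tried it, the basic shape of the approach looks something like this generic Express/React sketch (not Remy’s code):

    // server.js: render the full HTML on the server first...
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App'); // a React component shared by server and client

    const app = express();
    app.get('/', function (req, res) {
      const html = renderToString(React.createElement(App));
      res.send(
        '<!DOCTYPE html><html><body>' +
        '<div id="root">' + html + '</div>' +
        '<script src="/client.js"></script>' +
        '</body></html>'
      );
    });
    app.listen(3000);

    // client.js (bundled for the browser): ...then enhance the same markup in place.
    const React = require('react');
    const ReactDOM = require('react-dom');
    const App = require('./App');
    ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));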

Monday, September 12th, 2016

Oh, shit, git!

Bookmark this page! Who knew that so much knowledge could be condensed into one document? In this case, it’s life-saving git commands, explained in a user-centred way.

So here are some bad situations I’ve gotten myself into, and how I eventually got myself out of them in plain english.