
60 seconds over Idaho

I lived in Germany for the latter half of the nineties. On August 11th, 1999, parts of Germany were in the path of a total eclipse of the sun. Freiburg—the town where I was living—wasn’t in the path, so Jessica and I travelled north with some friends to Karlsruhe.

The weather wasn’t great. There was quite a bit of cloud cover, but at the moment of totality, the clouds had thinned out enough for us to experience the incredible sight of a black sun.

(The experience was only slightly marred by the nearby idiot who took a picture with the flash on right before totality. Had my eyesight not adjusted in time, he would still be carrying that camera around with him in an anatomically uncomfortable place.)

Eighteen years and eleven days later, Jessica and I climbed up a hill to see our second total eclipse of the sun. The hill is in Sun Valley, Idaho.

Here comes the sun.

Travelling thousands of miles just to witness something that lasts for a minute might seem disproportionate, but if you’ve ever been in the path of totality, you’ll know what an awe-inspiring sight it is (if you’ve only seen a partial eclipse, trust me—there’s no comparison). There’s a primitive part of your brain screaming at you that something is horribly, horribly wrong with the world, while another part of your brain is simply stunned and amazed. Then there’s the logical part of your brain which is trying to grasp the incredible good fortune of this cosmic coincidence—that the sun is 400 times bigger than the moon and also happens to be 400 times the distance away.
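To see why those numbers work out, treat apparent size as diameter divided by distance: the two factors of 400 cancel, leaving the sun and the moon each covering about half a degree of sky. Back-of-the-envelope, with round figures:

\theta_{\mathrm{sun}} \approx \frac{D_{\mathrm{sun}}}{d_{\mathrm{sun}}} = \frac{400 \, D_{\mathrm{moon}}}{400 \, d_{\mathrm{moon}}} = \frac{D_{\mathrm{moon}}}{d_{\mathrm{moon}}} \approx \frac{3500\ \mathrm{km}}{384000\ \mathrm{km}} \approx 0.5^{\circ}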

This time viewing conditions were ideal. Not a cloud in the sky. It was beautiful. We even got a diamond ring.

I like to think I can be fairly articulate, but at the moment of totality all I could say was “Oh! Wow! Oh! Holy shit! Woah!”

Totality

Our two eclipses were separated by eighteen years, but they’re connected. The Saros 145 cycle has been repeating since 1639 and will continue until 3009, although the number of total eclipses only runs from 1927 to 2648.

Eighteen years and eleven days ago, we saw the eclipse in Germany. Yesterday we saw the eclipse in Idaho. In eighteen years and eleven days’ time, we plan to be in Japan or China.

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directives. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.

So I’ve taken that original .conf file and split it up into two.
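For illustration, here’s roughly where we’ll end up, with yourdomain.com standing in for the real domain. Your ServerName, DocumentRoot, and certificate paths will differ; the certificate paths shown are just where Certbot typically puts things. Note that each file carries its own explicit ServerName, which is what that error message was asking for.

yourdomain.com.conf gets just the port 80 stuff:

<VirtualHost *:80>
    ServerName yourdomain.com
    DocumentRoot /var/www/yourdomain.com
    # ...the rest of the original port 80 configuration...
</VirtualHost>

…and yourdomain.com-ssl.conf gets just the port 443 stuff:

<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /var/www/yourdomain.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem
    # ...the rest of the original port 443 configuration...
</VirtualHost>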

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o to save, press enter to confirm the file name at the prompt, and then hit ctrl and x to exit.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o to save, press enter in response to the prompt, and then hit ctrl and x to exit.

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart
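If you’re at all worried about typos in those freshly-edited files, Apache can check that everything still parses:

apache2ctl configtest

It should respond with “Syntax OK”.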

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Where to start?

A lot of the talks at this year’s Chrome Dev Summit were about progressive web apps. This makes me happy. But I think the focus is perhaps a bit too much on the “app” part and not enough on “progressive”.

What I mean is that there’s an inevitable tendency to focus on technologies—Service Workers, HTTPS, manifest files—and not so much on the approach. That’s understandable. The technologies are concrete, demonstrable things, whereas approaches, mindsets, and processes are far more nebulous in comparison.

Still, I think that the most important facet of building a robust, resilient website is how you approach building it rather than what you build it with.

Many of the progressive app demos use server-side and client-side rendering, which is great …but that aspect tends to get glossed over:

Browsers without service worker support should always be served a fall-back experience. In our demo, we fall back to basic static server-side rendering, but this is only one of many options.

I think it’s vital to not think in terms of older browsers “falling back” but to think in terms of newer browsers getting a turbo-boost. That may sound like a nit-picky semantic subtlety, but it’s actually a radical difference in mindset.

Many of the arguments I’ve heard against progressive enhancement—like Tom’s presentation at Responsive Field Day—talk about the burdensome overhead of having to bolt on functionality for older or less-capable browsers (even Jake has done this). But the whole point of progressive enhancement is that you start with the simplest possible functionality for the greatest number of users. If anything gets bolted on, it’s the more advanced functionality for the newer or more capable browsers.

So if your conception of progressive enhancement is that it’s an added extra, I think you really need to turn that thinking around. And that’s hard. It’s hard because you need to rewire some well-engrained pathways.

There is some precedent for this though. It was really, really hard to convince people to stop using tables for layout and start using CSS instead. That was a tall order—completely changing the way you approach building on the web. But eventually we got there.

When Ethan came out with Responsive Web Design, it was an equally difficult pill to swallow, not because of the technologies involved—media queries, percentages, etc.—but because of the change in thinking that was required. But eventually we got there.

These kinds of fundamental changes are inevitably painful …at first. After years of building websites using tables for layout, creating your first CSS-based layout was demoralisingly difficult. But the second time was a bit easier. And the third time, easier still. Until eventually it just became normal.

Likewise with responsive design. After years of building fixed-width websites, trying to build in a fluid, flexible way was frustratingly hard. But the second time wasn’t quite as hard. And the third time …well, eventually it just became normal.

So if you’re used to thinking of the all-singing, all-dancing version of your site as the starting point, it’s going to be really, really hard to instead start by building the most basic, accessible version first and then work up to the all-singing, all-dancing version …at first. But eventually it will just become normal.

For now, though, it’s going to take work.

The recent redesign of Google+ is a true case study in building a performant, responsive, progressive site:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important — we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice — on the server and on the client.

This took work. Had they chosen to rely on client-side rendering alone, they could have built something quicker. But I think it was worth laying that solid foundation. And the next time they need to build something this way, it’s going to be less work. Eventually it just becomes normal.

But it all starts with thinking of the server-side rendering as the default. Server-side rendering is not a fallback; client-side rendering is an enhancement.

That’s exactly the kind of mindset that enables Jack Franklin to build robust, resilient websites:

Now we’ll build the React application entirely on the server, before adding the client-side JavaScript right at the end.

I had a chance to chat briefly with Jack at the Edge conference in London and I congratulated him on the launch of a Go Cardless site that used exactly this technique. He told me that the decision to flip the switch and make it act as a single page app came right at the end of the project. Server-side rendering was the default; client-side rendering was added later.

The key to building modern, resilient, progressive sites doesn’t lie in browser technologies or frameworks; it lies in how we think about the task at hand; how we approach building from the ground up rather than the top down. Changing the way we fundamentally think about building for the web is inevitably going to be challenging …at first. But it will also be immensely rewarding.

Communication for America

Mandy has written a great article about making remote teams work. It’s an oft-neglected aspect of working on a product when you’ve got people distributed geographically.

But remote communication isn’t just something that’s important for startups and product companies—it’s equally important for agencies when it comes to client communication.

At Clearleft, we occasionally work with clients right here in Brighton, but that’s the exception. More often than not, the clients are based in London, or somewhere else in the UK. In the case of Code for America, they’re based in San Francisco—that’s eight or nine timezones away (depending on the time of year).

As it turned out, it wasn’t a problem at all. In fact, it worked out nicely. At the end of every day, we had a quick conference call, with two or three people at our end, and two or three people at their end. For us, it was the end of the day: 5:30pm. For them, the day was just starting: 9:30am.

We’d go through what we had been doing during that day, ask any questions that had cropped up, and let them know if there was anything we needed from them. If there was, they had the whole day to put it together while we went home. The next morning (from our perspective), it would be waiting in our inboxes and Dropbox folders.

Meanwhile, from the perspective of Code for America, they were coming into the office every morning and starting the day with a look over our work, as though we had been beavering away throughout the night.

Now, it would be easy for me to extrapolate from this that this way of working is great and everyone should do it. But actually, the whole timezone difference was a red herring. The real reason why the communication worked so well throughout the project was because of the people involved.

Right from the start, it was clear that, because of time and budget constraints, we’d have to move fast. We wouldn’t have the luxury of debating everything in detail and getting every decision signed off. Instead we had a sort of “rough consensus and running code” approach that worked really well. It worked because everyone understood that was what was happening—if just one person had been expecting a more formalised structure, I’m sure it wouldn’t have gone quite so smoothly.

So we provided materials in whatever level of fidelity made sense for the idea under discussion. Sometimes that was a quick sketch. Sometimes it was a fairly high-fidelity mockup. Sometimes it was a module of markup and CSS. Whatever it took.

Most of all, there was a great feeling of trust on both sides of the equation. It was clear right from the start that the people at Code for America were super-smart and weren’t going to make any outlandish or unreasonable requests of Clearleft. Instead they gave us just the right amount of guidance and constraints, while trusting us to make good decisions.

At one point, Jon was almost complaining about not getting pushback on his designs. A nice complaint to have.

Because of the daily transatlantic “stand up” via teleconference, there was a great feeling of inevitability to the project as it came together from idea to execution. Inevitability doesn’t sound like a very sexy attribute of a web project, but it’s far preferable to the kind of project that involves milestones of “big reveals”—the Mad Men approach to project management.

Oh, and we made sure that we kept those transatlantic calls nice and short. They never lasted longer than 10 or 15 minutes. We wanted to avoid the many pitfalls of conference calls.

Shape shifting

I’ve been with the same ISP for years: Eclipse Internet. I never had any cause to complain until recently, when I started finding that certain types of requests simply weren’t completing: file uploads, some forms, Ajax requests…

I started googling for any similar reports. I found quite a few forum posts, all of them expressing the same sentiment: that Eclipse used to be good, but that since getting bought out by Kingston, service had really gone downhill. Most alarmingly of all, I read reports of traffic shaping.

I called up technical support. They didn’t deny it. Instead, they tried to upsell me on a package that would give me an admin panel to allow more control over exactly how my packets were throttled.

Screw. That.

I started shopping around for a new ISP. When I twittered what I was doing, I received some good recommendations. Eventually, I was able to narrow my search down to two providers who both sounded good: Zen and IDNet. With little else to distinguish between the two companies, their respective websites became the deciding factor. That settled it. IDNet was the clear winner. Not only is their site prettier, it also validates and even uses hCard.

To cancel my Eclipse account, I needed to call them to request a MAC (Migration Authorisation Code). Of course the reason why they make you call is so that they can try to persuade you not to leave. Sure enough, the guy who took my call asked, “And can you tell me why you’re moving away from Eclipse?”

“Traffic shaping,” I responded.

“Ah right,” he said. “Technical reasons.”

I corrected him: “Moral reasons, actually.”

I got the info I needed. I ordered my new broadband service. Today I switched over.

If you want to find out if your broadband provider is a filthy traffic-shaper, check to see if it’s listed on .

If you find yourself changing to a more ethical ISP, you too will probably be asked to explain why you’re jumping ship. Just tell them what you want:

Fat Pipe, Always On, Get Out of the Way.