Tags: javascript


Tuesday, May 22nd, 2018

Easy Toggle State

I think about 90% of the JavaScript I’ve ever written was some DOM scripting to handle the situation of “when the user triggers an event on this element, do something to this other element.” Toggles, lightboxes, accordions, tabs, tooltips …they’re all basically following the same underlying pattern. So it makes sense to me to see this pattern abstracted into a little library.
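Here's a rough sketch of that underlying pattern in vanilla DOM scripting (the data-toggle attribute and the class name are just placeholders):

    // When the user clicks a trigger element, toggle a class on the element it points to.
    // Hypothetical markup: <button data-toggle="#menu">Menu</button>
    document.addEventListener('click', function (event) {
      var trigger = event.target.closest('[data-toggle]');
      if (!trigger) return;
      var target = document.querySelector(trigger.getAttribute('data-toggle'));
      if (target) {
        target.classList.toggle('is-active');
      }
    });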

Monday, May 21st, 2018

Meet swup

This looks like a handy library for managing page transitions on sites that are not single page apps.

Here’s the code.

I’ve said it before and I’ll say it again, but I really think that this handles 80% of the justification for using a single page app architecture.

Saturday, May 19th, 2018

The Slow Death of Internet Explorer and the Future of Progressive Enhancement · An A List Apart Article

Oliver Williams makes the case—and shows the code—for delivering only HTML to old versions of Internet Explorer, sparing them from the kind of CSS and JavaScript that they can’t deal with. Seems like a sensible approach to me (assuming you’re correctly building in a layered way so that your core content is delivered in markup).

Rather than transpiling and polyfilling and hoping for the best, we can deliver what the person came for, in the most resilient, performant, and robust form possible: unadulterated HTML. No company has the resources to actively test their site on every old version of every browser. Malfunctioning JavaScript can ruin a web experience and make a simple page unusable. Rather than leaving users to a mass of polyfills and potential JavaScript errors, we give them a basic but functional experience.
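One way of delivering that kind of layered experience is a “cutting the mustard” feature test in the head of the page. This is just a sketch of the general idea, not necessarily the exact code from the article; the test condition and file names are placeholders:

    // Browsers that pass the test get the full CSS and JavaScript;
    // older browsers (like old IE) get unadulterated HTML.
    if ('querySelector' in document && 'addEventListener' in window) {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/enhanced.css';
      document.head.appendChild(link);

      var script = document.createElement('script');
      script.src = '/js/enhancements.js';
      script.defer = true;
      document.head.appendChild(script);
    }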

Super-powered layouts with CSS Variables + CSS Grid by Michelle Barker on CodePen

This article is about using custom properties and CSS grid together, but I think my favourite part is this description of how custom properties differ from the kind of variables you get from a preprocessor:

If you’re familiar with Javascript, I like to think of the difference between preprocessor variables and CSS Variables as similar to the difference between const and let - they both serve different purposes.
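One practical way to see that difference: a custom property can be read and updated at runtime, long after the CSS has been parsed, whereas a preprocessor variable is compiled away before the browser ever sees it. A little sketch (the property name is made up):

    // Read the current value of a custom property from the root element...
    var styles = getComputedStyle(document.documentElement);
    var columns = styles.getPropertyValue('--grid-columns');
    console.log('Currently using ' + columns.trim() + ' columns');

    // ...and reassign it on the fly; any grid using it updates immediately.
    document.documentElement.style.setProperty('--grid-columns', '6');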

Friday, May 18th, 2018

Frustration

I had some problems with my bouzouki recently. Now, I know my bouzouki pretty well. I can navigate the strings and frets to make music. But this was a problem with the pickup under the saddle of the bouzouki’s bridge. So it wasn’t so much a musical problem as it was an electronics problem. I know nothing about electronics.

I found it incredibly frustrating. Not only did I have no idea how to fix the problem, but I also had no idea of the scope of the problem. Would it take five minutes or five days? Who knows? Not me.

My solution to a problem like this is to pay someone else to fix it. Even then I have to go through the process of having the problem explained to me by someone who understands and cares about electronics much more than me. I nod my head and try my best to look like I’m taking it all in, even though the truth is I have no particular desire to get to grips with the inner workings of pickups—I just want to make some music.

That feeling of frustration I get from having wiring issues with a musical instrument is the same feeling I get whenever something goes awry with my web server. I know just enough about servers to be dangerous. When something goes wrong, I feel very out of my depth, and again, I have no idea how long it will take to fix the problem: minutes, hours, days, or weeks.

I had a very bad day yesterday. I wanted to make a small change to the Clearleft website—one extra line of CSS. But the build process for the website is quite convoluted (and clever), automatically pulling in components from the site’s pattern library. Something somewhere in the pipeline went wrong—I still haven’t figured out what—and for a while there, the Clearleft website was down, thanks to me. (Luckily for me, Danielle saved the day …again. I’d be lost without her.)

I was feeling pretty down after that stressful day. I felt like an idiot for not knowing or understanding the wiring beneath the site.

But, on the other hand, considering I was only trying to edit a little bit of CSS, maybe the problem didn’t lie entirely with me.

There’s a principle underlying the architecture of the World Wide Web called The Rule of Least Power. It somewhat counterintuitively states that you should:

choose the least powerful language suitable for a given purpose.

Perhaps, given the relative simplicity of the task I was trying to accomplish, the plumbing was over-engineered. That complexity wouldn’t matter if I could circumvent it, but without the build process, there’s no way to change the markup, CSS, or JavaScript for the site.

Still, most of the time, the build process isn’t a hindrance, it’s a help: concatenation, minification, linting and all that good stuff. Most of my frustration when something in the wiring goes wrong is because of how it makes me feel …just like with the pickup in my bouzouki, or the server powering my website. It’s not just that I find this stuff hard, but that I also feel like it’s stuff I’m supposed to know, rather than stuff I want to know.

On that note…

Last week, Paul wrote about getting to grips with JavaScript. On the very same day, Brad wrote about his struggle to learn React.

I think it’s really, really, really great when people share their frustrations and struggles like this. It’s very reassuring for anyone else out there who’s feeling similarly frustrated and worried that the problem lies with them. Also, this kind of confessional feedback is absolute gold dust for anyone looking to write explanations or documentation for JavaScript or React while battling the curse of knowledge. As Paul says:

The challenge now is to remember the pain and anguish I endured, and bare that in mind when helping others find their own path through the knotted weeds of JavaScript.

How to display a “new version available” of your Progressive Web App

This is a good walkthrough of the flow you’d need to implement if you want to notify users of an updated service worker.
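The core of that flow is listening for a freshly-installed service worker waiting in the wings. A rough sketch, not the article’s exact code (the script name is assumed):

    navigator.serviceWorker.register('/serviceworker.js').then(function (registration) {
      // The browser has found a new version of the service worker and is installing it.
      registration.addEventListener('updatefound', function () {
        var newWorker = registration.installing;
        newWorker.addEventListener('statechange', function () {
          // If a controller already exists, this is an update rather than a first install.
          if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
            // This is where you'd show your "new version available" message.
            console.log('A new version of this site is available.');
          }
        });
      });
    });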

Saturday, May 12th, 2018

I Used The Web For A Day With JavaScript Turned Off — Smashing Magazine

Following on from Charlie’s experiment last year, Chris Ashton has been assessing which sites rely on JavaScript, and which sites use it in a more defensive, resilient way. Some interesting results in here.

A good core experience is indicative of a well-structured web page, which, in turn, is usually a good sign for SEO and for accessibility. It’s usually a well designed web page, as the designer and developer have spent time and effort thinking about what’s truly core to the experience. Progressive enhancement means more robust experiences, with fewer bugs in production and fewer individual browser quirks, because we’re letting the platform do the job rather than trying to write it all from scratch.

Sunday, May 6th, 2018

HTML5 Constraint Validation

The slides from a presentation by Drew on all the functionality that browsers give us for free when it comes to validating form inputs.

Half the battle of the web platform is knowing what technology is out there, ready to use. We’re all familiar with the ability to declare validation constraints in our HTML5 forms, but were you aware there’s a JavaScript API that goes along with it?
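Here’s a small sketch of that API in action (the form and field are hypothetical):

    var form = document.querySelector('form');
    var email = form.querySelector('input[type="email"]');

    form.addEventListener('submit', function (event) {
      // checkValidity() runs the same constraints you declared in the HTML.
      if (!form.checkValidity()) {
        event.preventDefault();
      }
    });

    // You can also supply your own error message for a field.
    email.addEventListener('input', function () {
      if (email.validity.typeMismatch) {
        email.setCustomValidity('Please enter a valid email address.');
      } else {
        email.setCustomValidity('');
      }
    });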

Thursday, May 3rd, 2018

Building Progressive Web Apps | nearForm

It is very disheartening to read misinformation like this:

A progressive web app is an enhanced version of a Single Page App (SPA) with a native app feel.

To quote from The Last Jedi, “Impressive. Everything you just said in that sentence is wrong.”

But once you get over that bit of misinformation at the start, the rest of this article is a good run-down of planning and building a progressive web app using one possible architectural choice—the app shell model. Other choices are available.

Tuesday, May 1st, 2018

nystudio107 | ServiceWorkers and Offline Browsing

Here’s an article from last year that gives a really good introduction to service workers and provides a plug-in for the Craft CMS.

Monday, April 30th, 2018

Going Offline: Designing An Ideal Offline Experience With Service Workers By Jeremy Keith

Here’s a great even-handed in-depth review of Going Offline:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic. At times, it even felt “too gentle”, with Keith taking a moment here and there to explain what a “variable” is and what “JSON” (JavaScript Object Notation) is. But, this just goes to show you the unassuming and welcoming mindset behind writing a book like this one.

Saturday, April 28th, 2018

alphagov/accessible-autocomplete: An autocomplete component, built to be accessible.

If you’re looking for an accessible standalone autocomplete script, this one from GDS looks very good (similar to Lea’s awesomplete).

The Illusion of Control in Web Design · An A List Apart Article

Aaron gives a timely run-down of all the parts of a web experience that are out of our control. But don’t despair…

Recognizing all of the ways our carefully-crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.

Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.

Friday, April 27th, 2018

Introducing Service Workers

The first chapter of Going Offline, originally published on A List Apart.

This is the first chapter of Going Offline, a brief book about service workers for web designers, published by A Book Apart.

Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another.

Like the web, the internet was designed to allow all kinds of services to be built on top of it. The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.

As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy).

When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

Weaving the Web

For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow.

Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.

Service Workers

The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts.

Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently being displayed in the browser window. You might want to listen out for events triggered by the user interacting with the document (clicks, swipes, hovers, etc.). You might want to update the contents of the document: add some markup here, remove some text there, manipulate some values somewhere else. The sky’s the limit. And it’s all made possible thanks to the Document Object Model (DOM), a representation of what the browser is rendering. Through the combination of the DOM and JavaScript—DOM scripting, if you will—you can conjure up all sorts of wonderful magic.

Well, a service worker can’t do any of that. It’s still a script, and it’s still written in the same language—JavaScript—but it has no access to the DOM. Without any DOM scripting capabilities, this kind of script might seem useless at first glance. But there’s an advantage to having a script that never needs to interact with the current document. Adding, editing, and deleting parts of the DOM can be hard work for the browser. If you’re not careful, things can get very sluggish very quickly. But if there’s a whole class of script that isn’t allowed access to the DOM, then the browser can happily run that script in parallel to its regular rendering activities, safe in the knowledge that it’s an entirely separate process.

The first kind of script to come with this constraint was called a web worker. In a web worker, you could write some JavaScript to do number-crunching calculations without slowing down whatever else was being displayed in the browser window. Spin up a web worker to generate larger and larger prime numbers, for instance, and it will merrily do so in the background.
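Something like this, say (the file name is made up):

    // In the page: hand the number-crunching off to a worker.
    var worker = new Worker('primes.js');
    worker.addEventListener('message', function (event) {
      console.log('Latest prime: ' + event.data);
    });

    // In primes.js: grind away forever without ever blocking the page.
    var candidate = 1;
    while (true) {
      candidate += 1;
      var isPrime = true;
      for (var i = 2; i * i <= candidate; i += 1) {
        if (candidate % i === 0) {
          isPrime = false;
          break;
        }
      }
      if (isPrime) {
        postMessage(candidate);
      }
    }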

A service worker is like a web worker with extra powers. It still can’t access the DOM, but it does have access to the fundamental inner workings of the browser.

Browsers and Servers

Let’s take a step back and think about how the World Wide Web works. It’s a beautiful ballet of client and server. The client is usually a web browser—or, to use the parlance of web standards, a user agent: a piece of software that acts on behalf of the user.

The user wants to accomplish a task or find some information. The URL is the key technology that will empower the user in their quest. They will either type a URL into their web browser or follow a link to get there. This is the point at which the web browser—or client—makes a request to a web server. Before the request can reach the server, it must traverse the internet of undersea cables, radio towers, and even the occasional satellite (Fig 1.1).

Diagram of the request/response cycle between a user and a server
Fig 1.1: Browsers send URL requests to servers, and servers respond by sending files.

Imagine if you could leave instructions for the web browser that would be executed before the request is even sent. That’s exactly what service workers allow you to do (Fig 1.2).

Diagram of the request/response cycle between a user and a server with a service worker being the first thing the response hits
Fig 1.2: A service worker lets you run instructions in the browser before the request for a URL is even sent.

Usually when we write JavaScript, the code is executed after it’s been downloaded from a server. With service workers, we can write a script that’s executed by the browser before anything else happens. We can tell the browser, “If the user asks you to retrieve a URL for this particular website, run this corresponding bit of JavaScript first.” That explains why service workers don’t have access to the Document Object Model; when the service worker is run, there’s no document yet.
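In its most minimal form, that looks something like this (the script name is an assumption):

    // In your page: ask the browser to install the service worker.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/serviceworker.js');
    }

    // In serviceworker.js: this listener runs before each request is sent.
    addEventListener('fetch', function (event) {
      // For now, just pass the request along to the network unchanged.
      event.respondWith(fetch(event.request));
    });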

Getting Your Head Around Service Workers

A service worker is like a cookie. Cookies are downloaded from a web server and installed in a browser. You can go to your browser’s preferences and see all the cookies that have been installed by sites you’ve visited. Cookies are very small and very simple little text files. A website can set a cookie, read a cookie, and update a cookie. A service worker script is much more powerful. It contains a set of instructions that the browser will consult before making any requests to the site that originally installed the service worker.

A service worker is like a virus. When you visit a website, a service worker is surreptitiously installed in the background. Afterwards, whenever you make a request to that website, your request will be intercepted by the service worker first. Your computer or phone becomes the home for service workers lurking in wait, ready to perform man-in-the-middle attacks. Don’t panic. A service worker can only handle requests for the site that originally installed that service worker. When you write a service worker, you can only use it to perform man-in-the-middle attacks on your own website.

A service worker is like a toolbox. By itself, a service worker can’t do much. But it allows you to access some very powerful browser features, like the Fetch API, the Cache API, and even notifications. API stands for Application Programming Interface, which sounds very fancy but really just means a tool that you can program however you want. You can write a set of instructions in your service worker to take advantage of these tools. Most of your instructions will be written as “when this happens, reach for this tool.” If, for instance, the network connection fails, you can instruct the service worker to retrieve a backup file using the Cache API.

A service worker is like a duck-billed platypus. The platypus not only lactates, but also lays eggs. It’s the only mammal capable of making its own custard. A service worker can also…Actually, hang on, a service worker is nothing like a duck-billed platypus! Sorry about that. But a service worker is somewhat like a cookie, and somewhat like a virus, and somewhat like a toolbox.
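To make that toolbox analogy concrete, here’s a rough sketch of the “if the network fails, reach for the cache” instruction mentioned above (the cache contents and the fallback file are assumptions):

    addEventListener('fetch', function (event) {
      event.respondWith(
        // First, try the network as usual...
        fetch(event.request).catch(function () {
          // ...and if that fails, reach into the Cache API for a backup,
          // falling back to a generic offline page if nothing matches.
          return caches.match(event.request).then(function (cached) {
            return cached || caches.match('/offline.html');
          });
        })
      );
    });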

Safety First

Service workers are powerful. Once a service worker has been installed on your machine, it lies in wait, like a patient spider waiting to feel the vibrations of a particular thread.

Imagine if a malicious ne’er-do-well wanted to wreak havoc by impersonating a website in order to install a service worker. They could write instructions in the service worker to prevent the website ever appearing in that browser again. Or they could write instructions to swap out the content displayed under that site’s domain. That’s why it’s so important to make sure that a service worker really belongs to the site it claims to come from. As the specification for service workers puts it, they “create the opportunity for a bad actor to turn a bad day into a bad eternity.”

To prevent this calamity, service workers require you to adhere to two policies:

  1. Same origin.
  2. HTTPS only.

The same-origin policy means that a website at example.com can only install a service worker script that lives at example.com. That means you can’t put your service worker script on a different domain. You can use a separate domain for hosting your images and other assets, but not your service worker script. That domain wouldn’t match the domain of the site installing the service worker.

The HTTPS-only policy means that https://example.com can install a service worker, but http://example.com can’t. A site running under HTTPS (the S stands for Secure) instead of HTTP is much harder to spoof. Without HTTPS, the communication between a browser and a server could be intercepted and altered. If you’re sitting in a coffee shop with an open Wi-Fi network, there’s no guarantee that anything you’re reading in the browser from http://newswebsite.com hasn’t been tampered with. But if you’re reading something from https://newswebsite.com, you can be pretty sure you’re getting what you asked for.

Securing Your Site

Enabling HTTPS on your site opens up a whole series of secure-only browser features—like the JavaScript APIs for geolocation, payments, notifications, and service workers. Even if you never plan to add a service worker to your site, it’s still a good idea to switch to HTTPS. A secure connection makes it trickier for snoopers to see who’s visiting which websites. Your website might not contain particularly sensitive information, but when someone visits your site, that’s between you and your visitor. Enabling HTTPS won’t stop unethical surveillance by the NSA, but it makes the surveillance slightly more difficult.

There’s one exception. You can use a service worker on a site being served from localhost, a web server on your own computer, not part of the web. That means you can play around with service workers without having to deploy your code to a live site every time you want to test something.

If you’re using a Mac, you can spin up a local server from the command line. Let’s say your website is in a folder called mysite. Drag that folder to the Terminal app, or open up the Terminal app and navigate to that folder using the cd command to change directory. Then type:

python -m SimpleHTTPServer 8000

This starts a web server from the mysite folder, served over port 8000. Now you can visit localhost:8000 in a web browser on the same computer, which means you can add a service worker to the website you’ve got inside the mysite folder: http://localhost:8000.
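One caveat: that command is Python 2 syntax. If your machine has Python 3 instead, the equivalent is:

    python3 -m http.server 8000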

But if you then put the site live at, say, http://mysite.com, the service worker won’t run. You’ll need to serve the site from https://mysite.com instead. To do that, you need a secure certificate for your server.

There was a time when certificates cost money and were difficult to install. Now, thanks to a service called Certbot, certificates are free. But I’m not going to lie: it still feels a bit intimidating to install the certificate. There’s something about logging on to a server and typing commands that makes me simultaneously feel like a l33t hacker, and also like I’m going to break everything. Fortunately, the process of using Certbot is relatively jargon-free (Fig 1.3).

Screenshot of certbot.eff.org
Fig 1.3: The website of EFF’s Certbot.

On the Certbot website, you choose which kind of web server and operating system your site is running on. From there you’ll be guided step-by-step through the commands you need to type in the command line of your web server’s computer, which means you’ll need to have SSH access to that machine. If you’re on shared hosting, that might not be possible. In that case, check to see if your hosting provider offers secure certificates. If not, please pester them to do so, or switch to a hosting provider that can serve your site over HTTPS.

Another option is to stay with your current hosting provider, but use a service like Cloudflare to act as a “front” for your website. These services can serve your website’s files from data centers around the world, making sure that the physical distance between your site’s visitors and your site’s files is nice and short. And while they’re at it, these services can make sure all of those files are served over HTTPS.

Once you’re set up with HTTPS, you’re ready to write a service worker script. It’s time to open up your favorite text editor. You’re about to turbocharge your website!

Read more…

Monday, April 23rd, 2018

Sara Soueidan: Going Offline

Sara describes the process of turning her site into a progressive web app, and has some very kind words to say about my new book:

Jeremy covers literally everything you need to know to write and install your first Service Worker, tweak it to your site’s needs, and then write the Web App Manifest file to complete the offline experience, all in a ridiculously easy to follow style. It doesn’t matter if you’re a designer, a junior developer or an experienced engineer — this book is perfect for anyone who wants to learn about Service Workers and take their Web application to a whole new level.

Too, too kind!

I highly recommend it. I read the book over the course of two days, but it can easily be read in half a day. And as someone who rarely ever reads a book cover to cover (I tend to quit halfway through most books), this says a lot about how good it is.

page-transitions-travelapp

A demo of page transition animations by Sarah—she’s written about how she did it. I really like it as an example of progressive enhancement: you can navigate around the site just fine, but with JavaScript you get the smooth transitions as a bonus.

All of this reminds me of Jake’s proposal for navigation transitions in the browser. I honestly think this would solve 80% of the use-cases for single page apps.

Friday, April 20th, 2018

Cancelling Requests with Abortable Fetch

This is a really good use-case for cancelling fetch requests: making API calls while autocompleting in search.
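The basic idea looks something like this (a sketch; the input, the endpoint, and the showSuggestions function are all made up):

    var searchInput = document.querySelector('#search');
    var controller;

    function showSuggestions(data) {
      console.log(data);
    }

    searchInput.addEventListener('input', function () {
      // Abort whichever request is still in flight from the previous keystroke.
      if (controller) {
        controller.abort();
      }
      controller = new AbortController();

      fetch('/api/suggest?q=' + encodeURIComponent(searchInput.value), { signal: controller.signal })
        .then(function (response) { return response.json(); })
        .then(showSuggestions)
        .catch(function (error) {
          // An aborted fetch rejects with an AbortError; that's expected here.
          if (error.name !== 'AbortError') {
            throw error;
          }
        });
    });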

Tuesday, April 17th, 2018

Technical balance

Two technical editors worked with me on Going Offline.

Jake was one of the tech editors. He literally (co-)wrote the spec on service workers. There ain’t nuthin’ he don’t know about the code involved. His job was to catch any technical inaccuracies in my writing.

The other tech editor was Amber. She’s relatively new to web development. While I was writing the book, she had a solid grounding in HTML and CSS, but not much experience in JavaScript. That made her the perfect archetypal reader. Her job was to point out whenever I wasn’t explaining something clearly enough.

My job was to satisfy both of them. I needed to explain service workers and all their associated APIs. I also needed to make it approachable and understandable to people who haven’t dived deeply into JavaScript.

I deliberately didn’t wait until I was an expert in this topic before writing Going Offline. I knew that the more familiar I became with the ins-and-outs of getting a service worker up and running, the harder it would be for me to remember what it was like not to know that stuff. I figured the best way to avoid the curse of knowledge would be not to accrue too much of it. But then once I started researching and writing, I inevitably became more au fait with the topic. I had to try to battle against that, trying to keep a beginner’s mind.

My watchword was this great piece of advice from Codebar:

Assume that anyone you’re teaching has no knowledge but infinite intelligence.

It was tricky. I’m still not sure if I managed to pull off the balancing act, although early reports are very, very encouraging. You’ll be able to judge for yourself soon enough. The book is shipping at the start of next week. Get your order in now.

Sunday, April 15th, 2018

The audience for Going Offline

My new book, Going Offline, starts with no assumption of JavaScript knowledge, but by the end of the book the reader is armed with enough code to make any website work offline.

I didn’t want to overwhelm the reader with lots of code up front, so I’ve tried to dole it out in manageable chunks. The amount of code ramps up a little bit in each chapter until it peaks in chapter five. After that, it ramps down a bit with each subsequent chapter.

This tweet perfectly encapsulates the audience I had in mind for the book:

Some people have received advance copies of the PDF, and I’m very happy with the feedback I’m getting.

Honestly, that is so, so gratifying to hear!

Words cannot express how delighted I am with Sara’s reaction:

She’s walking the walk too:

That gives me a warm fuzzy glow!

If you’ve been nervous about service workers, but you’ve always wanted to turn your site into a progressive web app, you should get a copy of this book.