Tuesday, February 21st, 2017

[this is aaronland] fault lines — a cultural heritage of misaligned expectations

When Aaron talks, I listen. This time he’s talking about digital (and analogue) preservation, and how that can clash with licensing rules.

It is time for the sector to pick a fight with artists, and artists’ estates and even your donors. It is time for the sector to pick a fight with anyone that is preventing you from being allowed to have a greater — and I want to stress greater, not total — license of interpretation over the works which you are charged with nurturing and caring for.

It is time to pick a fight because, at least on bad days, I might even suggest that the sector has been played. We all want to outlast the present, and this is especially true of artists. Museums and libraries and archives are a pretty good bet if that’s your goal.

Monday, February 20th, 2017

Amber

I really enjoyed teaching in Porto last week. It was like having a week-long series of CodeBar sessions.

Whenever I’m teaching at CodeBar, I like to be paired up with people who are just starting out. There’s something about explaining the web and HTML from first principles that I really like. And people often have lots and lots of questions that I enjoy answering (if I can). At CodeBar—and at The New Digital School—I found myself saying “Great question!” multiple times. The really great questions are the ones that I respond to with “I don’t know …let’s find out!”

CodeBar is always a very rewarding experience for me. It has given me the opportunity to try teaching. And having tried it, I can now safely say that I like it. It’s also a great chance to meet people from all walks of life. It gets me out of my bubble.

I can’t remember when I was first paired up with Amber at CodeBar. It must have been sometime last year. I do remember that she had lots of great questions—at some point I found myself explaining how hexadecimal colours work.

I was impressed with Amber’s eagerness to learn. I also liked that she was making her own website. I told her about Homebrew Website Club and she started coming along to that (along with other CodeBar people like Cassie and Alice).

I’ve mentioned to multiple CodeBar students that there’s pretty much an open-door policy at Clearleft when it comes to shadowing: feel free to come along and sit with a front-end developer while they’re working on client projects. A few people have taken up the offer and enjoyed observing me or Charlotte at work. Amber was one of those people. Again, I was very impressed with her drive. She’s got a full-time job (with sometimes-crazy hours) but she’s so determined to get into the world of web design and development that she’s willing to spend her free time visiting Clearleft to soak up the atmosphere of a design studio.

We’ve decided to turn this into something more structured. Amber and I will get together for a couple of hours once a week. She’s given me a list of some of the areas she wants to explore, and I think it’s a fine-looking list:

  • I want to gather base, structural knowledge about the web and all related aspects. Things seem to float around in a big cloud at the moment.
  • I want to adhere to best practices.
  • I want to learn more about what direction I want to go in, find a niche.
  • I’d love the opportunity to chat with the brilliant people who work at Clearleft and gain a broad range of knowledge from them.

My plan right now is to take a two-track approach: one track about the theory, and another track about the practicalities. The practicalities will be HTML, CSS, JavaScript, and related technologies. The theory will be about understanding the history of the web and its strengths and weaknesses as a medium. And I want to make sure there’s plenty of UX, research, information architecture and content strategy covered too.

Seeing as we’ll only have a couple of hours every week, this won’t be quite like the masterclass I just finished up in Porto. Instead I imagine I’ll be laying some groundwork and then pointing to topics to research. I guess it’s a kind of homework. For example, after we talked today, I set Amber this little bit of research for the next time we meet: “What is the difference between the internet and the World Wide Web?”

I’m excited to see where this will lead. I find Amber’s drive and enthusiasm very inspiring. I also feel a certain weight of responsibility—I don’t want to enter into this lightly.

I’m not really sure what to call this though. Is it mentorship? Or is it coaching? Or training? All of the above?

Whatever it is, I’m looking forward to documenting the journey. Amber will be writing about it too. She is already demonstrating a way with words.

Thursday, February 16th, 2017

Principles of Web Development · Jens Oliver Meiert

Some proposed design principles for web developers:

  1. Focus on the User
  2. Focus on Quality
  3. Keep It Simple
  4. Think Long-Term (and Beware of Fads)
  5. Don’t Repeat Yourself (aka One Cannot Not Maintain)
  6. Code Responsibly
  7. Know Your Field

Teaching in Porto, day three

Day two ended with a bit of a cliffhanger as I had the students mark up a document, but not yet style it. In the morning of day three, the styling began.

Rather than just treat “styling” as one big monolithic task, I broke it down into typography, colour, negative space, and so on. We time-boxed each one of those parts of the visual design. So everyone got, say, fifteen minutes to write styles relating to font families and sizes, then another fifteen minutes to write styles for colours and background colours. Bit by bit, the styles were layered on.

When it came to layout, we closed the laptops and returned to paper. Everyone did a quick round of 6-up sketching so that there was plenty of fast iteration on layout ideas. That was followed by some critique and dot-voting of the sketches.

Rather than diving into the CSS for layout—which can get quite complex—I instead walked through the approach for layout; namely putting all your layout styles inside media queries. To explain media queries, I first explained media types and then introduced the query part.

I felt pretty confident that I could skip over the nitty-gritty of media queries and cross-device layout because the next masterclass at the New Digital School will be a week of responsive design, taught by Vitaly. I just gave them a taster—Vitaly can dive deeper.

By lunch time, I felt that we had covered CSS pretty well. After lunch it was time for the really challenging part: JavaScript.

The reason why I think JavaScript is challenging is that it’s inherently more complex than HTML or CSS. Those are declarative languages with fairly basic concepts at heart (elements, attributes, selectors, etc.), whereas an imperative language like JavaScript means entering the territory of logic, loops, variables, arrays, objects, and so on. I really didn’t want to get stuck in the weeds with that stuff.

I focused on the combination of JavaScript and the Document Object Model as a way of manipulating the HTML and CSS that’s already inside a browser. A lot of that boils down to this pattern:

When (some event happens), then (take this action).

We brainstormed some examples of this, e.g. “When the user submits a form, then show a modal dialogue with an acknowledgement.” I then encouraged them to write a script …but I don’t mean a script in the JavaScript sense; I mean a script in the screenwriting or theatre sense. Line by line, write out each step that you want to accomplish. Once you’ve done that, translate each line of your English (or Portuguese) script into JavaScript.
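
To make that concrete, here’s roughly how the form example might translate (a minimal sketch; the element IDs are made up, and a simple hidden message stands in for a proper modal dialogue):

// When the user submits the form...
var form = document.querySelector('#contact-form');
form.addEventListener('submit', event => {
    // ...stop the browser from leaving the page...
    event.preventDefault();
    // ...then take this action: show an acknowledgement.
    document.querySelector('#thanks').removeAttribute('hidden');
});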

I did a quick demo as a proof of concept (which, much to my surprise, actually worked first time), but I was at pains to point out that they didn’t need to remember the syntax or vocabulary of the script; it was much more important to have a clear understanding of the thinking behind it.

With the remaining time left in the day, we ran through the many browser APIs available to JavaScript, from the relatively simple—like querySelector and Ajax—right up to the latest device APIs. I think I got the message across that, using JavaScript, there’s practically no limit to what you can do on the web these days …but the trick is to use that power responsibly.
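
To give a flavour of that range, here’s a quick sketch pairing one of the simple APIs with some Ajax (the URL and the element are placeholders):

// The simple end of the spectrum: grab hold of an element...
var heading = document.querySelector('h1');

// ...and make an Ajax request with fetch.
fetch('/api/example.json')
.then( response => response.json() )
.then( data => {
    // Update the page with the data that came back
    heading.textContent = data.title;
});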

At this point, we’ve had three days and we’ve covered three layers of web technologies: HTML, CSS, and JavaScript. Tomorrow we’ll take a step back from the nitty-gritty of the code. It’s going to be all about how to plan and think about building for the web before a single line of code gets written.

Sunday, February 12th, 2017

Accessibility and Performance | MarcySutton.com

When I heard about Universal JavaScript apps (a.k.a. isomorphic JavaScript), despite the “framework hotness”, I saw real value for accessibility and performance together. With this technique, a JavaScript app is rendered as a complete HTML payload from the server using Node.js, which is then upgraded as client resources download and execute. All of a sudden your Angular app could be usable a lot sooner, even without browser JS. Bells started going off in my head: “this could help accessible user experience, too!”

Interneting Is Hard | Web Development Tutorials For Complete Beginners

A nice straightforward introduction to web development for anyone starting from scratch.

Thursday, February 9th, 2017

Performance Under Pressure - performance, responsive web design - Bocoup

The transcript of a really great—and entertaining—talk on performance by Wilto. I may have laughed out loud at points.

Ramen. 🍜

Fluid Paint

The texture here is shockingly realistic.

Wednesday, February 8th, 2017

Polyfills and the evolution of the web - TAG finding

Really good advice for anyone thinking of releasing a polyfill into the world.

Friday, February 3rd, 2017

Isomorphic rendering on the JAM Stack

Phil describes the process of implementing the holy grail of web architecture (which perhaps isn’t as difficult as everyone seems to think it is):

I have been experimenting with something that seemed obvious to me for a while. A web development model which gives a pre-rendered, ready-to-consume, straight-into-the-eyeballs web page at every URL of a site. One which, once loaded, then behaves like a client-side, single page app.

Now that’s resilient web design!

Sunday, January 29th, 2017

Callback Hell

At first when I was reading this JavaScript coding guide, I thought “Isn’t this exactly what promises address?” but that is then addressed further down:

Before looking at more advanced solutions, remember that callbacks are a fundamental part of JavaScript (since they are just functions) and you should learn how to read and write them before moving on to more advanced language features, since they all depend on an understanding of callbacks.

Fair enough. In any case, what you’ll find here is mainly good advice for writing modular code.
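
The fundamental point is easy to illustrate: a callback is just a function passed to another function, to be called later. Here’s a minimal sketch in the Node-flavoured, error-first style the guide uses (the filename is a placeholder):

var fs = require('fs');

// A named function, written to be used as a callback.
// Node-style callbacks receive any error as their first argument.
function handleFile (error, contents) {
    if (error) {
        console.error('Could not read the file:', error);
        return;
    }
    console.log(contents);
}

// Pass the function in; it gets called once the file has been read.
fs.readFile('example.txt', 'utf8', handleFile);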

DirtyMarkup · Tidy up your HTML, CSS, and JavaScript code

A handy prettifier for front-end code. Useful if you’re trying to find something inside markup, CSS, or JavaScript that’s been minified.

Detecting text in an image on the web in real-time - Tales of a Developer Advocate by Paul Kinlan

The text detection API is still in its experimental stage, but it opens up a lot of really interesting possibilities for the web: assistive technology to read out text, archiving tools for digitising text …it’s all part of the nascent shape detection API.
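
The API surface is small. At the time of writing, using it looks something like this (a rough sketch; the API is experimental, so the details could well change):

// Experimental: feature-detect before using.
if ('TextDetector' in window) {
    var detector = new TextDetector();
    // The source could be an image element, a canvas, or an ImageBitmap
    var image = document.querySelector('img');
    detector.detect(image)
    .then( detectedTexts => {
        detectedTexts.forEach( text => {
            console.log(text.rawValue, text.boundingBox);
        });
    })
    .catch( error => {
        console.error('Text detection failed:', error);
    });
}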

Saturday, January 28th, 2017

The Promise of a Burger Party - Mariko Kosaka

Mariko has a real knack for explaining technical concepts in a very accessible way. This time it’s JavaScript promises.

Friday, January 27th, 2017

✨Implementing “Save For Offline” with Service Workers | Una Kravets Online✨

A great little script from Una that’s perfect for blogs and news sites—allowing users to explicitly save a page for offline reading.

Thursday, January 19th, 2017

Looking beyond launch

It’s all go, go, go at Clearleft while we’re working on a new version of our website …accompanied by a brand new identity. It’s an exciting time in the studio, tinged with the slight stress that comes with any kind of unveiling like this.

I think it’s good to remember that this is the web. I keep telling myself that we’re not unveiling something carved in stone. Even after the launch we can keep making the site better. In fact, if we wait until everything is perfect before we launch, we’ll probably never launch at all.

On the other hand, you only get one chance to make a first impression, right? So it’s got to be good …but it doesn’t have to be done. A website is never done.

I’ve got to get comfortable with that. There’s lots of things that I’d like to be done in time for launch, but realistically it’s fine if those things are completed in the subsequent days or weeks.

Adding a service worker and making a nice offline experience? I really want to do that …but it can wait.

What about other performance tweaks? Yes, we’ll try to have every asset—images, fonts—optimised …but maybe not from day one.

Making sure that each page has good metadata—Open Graph? Twitter Cards? Microformats? Maybe even AMP? Sure …but not just yet.

Having gorgeous animations? Again, I really want to have them but as Val rightly points out, animations are an enhancement—a really, really great enhancement.

If anything, putting the site live before doing all these things acts as an incentive to make sure they get done.

So when you see the new site, if you view source or run it through WebPageTest and spot areas for improvement, rest assured we’re on it.

Understanding the Critical Rendering Path

A nice and clear description of how browsers parse and render web pages.

Let them paste passwords - NCSC Site

Ever been on one of those websites that doesn’t allow you to paste into the password field? Frustrating, isn’t it? (Especially if you use a password manager.)

It turns out that nobody knows how this ever started. It’s like a cargo cult without any cargo.

Wednesday, January 11th, 2017

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing, so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that for most sites though—for most sites, use a service worker as an enhancement, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything: all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
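
The install step itself looks something like this (a simplified sketch; the file list here is abbreviated, and staticCacheName is the cache-name variable used by the rest of the script):

// A simplified sketch of the install step: cache the static assets up front.
self.addEventListener('install', event => {
    event.waitUntil(
        caches.open(staticCacheName)
        .then( cache => {
            return cache.addAll([
                '/',
                '/css/styles.css',
                '/js/scripts.js'
                // ...and every other page and image in the book
            ]);
        })
    );
});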

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }
});

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }
});

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }
});

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.