Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.
Friday, September 27th, 2019
Saturday, September 7th, 2019
Six UX lessons from game design:
- Story vs Narrative (Think in terms of story arcs)
- Games are fractal (Break up the journey from big to small to tiny)
- Learning loop (figure out your core mechanic)
- Affordances (Prompt for known loops)
- Hintiness (Move to new loops)
- Pacing (Be sure to start here)
Friday, September 6th, 2019
I got an email recently from a young person looking to get into web development. They wanted to know what languages they should start with, whether they should get a Mac or a Windows PC, and what some good places to learn from might be.
I wrote back, saying this about languages:
And this is what I said about hardware and software:
It doesn’t matter whether you use a Mac or a Windows PC, as long as you’ve got an internet connection, some web browsers (Chrome, Firefox, for example) and a text editor. There are some very good free text editors available for Mac and PC:
For resources, I had a trawl through links I’ve tagged with “learning” and “html” and sent along some links to free online tutorials:
- Codebar tutorials
- HTML+CSS tutorial
- Marksheet, a free HTML and CSS tutorial
- Learn to code HTML and CSS
- Just starting out with CSS and HTML
- Interneting is hard (but it doesn’t have to be)
- Web design in four minutes
- The front-end developer handbook
After sending that email, I figured the list might be useful to anyone else looking to get started in web development. If you know of anyone in that situation, I hope it helps.
Tuesday, August 27th, 2019
Voice User Interface Design by Cheryl Platz
Why make a voice interface?
Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.
People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?
Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.
Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?
In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.
In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.
What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.
Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.
Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experiences can exclude people with speech difficulties.
Making the case for voice interfaces
There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
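To make that concrete, here’s a rough sketch of the kind of JSON an Alexa-style interaction model contains. The invocation name, intent, and sample utterances are all invented for illustration—this would really live in a JSON file, but it’s shown here as a JavaScript object:

```javascript
// interaction-model.json, sketched as a JavaScript object.
// "kitchen helper" and SetTimerIntent are made-up names for illustration.
const interactionModel = {
  languageModel: {
    invocationName: 'kitchen helper',
    intents: [
      {
        name: 'SetTimerIntent',
        slots: [{ name: 'duration', type: 'AMAZON.DURATION' }],
        samples: [
          'set a timer for {duration}',
          'start a {duration} timer',
          'count down for {duration}',
        ],
      },
    ],
  },
};
```

That’s more or less the whole thing: a list of intents, each paired with the sample utterances that should trigger it.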
How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.
You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.
Getting started with voice interfaces
You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.
Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.
Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.
Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.
Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as simple grammars before moving on to a machine-learning model once the concept has been proven.
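A grammar really can be as simple as a lookup table. A minimal sketch, with made-up utterances and intent names:

```javascript
// A finite grammar: each expected utterance is paired with an intent.
// Anything not in the list simply doesn't match.
const grammar = new Map([
  ['set a timer for fifteen minutes', 'SetTimerIntent'],
  ['start a fifteen minute timer', 'SetTimerIntent'],
  ['cancel the timer', 'CancelTimerIntent'],
  ['stop the timer', 'CancelTimerIntent'],
]);

function matchIntent(utterance) {
  // Normalise, then look the utterance up; undefined means "no match".
  return grammar.get(utterance.trim().toLowerCase());
}
```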
Here’s the general idea with “artificial intelligence”…
There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This gets expressed as an utterance like, “set a 15 minute timer.” That utterance is translated into a string, but it hasn’t yet been parsed as language. The string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal, e.g. intent=timer; duration=15 minutes. That’s sent to the business logic, where a timer is actually set. For a good voice interface, you also want to send back a response, e.g. “Setting timer for 15 minutes, starting now.”
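Here’s a minimal sketch of that round trip in code. Every function name is made up, and a real system would split these stages across separate services:

```javascript
// 1. Speech recognition has already turned the audio into a string.
const utterance = 'set a 15 minute timer';

// 2. Natural language understanding: string → structured goal.
function understand(text) {
  const match = text.match(/set a (\d+) minute timer/i);
  if (!match) return null;
  return { intent: 'SetTimerIntent', durationMinutes: Number(match[1]) };
}

// 3. Business logic: actually set the timer, then compose a response.
function handle(request) {
  setTimeout(() => console.log('Ding!'), request.durationMinutes * 60 * 1000);
  return `Setting timer for ${request.durationMinutes} minutes, starting now.`;
}

const request = understand(utterance);
if (request) {
  console.log(handle(request));
  // → 'Setting timer for 15 minutes, starting now.'
}
```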
That seems simple enough, right? What’s so hard about designing for voice?
Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.
How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
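Hypothetically, handling those confidence ratings looks something like this. The result shape and threshold are invented for illustration, but NLU services return something along these lines:

```javascript
// Possible interpretations of one utterance, ranked by probability.
const results = [
  { intent: 'SetTimerIntent', confidence: 0.91 },
  { intent: 'PlayMusicIntent', confidence: 0.42 },
  { intent: 'CheckWeatherIntent', confidence: 0.07 },
];

const THRESHOLD = 0.7; // below this, don't trust the guess

const best = results.reduce((a, b) => (a.confidence >= b.confidence ? a : b));

if (best.confidence >= THRESHOLD) {
  // Proceed with the most probable interpretation.
  console.log(`Proceeding with ${best.intent}`);
} else {
  // Low confidence: ask the customer to clarify rather than guessing wrong.
  console.log("Sorry, I didn't catch that. Did you want to set a timer?");
}
```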
Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.
Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.
Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. That’s why, for example, the question of what gender to give a voice interface is such a big one.
Delivering a voice interface
Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that cover not only the happy path but also your edge cases. Then you reverse-engineer from there.
Flow diagrams communicate customer states, but don’t use the actual text in them.
Prompt lists are your final deliverable.
Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.
If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).
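The business-logic half can be surprisingly small. Here’s a sketch loosely modelled on the shape of an Alexa skill handler—the names are illustrative, not the real SDK:

```javascript
// A hypothetical intent handler: one piece of custom business logic.
function startTimer(minutes) {
  setTimeout(() => console.log('Ding!'), minutes * 60 * 1000);
}

const SetTimerHandler = {
  canHandle(request) {
    return request.intent === 'SetTimerIntent';
  },
  handle(request) {
    startTimer(request.durationMinutes); // do the real-world work…
    return {
      // …and always hand back a prompt for text-to-speech.
      prompt: `Setting timer for ${request.durationMinutes} minutes, starting now.`,
    };
  },
};

console.log(SetTimerHandler.handle({ intent: 'SetTimerIntent', durationMinutes: 15 }));
```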
Eventually voice design will become a core competency, much like mobile, which was once separate.
Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.
May the voice be with you!
Thursday, August 1st, 2019
I love React. I love how server side rendering React apps is trivial because it all compiles down to vanilla HTML rather than web components, effectively turning it into a kickass template engine that can come alive. I love the way you can very effectively still do progressive enhancement by using completely semantic markup and then letting hydration do more to it.
I also hate React. I hate React because these behaviours are not defaults. React is not gonna warn you if you make a form using divs and unlabelled textboxes and send the whole thing to a server. I hate React because CSS-in-JS approaches by default encourage you to write completely self contained one off components rather than trying to build a website UI up as a whole. I hate the way server side rendering and progressive enhancement are not defaults, but rather things you have to go out of your way to do.
And if you want to adjust the front-end code, you’ve got to set up all this tooling just to change a `div` to a `button`. That’s quite a barrier to entry.
In elevating frontend to the land of Serious Code we have not just made things incredibly over-engineered but we have also set fire to all the ladders that we used to get up here in the first place.
I love React because it lets me do my best work faster and more easily. I hate React because the culture around it more than the library itself actively prevents other people from doing their best work.
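For context, the server-side rendering being praised (and lamented) there is genuinely a small amount of code. A minimal sketch using React’s own APIs, circa 2019—the `App` component is just a placeholder:

```javascript
const React = require('react');
const ReactDOMServer = require('react-dom/server');

// A placeholder component standing in for a real app.
function App() {
  return React.createElement('button', { type: 'button' }, 'Click me');
}

// On the server: the component tree compiles down to vanilla HTML.
const html = ReactDOMServer.renderToString(React.createElement(App));
console.log(html);
// → '<button type="button">Click me</button>' (plus React's bookkeeping attributes)

// On the client, hydration attaches behaviour to that same markup:
// const ReactDOM = require('react-dom');
// ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));
```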
Tuesday, July 23rd, 2019
I find myself writing pseudocode before I write real code, sure, but I also sometimes leave it in place in code comments.
Sunday, July 21st, 2019
Brad describes how he has found his place in the world of React, creating UI components without dabbling in business logic:
Instead of merely creating components’ reference HTML, CSS, and presentational JS, frontend designers can create directly consumable HTML, CSS, and presentational JS that back-of-the-frontend developers can then breathe life into.
What’s clear is that the term “React” has become as broad and undefined as the term “front-end”. Just saying that someone does React doesn’t actually say much about the nature of the work.
When you say “we’re hiring a React developer”, what exactly do you mean by that? “React developer” is almost as vague as “frontend developer”, so clarify. Are you looking for a person to specialize in markup and styles? A person to author middleware and business logic? A person to manage data and databases? A person to own build processes?
Saturday, July 6th, 2019
The Hiding Place: Inside the World’s First Long-Term Storage Facility for Highly Radioactive Nuclear Waste - Pacific Standard
Robert Macfarlane’s new book is an exploration of deep time. In this extract, he visits the Onkalo nuclear waste storage facility in Finland.
Sometimes we bury materials in order that they may be preserved for the future. Sometimes we bury materials in order to preserve the future from them.
Friday, July 5th, 2019
Don’t miss this—a masterclass in SVG animation with Cassie (I refuse to use the W word). Mark your calendar: August 20th.
Monday, July 1st, 2019
When people talk about learning React, I think that React, in and of itself, is relatively easy to understand. At least, I felt it was. I have components. I have JSX. I hit some hiccups with required keys or making sure I was wrapping child elements properly. But overall, I felt like I grasped it well enough.
Throw in everything else at the same time, though, and things get confusing because it’s hard at first to recognize what belongs to what. “Oh, this is Redux. That is React. That other thing is lodash. Got it.”
This resonates a lot with Dave’s post:
React is an ecosystem. I feel like it’s a disservice to anyone trying to learn to diminish all that React entails. React shows up on the scene with Babel, Webpack, and JSX (which each have their own learning curve) then quickly branches out into technologies like Redux, React-Router, Immutable.js, Axios, Jest, Next.js, Create-React-App, GraphQL, and whatever weird plugin you need for your app.
Thursday, June 27th, 2019
Twenty hard-won lessons from Dan from ten years of Dribbble.
We sent 50 shirts along with a card to friends and colleagues announcing Dribbble’s beta back in 2008. This first batch of members played a pivotal role in the foundation of the community and how it would develop. The shirt helped guilt them into actually checking out the site.
I think I still have my T-shirt somewhere!
Wednesday, June 26th, 2019
This looks like an excellent conference line-up! Alas, I won’t be able to make it (I’m out of the country when it’s on) but you should definitely go if you can.
Sunday, June 23rd, 2019
Lots and lots of programming advice. I can’t attest to the veracity and efficacy of all of it, but this really rang true:
If you have no idea how to start, describe the flow of the application at a high level, in plain English (or your own language) first. Then fill the spaces between the comments with the code.
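That comments-first approach might look something like this in practice (a trivial, made-up example):

```javascript
// First, describe the flow in plain English…
// 1. Read the scores from the file.
// 2. Throw away anything that isn't a number.
// 3. Work out the average.
// 4. Print it.

// …then fill the spaces between the comments with code.
const fs = require('fs');

// 1. Read the scores from the file.
const lines = fs.readFileSync('scores.txt', 'utf8').split('\n');

// 2. Throw away anything that isn't a number.
const scores = lines.map(Number).filter((n) => !Number.isNaN(n));

// 3. Work out the average.
const average = scores.reduce((sum, n) => sum + n, 0) / scores.length;

// 4. Print it.
console.log(average);
```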
Blogging about your stupid solution is still better than being quiet.
You may feel “I’m not smart enough to talk about this” or “This must be so stupid I shouldn’t talk about it”.
Create a blog. Post about your stupid solutions.
Tuesday, June 18th, 2019
A deep dive with good advice on using—and labelling—sectioning content in HTML:
Monday, June 17th, 2019
This really is a most excellent introduction to React. Complete with cheat sheet!
Sunday, June 16th, 2019
This is a wonderfully written post packed with hard-won wisdom.
These are the myths that Monica dispelled for herself:
- I’m a senior developer
- Everyone writes tests
- We’re so far behind everyone else (AKA “tech FOMO”)
- Code quality matters most
- Everything must be documented!!!!
- Technical debt is bad
- Seniority means being the best at programming
As part of the BBC’s ongoing series on deep time, Alexander Rose describes the research he’s been doing for the Clock of the Long Now—materials, locations, ideas …all the pieces that have historically combined to allow artifacts to survive.
The lowest common denominator of the Web. The foundation. The rhythm section. The ladyfingers in the Web trifle. It’s the HTML. And it is becoming increasingly clear to me that there’s a whole swathe of Frontend Engineers who don’t know or understand the frontend-est of frontend technologies.
Tuesday, June 11th, 2019
A (possibly) Turing complete language:
As the validity and the semantics of a program depend on the structure of the London underground system, which is administered by London Underground Ltd, a subsidiary of Transport for London, who are likely unaware of the existence of this programming language, its future compatibility is uncertain. Programs may become invalid or subtly wrong as the transport company expands or retires some of the network, reroutes lines or renames stations. Features may be removed with no prior consultation with the programming community. For all we know, Mornington Crescent itself may at some point be closed, at which point this programming language will cease to exist.
Monday, June 10th, 2019
This post absolutely nails what’s special about CSS …and why supersmart programmers might have trouble wrapping their head around it:
Other programming languages often work in controlled environments, like servers. They expect certain conditions to be true at all times, and can therefore be understood as concrete instructions as to how a program should execute.
CSS on the other hand works in a place that can never be fully controlled, so it has to be flexible by default.
Max goes on to encapsulate years of valuable CSS learnings into some short and snappy pieces of advice:
No matter what your level of CSS knowledge, this post has something for you—highly recommended!