Tags: language

Saturday, November 16th, 2019

Open UI

An interesting project that will research and document the language used across different design systems to name similar components.

Monday, November 11th, 2019

Cat encounters

The latest episode of Ariel’s excellent Offworld video series (and podcast) is all about Close Encounters Of The Third Kind.

I have such fondness for this film. It’s one of those films that I love to watch on a Sunday afternoon (though that’s true of so many Spielberg films—Jaws, Raiders Of The Lost Ark, E.T.). I remember seeing it in the cinema—this would’ve been the special edition re-release—and feeling the seat under me quake with the rumbling of the musical exchange during the film’s climax.

Ariel invited Rose Eveleth and Laura Welcher on to discuss the film. They spent a lot of time discussing the depiction of first contact communication—Arrival being the other landmark film on this topic.

This is a timely discussion. There’s a new book by Daniel Oberhaus published by MIT Press called Extraterrestrial Languages:

If we send a message into space, will extraterrestrial beings receive it? Will they understand?

You can read an article by the author on The Guardian, where he mentions some of the wilder ideas about transmitting signals to aliens:

Minsky, widely regarded as the father of AI, suggested it would be best to send a cat as our extraterrestrial delegate.

Don’t worry. Marvin Minsky wasn’t talking about sending a real live cat. Rather, the idea is that we’d transmit instructions for building a computer, and then transmit information as software. Software about, say, cats.

It’s not that far removed from what happened with the Voyager golden record, although that relied on analogue technology—the phonograph—and sent the message pre-compiled on hardware; a much slower transmission rate than radio.

But it’s interesting to me that Minsky specifically mentioned cats. There’s another long-term communication puzzle that has a cat connection.

The Yucca Mountain nuclear waste repository is supposed to store nuclear waste for 10,000 years. How do we warn our descendants to stay away? We can’t use language. We probably can’t even use symbols; they’re too culturally specific. A think tank called the Human Interference Task Force was convened to agree on the message to be conveyed:

This place is a message… and part of a system of messages… pay attention to it! Sending this message was important to us. We considered ourselves to be a powerful culture.

This place is not a place of honor…no highly esteemed deed is commemorated here… nothing valued is here.

What is here is dangerous and repulsive to us. This message is a warning about danger.

A series of thorn-like threatening earthworks was deemed the most feasible solution. But there was another proposal that took a two-pronged approach combining genetics and folklore:

  1. Breed cats that change colour in the presence of radioactive material.
  2. Teach children nursery rhymes about staying away from cats that change colour.

This is the raycat solution.

Tuesday, August 27th, 2019

Voice User Interface Design by Cheryl Platz

Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…

Why make a voice interface?

Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.

People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?

Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.

Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?

In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.

In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.

What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.

Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.

Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experience can exclude people with speech difficulties.

Making the case for voice interfaces

There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
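To give a flavour of that, here’s a hypothetical sketch (not the exact Alexa schema; each platform has its own JSON shape) of an interaction model: little more than named intents paired with sample utterances.

```typescript
// Hypothetical interaction model: named intents paired with sample
// utterances. Real platforms use their own JSON schemas, but the
// overall shape is broadly like this.
interface IntentDefinition {
  name: string;
  samples: string[]; // example utterances that should map to this intent
}

const interactionModel: { intents: IntentDefinition[] } = {
  intents: [
    {
      name: "SetTimerIntent",
      samples: [
        "set a timer for fifteen minutes",
        "start a fifteen minute timer",
        "time my cookies for fifteen minutes",
      ],
    },
    {
      name: "CancelTimerIntent",
      samples: ["cancel the timer", "stop the timer"],
    },
  ],
};
```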

How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.

You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.

Getting started with voice interfaces

You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.

Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.

Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.

Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.

Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as simple grammars before moving on to a machine-learning model once the concept has been proven.
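As a minimal sketch in TypeScript, with invented utterances, a grammar can be as simple as a lookup table pairing exact utterances with intents:

```typescript
// A simple grammar: a finite list of expected utterances, each paired
// with an intent. Anything outside the list is simply not understood.
const grammar = new Map<string, string>([
  ["set a timer for fifteen minutes", "SetTimerIntent"],
  ["start a fifteen minute timer", "SetTimerIntent"],
  ["cancel the timer", "CancelTimerIntent"],
]);

function matchIntent(utterance: string): string | undefined {
  return grammar.get(utterance.trim().toLowerCase());
}

console.log(matchIntent("Set a timer for fifteen minutes")); // "SetTimerIntent"
console.log(matchIntent("time my cookies")); // undefined: not in the grammar
```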

Here’s the general idea with “artificial intelligence”…

There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This gets expressed as a spoken request: “set a 15-minute timer.” That utterance is translated into a string, but it hasn’t yet been parsed as language. The string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal, e.g. intent=timer; duration=15 minutes. That’s sent to the business logic where a timer is actually set. For a good voice interface, you also want to send back a response, e.g. “setting timer for 15 minutes, starting now.”
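Sketched in code, the whole round trip might look something like this toy example, where `understand` stands in for a real natural language understanding service and the parsing is deliberately naive:

```typescript
// The voice pipeline, end to end (all names are illustrative):
// utterance string -> NLU -> intent data structure -> business logic -> prompt

interface TimerRequest {
  intent: "timer";
  durationMinutes: number;
}

// Stand-in for a real natural language understanding system.
function understand(utterance: string): TimerRequest | null {
  const match = utterance.match(/(\d+)\s*minute/);
  return match ? { intent: "timer", durationMinutes: Number(match[1]) } : null;
}

function handle(utterance: string): string {
  const request = understand(utterance);
  if (!request) {
    return "Sorry, I didn't catch that.";
  }
  // Business logic: actually start the timer.
  setTimeout(() => console.log("Timer done!"), request.durationMinutes * 60 * 1000);
  // The response prompt closes the loop for the customer.
  return `Setting timer for ${request.durationMinutes} minutes, starting now.`;
}

console.log(handle("set a 15 minute timer"));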

That seems simple enough, right? What’s so hard about designing for voice?

Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.

How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
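In code, that design decision might look like the following sketch; the candidate list and the threshold value are invented for illustration:

```typescript
// NLU systems typically return several candidate intents, each with a
// confidence score. The design question is what to do with them.
interface Candidate {
  intent: string;
  confidence: number; // probability between 0 and 1
}

const CONFIDENCE_THRESHOLD = 0.7; // an arbitrary cut-off for illustration

function chooseIntent(candidates: Candidate[]): string {
  // Pick the most probable interpretation...
  const best = candidates.reduce((a, b) => (b.confidence > a.confidence ? b : a));
  // ...but if even the best guess is shaky, ask rather than guess wrong.
  if (best.confidence < CONFIDENCE_THRESHOLD) {
    return "ClarificationIntent"; // e.g. "Did you want me to set a timer?"
  }
  return best.intent;
}

console.log(chooseIntent([
  { intent: "SetTimerIntent", confidence: 0.92 },
  { intent: "PlayMusicIntent", confidence: 0.05 },
])); // "SetTimerIntent"
```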

Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.

Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.

Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, what a big question it is what gender a voice interface should have.

Delivering a voice interface

Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that not only cover the happy path, but also your edge cases. Then you reverse-engineer from there.

Flow diagrams communicate customer states, but don’t use the actual text in them.

Prompt lists are your final deliverable.

Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.

If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).

Eventually voice design will become a core competency, much like mobile, which was once separate.

Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.

May the voice be with you!

Tuesday, June 11th, 2019

Mornington Crescent - Esolang

A (possibly) Turing complete language:

As the validity and the semantics of a program depend on the structure of the London underground system, which is administered by London Underground Ltd, a subsidiary of Transport for London, who are likely unaware of the existence of this programming language, its future compatibility is uncertain. Programs may become invalid or subtly wrong as the transport company expands or retires some of the network, reroutes lines or renames stations. Features may be removed with no prior consultation with the programming community. For all we know, Mornington Crescent itself may at some point be closed, at which point this programming language will cease to exist.

Friday, June 7th, 2019

Language and the Invention of Writing | Talking Points Memo

Language is not an invention. As best we can tell it is an evolved feature of the human brain. There have been almost countless languages humans have spoken. But they all follow certain rules that grow out of the wiring of the human brain and human cognition. Critically, it is something that is hardwired into us. Writing is an altogether different and artificial thing.

Building on Vimeo

Here’s the video of the opening talk I gave at New Adventures earlier this year. I think it’s pretty darn good!

Wednesday, May 15th, 2019

Humanizing Your Documentation - Full Talk - Speaker Deck

The slides from Carolyn’s talk at Beyond Tellerrand. The presentation is ostensibly about writing documentation, but I think it’s packed with good advice for writing in general.

Saturday, April 6th, 2019

Cool goal

One evening last month, during An Event Apart Seattle, a bunch of the speakers were gathered in the bar in the hotel lobby, shooting the breeze and having a nightcap before the next day’s activities. In a quasi-philosophical mode, the topic of goals came up. Not the sporting variety, but life and career goals.

As everyone related (confessed?) their goals, I had to really think hard. I don’t think I have any goals. I find it hard enough to think past the next few months, much less form ideas about what I might want to be doing in a decade. But then I remembered that I did once have a goal.

Back in the ’90s, when I was living in Germany and first starting to make websites, there was a website I would check every day for inspiration: Project Cool’s Cool Site Of The Day. I resolved that my life’s goal was to one day have a website I made be the cool site of the day.

About a year later, to my great shock and surprise, I achieved my goal. An early iteration of Jessica’s site—complete with whizzy DHTML animations—was the featured site of the day on Project Cool. I was overjoyed!

I never bothered to come up with a new goal to supersede that one. Maybe I should’ve just retired there and then—I had peaked.

Megan Sapnar Ankerson wrote an article a few years back about How coolness defined the World Wide Web of the 1990s:

The early web was simply teeming with declarations of cool: Cool Sites of the Day, the Night, the Week, the Year; Cool Surf Spots; Cool Picks; Way Cool Websites; Project Cool Sightings. Coolness awards once besieged the web’s virtual landscape like an overgrown trophy collection.

It’s a terrific piece that ponders the changing nature of the web, and the changing nature of that word: cool.

Perhaps the word will continue to fall out of favour. Tim Berners-Lee may have demonstrated excellent foresight when he added this footnote to his classic document, Cool URIs don’t change—still available at its original URL, of course:

Historical note: At the end of the 20th century when this was written, “cool” was an epithet of approval particularly among young, indicating trendiness, quality, or appropriateness.

Tuesday, April 2nd, 2019

Yet Another JavaScript Framework | CSS-Tricks

This is such a well-written piece! Jay Hoffman—author of the excellent History Of The Web newsletter—talks us through the JavaScript library battles of the late 2000s …and the consequences that arose just last year.

The closing line is perfect.

Thursday, February 28th, 2019

Getting help from your worst enemy

Onboarding. Reaching out. In terms of. Synergy. Bandwidth. Headcount. Forward planning. Multichannel. Going forward. We are constantly bombarded and polluted with nonsense speak. These words and phrases snag and attach themselves to our vocabulary like sticky weeds.

Words become walls.

I love this post from Ben on the value of plain language!

We’re not dumbing things down by using simple terms. We’re being smarter.

Read on for the story of the one exception that Ben makes—it’s a good one.

Tuesday, February 26th, 2019

The CSS mental model - QuirksBlog

PPK looks at the different mental models behind CSS and JavaScript. One is declarative and one is imperative.

There’s a lot here that ties in with what I was talking about at New Adventures around the rule of least power in technology choice.

I’m not sure if I agree with describing CSS as being state-based. The example that illustrates this—a :hover style—feels like an exception rather than a typical example of CSS.

Sunday, February 24th, 2019

Programming as translation – Increment: Internationalization

Programming lessons from Umberto Eco and Emily Wilson.

Converting the analog into the digital requires discretization, leaving things out. What we filter out—or what we focus on—depends on our biases. How do conventional translators handle issues of bias? What can programmers learn from them?

Saturday, February 2nd, 2019

“It turns out” « the jsomers.net blog

It turns out that “it turns out” is a handy linguistic shortcut for making an unsubstantiated assertion.

Friday, February 1st, 2019

Readable Code without Prescription Glasses | Ocasta

I saw Daniel give a talk at Async where he compared linguistic rules with code style:

We find the prescriptive rules hard to follow, irrespective of how complex they are, because they are invented, arbitrary, and often go against our intuition. The descriptive rules, on the other hand, are easy to follow because they are instinctive. We learned to follow them as children by listening to, analysing and mimicking speech, armed with an inbuilt concept of the basic building blocks of grammar. We follow them subconsciously, often without even knowing the rules exist.

Thus began some thorough research into trying to uncover a universal grammar for readable code:

I am excited by the possibility of discovering descriptive readability rules, and last autumn I started an online experiment to try and find some. My experiment on howreadable.com compared various coding patterns against each other in an attempt to objectively measure their readability. I haven’t found any strong candidates for prescriptive rules so far, but the results are promising and suggest a potential way forward.

I highly recommend reading through this and watching the video of the Async talk (and conference organisers: get Daniel on your line-up!).

Tuesday, January 22nd, 2019

I’m a Web Designer - Andy Bell

Something that I am increasingly uncomfortable with is our industry’s obsession with job titles. I understand that the landscape has gotten a lot more complex than when I started out in 2009, but I do think the sheer volume and variation in titles isn’t overly helpful in communicating what people actually do.

I share Andy’s concern. I kinda wish that the title for this open job role at Clearleft could’ve just said “Person”.

The Great Divide | CSS-Tricks

An excellent, thorough analysis by Chris of the growing divide between front-end developers and …er, other front-end developers?

The divide is between people who self-identify as a (or have the job title of) front-end developer, yet have divergent skill sets.

On one side, an army of developers whose interests, responsibilities, and skill sets are heavily revolved around JavaScript.

On the other, an army of developers whose interests, responsibilities, and skill sets are focused on other areas of the front end, like HTML, CSS, design, interaction, patterns, accessibility, etc.

Wednesday, January 16th, 2019

Use the :lang pseudo-class over the lang attribute selector for language-specific styles

This is a great explanation of the difference between the [lang] and :lang CSS selectors. I wouldn’t even have thought of the differences, so this is really valuable to me.

Monday, January 7th, 2019

HTML+ Discussion Document: Images

Back in 1993, David Raggett wrote up all the proposed extensions to HTML that were being discussed on the www-talk mailing list. It was called HTML+, which would’ve been a great way of describing HTML5.

Twenty-five years later, I wish that the proposed IMAGE element had come to pass. Unlike the IMG element, it would’ve had a closing tag, allowing for fallback content between the tags:

The IMAGE element behaves in the same way as IMG but allows you to include descriptive text, which can be shown on text-only displays.

Yeah, I know we have the alt attribute, but that’s always felt like an inelegant bolt-on to me.

Saturday, December 29th, 2018

The 100 Year Web (In Praise of XML)

I don’t agree with Steven Pemberton on a lot of things—I’m not a fan of many of the Semantic Web technologies he likes, and I think that the Robustness Principle is well-suited to the web—but I always pay attention to what he has to say. I certainly share his concern that migrating everything to JavaScript is not good for interoperability:

This is why there are so few new elements in HTML5: they haven’t done any design, and instead said “if you need anything, you can always do it in Javascript”.

And they all have.

And they are all different.

Read this talk transcript, and even if you don’t agree with everything in it today, you may end up coming back to it in the future. He’s playing the long game:

The web is the way now that we distribute information. We will need the web pages we create now to be readable in 100 years time, just as we can still read 100-year-old books.

Requiring a webpage to depend on a particular 100-year-old implementation of Javascript is not exactly evidence of future-thinking.

Tuesday, December 18th, 2018

Stop Learning Frameworks – Lifehacks for Developers by Eduards Sizovs

It’s a terribly clickbaity (and negatively phrased) title, but if you turn it around, there’s some good advice in here for deciding where to focus when it comes to dev technology:

  • Programming languages are different, but design smells are alike.
  • Frameworks are different, but the same design patterns shine through.
  • Developers are different, but rules of dealing with people are uniform.