Tags: interface

Tuesday, November 12th, 2019

Chromium Blog: Moving towards a faster web

It’s nice to see that the Chrome browser will add interface enhancements to show whether you can expect a site to load quickly or slowly.

Just a shame that the Google search team aren’t doing this kind of badging …unless you’ve given up on your website and decided to use Google AMP instead.

Maybe the Chrome team can figure out what the AMP team are doing to get such preferential treatment from the search team.

Tuesday, October 22nd, 2019

203221 – Web Share API: should prefer URL to text when both available

That unusual behaviour I wrote about with the Web Share API in Safari on iOS is now officially a bug—thanks, Tess!

Wednesday, October 16th, 2019

The Web Share API in Safari on iOS

I implemented the Web Share API over on The Session back when it was first available in Chrome on Android. It’s a nifty and quite straightforward API that allows websites to make use of the “sharing drawer” that mobile operating systems provide from within a web browser.

I already had sharing buttons that popped open links to Twitter, Facebook, and email. You can see these sharing buttons on individual pages for tunes, recordings, sessions, and so on.

I was already intercepting clicks on those buttons. I didn’t have to add too much to also check for support for the Web Share API and trigger that instead:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      text: document.querySelector('meta[name="description"]').getAttribute('content'),
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

That worked a treat. As you can see, there are three fields you can pass to the share() method: title, text, and url. You don’t have to provide all three.
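The interception itself looks something like this simplified sketch (not the actual code on The Session; the .share-button selector is invented):

// Progressively enhance existing sharing links: if the Web Share API
// is supported, use it instead of opening the Twitter/Facebook/email links.
document.querySelectorAll('.share-button').forEach(function (button) {
  button.addEventListener('click', function (event) {
    if (navigator.share) {
      event.preventDefault();
      navigator.share({
        title: document.querySelector('title').textContent,
        text: document.querySelector('meta[name="description"]').getAttribute('content'),
        url: document.querySelector('link[rel="canonical"]').getAttribute('href')
      });
    }
    // With no support, the link works as it always did.
  });
});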

Earlier this year, Safari on iOS shipped support for the Web Share API. I didn’t need to do anything. ‘Cause that’s how standards work. You can make use of APIs before every browser supports them, and then your website gets better and better as more and more browsers add support.

But I recently discovered something interesting about the iOS implementation.

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, AirDrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence. But I don’t think this is a bug, per se. There’s nothing in the spec to say how operating systems should handle the data sent via the Web Share API. Still, I think it’s a bit counterintuitive. If I’m looking at a web page, and I opt to share it, then surely the URL is the most important piece of data?

I’m not even sure where to direct this feedback. I guess it’s under the purview of the Safari team, but it also touches on OS-level interactions. Either way, I hope that somebody at Apple will consider changing the current behaviour for copying Web Share data to the clipboard.

In the meantime, I’ve decided to update my code to remove the text parameter:

if (navigator.share) {
  navigator.share(
    {
      title: document.querySelector('title').textContent,
      url: document.querySelector('link[rel="canonical"]').getAttribute('href')
    }
  );
}

If the behaviour of Safari on iOS changes, I’ll reinstate the missing field.

By the way, if you’re making progressive web apps that have display: standalone in the web app manifest, please consider using the Web Share API. When you remove the browser chrome, you’re removing the ability for users to easily share URLs. The Web Share API gives you a way to reinstate that functionality.
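For reference, that display value lives in the web app manifest file. Here’s a minimal sketch with invented values:

{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "icons": [
    {
      "src": "/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}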

Thursday, September 19th, 2019

An HTML attribute potentially worth $4.4M to Chipotle - Cloud Four

When I liveblogged Jason’s talk at An Event Apart in Chicago, I included this bit of reporting:

Jason proceeds to relate a long and involved story about buying burritos online from Chipotle.

Well, here is that story. It’s a good one, with some practical takeaways (if you’ll pardon the pun):

  1. Use HTML5 input features
  2. Support autofill
  3. Make autofill part of your test plans
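The first two takeaways boil down to markup like this (a generic sketch, not Chipotle’s actual form):

<label for="email">Email</label>
<input id="email" name="email" type="email" autocomplete="email">

<label for="card">Card number</label>
<input id="card" name="card" type="text" inputmode="numeric" autocomplete="cc-number">

The type and inputmode attributes get you the right keyboard on mobile; the autocomplete tokens are what let the browser fill the fields in.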

Saturday, September 7th, 2019

How Video Games Inspire Great UX – Scott Jenson

Six UX lessons from game design:

  1. Story vs Narrative (Think in terms of story arcs)
  2. Games are fractal (Break up the journey from big to small to tiny)
  3. Learning loop (figure out your core mechanic)
  4. Affordances (Prompt for known loops)
  5. Hintiness (Move to new loops)
  6. Pacing (Be sure to start here)

samuelgoto/sms-receiver: phone number verification

An interesting proposal to allow websites to detect certain SMS messages. The UX implications are fascinating.

Tuesday, September 3rd, 2019

Bottom Navigation Pattern On Mobile Web Pages: A Better Alternative? — Smashing Magazine

Making the case for moving your navigation to the bottom of the screen on mobile:

Phones are getting bigger, and some parts of the screen are easier to interact with than others. Having the hamburger menu at the top provides too big of an interaction cost, and we have a large number of amazing mobile app designs that utilize the bottom part of the screen. Maybe it’s time for the web design world to start using these ideas on websites as well?

Sunday, September 1st, 2019

Less… Is More? Apple’s Inconsistent Ellipsis Icons Inspire User Confusion - TidBITS

The ellipsis is the new hamburger.

It’s disappointing that Apple, supposedly a leader in interface design, has resorted to such uninspiring, and I’ll dare say, lazy design in its icons. I don’t claim to be a usability expert, but it seems to me that icons should represent a clear intention, followed by a consistent action.

Tuesday, August 27th, 2019

Voice User Interface Design by Cheryl Platz

Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…

Why make a voice interface?

Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.

People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?

Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.

Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?

In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.

In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone, and natural language interaction began to take hold.

What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.

Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.

Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experiences can exclude people with speech difficulties.

Making the case for voice interfaces

There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
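To give an idea of what that JSON looks like, here’s a minimal Alexa-flavoured sketch of an interaction model (the intent name and sample utterances are invented):

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "kitchen timer",
      "intents": [
        {
          "name": "SetTimerIntent",
          "slots": [
            { "name": "duration", "type": "AMAZON.DURATION" }
          ],
          "samples": [
            "set a timer for {duration}",
            "start a {duration} timer"
          ]
        }
      ]
    }
  }
}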

How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.

You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.

Getting started with voice interfaces

You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.

Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.

Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.

Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.

Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as simple grammars before moving on to a machine-learning model once the concept has been proven.
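In code, a simple grammar can be nothing more than a lookup table. Here’s a toy sketch with invented names:

// A finite list of expected utterances, each paired with an intent.
var grammar = {
  'set a timer for fifteen minutes': 'SetTimerIntent',
  'start a fifteen minute timer': 'SetTimerIntent',
  'cancel the timer': 'CancelTimerIntent'
};

var intent = grammar['set a timer for fifteen minutes']; // 'SetTimerIntent'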

Here’s the general idea with “artificial intelligence”…

There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This is voiced as an utterance like, “set a 15 minute timer.” That utterance is transcribed into a string, but it hasn’t yet been parsed as language. That string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal, e.g. intent=timer; duration=15 minutes. That’s sent to the business logic, where a timer is actually set. For a good voice interface, you also want to send back a response, e.g. “setting timer for 15 minutes starting now.”
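Strung together in code, the pipeline might look like this toy sketch, where a regular expression stands in for the statistical model and all the names are invented:

// Understanding: turn an utterance (a string) into a data structure.
function nlu(utterance) {
  var match = utterance.match(/(\d+) minute/);
  if (match && /timer/.test(utterance)) {
    return { intent: 'SetTimerIntent', slots: { minutes: parseInt(match[1], 10) } };
  }
  return { intent: 'Unknown', slots: {} };
}

// Business logic: act on the intent and send back a response.
function handle(result) {
  if (result.intent === 'SetTimerIntent') {
    setTimeout(function () {
      console.log('Timer done!');
    }, result.slots.minutes * 60 * 1000);
    return 'Setting timer for ' + result.slots.minutes + ' minutes, starting now.';
  }
  return 'Sorry, I did not catch that.';
}

console.log(handle(nlu('set a 15 minute timer')));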

That seems simple enough, right? What’s so hard about designing for voice?

Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.

How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
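In practice, those confidence ratings might surface as something like this invented set of results:

// An invented list of recognition hypotheses, best first.
var hypotheses = [
  { intent: 'SetTimerIntent', confidence: 0.82 },
  { intent: 'PlayMusicIntent', confidence: 0.11 },
  { intent: 'Unknown', confidence: 0.07 }
];

var best = hypotheses[0];
if (best.confidence > 0.7) {
  console.log('Confident enough: act on ' + best.intent);
} else if (best.confidence > 0.4) {
  console.log('Middling: confirm with the customer first');
} else {
  console.log('Low confidence: re-prompt and ask again');
}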

Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.

Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.

Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.

Delivering a voice interface

Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that not only cover the happy path, but also your edge cases. Then you reverse engineer from there.

Flow diagrams communicate customer states, but don’t use the actual text in them.

Prompt lists are your final deliverable.

Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.

If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).

Eventually voice design will become a core competency, much like mobile, which was once separate.

Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.

May the voice be with you!

Friday, August 23rd, 2019

Stop Misusing Toggle Switches

Use a toggle switch if you are:

  1. Applying a system state, not a contextual one
  2. Presenting binary options, not opposing ones
  3. Activating a state, not performing an action

4 Rules for Intuitive UX – Learn UI Design

  1. Obey the Law of Locality
  2. ABD: Anything But Dropdowns
  3. Pass the Squint Test
  4. Teach by example

Saturday, August 3rd, 2019

Form design: from zero to hero all in one blog post by Adam Silver

This is about designing forms that everyone can use and complete as quickly as possible. Because nobody actually wants to use your form. They just want the outcome of having used it.

Wednesday, July 24th, 2019

Fast Software, the Best Software — by Craig Mod

Fast software is not always good software, but slow software is rarely able to rise to greatness. Fast software gives the user a chance to “meld” with its toolset. That is, not break flow.

Thursday, July 4th, 2019

User Inyerface - A worst-practice UI experiment

It’s all fun and games until you realise that everything in here was inspired by actual interfaces out there on the web.

Monday, June 24th, 2019

Am I cached or not?

When I was writing about the lie-fi strategy I’ve added to adactio.com, I finished with this thought:

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.”

Trys heard my plea, and came up with a very clever technique to alter the HTML of a page when it’s put into a cache.

It’s a function that reads the response body stream in, returning a new stream. Whilst reading the stream, it searches for the character codes that make up: <html. If it finds them, it tacks on a data-cached attribute.

Nice!
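To give a flavour of the approach, here’s my own rough reconstruction (not Trys’s actual code):

// Read the response body stream, look for the character codes that
// make up "<html", and tack on a data-cached attribute.
function addDataCached(response) {
  var decoder = new TextDecoder();
  var encoder = new TextEncoder();
  var injected = false;
  var buffer = '';
  var stream = new ReadableStream({
    start: function (controller) {
      var reader = response.body.getReader();
      function pump() {
        return reader.read().then(function (result) {
          if (result.done) {
            buffer += decoder.decode(); // flush any remaining bytes
            if (buffer) controller.enqueue(encoder.encode(buffer));
            controller.close();
            return;
          }
          var chunk = decoder.decode(result.value, { stream: true });
          if (!injected) {
            // Keep buffering until we've seen the opening html tag.
            buffer += chunk;
            var index = buffer.indexOf('<html');
            if (index !== -1) {
              buffer = buffer.slice(0, index + 5) + ' data-cached' + buffer.slice(index + 5);
              injected = true;
              controller.enqueue(encoder.encode(buffer));
              buffer = '';
            }
          } else {
            controller.enqueue(encoder.encode(chunk));
          }
          return pump();
        });
      }
      return pump();
    }
  });
  return new Response(stream, { headers: response.headers });
}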

But then I was discussing this issue with Tantek and Aaron late one night after Indie Web Camp Düsseldorf. I realised that I might have another potential solution that doesn’t involve the service worker at all.

Caveat: this will only work for pages that have some kind of server-side generation. This won’t work for static sites.

In my case, pages are generated by PHP. I’m not doing a database lookup every time you request a page—I’ve got a server-side cache of posts, for example—but there is a little bit of assembly done for every request: get the header from here; get the main content from over there; get the footer; put them all together into a single page and serve that up.

This means I can add a timestamp to the page (using PHP). I can mark the moment that it was served up. Then I can use JavaScript on the client side to compare that timestamp to the current time.

I’ve published the code as a gist.

In a script element on each page, I have this bit of coducken:

var serverTimestamp = <?php echo time(); ?>;

Now the JavaScript variable serverTimestamp holds the timestamp that the page was generated. When the page is put in the cache, this won’t change. This number should be the number of seconds since January 1st, 1970 in the UTC timezone (that’s what my server’s timezone is set to).

Starting with JavaScript’s Date object, I use a caravan of methods like toUTCString() and getTime() to end up with a variable called clientTimestamp. This will give the current number of seconds since January 1st, 1970, regardless of whether the page is coming from the server or from the cache.

// The current moment, as a Date object
var localDate = new Date();
// Round-trip through a UTC string (this drops sub-second precision)
var localUTCString = localDate.toUTCString();
var UTCDate = new Date(localUTCString);
// Seconds since January 1st, 1970 UTC, matching PHP's time()
var clientTimestamp = UTCDate.getTime() / 1000;

Then I compare the two and see if there’s a discrepancy greater than five minutes:

if (clientTimestamp - serverTimestamp > (60 * 5))

If there is, then I inject some markup into the page, telling the reader that this page might be stale:

document.querySelector('main').insertAdjacentHTML('afterbegin',`
  <p class="feedback">
    <button onclick="this.parentNode.remove()">dismiss</button>
    This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
  </p>
`);

The reader has the option to refresh the page or dismiss the message.

It’s not foolproof by any means. If the visitor’s computer has their clock set weirdly, then the comparison might return a false positive every time. Still, I thought that using UTC might be a safer bet.

All in all, I think this is a pretty good method for detecting if a page is being served from a cache. Remember, the goal here is not to determine if the user is offline—for that, there’s navigator.onLine.

The upshot is this: if you visit my site with a crappy internet connection (lie-fi), then after three seconds you may be served with a cached version of the page you’re requesting (if you visited that page previously). If that happens, you’ll now also be presented with a little message telling you that the page isn’t fresh. Then it’s up to you whether you want to have another go.

I like the way that this puts control back into the hands of the user.

Wednesday, June 19th, 2019

Using Hamburger Menus? Try Sausage Links · Bradley Taunt

Another take on the scrolling navigation pattern. However you feel about the implementation details, it’s got to be better than the “teenage tidying” method of shoving everything behind a hamburger icon.

Tuesday, June 11th, 2019

Baking accessibility into components: how frameworks help

A very thoughtful post by Hidde that draws a useful distinction between the “internals” of a component (the inner workings of a React component, Vue component, or web component) and the code that wires those components together (the business logic):

I really like working on the detailed stuff that affects users: useful keyboard navigation, sensible focus management, good semantics. But I appreciate not every developer does. I have started to think this may be a helpful separation: some people work on good internals and user experience, others on code that just uses those components and deals with data and caching and solid architecture. Both are valid things, both need love. Maybe we can use the divide for good?

Thursday, June 6th, 2019

An oral history of the hamburger icon (from the people who were there)

From the days of Xerox PARC:

In your garage organization, there’s always a bucket for miscellaneous. You’ve got nuts and bolts and screws and nails, and then, stuff, miscellaneous stuff. That’s kind of what the hamburger menu button was.

Same as it ever was.

Patterns for Promoting PWA Installation (mobile) | Web Fundamentals | Google Developers

Some ideas for interface elements that prompt progressive web app users to add the website to their home screen.

Monday, May 27th, 2019

Bullet Time

Bullet comments, or 弹幕 (“danmu”), are text-based user reactions superimposed onto online videos: a visual commentary track to which anyone can contribute.

A fascinating article by Christina Xu on this overwhelming collaborative UI overlaid on Chinese video-sharing sites:

In the West, the Chinese internet is mostly depicted in negative terms: what websites and social platforms are blocked, what keywords are banned, what conversations and viral posts are scrubbed clean from the web overnight. This austere view is not inaccurate, but it leaves out what exactly the nearly 750 million internet users in China do get up to.

Take a look at bullet comments, and you’ll have a decent answer to that question. They represent the essence of Chinese internet culture: fast-paced and impish, playfully collaborative, thick with rapidly evolving inside jokes and memes. They are a social feature beloved by a generation known for being antisocial. And most importantly, they allow for a type of spontaneous, cumulative, and public conversation between strangers that is increasingly rare on the Chinese internet.