Friday, July 24th, 2020

MSEdgeExplainers/explainer.md at main · MicrosoftEdge/MSEdgeExplainers

This is great! Ideas for allowing more styling of form controls. I agree with the goals 100% and I like the look of the proposed solutions too.

The team behind this are looking for feedback, so be sure to share your thoughts (I’ll probably formulate mine into a blog post).

Thursday, July 23rd, 2020

4 Design Patterns That Violate “Back” Button Expectations – 59% of Sites Get It Wrong - Articles - Baymard Institute

Some interesting research in here around user expectations with the back button:

Generally, we’ve observed that if a new view is sufficiently different visually, or if a new view conceptually feels like a new page, it will be perceived as one — regardless of whether it technically is a new page or not. This has consequences for how a site should handle common product-finding and -exploration elements like overlays, filtering, and sorting. For example, if users click a link and 70% of the view changes to something new, most will perceive this to be a new page, even if it’s technically still the same page, just with a new view loaded in.

Saturday, July 18th, 2020

An Introduction To Stimulus.js — Smashing Magazine

An intro to Stimulus, the lightweight JavaScript library from Basecamp that takes a progressive enhancement approach, as seen with HEY.

One aspect I really like about the approach Stimulus encourages is that I can focus on sending HTML down the wire to my users, which is then jazzed up a little with JavaScript.

I’ve always been a fan of using the first few milliseconds of a user’s attention to get what I have to share with them in front of them. Then worrying about setting up the interaction layer while the user starts processing what they’re seeing.

Furthermore, if the JavaScript were to fail for whatever reason, the user can still see the content and interact with it without JavaScript.

Tuesday, June 9th, 2020

Intent

There are intentions and there are outcomes. Sometimes bad outcomes are the result of good intentions. Less often, good outcomes can be the result of bad intentions. But generally we associate the two: we expect good outcomes to come from good intentions and we expect bad outcomes to come from bad intentions.

Perhaps it’s because of this conflation that we place too much emphasis on intentions. If, for example, someone is called out for causing a bad outcome, their first response is often to defend their intentions. That’s understandable. When someone says “you have created a bad outcome”, I understand why the person on the receiving end would receive that feedback as “you intended to create this bad outcome.” Cue a non-apology that clarifies the (good) intention without acknowledging the reality of the outcome (“It was never my intention to…”).

I get it. Intentions do matter …just not as much as we give them credit for. I mean, in general, I’d prefer bad outcomes to be the inadvertent result of good intentions. But in some ways, it really doesn’t matter: a bad outcome is a bad outcome.

Anyway, all of this is just to preface something I’m going to say about myself:

I am almost certainly racist.

I don’t intend to be racist, but like I said, intentions aren’t really what matter. Outcomes are.

Note, for example, the cliché of the gormless close-minded goon who begins a sentence with “I’m not racist, but…” before going on to say something clearly racist. It’s as though the racism could be defanged by disavowing bad intent.

The same defence mechanism is used to defend racist traditions. “Oh, it’s not racist—that’s just something we’ve always done.” Again, the defence is for the intention, not the outcome. And again, outcomes matter far, far more than intentions.

I really don’t intend to be racist. But how could I not be? I grew up in a small town in Ireland where literally everyone else looked like me. By the same token, I’m also almost certainly sexist. Growing up as a cisgender male in a patriarchal society guarantees that my mind has been shaped in ways I now wish it weren’t.

Acknowledging my racism—and sexism—doesn’t mean I’m okay with it. On the contrary. It’s a source of shame. But acknowledging my racism is a necessary step to changing it.

In any case, it doesn’t really matter how I feel about any of this. This isn’t meant to be a confessional. What matters are outcomes. Outcomes aren’t really the direct result of intentions—outcomes are the direct result of actions.

Most of my actions lately have been very passive. Listening. Watching. Because my actions are passive, they are indistinguishable from silence. That’s not good. Silence can be interpreted as acquiescence, acceptance. That’s not what I intend …but my intentions don’t matter.

So, even though this isn’t about me or my voice or my intentions, and even though this is something that is so self-evident that it shouldn’t need to be said, I want to say:

Black lives matter.

Tuesday, May 26th, 2020

This Website Will Self-destruct

You can send me messages using the form below. If I go 24 hours without receiving a message, I’ll permanently self-destruct, and everything will be wiped from my database.

Friday, May 8th, 2020

Designing for Progressive Disclosure by Steven Hoober

Progressive disclosure interface patterns categorised and evaluated:

  • popups,
  • drawers,
  • mouseover popups (just say no!),
  • accordions,
  • tabs,
  • new pages,
  • scrolling,
  • scrolling sideways.

I really like the hypertext history invoked in this article.

The piece finishes with a great note on the McNamara fallacy:

Everyone thinks metrics let us measure results. But, actually, they don’t. They measure only what they are measuring. Engagement, for example, is not something that can be measured, so we use an analogue for it. Time on page. Or clicks.

We often end up measuring what is quick, cheap, and easy to measure. Therefore, few organizations regularly conduct usability testing or customer-satisfaction surveys, but lots use analytics.

Even today, organizations often use clicks as a measure of engagement. So, all too often, they design user interfaces to generate clicks, so the system can measure them.

Home | SofaConf 2020

You don’t want to miss this! A five-day online conference with a different theme each day:

  1. Monday: Product Strategy
  2. Tuesday: Research
  3. Wednesday: Service Design
  4. Thursday: Content Strategy
  5. Friday: Interaction Design

Speakers include Amy Hupe, Kelly Goto, Kristina Halvorson, Lou Downe, Leisa Reichelt and many more still to be announced, all for ludicrously cheap ticket prices.

I know it sounds like I’m blowing my own trumpet because this is a Clearleft event, but I had nothing to do with it. The trumpets of my talented co-workers should be blasting in harmonious chorus.

(It’s a truly lovely website too!)

Sunday, March 8th, 2020

The 3 Laws of Serverless - Burke Holland

“Serverless” is a buzzword. We can’t seem to agree on what it actually means, so it ends up meaning nothing at all. Much like “cloud” or “dynamic” or “synergy”. You just wait for the right time in a meeting to drop it, walk to the board and draw a Venn diagram, and then just sit back and wait for your well-deserved promotion.

That’s very true, and I do not like the term “serverless” for the rather obvious reason that it’s all about servers (someone else’s servers, that is). But these three principles are handy for figuring out if you’re building in a serverlessy kind of way:

  1. You have no knowledge of the underlying system where your code runs.
  2. Scaling is an intrinsic attribute of the technology; so much so that it just happens automatically.
  3. You only pay for what you use.

Abstraction; scale; consumption.

Friday, December 13th, 2019

Why `details` is Not an Accordion - daverupert.com

At the risk of being a broken record: HTML really needs <accordion>, <tabs>, <dialog>, <dropdown>, and <tooltip> elements. Not more “low-level primitives” but good ol’ fashioned, difficult-to-get-consensus-on elements.

Hear, hear!

I wish browsers would prioritize accessibility improvements over things like main thread scheduling optimization to unblock tracking pixels and the Sisyphean task of competing with native.

If we really want to win, let’s make it easy for everyone to access the Web.

Saturday, September 7th, 2019

How Video Games Inspire Great UX – Scott Jenson

Six UX lessons from game design:

  1. Story vs Narrative (Think in terms of story arcs)
  2. Games are fractal (Break up the journey from big to small to tiny)
  3. Learning loop (figure out your core mechanic)
  4. Affordances (Prompt for known loops)
  5. Hintiness (Move to new loops)
  6. Pacing (Be sure to start here)

Tuesday, September 3rd, 2019

Bottom Navigation Pattern On Mobile Web Pages: A Better Alternative? — Smashing Magazine

Making the case for moving your navigation to the bottom of the screen on mobile:

Phones are getting bigger, and some parts of the screen are easier to interact with than others. Having the hamburger menu at the top provides too big of an interaction cost, and we have a large number of amazing mobile app designs that utilize the bottom part of the screen. Maybe it’s time for the web design world to start using these ideas on websites as well?

Sunday, September 1st, 2019

Less… Is More? Apple’s Inconsistent Ellipsis Icons Inspire User Confusion - TidBITS

The ellipsis is the new hamburger.

It’s disappointing that Apple, supposedly a leader in interface design, has resorted to such uninspiring and, I dare say, lazy design in its icons. I don’t claim to be a usability expert, but it seems to me that icons should represent a clear intention, followed by a consistent action.

Tuesday, August 27th, 2019

Voice User Interface Design by Cheryl Platz

Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…

Why make a voice interface?

Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.

People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?

Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.

Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?

In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.

In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.

What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.

Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.

Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experience can exclude people with speech difficulties.

Making the case for voice interfaces

There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.
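
As a flavour of that, here’s a minimal sketch of an Alexa-style interaction model, written as a TypeScript object for readability (it would normally live in a JSON file). The invocation name, intent, slot, and sample utterances are all made-up examples, not from any real skill:

    // A hypothetical Alexa-style interaction model. In practice this
    // would be saved as a JSON file; shown as a TypeScript object here.
    const interactionModel = {
      interactionModel: {
        languageModel: {
          invocationName: "kitchen helper", // made-up skill name
          intents: [
            {
              name: "SetTimerIntent", // made-up intent name
              slots: [{ name: "duration", type: "AMAZON.DURATION" }],
              samples: [
                // a few of the many ways a customer might phrase this
                "set a timer for {duration}",
                "start a {duration} timer"
              ]
            }
          ]
        }
      }
    };

The samples list does the heavy lifting: the more phrasings you anticipate, the more forgiving the interface feels.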

How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.

You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.

Getting started with voice interfaces

You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.

Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.

Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.

Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.

Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as simple grammars before moving on to a machine-learning model later, once the concept has been proven.
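
In code, the simplest grammar is little more than a lookup table. Here’s a hypothetical TypeScript sketch; the utterances and intent names are invented for illustration:

    // A toy grammar: a finite list of expected utterances, each entry
    // paired with an intent. All names here are invented.
    const grammar: Record<string, string> = {
      "set a timer for fifteen minutes": "SetTimer",
      "start a fifteen minute timer": "SetTimer",
      "cancel the timer": "CancelTimer",
      "stop the timer": "CancelTimer"
    };

    // Exact matches only: anything outside the list is not understood,
    // which is why grammars often give way to machine-learning models.
    const matched = grammar["stop the timer"]; // "CancelTimer"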

Here’s the general idea with “artificial intelligence”…

There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This is expressed as an utterance like “set a 15 minute timer.” That utterance is translated into a string, but it hasn’t yet been parsed as language. The string is passed into a natural language understanding system. What comes out is a data structure that represents the customer’s goal, e.g. intent=timer; duration=15 minutes. That’s sent to the business logic, where a timer is actually set. For a good voice interface, you also want to send back a response, e.g. “setting timer for 15 minutes, starting now.”
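
Sketched in code, that flow might look something like this. It’s a hypothetical sketch: the function names are stand-ins, and a real system would call an actual natural language understanding service rather than returning a hard-coded result:

    // The utterance-to-response pipeline described above, as a sketch.
    interface ParsedIntent {
      intent: string;
      slots: Record<string, string>;
    }

    // Stand-in for a real natural language understanding service.
    function understand(utterance: string): ParsedIntent {
      // Pretend the model parsed the string into a structured goal.
      return { intent: "timer", slots: { duration: "15 minutes" } };
    }

    // Business logic: act on the intent and send back a spoken response.
    function handle(parsed: ParsedIntent): string {
      if (parsed.intent === "timer") {
        // ...actually set the timer here...
        return `Setting timer for ${parsed.slots.duration}, starting now.`;
      }
      return "Sorry, I didn't catch that.";
    }

    handle(understand("set a 15 minute timer"));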

That seems simple enough, right? What’s so hard about designing for voice?

Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out false positives. Unlike graphical interfaces, voice interfaces are driven by probability.

How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
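
In practice that means your code sees a list of scored guesses rather than a single answer. Here’s a hypothetical sketch of acting on those confidence ratings: accept the top candidate only if the system is sure enough, otherwise reprompt (the threshold and the scores are invented):

    // Hypothetical scored results from a natural language understanding
    // system: each candidate intent carries a confidence rating.
    interface ScoredIntent {
      intent: string;
      confidence: number; // 0 to 1: how sure the system is
    }

    function resolve(candidates: ScoredIntent[], threshold = 0.7): string | null {
      const best = [...candidates].sort((a, b) => b.confidence - a.confidence)[0];
      // Below the threshold, don't guess: ask the customer to rephrase.
      return best && best.confidence >= threshold ? best.intent : null;
    }

    // An unusual phrasing produces low-confidence guesses across the board.
    resolve([
      { intent: "SetTimer", confidence: 0.42 },
      { intent: "PlayMusic", confidence: 0.31 }
    ]); // null, so the interface should reprompt

Tuning that threshold is a design decision: set it too low and the interface guesses wildly; too high and it constantly asks customers to repeat themselves.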

Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.

Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.

Humans don’t distinguish digital speech from human speech. That means these devices are intrinsically social. Our brains are wired to try to extract social information, even from digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.

Delivering a voice interface

Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that cover not only the happy path but also your edge cases. Then you reverse engineer from there.

Flow diagrams communicate customer states, but don’t use the actual text in them.

Prompt lists are your final deliverable.

Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.

If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).

Eventually voice design will become a core competency, much like mobile, which was once separate.

Ask yourself what tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.

May the voice be with you!

Friday, August 23rd, 2019

Stop Misusing Toggle Switches

Use a toggle switch if you are:

  1. Applying a system state, not a contextual one
  2. Presenting binary options, not opposing ones
  3. Activating a state, not performing an action

4 Rules for Intuitive UX – Learn UI Design

  1. Obey the Law of Locality
  2. ABD: Anything But Dropdowns
  3. Pass the Squint Test
  4. Teach by example

Saturday, August 3rd, 2019

Form design: from zero to hero all in one blog post by Adam Silver

This is about designing forms that everyone can use and complete as quickly as possible. Because nobody actually wants to use your form. They just want the outcome of having used it.

Thursday, June 6th, 2019

Patterns for Promoting PWA Installation (mobile)  |  Web Fundamentals  |  Google Developers

Some ideas for interface elements that prompt progressive web app users to add the website to their home screen.

Friday, May 31st, 2019

The World-Wide Work

I’ve been to a lot of events and I’ve seen a lot of talks. I find that, even after all this time, I always get something out of every presentation I see. Kudos to anyone who’s got the guts to get up on stage and share their thoughts.

But there are some talks that are genuinely special. When they come along, it’s a real privilege to be in the room. Wilson’s talk, When We Build, was one of those moments. There are some others that weren’t recorded, but will always stay with me.

Earlier this year, I had the great honour of opening the New Adventures conference in Nottingham. I definitely felt a lot of pressure, and I did my utmost to set the scene for the day. The final talk of the day was delivered by my good friend Ethan. He took it to another level.

Like I said at the time:

Look, I could gush over how good Ethan’s talk was, or try to summarise it, but there’s really no point. I’ll just say that I felt the same sense of being present at something genuinely important that I felt when I was in the room for his original responsive web design talk at An Event Apart back in 2010. When the video is released, you really must watch it.

Well, the video has been released and you really must watch it. Don’t multitask. Don’t fast forward. Set aside some time and space, and then take it all in.

The subject matter, the narrative structure, the delivery, and the message come together in a unique way.

If, having watched the presentation, you want to dive deeper into any of Ethan’s references, check out the reading list that accompanies the talk.

I mentioned that I felt under pressure to deliver a good opener for New Adventures. I know that Ethan was really feeling the pressure too. He needn’t have worried. He delivered one of the best conference talks I’ve ever seen.

Thank you, Ethan.

Thursday, May 16th, 2019

Welcome to Accessible App | Accessible App

A very welcome project from Marcus Herrmann, documenting how to make common interaction patterns accessible in popular frameworks: Vue, React, and Angular.

Wednesday, April 24th, 2019

Interview with Kyle Simpson (O’Reilly Fluent Conference 2016) - YouTube

I missed this when it was first posted three years ago, but now I think I’ll be revisiting this 12 minute interview every few months.

Everything that Kyle says here is spot on, nuanced, and thoughtful. He talks about abstraction, maintainability, learning, and complexity.

I want a transcript of the whole thing.
