Tags: learning

Monday, December 16th, 2019

Artificial Intelligence: Threat or Menace? - Charlie’s Diary

I am not a believer in the AI singularity — the rapture of the nerds — that is, in the possibility of building a brain-in-a-box that will self-improve its own capabilities until it outstrips our ability to keep up. What CS professor and fellow SF author Vernor Vinge described as “the last invention humans will ever need to make”. But I do think we’re going to keep building more and more complicated systems that are opaque rather than transparent, and that launder our unspoken prejudices and encode them in our social environment. As our widely-deployed neural processors get more powerful, the decisions they take will become harder and harder to question or oppose. And that’s the real threat of AI — not killer robots, but “computer says no” without recourse to appeal.

Tuesday, December 10th, 2019

AI Weirdness • Play AI Dungeon 2. Become a dragon. Eat the moon.

After reading this account of a wonderfully surreal text adventure game, you’ll probably want to play AI Dungeon 2:

A PhD student named Nathan trained the neural net on classic dungeon crawling games, and playing it is strangely surreal, repetitive, and mesmerizing, like dreaming about playing one of the games it was trained on.

Tuesday, December 3rd, 2019

Music and Web Design | Brad Frost

I feel my trajectory as a musician maps to the trajectory of the web industry. The web is still young. We’re all still figuring stuff out and we’re all eager to get better. In our eagerness to get better, we’re reaching for more complexity. More complex abstractions, build processes, and tools. Because who wants to be bored playing in 4/4 when you can be playing in 7/16?

I hope we in the web field will arrive at the same realization that I did as a musician: complexity is not synonymous with quality.

Can I get an “Amen!”?

Monday, December 2nd, 2019

Design Questions Library | d.school public library

This site is not meant to be exhaustive, but rather a useful guide—our FAQ for design understanding. We hope it will inspire discussion, some questioning, a little soul searching, and ideally, a bit of intellectual support for your everyday endeavors.

The Design Questions Library goes nicely with the Library of Ambiguity.

Wednesday, November 27th, 2019

The Thought Process Behind a Flexbox Layout | CSS-Tricks

This is such a great way to explain a technology! Chris talks through his thought process when using flexbox for layout.

Thursday, November 21st, 2019

Frank Chimero Redesign Blog: The Popeye Moment

Frank is redesigning in the open. Watch this space:

By writing about it, it may help both of us. I can further develop my methods by navigating the friction of explaining them. I’ve been looking for a way to clarify and share my thoughts about typography and layout on screens, and this seems like a good chance to do so. And you? Well, perhaps the site can offer a clearly explained way of working that’s worth considering. That seems to be a rare thing on the web these days.

Wednesday, November 20th, 2019

Build your own React

This is a fascinating way to present a code tutorial! It reminds me of Tim’s Tutorial Markdown that I linked to a while back (which in turn reminds me of Bret Victor’s work).

Thursday, October 24th, 2019

Design muscles

Look. Observe. See.

Friday, September 27th, 2019

To decarbonize we must decomputerize: why we need a Luddite revolution | Technology | The Guardian

Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.

Saturday, September 7th, 2019

How Video Games Inspire Great UX – Scott Jenson

Six UX lessons from game design:

  1. Story vs Narrative (Think in terms of story arcs)
  2. Games are fractal (Break up the journey from big to small to tiny)
  3. Learning loop (Figure out your core mechanic)
  4. Affordances (Prompt for known loops)
  5. Hintiness (Move to new loops)
  6. Pacing (Be sure to start here)

Friday, September 6th, 2019

Getting started

I got an email recently from a young person looking to get into web development. They wanted to know what languages they should start with, whether they should use a Mac or a Windows PC, and what would be some good places to learn from.

I wrote back, saying this about languages:

For web development, start with HTML, then CSS, then JavaScript (and don’t move on to JavaScript too quickly—really get to grips with HTML and CSS first).

And this is what I said about hardware and software:

It doesn’t matter whether you use a Mac or a Windows PC, as long as you’ve got an internet connection, some web browsers (Chrome, Firefox, for example) and a text editor. There are some very good free text editors available for Mac and PC:

For resources, I had a trawl through links I’ve tagged with “learning” and “html” and sent along some links to free online tutorials:

After sending that email, I figured that this list might be useful to anyone else looking to start out in web development. If you know of anyone in that situation, I hope this list might help.

Tuesday, August 27th, 2019

Voice User Interface Design by Cheryl Platz

Cheryl Platz is speaking at An Event Apart Chicago. Her inaugural An Event Apart presentation is all about voice interfaces, and I’m going to attempt to liveblog it…

Why make a voice interface?

Successful voice interfaces aren’t necessarily solving new problems. They’re used to solve problems that other devices have already solved. Think about kitchen timers. There are lots of ways to set a timer. Your oven might have one. Your phone has one. Why use a $200 device to solve this mundane problem? Same goes for listening to music, news, and weather.

People are using voice interfaces for solving ordinary problems. Why? Context matters. If you’re carrying a toddler, then setting a kitchen timer can be tricky, so a voice-activated timer is quite appealing. But why is voice happening now?

Humans have been developing the art of conversation for thousands of years. It’s one of the first skills we learn. It’s deeply instinctual. Most humans use speech instinctively every day. You can’t necessarily say that about using a keyboard or a mouse.

Voice-based user interfaces are not new. Not just the idea—which we’ve seen in Star Trek—but the actual implementation. Bell Labs had Audrey back in 1952. It recognised ten words—the digits zero through nine. Why did it take so long to get to Alexa?

In the late 70s, DARPA issued a challenge to create a voice-activated system. Carnegie Mellon came up with Harpy (with a thousand-word grammar). But none of the solutions could respond in real time. In conversation, we expect a break of no more than 200 or 300 milliseconds.

In the 1980s, computing power couldn’t keep up with voice technology, so progress kind of stopped. Time passed. Things finally started to catch up in the 90s with things like Dragon NaturallySpeaking. But that was still about vocabulary, not grammar. By the 2000s, small grammars were starting to show up—starting an Xbox or pausing Netflix. In 2008, Google Voice Search arrived on the iPhone and natural language interaction began to arrive.

What makes natural language interactions so special? It requires minimal training because it uses the conversational muscles we’ve been working for a lifetime. It unlocks the ability to have more forgiving, less robotic conversations with devices. There might be ten different ways to set a timer.

Natural language interactions can also free us from “screen magnetism”—that tendency to stay on a device even when our original task is complete. Voice also enables fast and forgiving searches of huge catalogues without time spent typing or browsing. You can pick a needle straight out of a haystack.

Natural language interactions are excellent for older customers. These interfaces don’t intimidate people without dexterity, vision, or digital experience. Voice input often leads to more inclusive experiences. Many customers with visual or physical disabilities can’t use traditional graphical interfaces. Voice experiences throw open the door of opportunity for some people. However, voice experiences can exclude people with speech difficulties.

Making the case for voice interfaces

There’s a misconception that you need to work at Amazon, Google, or Apple to work on a voice interface, or at least that you need to have a big product team. But Cheryl was able to make her first Alexa “skill” in a week. If you’re a web developer, you’re good to go. Your voice “interaction model” is just JSON.

How do you get your product team on board? Find the customers (and situations) you might have excluded with traditional input. Tell the stories of people whose hands are full, or who are vision impaired. You can also point to the adoption rate numbers for smart speakers.

You’ll need to show your scenario in context. Otherwise people will ask, “why can’t we just build an app for this?” Conduct research to demonstrate the appeal of a voice interface. Storyboarding is very useful for visualising the context of use and highlighting existing pain points.

Getting started with voice interfaces

You’ve got to understand how the technology works in order to adapt to how it fails. Here are a few basic concepts.

Utterance. A word, phrase, or sentence spoken by a customer. This is the true form of what the customer provides.

Intent. This is the meaning behind a customer’s request. This is an important distinction because one intent could have thousands of different utterances.

Prompt. The text of a system response that will be provided to a customer. The audio version of a prompt, if needed, is generated separately using text to speech.

Grammar. A finite set of expected utterances. It’s a list. Usually, each entry in a grammar is paired with an intent. Many interfaces start out as being simple grammars before moving on to a machine-learning model later once the concept has been proven.
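
To make the distinction between utterances and intents concrete, here’s a minimal sketch of a simple grammar in Python (the utterances and intent names are invented for illustration):

    # A grammar is just a finite list of expected utterances,
    # each one paired with an intent.
    GRAMMAR = {
        "set a timer for fifteen minutes": "SetTimerIntent",
        "start a fifteen minute timer": "SetTimerIntent",
        "cancel the timer": "CancelTimerIntent",
        "stop the timer": "CancelTimerIntent",
    }

    def match_utterance(utterance):
        # Exact matching only: anything outside the grammar is unrecognised,
        # which is exactly the limitation a machine-learning model removes.
        return GRAMMAR.get(utterance.strip().lower())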

Here’s the general idea with “artificial intelligence”…

There’s a human with a core intent to do something in the real world, like knowing when the cookies in the oven are done. This is translated into an intent like, “set a 15 minute timer.” That’s the utterance that’s translated into a string. But it hasn’t yet been parsed as language. That string is passed into a natural language understanding system. What comes is a data structure that represents the customers goal e.g. intent=timer; duration=15 minutes. That’s sent to the business logic where a timer is actually step. For a good voice interface, you also want to send back a response e.g. “setting timer for 15 minutes starting now.”

That seems simple enough, right? What’s so hard about designing for voice?

Natural language interfaces are a form of artificial intelligence, so they’re not deterministic. There’s a lot of ruling out of false positives. Unlike graphical interfaces, voice interfaces are driven by probability.

How do you turn a sound wave into an understandable instruction? It’s a lot like teaching a child. You feed a lot of data into a statistical model. That’s how machine learning works. It’s a probability game. That’s where it gets interesting for design—given a bunch of possible options, we need to use context to zero in on the most correct choice. This is where confidence ratings come in: the system will return the probability that a response is correct. Effectively, the system is telling you how sure or not it is about possible results. If the customer makes a request in an unusual or unexpected way, our system is likely to guess incorrectly. That’s because the system is being given something new.
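
As a sketch, a confidence-driven decision might look like this (the scores and threshold are invented; different platforms expose confidence in different ways):

    # Candidate intents with confidence scores, as an NLU system might return them.
    candidates = [
        {"intent": "SetTimerIntent", "confidence": 0.91},
        {"intent": "PlayMusicIntent", "confidence": 0.06},
    ]

    THRESHOLD = 0.7  # a product decision, not a universal constant

    best = max(candidates, key=lambda c: c["confidence"])
    if best["confidence"] >= THRESHOLD:
        print("Acting on", best["intent"])
    else:
        # Below the threshold, a guess is likely wrong: ask the customer
        # to rephrase rather than act on a false positive.
        print("Sorry, could you say that again?")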

Designing a conversation is relatively straightforward. But 80% of your voice design time will be spent designing for what happens when things go wrong. In voice recognition, edge cases are front and centre.

Here’s another challenge. Interaction with most voice interfaces is part conversation, part performance. Most interactions are not private.

Humans don’t distinguish digital speech fom human speech. That means these devices are intrinsically social. Our brains our wired to try to extract social information, even form digital speech. See, for example, why it’s such a big question as to what gender a voice interface has.

Delivering a voice interface

Storyboards help depict the context of use. Sample dialogues are your new wireframes. These are little scripts that cover not only the happy path, but also your edge cases. Then you reverse-engineer from there.

Flow diagrams communicate customer states, but don’t use the actual text in them.

Prompt lists are your final deliverable.

Functional prototypes are really important for voice interfaces. You’ll learn the real way that customers will ask for things.

If you build a working prototype, you’ll be building two things: a natural language interaction model (often a JSON file) and custom business logic (in a programming language).

Eventually voice design will become a core competency, much like mobile, which was once separate.

Ask yourself which tasks your customers complete on your site that feel clunky. Remember that voice design is almost never about new scenarios. Start your journey into voice interfaces by tackling old problems in new, more inclusive ways.

May the voice be with you!

Thursday, August 1st, 2019

The web without the web - DEV Community 👩‍💻👨‍💻

I love React. I love how server side rendering React apps is trivial because it all compiles down to vanilla HTML rather than web components, effectively turning it into a kickass template engine that can come alive. I love the way you can very effectively still do progressive enhancement by using completely semantic markup and then letting hydration do more to it.

I also hate React. I hate React because these behaviours are not defaults. React is not gonna warn you if you make a form using divs and unlabelled textboxes and send the whole thing to a server. I hate React because CSS-in-JS approaches by default encourage you to write completely self contained one off components rather than trying to build a website UI up as a whole. I hate the way server side rendering and progressive enhancement are not defaults, but rather things you have to go out of your way to do.

An absolutely brilliant post by Laura on how the priorities baked into JavaScript tools like React are really out of whack. They’ll make sure your behind-the-scenes code is super clean, but not give a rat’s ass for the quality of the output that users have to interact with.

And if you want to adjust the front-end code, you’ve got to set up all this tooling just to change a div to a button. That’s quite a barrier to entry.

In elevating frontend to the land of Serious Code we have not just made things incredibly over-engineered but we have also set fire to all the ladders that we used to get up here in the first place.

AMEN!

I love React because it lets me do my best work faster and more easily. I hate React because the culture around it more than the library itself actively prevents other people from doing their best work.

Tuesday, July 23rd, 2019

Pseudo Code | CSS-Tricks

I find myself doing pseudo code before I write real code, sure, but I also leave it in place sometimes in code comments.

Same!

Sunday, July 21st, 2019

Frontend Design, React, and a Bridge over the Great Divide

Brad describes how he has found his place in the world of React, creating UI components without dabbling in business logic:

Instead of merely creating components’ reference HTML, CSS, and presentational JS, frontend designers can create directly consumable HTML, CSS, and presentational JS that back-of-the-frontend developers can then breathe life into.

What’s clear is that the term “React” has become as broad and undefined as the term “front-end”. Just saying that someone does React doesn’t actually say much about the nature of the work.

When you say “we’re hiring a React developer”, what exactly do you mean by that? “React developer” is almost as vague as “frontend developer”, so clarify. Are you looking for a person to specialize in markup and styles? A person to author middleware and business logic? A person to manage data and databases? A person to own build processes?

Friday, July 5th, 2019

Smashing TV — Webinars and Live Sessions — Smashing Magazine

Don’t miss this—a masterclass in SVG animation with Cassie (I refuse to use the W word). Mark your calendar: August 20th.

Monday, July 1st, 2019

Why Did I Have Difficulty Learning React? - Snook.ca

When people talk about learning React, I think that React, in and of itself, is relatively easy to understand. At least, I felt it was. I have components. I have JSX. I hit some hiccups with required keys or making sure I was wrapping child elements properly. But overall, I felt like I grasped it well enough.

Throw in everything else at the same time, though, and things get confusing because it’s hard at first to recognize what belongs to what. “Oh, this is Redux. That is React. That other thing is lodash. Got it.”

This resonates a lot with Dave’s post:

React is an ecosystem. I feel like it’s a disservice to anyone trying to learn to diminish all that React entails. React shows up on the scene with Babel, Webpack, and JSX (which each have their own learning curve) then quickly branches out into technologies like Redux, React-Router, Immutable.js, Axios, Jest, Next.js, Create-React-App, GraphQL, and whatever weird plugin you need for your app.

Thursday, June 27th, 2019

What I Learned Co-Founding Dribbble – SimpleBits

Twenty hard-won lessons from Dan from ten years of Dribbble.

We sent 50 shirts along with a card to friends and colleagues announcing Dribbble’s beta back in 2008. This first batch of members played a pivotal role in the foundation of the community and how it would develop. The shirt helped guilt them into actually checking out the site.

I think I still have my T-shirt somewhere!

Wednesday, June 26th, 2019

Finch Front-End · Edinburgh · 23rd-25th September 2019

This looks like an excellent conference line-up! Alas, I won’t be able to make it (I’m out of the country when it’s on) but you should definitely go if you can.