Tags: privacy

Designing for Personalities by Sarah Parmenter

Following on from Jeffrey and Margot, the third talk in the morning’s curated content at An Event Apart Seattle is from Sarah Parmenter. Her talk is Designing for Personalities. Here’s the description:

Just as our designs today must accommodate differences of gender, cultural background, and other factors, it’s time to create apps, websites, and internal processes that account for still another strand of human diversity: our very different personality types.

In this new presentation, Sarah shares real-life case studies demonstrating how businesses and organizations large and small are learning to adjust the thinking behind their websites and processes to account for the wishes, needs, and comfort levels of all kinds of people.

We know that the world is full of different conventions—currency, measuring systems, and more—and our web forms address these differences. Let’s do the same for the emotional and psychological assumptions behind our customer profiles. Let’s learn to design for a palette of different personalities.

I’m going to do my best to write down some of what she says…

Sarah works with Adobe, and at a gathering last year, she ended up chatting with some of her co-workers about ancestry, for some reason. She mentioned that she had French and Norwegian roots. The French part is evident in her surname: parmentier means potato farmer. So Sarah did a DNA test. It turned out that Sarah had no French or Norwegian roots—everything in her ancestry came from within an eighty-mile radius of her home. It was scary how strongly she had believed, for years, in something that just wasn’t true.

It’s like that on the web. There are things we do because lots of people do them, but that doesn’t mean they work. Many websites and digital processes are broken and it’s down to us to fix it.

With traditional personas, we make an awful lot of assumptions about people. Have a look at facebook.com/ads/preferences. See just how easy it is for computers to make a startling number of assumptions.

The other problem with personas is that they are amalgamations. But there’s no such thing as an average customer. The Microsoft design team add much more context so that they can design for real people in real situations.

Designing for personas only takes care of a fraction of the work we need to do. When we add in another layer of life getting in the way, and a layer of how someone is feeling, we’ve got a medley of UX issues that need solving.

The problem is that personality traits aren’t static. They evolve with context. Personas are contextual but static. What we should really be doing is creating the most desirable experience for the user, and we can only do that by empowering them, as Margot also said. We need to give our users control.

If there were such controls, Sarah would use them to reduce motion on websites. She suffers from motion sickness and some websites literally make her sick. There is a prefers-reduced-motion media query but so far only Safari and Firefox support it. It’s hard to believe that we haven’t been doing this already. This stuff seems so obvious in hindsight.
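
The preference can also be checked from JavaScript. Here’s a minimal sketch, where startCarousel() is a hypothetical stand-in for whatever motion a site adds:

<script>
// Only start non-essential animation if the user hasn't asked for reduced motion.
// startCarousel() is a hypothetical placeholder for a site's own animation code.
var prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');
if (!prefersReducedMotion.matches) {
  startCarousel();
}
</script>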

Sarah asks who in the room are introverts. People raise their hands (which seems like quite an extroverted thing to do).

Now Sarah brings up the Myers-Briggs test, a piece of pseudoscientific bollocks. Sarah is INFJ—introversion, intuition, feeling, judging. Weird flex, but okay.

Introverts will patiently seek out complex UX patterns if they align with their levels of comfort. These are people who would rather do anything than speak to someone on the phone. An introvert figured out that if you sat on the Virgin Atlantic homepage long enough, a live chat window would pop up after twenty minutes.

Apple is great for introverts. They don’t bury their chat options (unlike Amazon). Remember, introverts are a third of the population.

Users will begin to value those applications and services that bother them the least, respect their privacy, and allow them a certain level of control.

Let’s talk about designing compassionate products.

What we’re asking of people in time-critical or exceptionally personal situations is for them to have the foresight to turn on incognito mode. Everyone has an urban legend horror story about cookies following them around the web. Cookies can seem like a smart marketing solution until context lets them down.

Sarah’s best friend got pregnant. She started excitedly clicking around the web looking for pregnancy-related products. She sadly lost the baby. Sarah explained to her how to use a cookie eraser. Her friend thought she was joking. Sarah showed her how to clean her search history. But if you’ve liked and subscribed to a bunch of things while you’re excited, it’s not that easy—when the worst happens—to think back on everything you did.

There’s an app that’s not in the US. It’s a menstrual cycle and fertility tracking app. It captures a lot of data. At the point when Sarah’s friend lost her baby, this change was caught by the app. The message she got was lacking in empathy. It was more like market research than a compassionate message. At a time when they should’ve been thinking of the mindset of their user, they were focused on getting data. No one caught this when the app was being designed.

The entire user experience of our websites and apps is going to rely on how empathetic we are.

We don’t always save things to reminisce; we save to give us the option to remember. We can currently favourite a photograph or flag it as inappropriate. It would be nice to simply save something to a memory vault.

Bloom and Wild is a company in the UK. They send nice mailbox flowers. On March 5th last year, Sarah sent an email to the CEO of Bloom and Wild. She had just received a mailout about Mother’s Day after her mother passed away. Was there no way of opting out of receiving Mother’s Day emails without unsubscribing completely?

Well, yesterday they finally implemented it! Bloom and Wild have been overwhelmed by the positive response.

For those of us trying to make the web a better place, sometimes it can be as simple as reaching out to point out what companies could be doing better. And sometimes, just sometimes, they listen.

Also, read Design For Real Life by Eric Meyer and Sara Wachter-Boettcher.

As standard, we should be giving users end-to-end control over how they interact with us.

Sarah wants to talk about designing a personal UX journey. For one of her clients, Sarah dip-sampled hundreds of existing customers. There were gaps in the customer journey. What they think was happening was that the company was getting very aggressive after the initial interaction: they were phoning customers. Sarah and her team started researching this. That made them unpopular with other parts of the company. Sarah gave her team Groucho Marx glasses whenever they had to go and ask people uncomfortable questions.

Sarah’s team went on a remarketing effort. They sent an email to people who were in the gap between booking an appointment and making a purchase. They asked the users what their preferences were for being contacted. The company didn’t think they were doing anything wrong, but this research showed that 76% of people preferred to avoid phone calls.

They asked a few more questions. If you ask questions, there has to be value in it for the users. Sarah got the budget for some gift cards. They got feedback that many people don’t like taking calls, especially when they’re at work. The best: “I’m an introvert. I hate calls. Sorry.”

The customer feedback was very, very clear. Even though this would take a lot of money to fix, it was crucial to fix it. Being agile was crucial.

Then they looked at a different (shorter) gap in the customer journey. It was clear that an online booking service was desirable. They made a product quickly that booked more appointments in ten days than had previously been booked in a month by sales agents.

They also made a live chat system. It saw a very slow roll-out: at the beginning it was all new customers; after a while, people returned with more questions.

The mistake they made was having a tech-savvy team with multiple browser windows open. That’s not how the customer service people operate. They usually deal with people one on one. So they were happy to leave people waiting on live chat for twenty or twenty-five minutes, and of course that was far too long. So when you’re adding in a new system like this, think about the key performance indicators that you want to go along with it, e.g. live chat must get a response within five minutes.

There’s also a long tail of conversion. Sometimes the sales cycle is very lengthy. They decided to give users the ability to select which product they wanted and switch options on and off. It was all about giving the power back to the user. This was a phenomenal change for the company. They were able to completely change the customer journey and reduce those big gaps. They went from a cycle of fourteen weeks to seven days. They did that by handing the power back to the user.

Sarah’s question for the audience is: What is stopping your user completing your cycle? This can be very difficult. You might have to do horrible things to validate a concept. It’s okay. We’re all perfectionists, but sometimes you have to use quick’n’dirty code to achieve your goal. If the end goal is being able to say “hey, this thing worked!”, then we can go back and do it properly.

To recap:

  • Respect privacy and build in a personal level of UX adjustment into every product.
  • Outlier data can create superfans of your product.
  • Build the most empathetic experience that you can.

Webmentions at Indie Web Camp Berlin

I was in Berlin for most of last week, and every day was packed with activity.

By the time I got back to Brighton, my brain was full …just in time for FF Conf.

All of the events were very different, but equally enjoyable. It was also quite nice to just attend events without speaking at them.

Indie Web Camp Berlin was terrific. There was an excellent turnout, and once again, I found that the format was just right: a day of discussions (BarCamp style) followed by a day of doing (coding, designing, hacking). I got very inspired on the first day, so I was raring to go on the second.

What I like to do on the second day is try to complete two tasks; one that’s fairly straightforward, and one that’s a bit tougher. That way, when it comes time to demo at the end of the day, even if I haven’t managed to complete the tougher one, I’ll still be able to demo the simpler one.

In this case, the tougher one was also tricky to demo. It involved a lot of invisible behind-the-scenes plumbing. I was tweaking my webmention endpoint (stop sniggering—tweaking your endpoint is no laughing matter).

Up until now, I could handle straightforward webmentions, and I could handle updates (if I receive more than one webmention from the same link, I check it each time). But I needed to also handle deletions.

The spec is quite clear on this. A 404 isn’t enough to trigger a deletion—that might be a temporary state. But a status of 410 Gone indicates that a resource was once here but has since been deliberately removed. In that situation, any stored webmentions for that link should also be removed.
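
Roughly speaking, the logic looks something like this. It’s just a sketch with a hypothetical store object standing in for wherever the webmentions are saved, not my actual endpoint code:

// Re-check a previously received webmention.
// `store` is a hypothetical wrapper around wherever webmentions are saved.
async function recheckWebmention(source, target) {
  const response = await fetch(source);
  if (response.status === 410) {
    // 410 Gone: the source was deliberately removed,
    // so remove any stored webmentions from it too.
    await store.deleteWebmentions(source, target);
  } else if (response.status === 404) {
    // 404 Not Found might be a temporary state: leave the stored webmentions alone.
  }
}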

Anyway, I think I got it working, but it’s tricky to test and even trickier to demo. “Not to worry”, I thought, “I’ve always got my simpler task.”

For that, I chose to add a little map to my homepage showing the last location I published something from. I’ve been geotagging all my content for years (journal entries, notes, links, articles), but not really doing anything with that data. This is a first step to doing something interesting with many years of location data.

I’ve got it working now, but the demo gods really weren’t with me at Indie Web Camp. Both of my demos failed. The webmention demo failed quite embarrassingly.

As well as handling deletions, I also wanted to handle updates where a URL that once linked to a post of mine no longer does. Just to be clear, the URL still exists—it’s not 404 or 410—but it has been updated to remove the original link back to one of my posts. I know this sounds like another very theoretical situation, but I’ve actually got an example of it on my very first webmention test post from five years ago. Believe it or not, there’s an escort agency in Nottingham that’s using webmention as a vector for spam. They post something that does link to my test post, send a webmention, and then remove the link to my test post. I almost admire their dedication.

Still, I wanted to foil this particular situation so I thought I had updated my code to handle it. Alas, when it came time to demo this, I was using someone else’s computer, and in my attempt to right-click and copy the URL of the spam link …I accidentally triggered it. In front of a room full of people. It was mildly NSFW, but more worryingly, a potential Code Of Conduct violation. I’m very sorry about that.

Apart from the humiliating demo, I thoroughly enjoyed Indie Web Camp, and I’m going to keep adjusting my webmention endpoint. There was a terrific discussion around the ethical implications of storing webmentions, led by Sebastian, based on his epic post from earlier this year.

We established early in the discussion that we weren’t going to try to solve legal questions—like GDPR “compliance”, which varies depending on which lawyer you talk to—but rather try to figure out what the right thing to do is.

Earlier that day, during the introductions, I quite happily showed webmentions in action on my site. I pointed out that my last blog post had received a response from another site, and because that response was marked up as an h-entry, I displayed it in full on my site. I thought this was all hunky-dory, but now this discussion around privacy made me question some inferences I was making:

  1. By receiving a webmention in the first place, I was inferring a willingness for the link to be made public. That’s not necessarily true, as someone pointed out: a CMS could be automatically sending webmentions, which the author might be unaware of.
  2. If the linking post is marked up in h-entry, I was inferring a willingness for the content to be republished. Again, not necessarily true.

That second inference of mine—that publishing in a particular format somehow grants permissions—actually has an interesting precedent: Google AMP. Simply by including the Google AMP script on a web page, you are implicitly giving Google permission to store a complete copy of that page and serve it from their servers instead of sending people to your site. No terms and conditions. No checkbox ticked. No “I agree” button pressed.

Just sayin’.

Anyway, when it comes to my own processing of webmentions, I’m going to take some of the suggestions from the discussion on board. There are certain signals I could be looking for in the linking post:

  • Does it include a link to a licence?
  • Is there a restrictive robots.txt file?
  • Are there meta declarations that say noindex?

Each one of these could help me to infer whether or not I should be publishing a webmention. I quickly realised that what we’re talking about here is an algorithm.
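
As a very rough sketch, with hypothetical helpers and deliberately naive checks rather than anything running on my site, gathering those signals might look like this:

// Gather signals from the linking post. The checks are deliberately naive;
// a real version would parse the HTML and robots.txt properly.
async function gatherSignals(sourceUrl) {
  const response = await fetch(sourceUrl);
  const html = await response.text();

  // 1. Does it include a link to a licence?
  const hasLicence = /rel=["']?license["']?/i.test(html);

  // 2. Is there a restrictive robots.txt file?
  const robotsResponse = await fetch(new URL('/robots.txt', sourceUrl));
  const robotsTxt = robotsResponse.ok ? await robotsResponse.text() : '';
  const restrictiveRobots = /Disallow:\s*\//i.test(robotsTxt);

  // 3. Are there meta declarations that say noindex?
  const noindex = /<meta[^>]+noindex/i.test(html);

  // Each signal feeds into the decision about publishing the webmention.
  return { hasLicence, restrictiveRobots, noindex };
}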

Despite its current usage to mean “magic”, an algorithm is a recipe. It’s a series of steps that contribute to a decision point. The problem is that, in the case of silos like Facebook or Instagram, the algorithms are secret (which probably contributes to their aura of magical thinking). If I’m going to write an algorithm that handles other people’s information, I don’t want to make that mistake. Whatever steps I end up codifying in my webmention endpoint, I’ll be sure to document them publicly.

Name That Script! by Trent Walton

Trent is about to pop his AEA cherry and give a talk at An Event Apart in Boston. I’m going to attempt to liveblog this:

How many third-party scripts are loading on our web pages these days? How can we objectively measure the value of these (advertising, a/b testing, analytics, etc.) scripts—considering their impact on web performance, user experience, and business goals? We’ve learned to scrutinize content hierarchy, browser support, and page speed as part of the design and development process. Similarly, Trent will share recent experiences and explore ways to evaluate and discuss the inclusion of 3rd-party scripts.

Trent is going to speak about third-party scripts, which is funny, because a year ago, he never would’ve thought he’d be talking about this. But he realised he needed to pay more attention to:

any request made to an external URL.

Or how about this:

A resource included with a web page that the site owner doesn’t explicitly control.

When you include a third-party script, the third party can change the contents of that script.

Here are some uses:

  • advertising,
  • A/B testing,
  • analytics,
  • social media,
  • content delivery networks,
  • customer interaction,
  • comments,
  • tag managers,
  • fonts.

You get data from things like analytics and A/B testing. You get income from ads. You get content from CDNs.

But Trent has concerns. First and foremost, the user experience effects of poor performance. Also, there are the privacy implications.

Why does Trent—a designer—care about third party scripts? Well, over the years, the areas that Trent pays attention to have expanded. He’s progressed from image comps to frontend to performance to accessibility to design systems to the command line and now to third parties. But Trent has no impact on those third-party scripts. That’s very different to all those other areas.

Trent mostly builds prototypes. Those then get handed over for integration. Sometimes that means hooking it up to a CMS. Sometimes it means adding in analytics and ads. It gets really complex when you throw in third-party comments, payment systems, and A/B testing tools. Oftentimes, those third-party scripts can outweigh all the gains made beforehand. It happens with no discussion. And yet we spent half a meeting discussing a border radius value.

Delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts.

Trent has spent the last few months learning about third parties so he can be better equipped to discuss them.

UX, performance and privacy impact

We feel the UX impact every day we browse the web (if we turn off our content blockers). The Food Network site has an interstitial asking you to disable your ad blocker. They promise they won’t spawn any pop-up windows. Trent turned his ad blocker off—the page was now 15 megabytes in size. And to top it off …he got a pop-up.

Privacy can be harder to perceive. We brush aside cookie notifications. What if the wording was “accept trackers” instead of “accept cookies”?

Remarketing is that experience when you’re browsing for a spatula and then every website you visit serves you ads for spatulas. That might seem harmless, but allowing access to our browsing history has serious privacy implications.

Web builders are on the front lines. It’s up to us to advocate for data protection and privacy like we do for web standards. Don’t wait to be told.

Categories of third parties

Ghostery categorises third-party providers: advertising, comments, customer interaction, essential, site analytics, social media. You can dive into each layer and see the specific third-party services on the page you’re viewing.

Analyse and itemise third-party scripts

We have “view source” for learning web development. For third parties, you need some tool to export the data. HAR files (HTTP ARchive) are JSON files that you can create from most browsers’ network request panel in dev tools. But what do you do with a .har file? The site har.tech has plenty of resources for you. That’s where Trent found the Mac app, Charles. It can open .har files. Best of all, you can export to CSV so you can share spreadsheets of the data.
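
If you’d rather script it yourself, a HAR file is just JSON, so it’s simple enough to poke at directly. Here’s a quick Node sketch for listing the unique third-party hostnames; the file name and the first-party hostname are hypothetical placeholders:

// List the unique third-party hostnames found in a HAR file.
const fs = require('fs');

const har = JSON.parse(fs.readFileSync('example.har', 'utf8'));
const firstParty = 'www.example.com'; // the site being audited

const thirdParties = new Set();
for (const entry of har.log.entries) {
  const host = new URL(entry.request.url).hostname;
  if (host !== firstParty) {
    thirdParties.add(host);
  }
}

console.log([...thirdParties].sort().join('\n'));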

You can visualise third-party requests with Simon Hearne’s excellent Request Map. It’s quite impactful for delivering a visceral reaction in a meeting—so much more effective than just saying “hey, we have a lot of third parties.” Request Map can also export to CSV.

Know industry averages

Trent wanted to know what was “normal.” He decided to analyse HAR files for Alexa’s top 50 US websites. The result was a massive spreadsheet of third-party providers. There were 213 third-party domains (which is not even the same as the number of requests). There was an average of 22 unique third-party domains per site. The usual suspects were everywhere—Google, Amazon, Facebook, Adobe—but there were many others. You can find an alphabetical index on better.fyi/trackers. Often the lesser-known domains turn out to be owned by the bigger domains.

News sites and shopping sites have the most third-party scripts, unsurprisingly.

Understand benefits

Trent realised he needed to listen and understand why third-party scripts are being included. He found out what tag managers do. They’re funnels that allow you to cram even more third-party scripts onto your website. Trent worried that this was a Pandora’s box. The tag manager interface is easy to access and use. But he was told that it’s more like a way of organising your third-party scripts under one dashboard. But still, if you get too focused on the dashboard, you could lose focus of the impact on load times. So don’t blame the tool: it’s all about how it’s used.

Take action

Establish a centre of excellence. Put standards in place—in a cross-discipline way—to define how third-party scripts are evaluated. For example:

  1. Determine the value to the business.
  2. Avoid redundant scripts and services.
  3. Fit within the established performance budget.
  4. Comply with the organisational privacy policy.

Document those decisions, maybe even in your design system.

Also, include third-party scripts within your prototypes to get a more accurate feel for the performance implications.

On a live site, you can audit third-party scripts on a regular basis. Check to see if any are redundant or if they’re exceeding the performance budget. You can monitor performance with tools like Calibre and SpeedCurve to cover the time in between audits.

Make your case

Do competitive analysis. Look at other sites in your sector. It’s a compelling way to make a case for change. WPO Stats is very handy for anecdata.

You can gather comparative data with WebPageTest: you can run a full test, and you can run a test with certain third parties blocked. Use the results to kick off a discussion about the impact of those third parties.

Talk it out

Work to maintain an ongoing discussion with the entire team. As Tim Kadlec says:

Everything should have a value, because everything has a cost.

100 words 010

  • Nobody writes on their own website any more. People write on Twitter, Facebook, and Medium instead. Personal publishing is dead.
  • You can’t build a complex interactive app on the web without requiring JavaScript. It’s fine if it doesn’t work without JavaScript. JavaScript is ubiquitous.
  • Privacy is dead. The technology exists to monitor your every movement and track all your communications, so it is inevitable that our society will bend to this reality.

These statements aren’t true. But they are repeated so often, as if they were truisms, that we run the risk of believing them and thus, fulfilling their promise.

Analytical

When I was talking about Huffduffer, I mentioned that I don’t have any analytics set up for the site:

To be honest, I’m okay with that—one of the perks of having a personal project is that the only metric that really matters is your own satisfaction.

For a while, I did have Google Analytics set up on The Session. But I started to feel a bit uncomfortable about willingly opening up a wormhole between my site and the Google mothership. It bothered me that Adblock Plus would show that one ad had been blocked on the site. There are no ads on the site, but the presence of the Google Analytics code was providing valuable information to Google—and its advertiser customer base—so I can understand why it gets flagged up like any other unwanted tracking.

Theoretically, users have a way of opting out of this kind of tracking by switching on the Do Not Track header (if it isn’t switched on by default). Looking at the default JavaScript code that Google provides for setting up Google Analytics, I don’t see any mention of navigator.doNotTrack.

Now, it may well be that Google sniffs for that header (and abandons any tracking) when its server is pinged via the analytics code, but there’s no way to tell from this side of the Googleplex. I certainly don’t see any mention of it in the JavaScript that gets inserted into our web pages.

I was wondering whether it makes sense to explicitly check for the doNotTrack header before opening up that connection to google-analytics.com via a generated script element.

So if the current code looks like:

<script>
// generate a script element
// point it to google-analytics.com
</script>

Would it make sense to wrap it with some kind of test for navigator.doNotTrack:

<script>
// the preference is reported as '1' in most browsers, or 'yes' in older Firefox
if (navigator.doNotTrack != '1' && navigator.doNotTrack != 'yes') {
// generate a script element
// point it to google-analytics.com
}
</script>

For the love of mercy, don’t actually use that code—it’s completely untested and probably causes more harm than good. But you can see the idea that I’m trying to get at, right? Google Analytics most definitely counts as tracking so it seems like the ideal use-case for Do Not Track.

It raises a few questions:

  1. Is anyone doing this already? It might well be that the answer is “no”, not because of any reluctance to respect user preferences but because the doNotTrack spec is very much in flux.
  2. Would you consider doing this?
  3. If you were to do this, could you foresee getting pushback from within your own company?

Tracking

Ajax was a really big deal six, seven, eight years ago. My second book was all about Ajax. I spoke about Ajax at conferences and gave workshops all about using Ajax and progressive enhancement.

During those workshops, I would often point out that Ajax had the potential to be abused terribly. Until the advent of Ajax, it was very clear to a user when data was being submitted to a server: you’d have to click a link or submit a form. As soon as you introduce asynchronous communication, it’s possible for the server to get information from the client even without a full-page refresh.

Imagine, for example, that you’re typing a message into a textarea. You might begin by typing, “Why, you stuck up, half-witted, scruffy-looking nerf…” before calming down and thinking better of it. Before Ajax, there was no way that what you had typed could ever reach the server. But now, it’s entirely possible to send data via Ajax with every key press.
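
To make the thought experiment concrete, here’s a bare-bones sketch. The /log endpoint is hypothetical, and please don’t actually do this:

<script>
// Every keystroke silently sends the current contents of the field to the
// server, even though the user never submits the form. Don't do this.
var textarea = document.querySelector('textarea');
textarea.addEventListener('keyup', function () {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/log');
  xhr.send(textarea.value);
});
</script>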

It was just a thought experiment. I wasn’t actually that worried that anyone would ever do something quite so creepy.

Then I came across this article by Jennifer Golbeck in Slate all about Facebook tracking what’s entered—but then erased—within its status update form:

Unfortunately, the code that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.

Initially I thought there must have been some mistake. I erroneously called out Jen Golbeck when I found the PDF of a paper called The Post that Wasn’t: Exploring Self-Censorship on Facebook. The methodology behind the sample group used for that paper was much more old-fashioned than using Ajax:

First, participants took part in a weeklong diary study during which they used SMS messaging to report all instances of unshared content on Facebook (i.e., content intentionally self-censored). Participants also filled out nightly surveys to further describe unshared content and any shared content they decided to post on Facebook. Next, qualified participants took part in in-lab interviews.

But the Slate article was referencing a different paper that does indeed use Ajax to track instances of deleted text:

This research was conducted at Facebook by Facebook researchers. We collected self-censorship data from a random sample of approximately 5 million English-speaking Facebook users who lived in the U.S. or U.K. over the course of 17 days (July 6-22, 2012).

So what I initially thought was a case of alarmism—conflating something as simple as a client-side character count with actual server-side monitoring—turned out to be a pretty accurate reading of the situation. I originally intended to write a scoffing post about Slate’s linkbaiting alarmism (and call it “The shocking truth behind the latest Facebook revelation”), but it turns out that my scoffing was misplaced.

That said, the article has been updated to reflect that the Ajax requests are only sending information about deleted characters—not the actual content. Still, as we learned very clearly from the NSA revelations, there’s not much practical difference between logging data and logging metadata.

The nerds among us may start firing up our developer tools to keep track of unexpected Ajax requests to the server. But what about everyone else?

This isn’t the first time that the power of JavaScript has been abused. Every browser now ships with an option to block pop-up windows. That’s because the ability to spawn new windows was so horribly misused. Maybe we’re going to see similar preference options to avoid firing Ajax requests on keypress.

It would be depressingly reductionist to conclude that any technology that can be abused will be abused. But as long as there are web developers out there who are willing to spawn pop-up windows or force persistent cookies or use Ajax to track deleted content, the depressingly reductionist conclusion looks like self-fulfilling prophecy.