Building on Vimeo

Here’s the video of the opening talk I gave at New Adventures earlier this year. I think it’s pretty darn good!

Have we reached Peak Data?

Matt’s publishing a newsletter on the past, present, and future of tracking:

The last 100 years have been a journey to see how to measure ghosts - how to measure the invisible audiences at the end of technological distribution networks. With every decade, these ghosts have come more and more into focus, ending with the last ten years of social media and digital advertising that has created unimaginable amounts of data about everything we see, read, click and like.

He sees the pendulum swinging the other way now …for those who can afford it:

If there’s one constant in the economics of audience data over the last 100 years, it’s that we only get free services if we pay for them with our attention. This has been true for commercial radio and television, free newspapers, mobile games and digital content. If we want privacy, we have to pay for it, and not everyone can afford this. Will the right to become a ghost only be for the people with money to buy premium products?

Bruce Lawson’s personal site: Structured data and Google

Bruce wonders why Google seems to prefer separate chunks of JSON-LD in web pages instead of interwoven microdata attributes:

I strongly feel that metadata that is separated from the user-visible data associated with it is highly susceptible to metadata partial copy-paste necrosis. User-visible text is also developer-visible text. When devs copy/paste that, it’s very easy to forget to copy any associated metadata that’s not interleaved, leading to errors.

Test the impact of ads and third party scripts

This is a very useful new feature in Calibre, the performance monitoring tool. Now you can get data about just how much third-party scripts are affecting your site’s performance:

The best way of circumventing fear and anxiety around third party script performance is to capture metrics that clearly articulate their performance impact.
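To make that concrete, here’s a rough sketch (not Calibre’s own tooling) of pulling those numbers straight out of the browser with the standard Resource Timing API; the list of third-party hosts is an invented example.

```typescript
// A rough sketch of capturing third-party script impact in the browser.
// The third-party hosts below are made-up examples, not a real blocklist.
const thirdPartyHosts = ['googletagmanager.com', 'doubleclick.net'];

const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

const thirdPartyScripts = entries.filter((entry) =>
  entry.initiatorType === 'script' &&
  thirdPartyHosts.some((host) => new URL(entry.name).hostname.endsWith(host))
);

// Sum bytes over the wire and time spent fetching those scripts.
const totalBytes = thirdPartyScripts.reduce((sum, e) => sum + e.transferSize, 0);
const totalDuration = thirdPartyScripts.reduce((sum, e) => sum + e.duration, 0);

console.log(`Third-party scripts: ${thirdPartyScripts.length}`);
console.log(`Transfer size: ${(totalBytes / 1024).toFixed(1)} KB`);
console.log(`Fetch time: ${totalDuration.toFixed(0)} ms`);
```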

Methods - 18F Methods

A very handy collection of design exercises as used by 18F. There’s a lot of crossover here with the Clearleft toolbox.

A collection of tools to bring human-centered design into your project.

These methods are categorised by:

  1. Discover
  2. Decide
  3. Make
  4. Validate
  5. Fundamentals

Why Computer Programmers Should Stop Calling Themselves Engineers - The Atlantic

This article by Ian Bogost from a few years back touches on one of the themes in the talk I gave at New Adventures:

“Engineer” conjures the image of the hard-hat-topped designer-builder, carefully crafting tomorrow. But such an aspiration is rarely realized by computing. The respectability of engineering, a feature built over many decades of closely controlled, education- and apprenticeship-oriented certification, becomes reinterpreted as a fast-and-loose commitment to craftwork as business.

UX Workshop | Trys Mudford

I’m so, so happy that Trys has joined us at Clearleft!

Here, he recounts his first day, which just happened to coincide with an introductory UX workshop that went really well.

Intro to Font Metrics

Font metrics help the computer determine things like the default spacing between lines, how high or low sub and super scripts should go, and how to align two differently sized pieces of text next to each other.
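As a quick illustration, here’s a small sketch of peeking at some of those metrics in the browser via the Canvas TextMetrics API; the font and sample string are arbitrary, and the fontBoundingBox properties need a reasonably recent browser.

```typescript
// A sketch of inspecting font metrics via canvas measureText().
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d')!;
ctx.font = '24px Georgia';

const metrics = ctx.measureText('Sphinx of black quartz');

// Ascent/descent of the glyphs actually drawn…
console.log('actual ascent:', metrics.actualBoundingBoxAscent);
console.log('actual descent:', metrics.actualBoundingBoxDescent);

// …versus the ascent/descent of the font as a whole, which is what the
// browser leans on for default line spacing and vertical alignment.
console.log('font ascent:', metrics.fontBoundingBoxAscent);
console.log('font descent:', metrics.fontBoundingBoxDescent);
```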

Performance Budgets That Stick - TimKadlec.com

I like Tim’s definition here:

A performance budget is a clearly defined limit on one or more performance metrics that the team agrees not to exceed, and that is used to guide design and development.

And I agree about the four attributes required for a performance budget to succeed. It must be:

  1. Concrete
  2. Meaningful
  3. Integrated
  4. Enforceable

The point is not to let the performance budget try to stand on its own, somewhere hidden in company documentation collecting dust. You need to be proactive about making the budget become a part of your everyday work.
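As a sketch of what “integrated” and “enforceable” might look like in practice, here’s a hypothetical CI check; the bundle paths and numbers are invented for illustration, not taken from Tim’s article.

```typescript
// A hypothetical budget check that runs in CI and fails the build.
// File paths and limits are placeholders, not recommendations.
import { statSync } from 'node:fs';

interface Budget {
  file: string;
  maxKilobytes: number;
}

const budgets: Budget[] = [
  { file: 'dist/main.js', maxKilobytes: 170 },
  { file: 'dist/styles.css', maxKilobytes: 50 },
];

let failed = false;

for (const { file, maxKilobytes } of budgets) {
  const kilobytes = statSync(file).size / 1024;
  if (kilobytes > maxKilobytes) {
    console.error(`${file}: ${kilobytes.toFixed(1)} KB exceeds budget of ${maxKilobytes} KB`);
    failed = true;
  }
}

// A non-zero exit code is what makes the budget integrated and enforceable.
process.exit(failed ? 1 : 0);
```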

Programming as translation – Increment: Internationalization

Programming lessons from Umberto Eco and Emily Wilson.

Converting the analog into the digital requires discretization, leaving things out. What we filter out—or what we focus on—depends on our biases. How do conventional translators handle issues of bias? What can programmers learn from them?

Why Data Is Never Raw - The New Atlantis

Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.

Responsive Images on the Apple Watch — ericportis.com

Some tips for getting responsive images to work well on the Apple Watch:

  • test your layouts down to 136-px wide
  • include 300w-ish resources in your full-width img’s srcsets
  • art direct to keep image subjects legible
  • say the magic meta words
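As a rough sketch of the srcset advice above (the image URLs, widths, and sizes value are placeholders, not from Eric’s article):

```typescript
// A sketch of including a ~300w resource in a full-width image's srcset,
// built via the DOM API. All URLs and widths are made up for illustration.
const img = document.createElement('img');
img.src = 'photo-800.jpg';
img.srcset = [
  'photo-300.jpg 300w',   // small enough for a watch-sized viewport
  'photo-800.jpg 800w',
  'photo-1600.jpg 1600w',
].join(', ');
img.sizes = '100vw';
img.alt = 'A tightly cropped subject, still legible at very small widths';
document.body.appendChild(img);
```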

UX past, present, and future | Clearleft

This long zoom by Andy is right up my alley—a history of UX design that begins in 1880. It’s not often that you get to read something that includes Don Norman, Doug Engelbart, Lillian Gilbreth, and Vladimir Lenin. So good!

AddyOsmani.com - Start Performance Budgeting

Great ideas from Addy on where to start with creating a performance budget that can act as a red line you don’t want to cross.

If it’s worth getting fast, it’s worth staying fast.

Workplace topology | Clearleft

The hits keep on comin’ from Clearleft. This time, it’s Danielle with an absolutely brilliant and thoughtful piece on the perils of gaps and overlaps in pattern libraries, design systems and organisations.

This is such a revealing lens to view these things through! Once you’re introduced to it, it’s hard to “un-see” problems in terms of gaps and overlaps in categorisation. And even once the problems are visible, you still need to solve them in the right way:

Recognising the gaps and overlaps is only half the battle. If we apply tools to a people problem, we will only end up moving the problem somewhere else.

Some issues can be solved with better tools or better processes. In most of our workplaces, we tend to reach for tools and processes by default, because they feel easier to implement. But as often as not, it’s not a technology problem. It’s a people problem. And the solution actually involves communication skills, or effective dialogue.

That last part dovetails nicely with Jerlyn’s equally great piece.

What is Modular CSS?

A walk down memory lane, looking at the history of modular CSS methodologies (and the people behind them).

Incomplete Open Cubes Revisited

Art, geometry, and code. Sol LeWitt started it. Rob saw it through.

Breaking the Deadlock Between User Experience and Developer Experience · An A List Apart Article

Yes! Yes! Yes!

Our efforts to measure and improve UX are packed with tragically ironic attempts to love our users: we try to find ways to improve our app experiences by bloating them with analytics, split testing, behavioral analysis, and Net Promoter Score popovers. We stack plugins on top of third-party libraries on top of frameworks in the name of making websites “better”—whether it’s something misguided, like adding a carousel to appease some executive’s burning desire to get everything “above the fold,” or something truly intended to help people, like a support chat overlay. Often the net result is a slower page load, a frustrating experience, and/or (usually “and”) a ton of extra code and assets transferred to the browser.

Even tools that are supposed to help measure performance in order to make improvements—like, say, Real User Monitoring—require you to add a script to your web pages …thereby increasing the file size and degrading performance! It’s ironic, in that Alanis Morissette sense of not understanding what irony is.

Stacking tools upon tools may solve our problems, but it’s creating a Jenga tower of problems for our users.

This is a great article about evaluating technology.

Prioritizing the Long-Tail of Performance - TimKadlec.com

Focusing on the median or average is the equivalent of walking around with a pair of blinders on. We’re limiting our perspective and, in the process, missing out on so much crucial detail. By definition, when we make improving the median or average our goal, we are focusing on optimizing for only about half of our sessions.

Tim does the numbers…

By honing in on the 90th—or 95th or similar—we ensure those weaknesses don’t get ignored. Our goal is to optimize the performance of our site for the majority of our users—not just a small subset of them.
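As a back-of-the-envelope illustration of why the 90th or 95th percentile tells a different story than the median, here’s a small sketch with made-up load times:

```typescript
// Sample load times are invented for illustration only.
const loadTimesMs = [900, 950, 1000, 1100, 1200, 1300, 2500, 4000, 6500, 9000];

// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

console.log('p50:', percentile(loadTimesMs, 50), 'ms'); // looks fine on its own
console.log('p90:', percentile(loadTimesMs, 90), 'ms'); // the long tail appears
console.log('p95:', percentile(loadTimesMs, 95), 'ms');
```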