This looks very useful: a script that will allow visitors to tailor which tracking scripts they want to allow. Seems like a win-win to me: useful for developers, and useful for end users. A safe and sensible approach to GDPR.
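As a rough sketch of how that kind of consent-gating can work (this is not the linked script; the storage key and script URL are placeholders), the idea is simply to never inject a tracker until the visitor has opted in:

```typescript
// A minimal sketch of consent-gated script loading, not the linked
// implementation. The 'consent:analytics' key and tracker URL are
// hypothetical stand-ins for whatever the consent UI actually sets.
function loadTracker(src: string): void {
  const script = document.createElement('script');
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Only inject the tracking script if the visitor explicitly said yes.
if (localStorage.getItem('consent:analytics') === 'granted') {
  loadTracker('https://analytics.example.com/tracker.js');
}
```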
The focus here is on performance, but these tools are equally useful for shining a light on just how bad the situation is with online surveillance and tracking.
Everything old is new again—sometimes the age-old technique of using a 1x1 pixel image to log requests is still the only way to get certain metrics.
While tracking pixels are far from a new idea, there are creative ways in which we can use them to collect data useful to developers. Once the data is gathered, we can begin to make much more informed decisions about how we work.
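A minimal sketch of the pattern, assuming a hypothetical /pixel.gif endpoint: request a tiny image with the measurement in the query string, and the metric lands in your ordinary access logs.

```typescript
// Sketch only: /pixel.gif and the parameter names are hypothetical.
// The request itself is the data point; the server never needs to
// respond with anything more than a 1x1 transparent GIF.
function logMetric(name: string, value: number): void {
  const img = new Image(1, 1);
  img.src = `/pixel.gif?metric=${encodeURIComponent(name)}` +
            `&value=${encodeURIComponent(String(value))}` +
            `&t=${Date.now()}`; // cache-buster so every hit reaches the log
}

// Example: log how long the page took to finish loading.
window.addEventListener('load', () => {
  logMetric('load-time-ms', Math.round(performance.now()));
});
```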
Really smart thinking from Stuart on how the randomised response technique could be applied to analytics. My only question is who exactly does the implementation.
The key point here is that, if you’re collecting data about a load of users, you’re usually doing so in order to look at it in aggregate; to draw conclusions about the general trends and the general distribution of your user base. And it’s possible to do that data collection in ways that maintain the aggregate properties of it while making it hard or impossible for the company to use it to target individual users. That’s what we want here: some way that the company can still draw correct conclusions from all the data when collected together, while preventing them from targeting individuals or knowing what a specific person said.
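To make the coin-flip mechanics concrete, here's the textbook randomised response scheme (a sketch of the general technique, not code from Stuart's post): each user randomises their own answer before it ever leaves the browser, and the aggregate proportion is recovered with a simple correction.

```typescript
// Client side: with probability 0.5 answer honestly; otherwise answer
// at random. Any individual "yes" is plausibly just a coin flip.
function randomisedAnswer(truth: boolean): boolean {
  if (Math.random() < 0.5) {
    return truth;
  }
  return Math.random() < 0.5;
}

// Aggregation side: with fair coins, P(yes) = 0.5 * p + 0.25, so the
// true proportion p is recoverable from the observed rate alone.
function estimateTrueProportion(answers: boolean[]): number {
  const observed = answers.filter(a => a).length / answers.length;
  return (observed - 0.25) / 0.5;
}
```

No single response can be pinned on anyone, but over thousands of responses the estimate converges on the real rate—exactly the aggregate-only property described above.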
Here’s a clever idea from Harry if you’re willing to play the long game in tracking down redundant CSS—add a transparent background image to the rule block and then sit back and watch your server logs for any sign of that sleeper agent ever getting activated.
If you do find entries for that particular image, you know that the legacy feature is somehow still accessible—and the number of entries should give you a clue as to how severe the problem might be.
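A sketch of the sleeper agent itself, assuming a suspected-dead component (the selector and image path are made up for illustration):

```css
/* Browsers only fetch a background image when the rule applies to a
   rendered element, so any request for this file in the access logs
   means the legacy selector still matches something in production. */
.legacy-carousel {
  background-image: url('/img/legacy-carousel-canary.gif');
}
```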
A few technical words about Upsideclown, and some thoughts about audiences and the web (17 Aug. 2017, at Interconnected)
Matt writes about the pleasure of independent publishing on the web today:
It feels transgressive to have a website in 2017. Something about having a domain name and about coding HTML which is against the grain now. It’s something big companies do, not small groups. We’re supposed to put our content on Facebook or Medium, or keep our publishing to an email newsletter. But a website?
But he points out a tension between the longevity that you get from hosting the canonical content yourself, and the lack of unified analytics when you syndicate that content elsewhere.
There’s no simple online tool that lets me add up how many people have read a particular story on Upsideclown via the website, the RSS feed, and the email newsletter. Why not? If I add syndication to Facebook, Google, and Apple, I’m even more at sea.
This is a great free service for doing a bit of performance monitoring on your site. It uses WebPageTest, and you do all the setup via a GitHub repo that then displays the results using GitHub Pages.
The act of linking to this story is making it true.
“I don’t think there’s any law against this,” I said. How could there be a law against something that’s not possible?
I concur with this sentiment:
If you are starting a new blog, or have one already, the best thing you can do is turn off all analytics.
Especially true for your own personal site:
Just turn them off now. Then, write about whatever the fuck you want to write about.
Some insane numbers on the return on investment that a bit of responsive optimisation can bring.
Great news! Google Analytics now tracks page load times.
Finally revealed: what Jeff has been working on since he moved into the lair of the Google. He’s been making Google Analytics look and feel nicer.