Tags: hack

CSS custom properties in generated content

Cassie posted a neat tiny lesson, complete with a reduced test case.

Here’s the situation…

CSS custom properties are fantastic. You can drop them in just about anywhere that a property takes a value.

Here’s an example of defining a custom property for a length:

:root {
    --my-value: 1em;
}

Then I can use that anywhere I’d normally give something a length:

.my-element {
    margin-bottom: var(--my-value);
}

I went a bit overboard with custom properties on the new Patterns Day site. I used them for colour values, font stacks, and spacing. Design tokens, I guess. They really come into their own when you combine them with media queries: you can update the values of the custom properties based on screen size …without having to redefine where those properties are applied. Also, they can be updated via JavaScript so they make for a great common language between CSS and JavaScript: you can define where they’re used in your CSS and then update their values in JavaScript, perhaps in response to user interaction.
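
For example, here's a minimal sketch of that hand-off, using the custom property from above (the button selector is just for illustration):

// Any CSS that references var(--my-value) picks up the change.
document.querySelector('button').addEventListener('click', () => {
    document.documentElement.style.setProperty('--my-value', '2em');
});

// And you can read the current value back if you need it:
const current = getComputedStyle(document.documentElement)
    .getPropertyValue('--my-value');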

But there are a few places where you can’t use custom properties. You can’t, for example, use them as part of a media query. This won’t work:

@media all and (min-width: var(--my-value)) {
    ...
}

You also can’t use them in generated content if the value is a number. This won’t work:

:root {
    --number-value: 15;
}
.my-element::before {
    content: var(--number-value);
}

Fair enough. Generated content in CSS is kind of a strange beast. Eric delivered an entire hour-long talk at An Event Apart in Seattle on generated content.

But Cassie found a workaround if the value you want to put into that content property is numeric. CSS counters are a kind of generated content—the numbers that appear in front of ordered list items. And you can control the value of those numbers from CSS.

CSS counters work kind of like variables. You name them and assign values to them using the counter-reset property:

.my-element {
    counter-reset: mycounter 15;
}

You can then reference the value of mycounter in a content property using the counter() function:

.my-element::before {
    content: counter(mycounter);
}

Cassie realised that even though you can’t pass in a custom property directly to generated content, you can pass in a custom property to the counter-reset property. So you can do this:

:root {
    --number-value: 15;
}
.my-element::before {
    counter-reset: mycounter var(--number-value);
    content: counter(mycounter);
}

In a roundabout way, this allows you to use a custom property for generated content!

I realise that the use cases are pretty narrow, but I can’t help but be impressed with the thinking behind this. Personally, I would’ve just read that generated content doesn’t accept custom properties and moved on. I would’ve given up quickly. But Cassie took a step back and found a creative pass-the-parcel solution to the problem.

I feel like this is a hack in the best sense of the word: a creatively improvised solution to a problem or limitation.

I was trying to display the numeric value stored in a CSS variable inside generated content… Turns out you can’t do that. But you can do this… codepen.io/cassie-codes/p… (not saying you should, but you could)

Design sprint?

Our hack week at CERN to reproduce the WorldWideWeb browser was five days long. That’s also the length of a design sprint. So …was what we did a design sprint?

I’m going to say no.

On the surface, our project has all the hallmarks of a design sprint. A group of people who don’t normally work together were thrown into an intense week of problem-solving and building, culminating in a tangible, testable output. But when you look closer, the journey itself was quite different. A design sprint is typically broken into five phases, each one mapped onto a day of work:

  1. Understand and Map
  2. Demos and Sketch
  3. Decide and Storyboard
  4. Prototype
  5. Test

Gathered at CERN, hunched over laptops.

There was certainly plenty of understanding, sketching, and prototyping involved in our hack week at CERN, but we knew going in what the output would be at the end of the week. That’s not the case with most design sprints: figuring out what you’re going to make is half the work. In our case, we knew what needed to be produced; we just had to figure out how. Our process looked more like this:

  1. Understand and Map
  2. Research and Sketch
  3. Build
  4. Build
  5. Build

Now you could say that it’s a kind of design sprint, but I think there’s value in reserving the term “design sprint” for the specific five-day process. As it is, there’s enough confusion between the term “sprint” in its agile sense and “design sprint”.

Timelines of the web

Recreating the original WorldWideWeb browser was an exercise in digital archaeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:

#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */

Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.

Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.

So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.

The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.

Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.

The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.

So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.

I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.

Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.

Progressive enhancement. Marking up (and styling) an interactive timeline that looks good in a modern browser and still works in the first ever web browser.

I marked up each timeline as an ordered list of h-events:

<li class="h-event y1968">
  <a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
    <time class="dt-start" datetime="1968-12-09">1968</time>
    <abbr class="p-name" title="oN-Line System">NLS</abbr>
  </a>
</li>

With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see the explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.

The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:

.y1968 {
  left: calc((1968 - 1959) * (100%/30) - 5em);
}

For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:

.y2014 {
  right: calc((2019 - 2014) * (100%/30) - 5em);
}

(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)

I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.

As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.

Et voilà!


I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:

1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.

1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.

1982: Literary Machines, Ted Nelson’s book which was on our desk at all times

I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.

The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.

Hello WorldWideWeb.

WorldWideWeb

Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Want to see the final result? Here you go:

worldwideweb.cern.ch

That’s the website we built. The call to action is hard to miss:

Launch WorldWideWeb

Behold! A simulation of using the first ever web browser, recreated inside your web browser.

Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:

  1. Select Document from the menu options on the left.
  2. A new menu will pop open. Select Open from full document reference.
  3. Type a URL, like, say https://adactio.com
  4. Press that lovely chunky Open button.

You are now surfing the web through a decades-old interface. Double click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.

But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.

  1. From that Document menu you opened, select New file…
  2. Type the name of your file; something like test.html
  3. Start editing the heading and the text.
  4. In the main WorldWideWeb menu, select Links.
  5. Now focus the window with the document you opened earlier (adactio.com).
  6. With that window’s title bar in focus, choose Mark all from the Links menu.
  7. Go back to your test.html document, and highlight a piece of text.
  8. With that text highlighted, click on Link to marked from the Links menu.

If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.

I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!

After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.

Remy did all the JavaScript for the recreated browser …in just five days!

Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.

Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.

John exhaustively documented UI patterns that Angela turned into marvelous HTML and CSS.

Through it all, Craig and Martin put together the accompanying website. Personally, I think the website is freaking awesome—it’s packed with fascinating information! Check out the family tree of browsers that Craig made.

What a team!

Back at CERN

We got the band back together.

In September of 2013, I had the great pleasure and privilege of going to CERN with a bunch of very smart people. I’m not sure how I managed to slip by. We were there to recreate the experience of using the line-mode browser. As I wrote at the time:

Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.

In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.

I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:

  1. Give people an understanding of the user experience of the WorldWideWeb browser.
  2. Demonstrate that a read/write philosophy was there from the beginning.
  3. Give context—what was going on at the time?

That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.

Remy is working hard on the first point. He documented the first day and now on the second day, he’s made enormous progress already.

I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…

Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.

We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).

We took a little time out for a tour of the data centre. Oh, and at lunch time, we sat with Robert Cailliau and grilled him with questions about the birth of the web. Quite a day!

Now it’s time for me to hit the hay and prepare for another day of hacking in this extraordinary place.

The road to Indie Web Camp LA

After An Event Apart San Francisco, which was—as always—excellent, it was time for me to get to the next event: Indie Web Camp Los Angeles. But I wasn’t going alone. Tantek was going too, and seeing as he has a car—a convertible, even—what better way to travel from San Francisco to LA than on the Pacific Coast Highway?

It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.

Half Moon Bay. Roadtripping with @t. Pomponio beach. Windswept. Salinas. Refueling. Driving through the Californian night. Pismo Beach. On the beach. On the beach with @t. Stopping for a coffee in Santa Barbara. Leaving Pismo Beach. Chevron. Santa Barbara steps. On the road. Driving through Malibu. Malibu sunset. Sun worshippers. Sunset in Santa Monica.

The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.

I decided to follow on from what I did at the Brighton Indie Web Camp. There, I made a combined tag view—a way of seeing, for example, everything tagged with “indieweb” instead of just journal entries tagged with “indieweb” or links tagged with “indieweb”. I wanted to do the same thing with my archives. I have separate archives for my journal, my links, and my notes. What I wanted was a combined view.

After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well):

I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.

Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.

Greetings from Indie Web Camp LA. Indie Web Camping. Hacking away. Day two of Indie Web Camp LA.

Indie Web Camp Brighton 2016

Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.

IndieWebCampBrighton2016

The first day—the discussions day—covered a lot of topics. I led a session on service workers, where we brainstormed offline and caching strategies for personal websites.

There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.

I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.

First off, I added a small bit of code to my bookmarking flow, so that any time I link to something, I send a ping to the Internet Archive to grab a copy of that URL. So here’s a link I bookmarked to one of Remy’s blog posts, and here it is in the Wayback Machine—see how the date of storage matches the date of my link.

The code to do that was pretty straightforward. I needed to hit this endpoint:

http://web.archive.org/save/{url}
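
My actual implementation lives in my server-side posting flow, but the idea fits in a few lines. Here’s a rough JavaScript sketch of the same ping (not the code I actually used):

// Fire-and-forget request asking the Wayback Machine to take a
// snapshot of a URL (best done server-side, where there are no
// cross-origin restrictions).
async function archiveUrl(url) {
    try {
        const response = await fetch('http://web.archive.org/save/' + url);
        console.log('Archive ping returned', response.status);
    } catch (error) {
        console.error('Archive ping failed', error);
    }
}

archiveUrl('https://example.com/some-post');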

I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted in to the description.

I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.

The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:

But until this weekend, I didn’t have the combined view:

I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago but have faded over time, or topics that have become more and more popular with each year.

All in all, a very productive weekend.

Save the dates for Indie Web Camp Brighton 2016

September 24th and 25th—those are the dates you should put in your diary. That’s when this year’s Indie Web Camp Brighton is happening.

Once again it’ll be at 68 Middle Street, home to Clearleft. You can register for free now, and then add your name to the list of participants on the wiki.

If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.

The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.

The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.

At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.

You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.

Last year’s Indie Web Camp Brighton was lots of fun. Let’s make Indie Web Camp Brighton 2016 even better!

Indie Web Camp Brighton group photo

Notice

We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.

One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.

In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.

By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.

In the past couple of years, Andy concocted a new internship scheme:

So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high-level brief.

The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.

This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.

I’ll miss having this lot in the Clearleft office.

Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.

They got stuck in to researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.

Playback at Clearleft

As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.

Chloe, Monika and Chris at dConstruct

The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.

There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:

Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!

Hmm… sounds like something that could enrich the lives of local residents.

Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the five or six that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.

That was the genesis of Notice. After that, they pulled out all the stops.

Exciting things are afoot with the @Clearleftintern project.

Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.

Monika and Chris, hacking

Last week marked the end of the project and the grand unveiling.

Playing with the @notice_city prototype. Chris breaks it down. Playback time. Unveiling.

They’ve done a great job. All the details are on the website, including this little note I wrote about the project:

This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put three talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.

We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.

We hereby declare this experiment a success!

Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.

Out on the streets of Brighton Prototype

As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.

I’m going to miss them.

The terrific trio!

Far afield

I spoke at Responsive Field Day here in Portland on Friday. It was an excellent event. All the talks were top notch.

The day flew by, with each talk clocking in at just 20 minutes, in batches of three followed by a quick panel discussion. It was a great format …but I knew it would be. See, Responsive Field Day was basically Responsive Day Out relocated to Portland.

Jason told me last year how inspired he was by the podcast recordings from Responsive Day Out and how much he and Lyza wanted to do a Responsive Day Out in Portland. I said “Go for it!” although I advised changing the name to something a bit more American (having a “day out” at the seaside feels very British—a “field day” works perfectly as the US equivalent). Well, Jason, Lyza, and everyone at Cloud Four should feel very proud of their Responsive Field Day—it was wonderful.

As the day unfolded on Friday, I found myself being quite moved. It was genuinely touching to see my conference template replicated not only in format, but also in spirit. It was affordable (“Every expense spared!” was my motto), inclusive, diverse, and fast-paced. It was a lovely, lovely feeling to think that I had, in some small way, provided some inspiration for such a great event.

Jessica pointed out that it isn’t the first time I’ve set up an event template for others to follow. When I organised the first Science Hack Day in London a few years ago, I never could have predicted how amazingly far Ariel would take the event. Fifty Science Hack Days in multiple countries—fifty! I am in awe of Ariel’s dedication. And every time I see pictures or video from a Science Hack Day in some far-flung location I’ve never been to, and I see the logo festooning the venue …I get such a warm fuzzy glow.

Y’know, when you’re making something—whether it’s an event, a website, a book, or anything else—it’s hard to imagine what kind of lifespan it might have. It’s probably just as well. I think it would be paralysing and overwhelming to even contemplate in advance. But in retrospect …it sure feels nice.

Small independent pieces, loosely joined

It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.

For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using WordPress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.

I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.

This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.

By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:

I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.

It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.

Save the dates.

100 words 049

The second day of Indie Web Camp Germany was really productive. It was amazing to see how much could be accomplished in just one day of collaborative hacking—people were posting to Twitter from their own site, sending webmentions, and creating their own micropub endpoints.

I made a little improvement to the links section of my site. Now every time I link to something, I check to see if it accepts webmentions, and if it does, I ping it to let it know that I’ve linked to it.

I’ve posted the code as a gist. Feel free to use it.
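
The gist itself is PHP, but the flow is short enough to sketch in JavaScript too. This simplified version only checks the HTTP Link header for an endpoint; proper discovery should also look for rel="webmention" in the HTML itself:

// Check whether a page advertises a webmention endpoint,
// and if it does, ping it with the standard form-encoded payload.
async function sendWebmention(source, target) {
    const response = await fetch(target);
    const linkHeader = response.headers.get('Link') || '';
    const match = linkHeader.match(/<([^>]*)>\s*;\s*rel="?webmention"?/);
    if (!match) {
        return; // no endpoint advertised (in the header, at least)
    }
    const endpoint = new URL(match[1], target); // may be relative
    await fetch(endpoint, {
        method: 'POST',
        body: new URLSearchParams({ source, target })
    });
}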

Hackfarming Blood Buddies

Every year at Clearleft, there’s a week where we step away from client work, go off the grid, and disappear into the countryside to work on something fun. We call it Hack Farm.

Hack Farm usually takes place around November, but due to various complexities, Hack Farm 2014 wound up getting pushed back to the start of 2015. Last week we formed a convoy, stocked up on the bare essentials (food, post-it notes, and booze), and drove west for four hours until we were in Herefordshire at a place called The Colloquy—a return to the site of the first ever Hack Farm.

Arrival at The Colloquy.

I kept notes on each day.

Day Zero

We arrive in the late afternoon, settle into our respective rooms, and eat some wonderful home-cooked food. After dinner, even though everyone’s pretty knackered, we agree that it’s best to figure out what everyone will be working on for the next few days.

Everyone gets a chance to pitch their ideas, and then we all do some dot-voting to whittle down the options. In short order, we arrive at four different projects for four different teams.

One of my ideas is chosen. This is something I’ve been pitching every single year at Hack Farm, and every single year it ends up narrowly missing out. This year, it’s finally going to happen!

On my team I’ve got Rich, Batesy, Andy P, and Tessa.

Day One

We choose a room to use as our home base and begin.

We start by agreeing on a hypothesis—more of an assumption, really—that we’ll be basing everything upon:

People are more likely to give blood if they are not alone.

Hypothesis

We start writing down questions that people might ask related to giving blood. Some of these questions might well turn out to be out of scope for this project, or might already be better answered by an existing service like blood.co.uk, e.g.:

  • Can I give blood?
  • How often can I give blood?
  • Will it hurt?
  • How long will it take?

Other questions are potentially open to us providing answers:

  • Where can I give blood?
  • When can I give blood?
  • Who else is giving blood?

That last one is a question that doesn’t seem to be answered anywhere else.

We brain-dump potential data sources that could answer the “who”, “when”, and “where” questions. The data from blood.co.uk could potentially answer the “when” and “where” questions, e.g. when and where is the next donation? Data from Twitter, Facebook, or your address book could answer the “who” questions, e.g. who are you, and who are your friends?

We brainstorm potential outputs of the project. The obvious choices are a website or a native app, but there could also potentially be email, SMS, or even posters and postcards.

We think about potential incentives for the users of this service: peer pressure, gamification, bragging rights, reassurance, etc.

So there’s a lot of divergent thinking going on: at this stage, there are no bad ideas (no, really!).

We also establish the goals of the project—what we would like to see happen as a result of this service existing. The very minimum success criterion is:

Someone gives blood who hasn’t given blood before.

There’s a follow-on criterion for measuring longer-term success:

A group gives blood regularly.

We split into two groups to work on a propositional statement, then come together to merge what we came up with. Here it is:

For people who want to give blood, who need encouragement and motivation, Blood Buddies brings together people you know to make it a shared experience. That way, you’re more likely to give blood.

Unlike blood.co.uk, it frames giving blood as a shared, rewarding activity.

Proposition James and Tessa

Blood Buddies is a codename for now. The final service might have a different name, like Bluddies maybe.

After lunch, we start to work on user stories and personas. After a while, we think we’ve got a pretty clear idea for the minimal viable user journey.

Now we take a little break and stretch our legs.

A stroll through the fields.

When we regroup, we start researching technical possibilities (like Twitter authentication, GMail address book, Facebook contacts, etc.), while also throwing ideas around to do with branding, tone of voice, etc. James Box comes in and helps us out with a handy branding exercise.

In an effort to name the thing, we create a page filled with relevant words that might be combined into a name. Eventually we reach the “just fucking end it!” moment. The service is called “Blood Buddies” after all. The tagline is …drumroll… “Get plastered together!”

Meanwhile, having investigated the technical possibilities, it looks like Twitter’s API will be the easiest (relatively) to start with.

Vocabulary Kanban

We write out our epics and create a little kanban board. We have our tasks figured out:

  • implement sign-in with Twitter,
  • create a style guide,
  • mock up the homepage,
  • mock up a sign-up form,
  • and more.

Tomorrow everyone can assign a task to themselves and get cracking (some people have started already).

Day Two

After a late Superbowl night, we arise and begin tackling the day’s tasks.

I managed to get a very rudimentary Twitter sign-in working (eventually!) so now my task is to do something with the data that Twitter is returning …namely, storing it in a database. And because this relies on signing in with Twitter to get any results, this needs to get on to an actual web server as soon as possible.

Cue a day of wrangling with PHP, MySQL, OAuth, Git, Apache, SSH keys, and DNS settings …with an intermittent internet connection that drops out at the most inconvenient of times.

Andy is storyboarding the promo video that will help sell the story of Blood Buddies.

Storyboard

Meanwhile James and Tessa are hammering out a visual language for Blood Buddies. So the work is being approached from two different ends: the server side (how it works behind the scenes) and the interface (how it looks to the end user). In the middle is the user flow, and that’s what Richard is working on, also looking ahead past the minimal viable product to include features that can be added later.

By late afternoon the most basic server-side functionality is done, and the site is live at bloodbuddies.co.uk. Of course, there’s very, very, very little to see there, but at least our team can start adding themselves to the database.

So now the task is to join up the back-end functionality with the visual design and copy. As these strands come together, it feels like we’re getting back to a more collaborative phase: whereas yesterday involved lots of group activity, today was more splintered. But that’s going to change now that we’re going to join up the individual pieces into a unified interface.

Today felt quite productive considering that three out of the five people on our team are on cooking duty.

Spaghetti and meatballs Dinnertime

Day Three

Today is a day of rest. It’s a beautiful day. We go for a drive through the countryside, pop into a pub for some grub, and go walking on the hills.

Walking west to Wales.

Day Four

We’re down to just three team members today. Tessa is working on a different project and Andy is spending the day sleeping, puking, and generally recovering from a heavy night. N00b.

We get cracking on with integrating the visual design with the back-end functionality. That means bashing out some CSS. After an hour or two, we’ve got something basic in place.

While James works on refining the visuals—including a kick-ass logo—Richard is writing lots and lots of copy, and figuring out user flows.

Meanwhile I’m trying to get server-side stuff in place, fiddling with DNS and email; not my favourite activity.

Once the DNS is pointed to the Digital Ocean server, and with the Twitter sign-in working okay, we realise that we’ve actually launched! Admittedly it’s very basic and it needs plenty of refinement, but it’s a start.

We head out for the evening meal together. Just one more day to go.

The Stagg Inn

Day Five

James starts the day by finishing up his kick-ass Blood Buddies logo.

Richard is writing and editing lots of witty copy.

Andy is storyboarding a promotional video.

Rich, me, James, and Andy

I’m trying to get emails working, so that when someone you know signs up to Blood Buddies, we can email you to let you know. By lunchtime, we’ve got it all working.

Lots of the details are in place now: the logo, web fonts, an error page, a favicon …it feels good to be iterating on a live site.

Kanban progress Final day tasks

Device testing

After lunch, James, Richard, and I work on expanding out the home page. Once everything is in pretty good shape, we all come together (with Andy and Tessa) to talk about what the next steps could be after this minimum viable product.

There’s consensus that the most important step would be adding more ways of signing into the site, instead of just Twitter. Also, there’s a lot of functionality we could add if we can scrape the data from blood.co.uk.

But that’s for another day. Right now we’ve got a barebones site, but it’s working.

We shipped.

The Session trad tune machine

Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.

I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:

I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.

I never did do any brushing up. But that all changed last week.

Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.

In short, it was great! And this time, I didn’t stop hacking when I got home.

First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.

Getting ready to workshop with @Seb_ly. Unwrapping some Christmas goodies from Santa @Seb_ly.

Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.

Blinkenlights. Hello, little fella.

On the first workshop day, we all did the same exercises, connected things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.

I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.

So I built a machine that plays Irish traditional music from the internet.

Playing with hardware and software, making things that go beep in the night.

The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.

The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
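
If you’re curious, the logic behind that URL boils down to two API requests and some string surgery. Here’s a rough sketch; the endpoint paths and JSON field names are assumptions from memory, so don’t treat them as gospel:

// Find the most recently active tune on The Session, fetch its ABC
// notation, and strip it down to a single line of playable notes.
async function latestTuneAbc() {
    const recent = await fetch('https://thesession.org/tunes/new?format=json')
        .then((response) => response.json());
    const tuneId = recent.tunes[0].id;
    const tune = await fetch(`https://thesession.org/tunes/${tuneId}?format=json`)
        .then((response) => response.json());
    let abc = tune.settings[0].abc;
    // Remove the fancier ABC that the buzzer can't handle:
    abc = abc.replace(/"[^"]*"/g, '')     // chord symbols
             .replace(/![^!]*!/g, '')     // decorations
             .replace(/[\r\n|:]+/g, ' '); // bar lines, repeats, line breaks
    return abc;
}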

Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.

I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!

Rolling my own speaker cone, quite literally.

I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.

(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)

Habitasteroids

Science Hack Day San Francisco was held in the Github offices last weekend. It was brilliant!

Hacking begins Hacking Science hacker & grumpy cat enthusiast, Keri Bean Launch pad

This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.

Saturday breakfast with the Science Hack Day community! The Science Hack Day girls! Stargazing on GitHub's roof Demos begin!

After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.

So here’s the background…

There are many possibilities for human habitats in space: Stanford tori, O’Neill cylinders, Bernal spheres. Another idea, explored in science fiction, is hollowing out asteroids (Larry Niven’s bubbleworlds). Kim Stanley Robinson explores this idea in depth in his book 2312, where he describes the process of building an asteroid terrarium. The website of the book has a delightful walkthrough of the engineering processes involved. It’s not entirely implausible.

I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?

The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.

Here’s the result…

Habitasteroids!

Habitasteroids

Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).

By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.

You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
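
For the record, the core equation is mercifully short once you have it: spinning at angular velocity ω gives an apparent gravity of ω²r at radius r, so one gee needs ω = √(g/r). In code (with the same spherical-body liberty):

// Rotation period needed for one gee of centrifugal "gravity"
// on the inside surface of a spherical asteroid.
const g = 9.81; // metres per second squared

function rotationPeriodForOneGee(radiusInMetres) {
    const omega = Math.sqrt(g / radiusInMetres); // radians per second
    return (2 * Math.PI) / omega;                // seconds per revolution
}

rotationPeriodForOneGee(2000); // a 2km-radius rock: one spin every ~90 seconds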

At 13:37 on the second day, the demos began. Keri and I were first up.

Jeremy wants to colonize an asteroid Habitasteroids

Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.

Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.

They have nice asteroids though: @brianwolven, @lukedones, @paix120, @LGalache, @motorbikematt, @brx0.

Oh, and while Habitasteroids might be a silly little hack, WRANGLER just might work.

WRANGLER: Capture and De-Spin of Asteroids and Space Debris

Indie Web Camp UK 2014

Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.

It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.

The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.

So in the end, Indie Web Camp UK 2014 was quite a homogeneous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just it: an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.

But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format of one day of talks, followed by one day of hacking is spot on.

Indie Web Camp UK attendees

I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.

With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting set up as I have.

By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.

There was lots of progress in other areas too, particularly with webactions. Some of that progress relates to what I’ve been saying about Web Components. More on that later…

Throw in some Transmat action, location-based hacks, and communication tools; all-in-all a very productive weekend.

Normal

Here in the UK, there’s a “newspaper”—and I use the term advisedly—called The Sun. In longstanding tradition, page 3 of The Sun always features a photograph of a topless woman.

To anyone outside the UK, this is absolutely bizarre. Frankly, it’s pretty bizarre to most people in the UK as well. Hence the No More Page 3 campaign, which seeks to put pressure on the editor of The Sun to ditch their vestigial ’70s sexism and get with the 21st Century.

Note that the campaign is not attempting to make the publication of topless models in a daily newspaper illegal. Note that the campaign is not calling for top-down censorship from press regulators. Instead the campaign asks only that the people responsible reassess their thinking and recognise the effects of having topless women displayed in what is supposedly a family newspaper.

Laura Bates of the Everyday Sexism project has gathered together just some examples of the destructive effects of The Sun’s page 3. And sure, in this age of instant access to porn via the internet, an image of a pair of breasts might seem harmless and innocuous, but it’s the setting for that image that wreaks the damage:

Being in a national newspaper lends these images public presence and, more harmfully for young people, the perception of mainstream cultural approval. Our society, through Page 3, tells both girls and boys ‘that’s what women are’.

Simply put, having this kind of objectification in a freely-available national newspaper normalises it. When it’s socially acceptable to have a publication like The Sun in a workplace, then it’s socially acceptable for that same workplace to have the accompanying air of sexism.

That same kind of normalisation happens in online communities. When bad behaviour is tolerated, bad behaviour is normalised.

There are obvious examples of online communities where bad behaviour is tolerated, or even encouraged: 4Chan, Something Awful. But for as long as I can remember, there have also been online communities that normalise abhorrent attitudes, and yet still get a free pass (usually because the site in question would deliver bucketloads of traffic …as though that were the only metric that mattered).

It used to be Slashdot. Then it was Digg. Now it’s Reddit and Hacker News.

In each case, the defence of the bad behaviour always came down to the sheer size of the community. “Hey, that’s just the way it is. There’s nothing that can be done about it.” To put it another way …it’s normal.

But normality isn’t an external phenomenon that exists in isolation. Normality is created. If something is perceived as normal—whether that’s topless women in a national newspaper or threatening remarks in an online forum—that perception is fuelled by what we collectively accept to be “normal”.

Last year, Relly wrote about her experience at a conference:

Then there was the one comment I saw in a live irc style backchannel at an event, just after I came off stage. I wish I’d had the forethought to screenshot it or something but I was so shocked, I dropped my laptop on the table and immediately went and called home, to check on my kids.

Why?

Because the comment said (paraphrasing) “This talk was so pointless. After she mentioned her kids at the beginning I started thinking of ways to hunt them down and punish her for wasting my time here.”

That’s a horrible thing for anyone to say. But I can understand how someone would think nothing of making a remark like that …if they began their day by reading Reddit or Hacker News. If you make a remark like that there, nobody bats an eyelid. It’s normal.

So what do we do about that? Do we simply accept it? Do we shrug our shoulders and say “Oh, well”? Do we treat it like some kind of unchangeable, immovable force of nature: that once you have a large online community, bad behaviour should be accepted as the default mode of discourse?

No.

It’s hard work. I get that. Heck, I run an online community myself and I know just how hard it is to maintain civility (and I’ve done a pretty terrible job of it in the past). But it’s not impossible. Metafilter is a testament to that.

The other defence of sites like Reddit and Hacker News is that it’s unfair to judge the whole entity purely on its worst episodes. I don’t buy that. The economic well-being of a country shouldn’t be judged by the wealth of its richest citizens, or even of its average citizens, but by the wealth of its poorest.

That was precisely how Rebecca Watson was shouted down when she tried to address Reddit’s problems on a panel at South by Southwest last year:

Does the good, no matter if it’s a fundraiser for a kid with cancer or a Secret Santa gift exchange, negate the bigotry?

Like I said, running an online community is hard (Derek’s book was waaaay ahead of its time), but it’s not impossible. If we treat awful behaviour as some kind of unstoppable force that can’t be dealt with, then what’s the point in trying to have any kind of community at all?

Just as with the No More Page 3 campaign, I’m not advocating legal action or legislative control. Instead, I just want some awareness that what we think of as normal is what we collectively decide is normal.

I try not to be a judgemental person. But if I see someone in public with a copy of The Sun, I’m going to judge them. And no, it’s not a class thing: I just don’t consider misogyny to be socially acceptable. And if you participate in Reddit or Hacker News …well, I’m afraid I’m going to judge you too. I don’t consider it socially acceptable.

Of course my judgemental opinion of someone doesn’t make a blind bit of difference to anybody. But if enough of us made our feelings clear, then maybe, slowly but surely, there might be a shift in feeling. There might just be a small movement of the needle that calibrates what we think of as normal in our online communities.

Hackfarming Tiny Planner

Towards the end of each year, we Clearlefties head off to a remote location in the countryside for a week of hacking on non-client work. It’s all good unclean fun.

It started two years ago when we made Map Tales. Then last year we worked on the Politmus project. A few months back, it was the turn of Hackfarm 2013.

Hackfarm 2013

This time it was bigger than ever. Rather than having everyone working on one big project all week, it made more sense to split into smaller teams and work on a few different smaller projects. Ant has written a detailed description of what went down.

By the middle of the week, I found myself on a team with James, other James, Graham, and an Andy. We started working on something that Boxman has wanted for a while now: a simple little app for adding steps to a list of things to do.

Here’s what differentiates it from the many other to-do list apps out there: you start by telling it what time you want to be finished by. Then, after you’ve added all your steps, it tells you what time you need to get started. An example use case would be preparing a Sunday roast. You know all the steps involved, and you know what time you want to sit down to eat, so what time do you need to start your preparation?
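That scheduling logic boils down to working backwards from the finish time. Here’s a minimal sketch of the idea in JavaScript (an illustration of the approach only, not the actual Tiny Planner code; the names and data shapes are made up):

// Sum the durations of all the steps (in minutes),
// then subtract that total from the target finish time.
function startTime(finishTime, steps) {
    var totalMinutes = steps.reduce(function (sum, step) {
        return sum + step.minutes;
    }, 0);
    return new Date(finishTime.getTime() - totalMinutes * 60 * 1000);
}

// Example: a Sunday roast on the table at 2pm…
var roast = [
    { name: 'Prep the vegetables', minutes: 30 },
    { name: 'Roast the chicken', minutes: 90 },
    { name: 'Rest the meat and make gravy', minutes: 20 }
];
// …140 minutes of steps means getting started at 11:40am.
startTime(new Date('2013-12-01T14:00:00'), roast);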

We call it Tiny Planner. It’s not “done” in any meaningful sense of the word, and let’s face it, it probably never will be. What happens at hackdays, stays at hackdays …unfinished. Still, the code is public if anyone fancies doing something with it.

Hackfarm 2013

What made this project interesting from my perspective, was that it was one of them new-fangled single-page-app thingies. You know the kind: the ones that are made without progressive enhancement, and cease to exist in the absence of JavaScript. Exactly the kind of thing I would normally never work on, in other words.

It was …interesting. I thought it would be a good opportunity to evaluate all the various JS-or-it-doesn’t-happen frameworks like Angular, Ember, and Backbone. So I started reading the documentation. I guess I hadn’t realised quite how stupid I am, because I couldn’t make any headway. It was quite dispiriting. So I left Graham to do all the hard JavaScript work and concentrated on the CSS instead. So much for investigating new technologies.

Hackfarm 2013

Partly because the internet connection at Hackfarm was so bad, we decided to reduce the server dependencies as much as possible. In the end, we didn’t need any server at all. All the data is stored in the browser in local storage. A handy side-effect of that is that we could offline everything—this may be one of the few legitimate uses of appcache. Mind you, I never did get ’round to actually adding the appcache component because, well, you know what it’s like with cache-invalidation and all that. (And like I said, the code’s public now so if it ever does get put into a presentable state, someone can add the offline stuff then.)
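The local storage side of things is pleasingly small. Something along these lines would do it (a sketch; the key name is my own invention, and because localStorage only stores strings, the list gets serialised as JSON):

function saveSteps(steps) {
    localStorage.setItem('tiny-planner-steps', JSON.stringify(steps));
}

function loadSteps() {
    var stored = localStorage.getItem('tiny-planner-steps');
    return stored ? JSON.parse(stored) : [];
}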

From a development perspective, it was an interesting experiment all ‘round; dabbling in client-side routing, client-side templating, client-side storage, client-side everything really. But it did feel …weird. There’s something uncanny about building something that doesn’t have proper URLs. It uses web technologies but it doesn’t really feel like it’s part of the web.

Anyway, feel free to play around with Tiny Planner, bearing in mind that it’s not a finished thing.

I should really put together a plan for finishing it. If only there were an app for that.

Hackfarm 2013

America

I’ve just come back from a multi-hop trip to the States, spanning three cities in just over two weeks.

It started with an all-too-brief trip to San Francisco for Science Hack Day, which—as I’ve already described—was excellent. It was a shame that it was such a flying visit and I didn’t get to see many people. But then again, I’ll be back in December for An Event Apart San Francisco.

It was An Event Apart that took me to my second destination: Austin, Texas. The conference was great, as always. But what was really nice was having some time afterwards to explore the town. Being in Austin when it’s not South by Southwest is an enjoyable experience that I can heartily recommend.

Christopher and Ari took me out to Lockhart to experience Smitty’s barbecue—a place with a convoluted family drama and really, really excellent smoked meat. I never really “got” Texas BBQ until now. I always thought I liked the sauce-based variety, but now I understand: if the BBQ is good enough, you don’t need the sauce.

For the rest of my stay, Sam was an excellent host, showing me around her town until it was time for me to take off for New York city.

To start with, I was in Manhattan. I was going to be speaking at Future Of Web Design right downtown on 42nd Street, and I showed up a few days early to rendezvous with Jessica and do some touristing.

We perfected the cheapskate’s guide to Manhattan, exploring the New York Public Library, having Tiff show us around the New York Times, and wrangling a tour of the MoMA from Ben Fino-Radin, who’s doing some fascinating work with the digital collection.

I gave my FOWD talk, which went fine once the technical glitches were sorted out (I went through three microphones in five minutes). The conference was in a cinema, which meant my slides were giganormous. That was nice, but the event had an odd kind of vibe. Maybe it was the venue, or maybe it was the two-track format …I really don’t like two-track conferences; I constantly feel like I’m missing out on something.

I skipped out on the second day of the conference to make my way over the bridge to Brooklyn in time for my third trip to Brooklyn Beta.

This year, they tried something quite different. For the first two days, there was a regular Brooklyn Beta: 300 lovely people gathered together at the Invisible Dog, ostensibly to listen to talks but in reality to hang out and chat. It was joyous.

Then on the third and final day, those 300 people decamped to Brooklyn’s Navy Yard to join a further 1000 people. There we heard more talks and had more chats.

Alas, the acoustics in the hangar-like space battled against the speakers. That’s why I made sure to grab a seat near the front for the afternoon talks. I found myself with a front-row seat for a series of startup stories and app tales. Then, without warning, the tech talks were replaced with stand-up comics. The comedians were very, very good (Reggie Watts!) …but I found it hard to pay attention because I realised I was in a living nightmare: somehow I was in the front-row seat of a stand-up comedy show. I spent the entire time thinking “Please don’t pick on me, please don’t pick on me, please don’t…” I couldn’t sneak out either, because that would’ve only drawn attention to myself.

But apart from confronting me with my worst fears, Brooklyn Beta was great …I’m just not sure it scales well from 300 to 1300.

And with that, my American sojourn came to an end. I’m glad that the stars aligned in such a way that I was able to hit up four events in my 16-day trip: Science Hack Day San Francisco, An Event Apart Austin, Future of Web Design, and Brooklyn Beta.

Radio Free Earth

Back at the first San Francisco Science Hack Day I wanted to do some kind of mashup involving the speed of light and the distance of stars:

I wanted to build a visualisation based on Matt’s brilliant light cone idea, but I found it far too daunting to try to find data in a usable format and come up with a way of drawing a customisable geocentric starmap of our corner of the galaxy. So I put that idea on the back burner…

At this year’s San Francisco Science Hack Day, I came back to that idea. I wanted some kind of mashup that demonstrated the connection between the time that light has travelled from distant stars, and the events that would have been happening on this planet at that moment. So, for example, a star would be labelled with “the battle of Hastings” or “the sack of Rome” or “Columbus’s voyage to America”. To do that, I’d need two datasets: the distances of stars, and the dates of historical events (leaving aside any Gregorian/Julian fuzziness).

For want of a better hack, Chloe agreed to help me out. We set to work finding a good dataset of stellar objects. It turned out that a lot of the best datasets from NASA were either about our local solar neighbourhood, or else really distant galaxies and stars that are emitting prehistoric light.

The best dataset we could find was the Near Star Catalogue from Uranometria but the most distant star in that collection was only 70 or 80 light years away. That meant that we could only mash it up with historical events from the twentieth century. We figured we could maybe choose important scientific dates from the past 70 or 80 years, but to be honest, we really weren’t feeling it.

We had reached this impasse when it was time for the Science Hack Day planetarium show. It was terrific: we were treated to a panoramic tour of space, beginning with low Earth orbit and expanding all the way out to the cosmic microwave background radiation. At one point, the presenter outlined the reach of Earth’s radiosphere. That’s the distance that ionosphere-penetrating radio and television signals from Earth, travelling at the speed of light, have reached. “It extends about 70 light years out”, said the presenter.

This was perfect! That was exactly the dataset of stars that we had. It was time for a pivot. Instead of the lofty goal of mapping historical events to the night sky, what if we tried to do something more trivial and fun? We could demonstrate how far classic television shows have travelled. Has Star Trek reached Altair? Is Sirius receiving I Love Lucy yet?

No, not TV shows …music! Now we were onto something. We would show how far the songs of planet Earth had travelled through space and which stars were currently receiving which hits.

Chloe remembered there being an API from Billboard, who have collected data on chart-topping songs since the 1940s. But that API appears to be gone, and the Echonest API doesn’t have chart dates. So instead, Chloe set to work screen-scraping Wikipedia for number one hits of the 40s, 50s, 60s, 70s …you get the picture. It was a lot of finding and replacing, but in the end we had a JSON file with every number one for the past 70 years.

Meanwhile, I was putting together the logic. Our list of stars had the distances in parsecs. So I needed to convert the date of a number one hit song into the number of parsecs that song had travelled, and then find the last star that it had passed.
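The arithmetic is simple enough: a song that was number one N years ago has travelled N light years, and one parsec is roughly 3.26 light years. Here’s a rough sketch of that lookup in JavaScript, purely for illustration (the data shapes are my assumptions):

// One parsec is about 3.26 light years.
var LIGHT_YEARS_PER_PARSEC = 3.26156;
var MILLISECONDS_PER_YEAR = 1000 * 60 * 60 * 24 * 365.25;

// Given the date a song hit number one, and a list of stars
// ({ name, distance }) sorted nearest-first with distances in parsecs,
// return the most distant star the signal has already passed.
function lastStarPassed(chartDate, stars) {
    var lightYears = (Date.now() - chartDate.getTime()) / MILLISECONDS_PER_YEAR;
    var parsecs = lightYears / LIGHT_YEARS_PER_PARSEC;
    var last = null;
    for (var i = 0; i < stars.length; i++) {
        if (stars[i].distance > parsecs) {
            break;
        }
        last = stars[i];
    }
    return last;
}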

We were tempted—for developer convenience—to just write all the logic in JavaScript, especially as our data was in JSON. But even though it was just a hack, I couldn’t bring myself to write something that relied on JavaScript to render the content. So I wrote some really crappy PHP instead.

By the end of the first day, the functionality was in place: you could enter a date, and find out what was number one on that date, and which star is just now receiving that song.

After the sleepover (more like a wakeover) in the aquarium, we started to style the interface. I say “we” …Chloe wrote the CSS while I made unhelpful remarks.

For the icing on the cake, Chloe used her previous experience with the Rdio API to add playback of short snippets of each song (when it’s available).

Here’s the (more or less) finished hack:

Radio Free Earth.

Basically, it’s a simple mashup of music and space …which is why I spent the whole time thinking “What would Matt do?”

Just keep hitting that button to hear a hit from planet Earth and see which lucky star is currently receiving the signal.*

Science!

*I know, I know: the inverse-square law means it’s practically impossible that the signal would be in any state to be received, but hey, it’s a hack.