Friday, September 12th, 2014

Indie Web Camp UK 2014

Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.

It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.

The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.

So in the end, Indie Web Camp UK 2014 was quite a homogeneous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just what it is: an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.

But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format of one day of talks, followed by one day of hacking is spot on.

Indie Web Camp UK attendees

I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.

With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting set-up as I have.

By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.

There was lots of progress in other areas too, particularly with webactions. Some of that progress relates to what I’ve been saying about Web Components. More on that later…

Throw in some Transmat action, location-based hacks, and communication tools: all in all, a very productive weekend.

Thursday, September 11th, 2014

Web Components

The Extensible Web Summit is taking place in Berlin today because Web Components are that important. I wish I could be there, but I’ll make do with the live notes, the IRC channel, and the octothorpe tag.

I have conflicting feelings about Web Components. I am simultaneously very excited and very nervous. That’s probably a good sign.

Here’s what I wrote after the last TAG meetup in London:

This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.

And here’s what I wrote after the Edge conference:

If Web Components work out, and we get a kind of emergent semantics of UI widgets, it’ll be a huge leap forward for the web. But if we end up with a Tower of Babel, things could get very messy indeed. We’ll probably get both at once.

To explain…

The exciting thing about Web Components is that they give developers as much power as browser makers.

The frightening thing about Web Components is that they give developers as much power as browser makers.

When browser makers—and other contributors to web standards—team up to hammer out new features in HTML, they have design principles to guide them …at least in theory. First and foremost—because this is the web, not some fly-by-night “platform”—is the issue of compatibility:

Support existing content

Degrade gracefully

You can see those principles at work with newly-minted elements like canvas, audio, and video, where fallback content can be placed between the opening and closing tags so that older user agents aren’t left high and dry (which, in turn, encourages developers to start using these features long before they’re universally supported).

You can see those principles at work in the design of datalist.

You can see those principles at work in the design of new form features which make use of the fact that browsers treat unknown input types as type="text" (again, encouraging developers to start using the new input long before they’re supported in every browser).
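To illustrate the pattern (the file names and field names here are just examples):

```html
<!-- Fallback content between the tags: older browsers that don't
     understand the video element render the link instead. -->
<video src="talk.webm" controls>
    <a href="talk.webm">Download the video</a>
</video>

<!-- An unknown input type is treated as type="text", so the form
     still works everywhere while newer browsers get a date picker. -->
<label for="dob">Date of birth</label>
<input type="date" id="dob" name="dob">
```

In both cases the new feature layers on top of something that already works, rather than replacing it.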

When developers are creating new Web Components, they could apply that same amount of thought and care; Chris Scott has demonstrated just such a pattern. Switching to Web Components does not mean abandoning progressive enhancement. If anything they provide the opportunity to create whole new levels of experience.

Web developers could ensure that their Web Components degrade gracefully in older browsers that don’t support Web Components (and no, “just polyfill it” is not a sustainable solution) or, for that matter, situations where JavaScript—for whatever reason—is not available.
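One way that could look in markup (the element name here is invented purely for illustration): the custom element wraps standard, working HTML rather than replacing it, so if the defining script never loads—or the browser doesn’t support Web Components at all—the content inside still does its job.

```html
<!-- A hypothetical custom element enhancing a plain link.
     Without JavaScript or Web Components support, the browser
     ignores the unknown wrapper and renders the link as usual. -->
<fancy-share-button>
    <a href="/share">Share this</a>
</fancy-share-button>
```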

Web developers could ensure that their Web Components are accessible, using appropriate ARIA properties.

But I fear that Sturgeon’s Law is going to dominate Web Components. The comparison that’s often cited for Web Components is the creation of jQuery plug-ins. And let’s face it, 90% of jQuery plug-ins are crap.

This wouldn’t matter so much if developers were only shooting themselves in the foot, but because of the wonderful spirit of sharing on the web, we might well end up shooting others in the foot too:

  1. I make something (to solve a problem).
  2. I’m excited about it.
  3. I share it.
  4. Others copy and paste what I’ve made.

Most of the time, that’s absolutely fantastic. But if the copying and pasting happens without critical appraisal, a lot of questionable decisions can get propagated very quickly.

To give you an example…

When Apple introduced the iPhone, it provided a mechanism to specify that a web page shouldn’t be displayed in a zoomed-out view. That mechanism, which Apple pulled out of their ass without going through any kind of standardisation process, was to use the meta element with a name of “viewport”:

<meta name="viewport" content="...">

The content attribute of a meta element takes a comma-separated list of values (think of name="keywords": you provide a comma-separated list of keywords). But in an early tutorial about the viewport value, code was provided which showed values separated with semicolons (like CSS declarations). People copied and pasted that code (which actually did work in Mobile Safari) and so every browser must support that usage:

Many other mobile browsers now support this tag, although it is not part of any web standard. Apple’s documentation does a good job explaining how web developers can use this tag, but we had to do some detective work to figure out exactly how to implement it in Fennec. For example, Safari’s documentation says the content is a “comma-delimited list,” but existing browsers and web pages use any mix of commas, semicolons, and spaces as separators.
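To make the contrast concrete, here are the two syntaxes side by side (using values from Apple’s documentation):

```html
<!-- The documented, comma-separated syntax: -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- The copied-and-pasted variant with semicolons, which browsers
     ended up having to tolerate anyway: -->
<meta name="viewport" content="width=device-width; initial-scale=1">
```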

Anyway, that’s just one illustration of how code gets shared, copied and pasted. It’s especially crucial during the introduction of a new technology to try to make sure that the code getting passed around is of a high quality.

I feel kind of bad saying this because the introductory phase of any new technology should be a time to say “Hey, go crazy! Try stuff out! See what works and what doesn’t!” but because Web Components are so powerful I think that mindset could end up doing a lot of damage.

Web developers have been given powerful features in the past. Vendor prefixes in CSS were a powerful feature that allowed browsers to push the boundaries of CSS without creating a Tower of Babel of proprietary properties. But because developers just copied and pasted code, browser makers are now having to support prefixes that were originally scoped to different rendering engines. That’s not the fault of the browser makers. That’s the fault of web developers.

With Web Components, we are being given a lot of rope. We can either hang ourselves with it, or we can make awesome …rope …structures …out of rope this analogy really isn’t working.

I’m not suggesting we have some kind of central authority that gets to sit in judgement on which Web Components pass muster (although Addy’s FIRST principles are a great starting point). Instead I think a web of trust will emerge.

If I see a Web Component published by somebody at Paciello Group, I can be pretty sure that it will be accessible. Likewise, if Christian publishes a Web Component, it’s a good bet that it will use progressive enhancement. And if any of the superhumans at Filament Group share a Web Component, it’s bound to be accessible, performant, and well thought-out.

Because—as is so often the case on the web—it’s not really about technologies at all. It’s about people.

And it’s precisely because it’s about people that I’m so excited about Web Components …and simultaneously so nervous about Web Components.

Monday, September 8th, 2014

dConstruct 2014

dConstruct is all done for another year. Every year I feel sort of dazed in the few days after the conference—I spend so much time and energy preparing for this event looming in my future, that it always feels surreal when it’s suddenly in the past.

But this year I feel particularly dazed. A little numb. Slightly shellshocked even.

This year’s dConstruct was …heavy. Sure, there were some laughs (belly laughs, even) but overall it was a more serious event than previous years. The word that I heard the most from people afterwards was “important”. It was an important event.

Here’s the thing: if I’m going to organise a conference in 2014 and give it the theme of “Living With The Network”, and then invite the most thoughtful, informed, eloquent speakers I can think of …well, I knew it wasn’t going to be rainbows and unicorns.

If you were there, you know what I mean. If you weren’t there, it probably sounds like it wasn’t much fun. To be honest, “fun” wasn’t the highest thing on the agenda this year. But that feels right. And even though it wasn’t a laugh-fest, it was immensely enjoyable …if, like me, you enjoy having your brain slapped around.

I’m going to need some time to process and unpack everything that was squeezed into the day. Fortunately—thanks to Drew’s typical Herculean efforts—I can do that by listening to the audio, which is already available!

Slap the RSS feed in your generic MP3 listening device of choice and soak up the tsunami of thoughts, ideas, and provocations that the speakers delivered.

Oh boy, did the speakers ever deliver!

Photos from dConstruct: Warren Ellis, Georgina Voss, Clare Reddington, Aaron Straup Cope, Brian Suda, Mandy Brown, Anab Jain, Tom Scott, and Cory Doctorow.

Listen, it’s very nice that people come along to dConstruct each year and settle into the Brighton Dome to listen to these talks, but the harsh truth is that I didn’t choose the speakers for anyone else but myself. I know that’s very selfish, but it’s true. By lucky coincidence, the speakers I want to see turn out to deliver the best damn talks on the planet.

That said, as impressed as I was by the speakers, I was equally impressed by the audience. They were not spoon-fed. They had to contribute their time, attention, and grey matter to “get” those talks. And they did. For that, I am immensely grateful. Thank you.

I’m not going to go through all the talks one by one. I couldn’t do them justice. What was wonderful was to see the emerging themes, ideas, and references that crossed over from speaker to speaker: thoughts on history, responsibility, power, control, and the future.

And yes, there was definitely a grim undercurrent to some of those ideas about the future. But there was also hope. More than one speaker pointed out that the future is ours to write. And the emphasis on history highlighted that our present moment in time—and our future trajectory—is all part of an ongoing amazing collective narrative.

But it’s precisely because the future is ours to write that this year’s dConstruct hammered home our collective responsibility. This year’s dConstruct was a grown-up, necessarily serious event that shined a light on our current point in history …and maybe, just maybe, provided some potential paths for the future.

Thursday, September 4th, 2014

This week in Brighton

This is my favourite week of the year. It’s the week when Brighton bursts into life as its month-long Digital Festival kicks off.

Already this week, we’ve had the Dots conference and three days of Reasons To Be Creative, where designers and makers show their work. And this afternoon Lighthouse are running their annual Improving Reality event.

But the best is yet to come. Tomorrow’s the big day: dConstruct 2014. I’ve been preparing for this day for so long now, it’s going to be very weird when it’s over. I must remember to sit back, relax and enjoy the day. I remember how fast the day whizzed by last year. I suspect that tomorrow’s proceedings might display equal levels of time dilation—I’m excited to see every single talk.

Even when dConstruct is done, the Brighton festivities will continue. I’ll be at Indie Web Camp here at 68 Middle Street on Saturday and Sunday. Also on Saturday, there’s the brilliant Maker Faire, and when the sun goes down, Brighton will be treated to Seb’s latest project which features frickin’ lasers!

This is my favourite week of the year.

Friday, August 22nd, 2014

Georgina Voss at dConstruct

It’s exactly two weeks until dConstruct. I AM EXCITE!!!11ELEVEN!! If you’ve already got your ticket: excellent! If not, you can still get one. It’s not too late.

There is a change to the advertised line-up…

Alas, Jen can no longer make it to Brighton. Circumstances have conspired to make trans-atlantic travel an impossibility. It’s a real shame because I was really looking forward to her talk, but these things happen (and she’s gutted too: she was really looking forward to being in Brighton for this year’s dConstruct).

But never fear. We’ve swapped out one fantastic talk for another fantastic talk. Brighton’s own Georgina Voss has very kindly stepped into the breach. She’s going to knock your socks off with her talk, Tethering the Hovercraft:

A careen through grassroots innovation, speculative design, supply chains and sexual healthcare provision, lashing down over-caffeinated flailing into the grit of socio-technical systems.

Awwww yeah!

I had the chance to see Georgina speak a few months back at Lighthouse Arts and it was terrific. She is the perfect fit for this year’s dConstruct—she really is living with the network.

It’s a shame that Jen can’t join us for this year’s dConstruct but, my goodness, what a great day it’s going to be—now with added Vossomeness!

Wednesday, August 20th, 2014

Security for all

Throughout the Brighton Digital Festival, Lighthouse Arts will be exhibiting a project from Julian Oliver and Danja Vasiliev called Newstweek. If you’re in town for dConstruct—and you should be—you ought to stop by and check it out.

It’s a mischievous little hardware hack intended for use in places with public WiFi. If you’ve got a Newstweek device, you can alter the content of web pages like, say, BBC News. Cheeky!

There’s one catch though. Newstweek works on http:// domains, not https://. This is exactly the scenario that Jake has been talking about:

SSL is also useful to ensure the data you’re receiving hasn’t been tampered with. It’s not just for user->server stuff

eg, when you visit http://www.theguardian.com/uk , you don’t really know it hasn’t been modified to tell a different story

There’s another good reason for switching to TLS. It would make life harder for GCHQ and the NSA—not impossible, but harder. It’s not a panacea, but it would help make our collectively-held network more secure, as per RFC 7258 from the Internet Engineering Task Force:

Pervasive monitoring is a technical attack that should be mitigated in the design of IETF protocols, where possible.

I’m all for using https:// instead of http:// but there’s a problem. It’s bloody difficult!

If you’re a sysadmin type that lives in the command line, then it’s probably not difficult at all. But for the rest of us mere mortals who just want to publish something on the web, it’s intimidatingly daunting.

Tim Bray says:

It’ll cost you <$100/yr plus a half-hour of server reconfiguration. I don’t see any excuse not to.

…but then, he also thought that anyone who can’t make a syndication feed that’s well-formed XML is an incompetent fool (whereas I ended up creating an entire service to save people from having to make RSS feeds by hand).

Google are now making SSL a ranking factor in their search results, which is their prerogative. If it results in worse search results, other search engines are available. But I don’t think it will have significant impact. Jake again:

if two pages have equal ranking except one is served securely, which do you think should appear first in results?

Ashe Dryden disagrees:

Google will be promoting SSL sites above those without, effectively doing the exact same thing we’re upset about the lack of net neutrality.

I don’t think that’s quite fair: if Google were an ISP slowing down http:// requests, that would be extremely worrying, but tweaking its already-opaque search algorithm isn’t quite the same.

Mind you, I do like this suggestion:

I think if Google is going to penalize you for not having SSL they should become a CA and issue free certs.

I’m more concerned by the discussions at Chrome and Mozilla about flagging up http:// connections as unsafe. While the approach is technically correct, I fear it could have the opposite of its intended effect. With so many sites still served over http://, users would be bombarded with constant messages of unsafe connections. Before long they would develop security blindness in much the same way that we’ve all developed banner-ad blindness.

My main issue—apart from the fact that I personally don’t have the necessary smarts to enable TLS—is related to what Ashe is concerned about:

Businesses and individuals who both know about and can afford to have SSL in place will be ranked above those who don’t/can’t.

I strongly believe that anyone should be able to publish on the web. That’s one of the reasons why I don’t share my fellow developers’ zeal for moving everything to JavaScript; I want anybody—not just programmers—to be able to share what they know. Hence my preference for simpler declarative languages like HTML and CSS (and my belief that they should remain simple and learnable).

It’s already too damn complex to register a domain and host a website. Adding one more roadblock isn’t going to help that situation. Just ask Drew and Rachel what it’s like trying to just make sure that their customers have a version of PHP from this decade.

I want a secure web. I’d really like the web to be https:// only. But until we get there, I really don’t like the thought of the web being divided into the haves and have-nots.

Still…

There is an enormous opportunity here, as John pointed out on a recent episode of The Web Ahead. Getting TLS set up is a pain point for a lot of people, not just me. Where there’s pain, there’s an opportunity to provide a service that removes the pain. Services like Squarespace are already taking the pain out of setting up a website. I’d like to see somebody provide a TLS valet service.

(And before you rush to tell me about the super-easy SSL-setup tutorial you know about, please stop and think about whether it’s actually more like this.)

I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.

For all of you budding entrepreneurs looking for the next big thing to “disrupt”, please consider making your money not from the gold rush itself, but from providing the shovels.

Wednesday, August 13th, 2014

Anab Jain at dConstruct

The countdown to dConstruct 2014 has well and truly begun. It’s just three and a half weeks away, and I am very excited.

I have some good news and bad news.

The bad news is that Leila Johnston can no longer make it—she has decided to cancel all her public speaking engagements to focus on the next Hack Circus event.

But the (very) good news is that Anab Jain will be speaking! Yay!

I had actually approached Anab earlier when I was still putting together the line-up for this year’s dConstruct, but it didn’t look like she could fit it into her schedule. Then as the line-up of speakers coalesced, it became clearer and clearer that she would be the perfect person to talk about Living With The Network and I was filled with regret.

Now that she has so graciously agreed to step in at such short notice, I couldn’t be happier. Seriously, I am so excited about the line-up that I’m like a kid counting down the days until Christmas.

There are still tickets available for dConstruct 2014. If you haven’t got yours yet, well, you should fix that. (Have I mentioned how excited I am about this year’s line-up? I’m quite, quite excited about this year’s line-up.)

If you’re the gambling kind, you can try your luck at winning a ticket to the conference, thanks to our lovely sponsors SiteGround. Fill in their short survey and you’re in with a chance.

Regardless of how you get hold of a ticket, get hold of a ticket. And I’ll see you at the magnificent Brighton Dome on Friday, September 5th for a day of superb brain-bending entertainment from Warren Ellis, Mandy Brown, Cory Doctorow, Clare Reddington, Tom Scott, Aaron Straup Cope, Jen Lowe, Brian Suda …and Anab Jain!

Tuesday, August 12th, 2014

Code refactoring for America

Here at Clearleft, we’ve been doing some extra work with Code for America following on from our initial deliverables. This makes me happy for a number of reasons:

  1. They’re a great client—really easy-going and fun to work with.
  2. We’ve got Anna back in the office and it’s always nice to have her around.
  3. We get to revisit the styleguide we provided, and test our assumptions.

That last one is important. When we provide a pattern library to a client, we hope that they’ve got everything they need. If we’ve done our job right, then they’ll be able to combine patterns in ways we haven’t foreseen to create entirely new page types.

For the most part, that’s been the case with Code for America. They have a solid set of patterns that are serving them well. But what’s been fascinating is to hear about what it’s like for the people using those patterns…

There’s been a welcome trend in recent years towards extremely robust, maintainable CSS. SMACSS, BEM, OOCSS and other methodologies might differ in their details, but their fundamental approach is pretty similar. The idea is that you apply a very specific class to every element you want to style:

<div class="thingy">
    <ul class="thingy-bit">
        <li class="thingy-bit-item"></li>
        <li class="thingy-bit-item"></li>
    </ul>
    <img class="thingy-wotsit" src="" alt="" />
</div>

That allows you to keep your CSS selectors very short, but very specific:

.thingy {}
.thingy-bit {}
.thingy-bit-item {}
.thingy-wotsit {}

There’s little or no nesting, and you only ever use class selectors. That keeps your CSS nice and clear, and you avoid specificity hell. The catch is that your HTML is necessarily more verbose: you need to explicitly add a class to whatever you want to style.

For most projects—particularly product work (think Twitter, Facebook, etc.)—that’s a completely acceptable trade-off. It’s usually the same developers editing the CSS and the HTML so there’s no problem moving complexity out of CSS and into the markup templates. Even if other people will be entering the actual content into the system, they’ll probably be doing that mediated through a Content Management System, rather than editing HTML directly.

So nine times out of ten, making the HTML more verbose is absolutely the right choice in order to make the CSS more manageable and maintainable. That’s the way we initially built the pattern library for Code for America.

Well, it turns out that the people using the markup patterns aren’t necessarily the same people who would be dealing with the CSS. Also, there isn’t necessarily a CMS involved. Instead, people (volunteers, employees, anyone really) create new pages by copying and pasting the patterns we’ve provided and then editing them.

By optimising on the CSS side of things, we’ve offloaded a lot of complexity onto their shoulders. While it’s fair enough to expect them to understand basic HTML, it’s hardly fair to expect them to learn a whole new vocabulary of thingy and thingy-wotsit class names just to get things to look the way they expect.

Here’s a markup pattern that makes more sense for the people actually dealing with the HTML:

<div class="thingy">
    <ul>
        <li></li>
        <li></li>
    </ul>
    <img src="" alt="" />
</div>

Much clearer. But now the CSS looks like this:

.thingy {}
.thingy ul {}
.thingy li {}
.thingy img {}

Actually it’s probably going to look more complicated than that: more nesting, more element selectors, more “defensive” rules trying to anticipate the kind of markup that might be used in a particular pattern.
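For instance, a hypothetical sketch of the sort of defensive rules that tend to creep in:

```css
/* Reset element defaults, since there's no class to hook onto: */
.thingy ul {
    margin: 0;
    padding: 0;
    list-style: none;
}

/* Direct-child combinators guard against a nested list inside
   the pattern accidentally picking up the same styles: */
.thingy > ul > li {
    display: inline-block;
}

/* Anticipate images of any size appearing in the pattern: */
.thingy img {
    max-width: 100%;
    border: 0;
}
```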

It feels really strange for Anna and me to work with these kinds of patterns. All of our experience screams “Don’t do that! Why would you do that?” …but in this case, it’s the right thing to do for the people building the actual website.

So please don’t interpret this as me saying “Hey, everyone, this is how you should write your CSS.” I’m not saying this is better or worse than adding lots of classes to your HTML. If anything, this illustrates that there is no one right way to do this.

It’s worth remembering why we’re aiming for maintainability in what we write. It’s not for any technical reason. It’s for people. If those people find it better to deal with simplified CSS and more complex HTML, then the complexity should be in the HTML. But if the priority for those people is to have simple HTML, then more complex CSS may be an acceptable price to pay.

In other words, it depends.

Sunday, August 10th, 2014

Responding

Last week I had a responsive-themed tour of London.

On Tuesday I went up to Chelsea to spend the day workshopping with some people at Education First. It all went rather splendidly, I’m happy to report.

It was an interesting place. First of all, there’s the office building itself. Once owned by News International, it has a nice balance between open-plan and grouped areas. Then there’s the people. Just 20% of them are native English speakers. It was really nice to be in such a diverse group.

The workshop attendees represented a good mix of skills too: UX, front-end development, and visual design were at the forefront, but project management and content writing were also represented. That made the exercises we did together very rewarding.

I was particularly happy that the workshop wasn’t just attended by developers or designers, seeing as one of the messages I was hammering home all day was that responsive web design affects everyone at every stage of a project:

Y’see, it’s my experience that the biggest challenges of responsive design (which, let’s face it, now means web design) are not technology problems. Sure, we’ve got some wicked problems when dealing with non-flexible media like bitmap images, which fight against the flexible nature of the web, but thanks to the work of some very smart and talented people, even those kinds of issues are manageable.

No, the biggest challenges, in my experience, are to do with people. Specifically, the way that people work together.

On Thursday evening, I reiterated that point at The Digital Pond event in Islington …leading at least one person in the audience to declare that they were having an existential crisis (not my intention, honest).

I also had the pleasure of hearing Sally give her take on responsive design. She was terrific at Responsive Day Out 2 and she was, of course, terrific here again. If you get the chance to see her speak, take it.

There should be videos from Digital Pond available at some point, so you’ll be able to catch up with our talks then.

Monday, August 4th, 2014

dConstruct 2014 schedule

I’ve published the schedule for this year’s dConstruct. Curating an event like this doesn’t stop when the speakers have been finalised. Figuring out the flow of the day is another aspect that I really wanted to get right. It’s like making a mixtape.

Anyway, here’s what I’ve got planned …but maybe I’ll add the “subject to change” caveat just in case I change my mind:

Registration
Warren Ellis
Jen Lowe
Break
Clare Reddington
Aaron Straup Cope
Lunch
Brian Suda
Mandy Brown
Leila Johnston
Break
Tom Scott
Cory Doctorow
After-party

Regardless of what order the talks end up in, I’m really excited about seeing every single one of them.

Warren’s talk is simply called “A Cunning Plan”:

Inventing the next twenty years, strategic foresight, fictional futurism and English rural magic: Warren Ellis attempts to convince you that they are all pretty much the same thing, and why it was very important that some people used to stalk around village hedgerows at night wearing iron goggles.

Jen’s is “Enigmas, not Explanations: a Speculative Nonfiction”:

A wander through indescribable projects, magical realisms, and the fantastical present. A speculation on resonances within the network and the good that can come from making questions without answers.

Clare will talk about “Memes for Cities”:

A giant water slide. A talking lamppost. A zombie chase game. These recent city interventions were enabled by networks of people, technology and infrastructure, making the world more playful and creating change. In this Playable City talk, Clare will take on the functional image of a future city, sharing how to design playful experiences that change our relationships with the places we live and work.

Aaron’s talk is intriguingly titled “Still Life with Emotional Contagion”.

I love where Brian is going with “Humans Are Only a Self-driving Car’s Way of Making Another Self-driving Car”:

Over 10,000 years ago we lived in balance with the network. Since then we’ve tried to control, rule and bend it to our whims. In all that time, we’ve never asked ourselves if we’re building something that controls us?

Mandy will be talking about “Hypertext as an Agent of Change”:

Mandy Brown contemplates how hypertext has changed us, and what change is yet to come.

Leila’s talk will be the autobiographical “Running Away with the Circus”:

Lessons of launching your own magazine and event series, how to make it work, what not to do, and how to keep the right attitude and get interesting stuff done against the odds.

Tom will take us on a journey to 2030:

Privacy’s dead. What happens next?

And finally, Cory will declare “Information Doesn’t Want to be Free”:

There are three iron laws of information age creativity, freedom and business, woven deep into the fabric of the Internet’s design, the functioning of markets, and the global system of regulation and trade agreements.

You can’t attain any kind of sustained commercial, creative success without understanding these laws — but more importantly, the future of freedom itself depends on getting them right.

They all sound bloody brilliant!

There are still plenty of tickets left so if you haven’t got your ticket to dConstruct yet (what’s wrong with you?), you can grab one now.