Journal


Friday, June 23rd, 2017

One week to Patterns Day

Greetings!

Patterns Day is one week from today—Friday, June 30th. I’m really looking forward to seeing you in Brighton.

If you’re arriving by train, the venue is a short walk away from the train station. The Duke Of York’s Picture House is at Preston Circus. You’ll recognise the building by its distinctive pair of artificial can-can legs emerging from the roof.

http://tinyurl.com/patternsday

Registration starts at 9am. Show up with some ID, speak friend, and enter. Patterns Day is going to be a bit different to most conferences. Instead of getting a schwag bag and a name badge on a lanyard, you’re going to get a sticker to slap on yourself. The sticker identifies you as an attendee so please don’t lose it.

Once you’re registered, please help yourself to the free coffee, tea, and pastries. I’ll open up the show shortly before 10am with some introductory remarks, and then we’ll be all set for our first speaker at 10am. Here’s how the schedule is shaping up (but always subject to change):

https://adactio.com/journal/12409

There won’t be any conference WiFi. This is by design.

There’ll be a nice long lunch break from 12:30pm to 2pm. You’ll find plenty of tasty options in the neighbourhood. I’ve listed just a few on the Patterns Day website:

https://patternsday.com/#venue

There’ll be more coffee and tea throughout the day, and maybe a nice bag of popcorn in the afternoon.

We’ll finish up before 5pm, at which point we can collectively retire to a nearby pub to continue our discussions. Or we can head to the seafront to douse our melting brains in the English Channel. Let’s play it by ear.

I can’t wait to welcome you to Patterns Day, and I’m positively aquiver with anticipation of the talks we’re going to hear from the fantastic line-up of speakers: Laura, Ellen, Sareh, Rachel, Alice, Jina, Paul, and Alla.

See you soon!

—Jeremy

Tuesday, June 13th, 2017

The schedule for Patterns Day

There are only seventeen more days until Patterns Day. Squee!

I’ve got a plan now for how the day is going to run. Here’s the plan:

registration
opening remarks
Laura Elizabeth
Ellen deVries
break
Sareh Heidari
Rachel Andrew
lunch break
Alice Bartlett
Jina Anne
break
Paul Lloyd
Alla Kholmatova
closing remarks

There was a great response to my call for sponsors. Thanks to Amazon Video, we’ll have video recordings of all the talks. Thanks to Deliveroo, we’ll have coffee and tea throughout the day …and pastries in the morning! …and popcorn in the afternoon!!

You’re on your own for lunch. I’ve listed some options on the website, but I should add some more.

I have to say, looking at the schedule for the day, I’m very excited about this line-up. To say I’m looking forward to it would be quite the understatement. I can’t wait!

Friday, June 9th, 2017

Talking with the tall man about poetry

When I started making websites in the 1990s, I had plenty of help. The biggest help came from the ability to view source on any web page—the web was a teacher of itself. I also got plenty of help from people who generously shared their knowledge and experience. There was Jeffrey’s Ask Dr. Web, Steve Champeon’s WebDesign-L mailing list, and Jeff Veen’s articles on Webmonkey. Years later, I was able to meet those people. That was a real privilege.

I’ve known Jeff for over a decade now. He’s gone from Adaptive Path to Google to TypeKit to Adobe to True Ventures, and it’s always fascinating to catch up with him and get his perspective on life, the universe, and everything.

He started up a podcast called Presentable about a year ago. It’s worth having a dig through the archives to have a listen to his chats with people like Andy, Jason, Anna, and Jessica. I was honoured when Jeff asked me to be on the show.

We ended up having a really good chat. It’s out now as Episode 25: The Tenuous Resilience of the Open Web. I really enjoyed having a good ol’ natter, and I hope you might enjoy listening to it.

‘Sfunny, but I feel like a few unplanned themes came up a few times. We ended up talking about art, but also about the scientific aspects of design. I couldn’t help but be reminded of the title of Jeff’s classic book, The Art and Science of Web Design.

We also talked about my most recent book, Resilient Web Design, and that’s when I noticed another theme. When discussing the web-first nature of publishing the book, I described the web version as the canonical version and all the other formats as copies that were generated from that. That sounds a lot like how I describe the indie web—something else we discussed—where you have the canonical instance on your own site but share copies on social networks: Publish on Own Site, Syndicate Elsewhere—POSSE.

We also talked about technologies, and it’s entirely possible that we sound like two old codgers on the front porch haranguing those damn kids on the lawn. You can be the judge of that. The audio is available for your huffduffing pleasure. If you enjoy listening to it half as much as I enjoyed doing it, then I enjoyed it twice as much as you.

Monday, June 5th, 2017

eLife goes live

The World Wide Web was forged in the crucible of science. Tim Berners-Lee was working at CERN, the European Organization for Nuclear Research, a remarkable place where the pursuit of knowledge—rather than the pursuit of profit—is the driving force.

I often wonder whether the web as we know it—an open, decentralised system—could’ve been born anywhere else. These days it’s easy to focus on the success stories of the web in the worlds of commerce and social networking, but I still find there’s something about the web that really “clicks” with science (Zooniverse being a classic example).

At Clearleft we’ve been lucky enough to work on science-driven projects like the Wellcome Library and the Wellcome Trust. It’s incredibly rewarding to work on projects where the bottom line is measured in knowledge-sharing rather than moolah. So when we were approached by eLife to help them with an upcoming redesign, we jumped at the chance.

We usually help organisations through our expertise in user-centred design, but in this case the design and UX were already in hand. The challenge was in the implementation. The team at eLife knew that they wanted a modular pattern library to keep their front-end components documented and easily reusable. Given Clearleft’s extensive experience with building pattern libraries, this was a match made in heaven (or whatever the scientific non-theistic equivalent of heaven is).

A group of us travelled up from Brighton to Cambridge to kick things off with a workshop. Before diving into code, it was important to set out the aims for the redesign, and figure out how a pattern library could best support those aims.

Right away, I was struck by the great working relationship between design and front-end development within eLife—there was a great collaborative spirit to the endeavour.

Some goals for the redesign soon emerged:

  • Promote the HTML reading experience as a 1st choice for readers.
  • Align the online experience with the eLife visual identity.

That led to some design principles:

  • Focus on content not site furniture.
  • Remove visual clutter and provide no more than the user needs at any stage of the experience.
  • Aid discovery of value added content beyond the manuscript.

Those design principles then informed the front-end development process. Together we came up with a priority of concerns:

  1. Access
  2. Maintainability
  3. Performance
  4. Taking advantage of browser capabilities
  5. Visual appeal

It’s interesting that maintainability was such a high priority that it superseded even performance, but we also proposed a hypothesis at the same time:

Maintainability doesn’t negatively impact performance.

The combination of the design principles and priorities led us to formulate approaches that could be used throughout the project:

  • Progressive enhancement.
  • Small-screen first responsive images.
  • Only add libraries as needed.

Then we dived into the tech stack: build tools, version control approaches, and naming methodologies. BEM was the winner there.

None of those decisions were set in stone, but they really helped to build a solid foundation for the work ahead. Graham camped out in Cambridge for a while, embedding himself in the team there as they began the process of identifying, naming, and building the components.

The work continued after Clearleft’s involvement wrapped up, and I’m happy to say that it all paid off. The new eLife site has just gone live. It’s looking—and performing—beautifully.

What a great combination: the best of the web and the best of science!

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science.

Sunday, June 4th, 2017

Month maps

One of the topics I enjoy discussing at Indie Web Camps is how we can use design to display activity over time on personal websites. That’s how I ended up with sparklines on my site—it was a direct result of a discussion at Indie Web Camp Nuremberg a year ago:

During the discussion at Indie Web Camp, we started looking at how silos design their profile pages to see what we could learn from them. Looking at my Twitter profile, my Instagram profile, my Untappd profile, or just about any other profile, it’s a mixture of bio and stream, with the addition of stats showing activity on the site—signs of life.

Perhaps the most interesting visual example of my activity over time is on my Github profile. Halfway down the page there’s a calendar heatmap that uses colour to indicate the amount of activity. What I find interesting is that it’s using two axes of time over a year: weeks of the year across the X axis and days of the week down the Y axis.

I wanted to try something similar, but showing activity by time of day down the Y axis. A month of activity feels like the right range to display, so I set about adding a calendar heatmap to monthly archives. I already had the data I needed—timestamps of posts. That’s what I was already using to display sparklines. I wrote some code to loop over those timestamps and organise them by day and by hour. Then I spit out a table with days for the columns and clumps of hours for the rows.
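
Something along these lines would do the grouping. It’s an illustrative sketch in JavaScript rather than the actual code on my site, assuming an array of Unix timestamps called timestamps and clumps of three hours:

// Group post timestamps by day of the month (the table columns)
// and by three-hour clump (the table rows).
// "timestamps" is an assumed array of Unix timestamps.
const days = {};
timestamps.forEach( timestamp => {
  const posted = new Date(timestamp * 1000);
  const day = posted.getDate();                     // 1 to 31
  const clump = Math.floor(posted.getHours() / 3);  // 0 to 7
  days[day] = days[day] || {};
  days[day][clump] = (days[day][clump] || 0) + 1;
});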

Calendar heatmap on Dribbble

I’m using colour (well, different shades of grey) to indicate the relative amounts of activity, but I decided to use size as well. So it’s also a bubble chart.

It doesn’t work very elegantly on small screens: the table is clipped horizontally and can be swiped left and right. Ideally the visualisation itself would change to accommodate smaller screens.

Still, I kind of like the end result. Here’s last month’s activity on my site. Here’s the same time period ten years ago. I’ve also added month heatmaps to the monthly archives for my journal, links, and notes. They’re kind of like an expanded view of the sparklines that are shown with each month.

From one year ago, here’s the daily distribution of each kind of post: journal entries, links, and notes.

And then here’s the daily distribution of everything in that month all together.

I realise that the data being displayed is probably only of interest to me, but then, that’s one of the perks of having your own website—you can do whatever you feel like.

Aurora

I remember when I was first recommended to read Kim Stanley Robinson. I was chatting with Jon Tan about science fiction, and I was bemoaning the fact that dystopias seem to be the default setting. Asking “what’s the worst that could happen?” is the overriding preoccupation of most sci-fi. Black Mirror is the perfect example of this. Mind you, that’s probably why the ambiguous San Junipero is one of my favourites—utopia? dystopia? dystutopia? You decide.

Anyway, Jon told me I should check out Kim Stanley Robinson’s Three Californias; one book describes a dystopia, one book describes a utopia, and the other—his debut, The Wild Shore—is more ambiguous. I liked the sound of that, but I decided that if I were going to read Kim Stanley Robinson, I should start with his most famous work, the Mars trilogy.

So I read Red Mars. I liked it, but I found it tough going. It’s not exactly a light read. I still haven’t read Green Mars or Blue Mars, though I plan to. I can see why Red Mars is regarded as a classic of hard sci-fi, but it left me somewhat cold. Jessica read The Years of Rice and Salt and had a similar reaction—good premise, thoroughly researched, but tough going.

When I heard about 2312, I couldn’t resist its promise of a jaunt around the solar system. Again, I enjoyed it, but the plot—such as it was—didn’t grab me. I loved the ideas presented in the book. Heck, it inspired one of my Science Hack Day projects. Still, I found that its literary conceit wasn’t enough to carry the book—a character from Saturn who’s saturnian in nature meets a character from Mercury who’s mercurial in nature.

So I was kind of bracing myself for Aurora. Again, the subject matter really appealed to me. I’m a sucker for generation starships. Brian Aldiss’s Non-Stop was a fun read, although in typical Aldiss style, it was weird to the point of psychedelia (even if it looks positively tame next to the batshit crazy world of Hothouse). I was looking forward to reading Robinson’s hard science take on the space ark idea, but I was worried about how much of a slog the writing might be. I read some reviews and listened to some podcasts, and my heart sank when I heard about how the story is partly told by the ship’s AI, who is simultaneously trying to work out how to tell a story. It sounded just like one of those ideas that would be fine for a brief period, but which I could imagine Kim Stanley Robinson dragging out for hundreds of pages.

Imagine my surprise when Aurora turned out to be an absolute pleasure. Not only does it have the thoroughly-researched hard science angle of Robinson’s other books, it’s also a rip-roaring tale, in my opinion. I had read of misgivings with the structure of the book—complaints that the story climaxes before the book is halfway done—but I think that misses the point of the story. This is not your typical tale of colonisation. Far from it. Kim Stanley Robinson is quite open about the underlying idea here, that there are certain endeavours that are simply beyond our capacity.

I know that sounds like a very pessimistic view, but I found the book to be a real testament to human ingenuity. But it certainly ruffled quite a few feathers. Like I said, the default setting for most sci-fi is to go negative, but for a sci-fi writer to claim outright that something cannot be done is audacious, and flies in the face of sci-fi tradition.

Gregory Benford wrote a review over on one of my favourite blogs, Centauri Dreams. He takes Robinson to task for stacking the deck against the crew of the ship in Aurora—an inversion of the usual deus ex machina plot devices. I find that criticism puzzling when another review, also on Centauri Dreams, by Stephen Baxter, James Benford and Joseph Miller, takes the book to task for being scientifically naïve.

For me, Aurora was perfectly balanced. It simultaneously captured the wonder of scientific exploration and our own insignificance in the universe. Best of all, it featured central characters that I was utterly invested in—one human, and one artificial. Given my previous experiences with Kim Stanley Robinson books, that was perhaps its greatest achievement. Whereas I might have previously recommended something like 2312, I would have certainly caveated the recommendation. But I wholeheartedly recommend Aurora. It’s easily the best Kim Stanley Robinson book I’ve read so far, and one of the finest science fiction books of recent years. It makes a great companion piece to Neal Stephenson’s Seveneves—not only are they both dealing with space arks, they’ve also got some in-depth descriptions of angular momentum in action, and they’re both thoroughly enjoyable stories that stretch beyond a single human lifespan.

I’m looking forward to digging back through Kim Stanley Robinson’s back catalogue, and I’m very intrigued by his newest book, New York 2140. From listening to his Long Now talk at The Interval, it sounds like the book has as much to say about near-future economics as it does about climate change.

It’s ironic though. Kim Stanley Robinson was first recommended to me because he was one of the few sci-fi writers unafraid to depict a utopia. But his writing never clicked with me until I read Aurora, whose central message sounds like the ultimate downer …that some scientific achievements will forever remain out of reach for humanity.

Tuesday, May 30th, 2017

Checking in at Indie Web Camp Nuremberg

Once I finished my workshop on evaluating technology I stayed in Nuremberg for that weekend’s Indie Web Camp.

IndieWebCamp Nuremberg

Just as with Indie Web Camp Düsseldorf the weekend before, it was a fun two days—one day of discussions, followed by one day of making.

IndieWebCamp Nuremberg

I spent most of the second day playing around with a new service that Aaron created called OwnYourSwarm. It’s very similar to his other service, OwnYourGram. Whereas OwnYourGram is all about posting pictures from Instagram to your own site, OwnYourSwarm is all about posting Swarm check-ins to your own site.

Usually I prefer to publish on my own site and then push copies out to other services like Twitter, Flickr, etc. (POSSE—Publish on Own Site, Syndicate Elsewhere). In the case of Instagram, that’s impossible because of their ludicrously restrictive API, so I have to go the other way around (PESOS—Publish Elsewhere, Syndicate to Own Site). When it comes to check-ins, I could do it from my own site, but I’d have to create my own databases of places to check into. I don’t fancy that much (yet) so I’m using OwnYourSwarm to PESOS check-ins.

The great thing about OwnYourSwarm is that I didn’t have to do anything. I already had the building blocks in place.

First of all, I needed some way to authenticate as my website. IndieAuth takes care of all that. All I needed was rel="me" attributes pointing from my website to my profiles on Twitter, Flickr, Github, or any other services that provide OAuth. Then I can piggyback on their authentication flow (this is also how you sign in to the Indie Web wiki).

The other step is more involved. My site needs to provide an API endpoint so that services like OwnYourGram and OwnYourSwarm can post to it. That’s where micropub comes in. You can see the code for my minimal micropub endpoint if you like. If you want to test your own micropub endpoint, check out micropub.rocks—the companion to webmention.rocks.
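
To give a flavour of what a micropub endpoint has to do, here’s a minimal sketch (not my actual endpoint) written with Node and Express. The saveNote function is hypothetical, and verifying tokens against the tokens.indieauth.com token endpoint is just one option:

const express = require('express');
const fetch = require('node-fetch');

const app = express();
app.use(express.urlencoded({ extended: true }));

app.post('/micropub', async (req, res) => {
  // Micropub requests carry an IndieAuth bearer token
  const auth = req.get('Authorization');
  if (!auth) {
    return res.status(401).send('no token provided');
  }
  // Verify the token with an IndieAuth token endpoint
  const check = await fetch('https://tokens.indieauth.com/token', {
    headers: { 'Authorization': auth, 'Accept': 'application/json' }
  });
  if (!check.ok) {
    return res.status(403).send('invalid token');
  }
  // Handle the simplest case: a form-encoded h=entry with some content
  if (req.body.h === 'entry' && req.body.content) {
    const url = saveNote(req.body.content); // hypothetical storage function
    return res.status(201).set('Location', url).end();
  }
  return res.status(400).send('unsupported request');
});

app.listen(8080);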

Anyway, I already had IndieAuth and micropub set up on my site, so all I had to do was log in to OwnYourSwarm and I immediately started to get check-ins posted to my own site. They show up the same as any other note, so I decided to spend my time at Indie Web Camp Nuremberg making them look a bit different. I used Mapbox’s static map API to show an image of the location of the check-in. What’s really nice is that if I post a photo on Swarm, that gets posted to my own site too. I had fun playing around with the display of photo+map on my home page stream. I’ve made a page for keeping track of check-ins too.

All in all, a fun way to spend Indie Web Camp Nuremberg. But when it came time to demo, the one that really impressed me was Amber’s. She worked flat out on her site, getting to the second level on IndieWebify.me …including sending a webmention to my site!

IndieWebCamp Nuremberg

Wednesday, May 24th, 2017

A workshop on evaluating technology

After hacking away at Indie Web Camp Düsseldorf, I stuck around for Beyond Tellerrand. I ended up giving a talk, stepping in for Ellen. I was a poor substitute, but I hope I entertained the lovely audience for 45 minutes.

After Beyond Tellerrand, I got on a train to Nuremberg …along with a dozen of my peers who were also at the event.

All aboard the Indie Web Train from Düsseldorf to Nürnberg.

I arrived right in the middle of Web Week Nürnberg. Among the many events going on was a workshop that Joschi arranged for me to run called Evaluating Technology. The workshop version of my Beyond Tellerrand talk, basically.

This was an evolution of a workshop I ran a while back. I have to admit, I was a bit nervous going into this. I had no tangible material prepared; no slides, no handouts, nothing. Instead the workshop is a collaborative affair. In order for it to work, the attendees needed to jump in and co-create it with me. Luckily for me, I had a fantastic and enthusiastic group of people at my workshop.

Evaluating Technology

We began with a complete braindump. “Name some tools and technologies,” I said. “Just shout ‘em out.” Shout ‘em out, they did. I struggled to keep up just writing down everything they said. This was great!

Evaluating Technology

The next step was supposed to be dot-voting on which technologies to cover, but there were so many of them, we introduced an intermediate step: grouping the technologies together.

Evaluating Technology

Once the technologies were grouped into categories like build tools, browser APIs, methodologies etc., we voted on which categories to cover, only then diving deeper into specific technologies.

I proposed a number of questions to ask of each technology we covered. First of all, who benefits from the technology? Is it a tool for designers and developers, or is it a tool for the end user? Build tools, task runners, version control systems, text editors, transpilers, and pattern libraries all fall into the first category—they make life easier for the people making websites. Browser features generally fall into the second category—they improve the experience for the end user.

Looking at user-facing technologies, we asked: how well do they fail? In other words, can you add this technology as an extra layer of enhancement on top of what you’re building or do you have to make it a foundational layer that’s potentially a single point of failure?

For both classes of technologies, we asked the question: what are the assumptions? What fundamental philosophy has been baked into the technology?

Evaluating Technology

Now, the point of this workshop is not for me to answer those questions. I have a limited range of experience with the huge amount of web technologies out there. But collectively all of us attending the workshop will have a good range of experience and knowledge.

Interesting then that the technologies people voted for were:

  • service workers,
  • progressive web apps,
  • AMP,
  • web components,
  • pattern libraries and design systems.

Those are topics I actually do have some experience with. Lots of the attendees had heard of these things and were really interested in finding out more about them, but they hadn’t necessarily used them yet.

And so I ended up doing a lot of the talking …which wasn’t the plan at all! That was just the way things worked out. I was more than happy to share my opinions on those topics, but it was a bit of a shame that I ended up monopolising the discussion. I felt for everyone having to listen to me ramble on.

Still, by the end of the day we had covered quite a few topics. Better yet, we had a good framework for categorising and evaluating web technologies. The specific technologies we covered were interesting enough, but the general approach provided the lasting value.

All in all, a great day with a great group of people.

Evaluating Technology

I’m already looking forward to running this workshop again. If you think it would be valuable for your company, get in touch.

Tuesday, May 23rd, 2017

Going offline at Indie Web Camp Düsseldorf

I’ve just come back from a ten-day trip to Germany. The trip kicked off with Indie Web Camp Düsseldorf over the course of a weekend.

IndieWebCamp Düsseldorf 2017

Once again the wonderful people at Sipgate hosted us in their beautiful building, and once again Aaron and I helped facilitate the two days.

IndieWebCamp Düsseldorf 2017

Saturday was the BarCamp-like discussion day. Plenty of interesting topics were covered. I led a session on service workers, and that’s also what I decided to work on for the second day—that’s when the talking is done and we get down to making.

IndieWebCamp Düsseldorf 2017

I like what Ethan is doing on his offline page. He shows a list of pages that have been cached, but instead of just listing URLs, he shows a title and description for each page.

I’ve already got a separate cache for pages that gets added to as the user browses around my site. I needed to figure out a way to store the metadata for those pages so that I could then display it on the offline page. I came up with a workable solution, and interestingly, it involved no changes to the service worker script at all.

When you visit any blog post, I put metadata about the page into localStorage (after first checking that there’s an active service worker):

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": "A minority report on artificial intelligence",
      "description": "Revisiting Spielberg’s films after a decade and a half.",
      "published": "May 7th, 2017",
      "timestamp": "1494171049"
    };
    localStorage.setItem(
      window.location.href,
      JSON.stringify(data)
    );
  });
}

In my case, I’m outputting the metadata from the server, but you could just as easily grab some from the DOM like this:

var data = {
  "title": document.querySelector("title").innerText,
  "description": document.querySelector("meta[name='description']").getAttribute("content")
};

Meanwhile in my service worker, when you visit that same page, it gets added to a cache called “pages”. Both localStorage and the cache API are using URLs as keys. I take advantage of that on my offline page.

The nice thing about writing JavaScript on my offline page is that I know the page will only be seen by modern browsers that support service workers, so I can use all sorts of fancy features from ES6, or whatever we’re calling it now.

I start by looping through the keys of the “pages” cache (that’s right—the cache API isn’t just for service workers; you can access it from any script). Then I check to see if there is a corresponding localStorage key with the same string (a URL). If there is, I pull the metadata out of local storage and add it to an array called browsingHistory:

const browsingHistory = [];
caches.open('pages')
.then( cache => {
  cache.keys()
  .then(keys => {
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });

Then I sort the list of pages in reverse chronological order:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I loop through each page in the browsing history list and construct a link to each URL, complete with title and description:

let markup = '';
browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

Finally I dump the constructed markup into a waiting div in the page with an ID of “history”:

let container = document.getElementById('history');
container.insertAdjacentHTML('beforeend', markup);

All those steps need to be wrapped inside the then clause attached to caches.open("pages") because the cache API is asynchronous.
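
Putting those pieces together, the whole script on the offline page ends up looking something like this sketch (assuming the div with an ID of “history” is already in the markup):

caches.open('pages')
.then( cache => {
  cache.keys()
  .then( keys => {
    // Gather metadata for every cached page that has a localStorage entry
    const browsingHistory = [];
    keys.forEach( request => {
      let data = JSON.parse(localStorage.getItem(request.url));
      if (data) {
        data['url'] = request.url;
        browsingHistory.push(data);
      }
    });
    // Most recent pages first
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    // Build the list of links
    let markup = '';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    // Drop it into the waiting div
    let container = document.getElementById('history');
    container.insertAdjacentHTML('beforeend', markup);
  });
});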

There you have it. Now if you’re browsing adactio.com and your network connection drops (or my server goes offline), you can choose from a list of pages you’ve previously visited.

The current situation isn’t ideal though. I’ve got a clean-up operation in my service worker to limit the number of items stored in my “pages” cache. The cache never gets bigger than 35 items. But there’s no corresponding clean-up of metadata stored in localStorage. So there could be a lot more bits of metadata in local storage than there are pages in the cache. It’s not harmful, but it’s a bit wasteful.

I can’t do a clean-up of localStorage from my service worker because service workers can’t access localStorage. There’s a very good reason for that: the localStorage API is synchronous, and everything that happens in a service worker needs to be asynchronous.

Service workers can access indexedDB: it’s asynchronous. I could use indexedDB instead of localStorage, but I’m not a masochist. My best bet would be to use the localForage library, which wraps indexedDB in the simple syntax of localStorage.
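
Swapping it in would be a small change to the snippet that stores the metadata. Something like this sketch, assuming the localforage script has been loaded on the page:

if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  window.addEventListener('load', function() {
    var data = {
      "title": document.querySelector("title").innerText,
      "description": document.querySelector("meta[name='description']").getAttribute("content")
    };
    // localForage stores objects directly (no JSON.stringify needed),
    // returns a promise, and keeps the data in indexedDB where available
    localforage.setItem(window.location.href, data);
  });
}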

Maybe I’ll do that at the next Homebrew Website Club here in Brighton.

Monday, May 22nd, 2017

Sponsoring Patterns Day

It didn’t take long for Patterns Day to sell out (in the sense of the tickets all being sold; not in the sense of going mainstream and selling out to The Man).

I’m very pleased about the ticket situation. It certainly makes my life easier. Now I can concentrate on the logistics for the day, without having to worry about trying to flog tickets AKA marketing.

But I also feel bad. Some people who really, really wanted to come weren’t able to get tickets in time. This is usually because they work at a company where they have to get clearance for the time off, and for the cost of the ticket. By the time the word came down from on high that they’d got the green light, the tickets were already gone. That’s a real shame.

There is, however, a glimmer of hope on the horizon. There is one last chance to get tickets for Patterns Day, and that’s through sponsorship.

Here’s the deal: if I can get some things sponsored (like recordings of the talks, tea and coffee for the day, or an after-party), I can offer a few tickets in return. I can also offer your logo on the Patterns Day website, your logo on the slide between talks, and a shout-out on stage. But that’s pretty much it. I can’t offer a physical stand at the event—there just isn’t enough room. And I certainly can’t offer you a list of attendee details for your marketing list—that’s just wrong.

In order of priority, here’s what I would love to get sponsored, and here’s what I can offer in return:

  1. £2000: Sponsoring video recordings of the talks—4 tickets. This is probably the best marketing opportunity for your company; we can slap your logo at the start and end of each video when they go online.
  2. £2000: Sponsoring tea and coffee for attendees for the day—4 tickets. This is a fixed price, set by the venue.
  3. £2000+: Sponsoring an after-party near the conference—4 tickets. Ideally you’d take care of booking a venue for this, and you can go crazy decking it out with your branding. Two pubs right across from the conference venue have upstairs rooms you can book: The Joker, and The Hare And Hounds.

There you have it. There’s no room for negotiation, I’m afraid, but I think they’re pretty good deals. Remember, by sponsoring Patterns Day you’ll also have my undying gratitude, and the goodwill of all my peers coming to this event.

Reckon you can convince your marketing department? Drop me a line, let me know which sponsorship option you’d like to snap up, and those four tickets could be yours.