Tags: ssl

Monday, January 22nd, 2018

We need more phishing sites on HTTPS!

All the books, Montag.

If we want a 100% encrypted web then we need to encrypt all sites, regardless of whether or not you agree with what they do/say/sell/etc… 100% is 100% and it includes the ‘bad guys’ too.

Thursday, December 21st, 2017

Extended Validation is Broken

How a certificate with extended validation makes it easier to phish. But I think the title could be amended—here’s what’s really broken:

On Safari, the URL is completely hidden! This means the attacker does not even need to register a convincing phishing domain. They can register anything, and Safari will happily cover it with a nice green bar.

Monday, November 27th, 2017

SSL Issuer Popularity - NetTrack.info

This graph warms the cockles of my heart. It’s so nice to see a genuinely good project like Let’s Encrypt come in and upset the applecart of a sluggish monopolistic industry.

Monday, January 30th, 2017

Thursday, January 19th, 2017

Certified Malice – text/plain

Following on from that great post about the “line of death” in browsers, Eric Law looks at security and trust in a world where certificates are free and easily available …even to the bad guys.

Saturday, December 10th, 2016

Certbot renewals with Apache

I wrote a while back about switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean. In that post, I pointed to an example .conf file.

I’ve been having a few issues with my certificate renewals with Certbot (the artist formerly known as Let’s Encrypt). If I did a dry-run for renewing my certificates…

/etc/certbot-auto renew --dry-run

… I kept getting this message:

Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directives. Falling back to default vhost *:443…

It turns out that Certbot doesn’t like HTTP and HTTPS configurations being lumped into one .conf file. Instead it expects to see all the port 80 stuff in a domain.com.conf file, and the port 443 stuff in a domain.com-ssl.conf file.

So I’ve taken that original .conf file and split it up into two.

First I SSH’d into my server and went to the Apache directory where all these .conf files live:

cd /etc/apache2/sites-available

Then I copied the current (single) file to make the SSL version:

cp yourdomain.com.conf yourdomain.com-ssl.conf

Time to fire up one of those weird text editors to edit that newly-created file:

nano yourdomain.com-ssl.conf

I deleted everything related to port 80—all the stuff between (and including) the VirtualHost *:80 tags:

<VirtualHost *:80>
...
</VirtualHost>

Hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

Now I do the opposite for the original file:

nano yourdomain.com.conf

Delete everything related to VirtualHost *:443:

<VirtualHost *:443>
...
</VirtualHost>

Once again, I hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.
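
At this point the two files look something like this (I’ve elided the actual contents; the important thing is that the port 80 stuff and the port 443 stuff now live in separate files, each labelled with its own ServerName so Certbot can tell the vhosts apart):

yourdomain.com.conf:

<VirtualHost *:80>
    ServerName yourdomain.com
    ...
</VirtualHost>

yourdomain.com-ssl.conf:

<VirtualHost *:443>
    ServerName yourdomain.com
    ...
</VirtualHost>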

Now I need to tell Apache about the new .conf file:

a2ensite yourdomain.com-ssl.conf

I’m told that’s cool and all, but that I need to restart Apache for the changes to take effect:

service apache2 restart

Now when I test the certificate renewing process…

/etc/certbot-auto renew --dry-run

…everything goes according to plan.

Wednesday, November 30th, 2016

The Guardian has moved to https 🔒 | Info | The Guardian

Details of The Guardian’s switch to HTTPS.

Friday, November 25th, 2016

Hey designers, if you only know one thing about JavaScript, this is what I would recommend | CSS-Tricks

This is a really great short explanation by Chris. I think it shows that the real power of JavaScript in the browser isn’t so much the language itself, but the DOM—the glue that ties the JavaScript to the HTML.

It reminds me of the old jQuery philosophy: find something and do stuff to it.

Thursday, July 21st, 2016

HTTPS Adoption *doubled* this year

Slowly but surely the web is switching over to HTTPS. The past year shows a two to threefold increase.

Sunday, May 29th, 2016

Switching to HTTPS on Apache 2.4.7 on Ubuntu 14.04 on Digital Ocean

I’ve been updating my book sites over to HTTPS.

They’re all hosted on the same (virtual) box as adactio.com—Ubuntu 14.04 running Apache 2.4.7 on Digital Ocean. If you’ve got a similar configuration, this might be useful for you.

First off, I’m using Let’s Encrypt. Except I’m not. It’s called Certbot now (I’m not entirely sure why).

I installed the Let’s Encertbot client with this incantation (which, like everything else here, will need root-level access, so if none of these work, retry using sudo in front of the commands):

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

Seems like a good idea to put that certbot-auto thingy into a directory like /etc:

mv certbot-auto /etc

Rather than have Certbot generate conf files for me, I’m just going to have it generate the certificates. Here’s how I’d generate a certificate for yourdomain.com:

/etc/certbot-auto --apache certonly -d yourdomain.com

The first time you do this, it’ll need to fetch a bunch of dependencies and it’ll ask you for an email address for future reference (should anything ever go screwy). For subsequent domains, the process will be much quicker.

The result of this will be a bunch of generated certificates that live here:

  • /etc/letsencrypt/live/yourdomain.com/cert.pem
  • /etc/letsencrypt/live/yourdomain.com/chain.pem
  • /etc/letsencrypt/live/yourdomain.com/privkey.pem
  • /etc/letsencrypt/live/yourdomain.com/fullchain.pem

Now you’ll need to configure your Apache gubbins. Head on over to…

cd /etc/apache2/sites-available

If you only have one domain on your server, you can just edit default-ssl.conf. I prefer to have separate conf files for each domain.

Time to fire up an incomprehensible text editor.

nano yourdomain.com.conf

There’s a great SSL Configuration Generator from Mozilla to help you figure out what to put in this file. Following the suggested configuration for my server (assuming I want maximum backward-compatibility), here’s what I put in.
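
The exact file isn’t reproduced here, but roughly speaking it ends up looking something like this (yourdomain.com and /path/to/yourdomain.com are placeholders, the certificate paths are the ones generated earlier, and the protocol and cipher settings should be whatever the generator recommends for your version of Apache):

<VirtualHost *:80>
    ServerName yourdomain.com
    # Send plain HTTP traffic over to HTTPS
    Redirect permanent / https://yourdomain.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName yourdomain.com
    DocumentRoot /path/to/yourdomain.com

    SSLEngine on
    SSLCertificateFile      /etc/letsencrypt/live/yourdomain.com/cert.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/yourdomain.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/yourdomain.com/chain.pem

    # SSLCipherSuite: paste in the cipher list the Mozilla
    # SSL Configuration Generator recommends for Apache 2.4.7
    SSLProtocol all -SSLv3
    SSLHonorCipherOrder on
</VirtualHost>

As it turned out (see the Certbot renewals post above), Certbot’s renewal process later preferred the port 80 and port 443 vhosts split into separate files, but lumping them together in one file works fine for serving the site.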

Make sure you update the /path/to/yourdomain.com part—you probably want a directory somewhere in /var/www or wherever your website’s files are sitting.

To exit the infernal text editor, hit ctrl and o, press enter in response to the prompt, and then hit ctrl and x.

If the yourdomain.com.conf file didn’t previously exist, you’ll need to enable the configuration by running:

a2ensite yourdomain.com

Time to restart Apache. Fingers crossed…

service apache2 restart

If that worked, you should be able to go to https://yourdomain.com and see a lovely shiny padlock in the address bar.

Assuming that worked, everything is awesome! …for 90 days. After that, your certificates will expire and you’ll be left with a broken website.

Not to worry. You can update your certificates at any time. Test for yourself by doing a dry run:

/etc/certbot-auto renew --dry-run

You should see a message saying:

Processing /etc/letsencrypt/renewal/yourdomain.com.conf

And then, after a while:

** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded.

You could set yourself a calendar reminder to do the renewal (without the --dry-run bit) every few months. Or you could tell your server’s computer to do it by using a cron job. It’s not nearly as rude as it sounds.

You can fire up and edit your list of cron tasks with this command:

crontab -e

This tells the machine to run the renewal task at quarter past six every evening and log any results:

15 18 * * * /etc/certbot-auto renew --quiet >> /var/log/certbot-renew.log

(Don’t worry: it won’t actually generate new certificates unless the current ones are getting close to expiration.) Leave the crontab editor by doing the ctrl o, enter, ctrl x dance.

Hopefully, there’s nothing more for you to do. I say “hopefully” because I won’t know for sure myself for another 90 days, at which point I’ll find out whether anything’s on fire.

If you have other domains you want to secure, repeat the process by running:

/etc/certbot-auto --apache certonly -d yourotherdomain.com

And then creating/editing /etc/apache2/sites-available/yourotherdomain.com.conf accordingly.

I found a few other guides useful when I was going through this process, including one that tests your configuration and gives it a grade, which is good if you like the warm glow of accomplishment that comes with doing well.

For extra credit, you can run your site through securityheaders.io to harden your headers. Again, not as rude as it sounds.
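
Assuming mod_headers is enabled (a2enmod headers), a few of the headers that securityheaders.io looks for can be set from the same Apache vhost with something like this:

# Tell browsers to stick to HTTPS for the next year
# (be cautious: this commits you to keeping HTTPS working)
Header always set Strict-Transport-Security "max-age=31536000"
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "SAMEORIGIN"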

You know, I probably should have said this at the start of this post, but I should clarify that any advice I’ve given here should be taken with a huge pinch of salt—I have little to no idea what I’m doing. I’m not responsible for any flame-bursting-into that may occur. It’s probably a good idea to back everything up before even starting to do this.

Yeah, I definitely should’ve mentioned that at the start.

Saturday, May 14th, 2016

Certbot

For your information, the Let’s Encrypt client is now called Certbot for some reason.

Carry on.

Monday, April 25th, 2016

Adding HTTPS to your web site - Robert’s talk

Robert walks through the process he went through to get HTTPS up and running on his Media Temple site.

If you have any experience of switching to HTTPS, please, please share it.

Friday, April 1st, 2016

HTTPS is Hard – The Yell Blog

Finally! An article about moving to HTTPS that isn’t simply saying “Hey, it’s easy and everyone should do it!” This case study says “Hey, it’s hard …and everyone should do it.”

Thursday, February 4th, 2016

Generate Mozilla Security Recommended Web Server Configuration Files

This is useful if you’re making the switch to HTTPS: choose your web server software and version to generate a configuration file.

Friday, January 22nd, 2016

New – AWS Certificate Manager – Deploy SSL/TLS-Based Apps on AWS | AWS Official Blog

If you’re hosting with Amazon, you now get HTTPS for free.

Tuesday, January 5th, 2016

Installing Letsencrypt on Ubuntu 14.04 and nginx | gablaxian.com

If you’re planning the move to TLS and your server is on Digital Ocean running Nginx, Graham’s here to run you through the (surprisingly simple) process.

Sunday, December 6th, 2015

Taking Let’s Encrypt for a Spin - TimKadlec.com

Tim outlines the process for getting up and running with HTTPS using Let’s Encrypt. Looks like it’s pretty straightforward, which is very, very good news.

I’m using the Salter Cane site as a test ground for this. I was able to get everything installed fairly easily. The tricky thing will be having some kind of renewal reminder—the certificates expire after three months.

Still, all the signs are good that HTTPS is about to get a lot less painful.

Monday, September 14th, 2015

More Proof We Don’t Control Our Web Pages, From the Notebook of Aaron Gustafson

Aaron collects some recent examples that demonstrate

  1. why we should use HTTPS and
  2. why we should use progressive enhancement.

Monday, June 22nd, 2015

What a day out! What a lovely responsive day out!

The third and final Responsive Day Out is done and dusted. In short, it was fantastic. Every single talk was superb. Statistically that seems highly unlikely, but it’s true.

I was quite overcome by the outpouring of warmth and all the positive feedback I got from the attendees. That made me feel really good, if a little guilty. Guilty because the truth is that I don’t really consider the attendees when I’m putting the line-up together. Instead I take a much greedier approach: I ask “who do I want to hear speak?” Still, it’s nice to know that there’s so much overlap in our collective opinion.

Despite the overwhelmingly positive reaction to the day, I had a couple of complaints myself, and they’re both related to the venue. My issues were with:

  1. the seats and
  2. the temperature.

The tiered seating in the Corn Exchange is great for giving everyone in the audience a good view, but the seats are awfully close together. That leaves taller people with some sore knees.

And the problem with having a conference in the middle of June is that, if the weather is good—which I’m glad it was—the Corn Exchange can get awfully hot and sweaty in the latter half of the day.

Both those issues would be solved by using a more salubrious venue, like the main Brighton Dome itself, but then that would also mean a doubling of the cost per ticket (hence why dConstruct and Responsive Day Out are in different price ranges). And one of the big attractions of Responsive Day Out is its ludicrously cheap ticket price. That meant sacrificing a lot of comforts—I just wish that comfortable seats and air temperature weren’t amongst them.

Still. Listen to me moaning about the things I didn’t like when in fact the day was really, really wonderful.

Orde liveblogged every single talk and Hidde wrote an in-depth overview of the whole day. If you were there, I would love it if you would share your thoughts, preferably on your own website.

Guess what? The audio from all the talks is already online. As always, Drew did an amazing job. You can subscribe to the RSS feed in your podcatching software of choice. Videos will be available after a while, but for now you’ll have to make do with the audio.

Oh, and speaking of audio, if you liked the music that was playing in the breaks, here’s the playlist. My thanks to all the artists for licensing their work under a Creative Commons license so that I could dodge one more expense that would otherwise have to be passed on to the ticket price.

Now. The number one question that people were asking me at the pub afterwards was “why is this the last one?” I really should’ve addressed that during my closing remarks.

But here’s the thing: the first Responsive Day Out was intended as a one-off. So really the question should be: why were there three? To which I have no good answer other than to say it felt about right. With three of them, it gave just about everyone a chance to get to at least one. If you didn’t make it to any of the responsive days out, well …you’ve only got yourself to blame.

If we ended up having Responsive Day Out 7 or 8, then something would have gone horribly wrong with the world of web design and development. The truth is that responsive web design is just plain ol’ web design: it’s the new normal. I guess the term “responsive” makes for a nice hook to hang a day’s talks off, but the truth is that, even by the third event, the specific connections to responsive design were getting more tenuous. There was plenty about accessibility, progressive enhancement, and the latest CSS and JavaScript APIs: all those things are enormously valuable when it comes to responsive web design …because all of those things are enormously valuable when it comes to just plain ol’ web design.

In the end, I’m glad that I ended up doing three events. Now I can see the arc of all the events as one. Listening back to all the talks from all three years you can hear the trajectory from “ARGH! This responsive design stuff is really scary! How will we cope‽” to “Hey, this responsive design stuff is the way we do things now.” There are still many, many challenges of course, but the question is no longer if responsive design is the way to go. Instead we can talk about how we can help one another do it well.

At the end of the third and final Responsive Day Out, I thanked all the speakers from all three events. It’s quite a roll-call. And it was immensely gratifying to see so many of the names from previous years in the audience at the final event.

I am sincerely grateful to:

  • Sarah Parmenter,
  • David Bushell,
  • Tom Maslen,
  • Richard Rutter,
  • Josh Emerson,
  • Laura Kalbag,
  • Elliot Jay Stocks,
  • Anna Debenham,
  • Andy Hume,
  • Bruce Lawson,
  • Owen Gregory,
  • Paul Lloyd,
  • Mark Boulton,
  • Stephen Hay,
  • Sally Jenkinson,
  • Ida Aalen,
  • Rachel Andrew,
  • Dan Donald,
  • Inayaili de León Persson,
  • Oliver Reichenstein,
  • Kirsty Burgoine,
  • Stephanie Rieger,
  • Ethan Marcotte,
  • Alice Bartlett,
  • Rachel Shillcock,
  • Alla Kholmatova,
  • Peter Gasston,
  • Jason Grigsby,
  • Heydon Pickering,
  • Jake Archibald,
  • Ruth John,
  • Zoe Mickley Gillenwater,
  • Rosie Campbell,
  • Lyza Gardner, and
  • Aaron Gustafson.

Many thanks also to everyone who came along to the events, especially the hat-trickers who made it to all three.

I’ve organised a total of six conferences now and I’m extremely proud of all of them:

  1. dConstruct 2012: Playing With The Future,
  2. the first Responsive Day Out,
  3. dConstruct 2013: Communicating With Machines,
  4. Responsive Day Out 2: The Squishening,
  5. dConstruct 2014: Living With The Network, and
  6. Responsive Day Out 3: The Final Breakpoint.

…but they’ve also been a lot of work. dConstruct in particular took a lot out of me last year. That’s why I’m not involved with this year’s event—Andy has taken the reins instead. By comparison, Responsive Day Out is a much more low-key affair; not nearly as stressful to put together. Still, three in a row is plenty. It’s time to end it on a hell of a high note.

That’s not to say I won’t be organising some other event sometime in the future. Maybe I’ll even revive the format of Responsive Day Out—three back-to-back 20 minute talks makes for an unbeatable firehose of knowledge. But for now, I’m going to take a little break from event-organising.

Besides, it’s not as though Responsive Day Out is really gone. Its spirit lives on in its US equivalent, Responsive Field Day in Portland in September.

Friday, May 15th, 2015

This is for everyone with a certificate

Mozilla—like Google before them—have announced their plans for deprecating HTTP in favour of HTTPS. I’m all in favour of moving to HTTPS. I’ve done it myself here on adactio.com, on thesession.org, and on huffduffer.com. I have some concerns about the potential linkrot involved in the move to TLS everywhere—as outlined by Tim Berners-Lee—but still, anything that makes the work of GCHQ and the NSA more difficult is alright by me.

But I have a big, big problem with Mozilla’s plan to “encourage” the move to HTTPS:

Gradually phasing out access to browser features.

Requiring HTTPS for certain browser features makes total sense, given the security implications. Service Workers, for example, are quite correctly only available over HTTPS. Any API that has access to a device sensor—or that could be used for fingerprinting in any way—should only be available over HTTPS. In retrospect, Geolocation should have been HTTPS-only from the beginning.

But to deny access to APIs where there are no security concerns, where it is merely a stick to beat people with …that’s just wrong.

This is for everyone. Not just those smart enough to figure out how to add HTTPS to their site. And yes, I know, the theory is that it’s going to get easier and easier, but so far the steps towards making HTTPS easier are just vapourware. That makes Mozilla’s plan look like something drafted by underwear gnomes.

The issue here is timing. Let’s make HTTPS easy first. Then we can start to talk about ways of encouraging adoption. Hopefully we can figure out a way that doesn’t require Mozilla or Google as gatekeepers.

Sven Slootweg outlines the problems with Mozilla’s forced SSL. I highly recommend reading Yoav’s post on deprecating HTTP too. Ben Klemens has written about HTTPS: the end of an era …that era being the one in which anyone could make a website without having to ask permission from an app store, a certificate authority, or a browser manufacturer.

On the other hand, Eric Mill wrote We’re Deprecating HTTP And It’s Going To Be Okay. It makes for an extremely infuriating read because it outlines all the ways in which HTTPS is a good thing (all of which I agree with) without once addressing the issue at hand—a browser that deliberately cripples its feature set for political reasons.