HTTPS Auto-Promotion with Docker and NGINX

One of the promises of Docker is the ability to run your code everywhere just like it runs in production, including while you’re writing it. Differences between environments are meant to be controlled with configuration. This is a great idea.

Something else that’s a great idea is always serving your web apps over HTTPS. End-to-end encryption isn’t something you only do on login pages; it’s an essential first step to keeping your users’ data safe and secure. Current best practice is to always redirect from http://yourcompany.com/some/path to https://yourcompany.com/some/path automatically.

That’s a pretty easy couple of lines of NGINX configuration. But what if you have public-facing microservices, and they’re written in four different languages?
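For reference, a minimal sketch of those couple of lines (a catch-all server block; where exactly it lives in your config is up to you):

    server {
        listen 80 default_server;
        server_name _;

        # Send every request to the HTTPS version of the same URL
        return 301 https://$host$request_uri;
    }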

Redirecting to HTTPS is a cross-cutting concern. We’d like to be able to apply that policy (a) to every public-facing API, and (b) only in production – I want to be able to use straight HTTP when I’m developing the service on my laptop.

Here’s one way to solve this problem: run an NGINX container that listens on port 80 and whose only job is to redirect to the HTTPS version of the same URL. Docker makes it easy to write this service without worrying about how it’ll interact with any other sites configured under NGINX. Here’s what that looks like in production, in our case in AWS:

[Diagram: plain HTTP traffic on port 80, routed from the ELB to the redirector container]

Here traffic arrives on port 80 of the ELB, which means this was a plain HTTP request. We have our load balancer configured to send that traffic to one of the hosts in its instance list, on port 30100. Docker is configured on each of those hosts to send port-30100 traffic to port 80 on the HTTPS redirector container, which responds with a 301 pointing at the HTTPS version of the same URL. Here’s how the redirected HTTPS request gets routed:

[Diagram: HTTPS traffic on port 443, routed from the ELB to the application container]

Port 443 traffic is sent to instance port 30200, which is pinned using Docker to port 8000 on my application’s container. The load balancer terminates TLS and handles all the certificate stuff, and HTTPS redirection is factored out into its own nanoservice.
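Here’s a sketch of how those pieces could be wired together. The image names (https-redirector, my-app) and the config file name (redirect.conf) are made up for illustration; the ports match the setup above. The redirector is just stock NGINX with the redirect-only config baked in:

    # Dockerfile for the redirector container
    FROM nginx:alpine
    COPY redirect.conf /etc/nginx/conf.d/default.conf

On each host, Docker publishes the two instance ports the load balancer targets:

    # Instance port 30100 -> port 80 on the redirector
    docker run -d --name https-redirector -p 30100:80 https-redirector

    # Instance port 30200 -> port 8000 on the application
    docker run -d --name my-app -p 30200:8000 my-app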

And none of this makes development any harder. Here’s what it looks like on my laptop:

[Diagram: local development setup, plain HTTP on the laptop straight to the application]

HTTP on localhost port 8000 connects directly to my application’s code. My application is freed from worrying about whether its traffic is secure; we’ve ensured that in a different place altogether. And when I spin up another service using a new framework, I don’t have to go find the redirection logic and copy-paste it poorly; when we want to change the policy, it’s all in one place.
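On the laptop that can be as simple as publishing the application’s port directly (again, my-app is an illustrative image name), with no redirector anywhere in sight:

    # Plain HTTP, straight to the app
    docker run -d -p 8000:8000 my-app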

Sometimes it’s the simple things that bring a smile to your face.

About Ben Straub

Ben Straub is a lifelong developer, and enthusiast of the craft of making great software. He enjoys reading, taking his kids on bike rides, chocolate, dogs, those little notebooks you carry around with you, photography, a good weekend hack, traveling, writing, food, craftsmanship, a great pen, Markdown, music, movies, and talking to amazing people. He can frequently be found exploring his hometown of Portland with his wife and two kids.
