How serving Ember apps from S3 and Cloudfront simplified dev environments, sped up builds and deploys, and made our production infrastructure smaller and more scalable
Gridium’s Tikkit application has three separate front-end Ember apps that all talk to a common API. Our application uses a microservices architecture; each of our apps runs in a Docker container. Initially, we set up our Ember apps the same way: an nginx-based Docker container that serves the app’s assets and proxies API calls to another containerized service.
While this setup was working fine, it was more complicated than it needed to be.
In our development environments, we needed to run two Docker containers for each front-end app. Our builds were taking up to 10 minutes to run tests, build the Ember app for production, build a Docker image, and upload it to Docker Hub. Deploying took several minutes, as it fetched the container image from Docker Hub and restarted one container at a time.
Ember apps are really just a collection of static files. Gridium servers don’t run our front-end apps; the user’s browser does. We don’t need to run, monitor, and scale services backed by hundreds of lines of configuration to serve static files; we just need to upload them to an S3 bucket and let Amazon do it.
Inspired by reading an awesome blog post by Kerry Gallagher, I set out to simplify our front-end setup by replacing Docker containers with S3 buckets that Amazon would manage and scale for us.
S3 bucket setup
The S3 bucket that serves the Ember application needs two sets of permissions: everyone can read the files, and a specific AWS user can upload and delete files (for deploys). I set up these permissions, then added a bucket policy to allow global read:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::tikkit-buildings/*" } ] }
I also added a CORS configuration to allow getting fonts via Cloudfront, Amazon’s global content delivery network:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
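The S3 console accepts that XML directly; doing the same from the CLI means expressing the rules as JSON (a sketch of the equivalent configuration):

# cors.json holds the same rules as the XML above, in the CLI's JSON format:
# {"CORSRules": [{"AllowedOrigins": ["*"], "AllowedMethods": ["GET", "HEAD"], "AllowedHeaders": ["*"]}]}
aws s3api put-bucket-cors --bucket tikkit-buildings --cors-configuration file://cors.json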
Finally, I enabled static website hosting with the index document set to index.html.
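That step can also be scripted (a sketch, assuming the bucket already exists):

# Enable static website hosting with index.html as the index document
aws s3 website s3://tikkit-buildings/ --index-document index.html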
Cloudfront setup
The static files that make up an Ember app are excellent candidates for caching. Once we deploy a set of files, the content of those files never changes and can be cached indefinitely.
Cloudfront is Amazon’s content delivery network. It caches content at servers throughout the world, so that (hopefully) users can get content fast from a server near them.
I created a Cloudfront distribution pointing to the S3 bucket described above, and made a few changes to the default options. Our application uses HTTPS, so I told Cloudfront to redirect HTTP traffic to HTTPS. I whitelisted the Origin header so that Cloudfront forwards it to the S3 bucket. I set object caching to use origin cache headers; we set long cache expirations for static assets as part of the deploy.
AWS is ready to serve cached content from a bucket; now I need to get the Ember app there.
Building the app to live in a bucket
By default, a production build of an Ember app adds an md5 checksum to the static asset filenames, and updates files to reference the new names. This means that these files can be cached indefinitely — they’ll get new filenames if their content changes.
In ember-cli-build.js, I added a fingerprint option to prefix fingerprinted filenames with the Cloudfront distribution set up previously:
fingerprint: { prepend: CLOUDFRONT_PREFIX }
Now, the app’s index.html will reference static assets with URLs that look like //d11x4o9j9cq6mf.cloudfront.net/assets/tenants-0bba37559567151d89bbc1bee6da0e54.js, and browsers will get cached versions from Cloudfront.
Another big performance gain comes from gzipping the asset files; our gzipped JavaScript and CSS are on average 18% of their original size. I used the ember-cli-gzip addon to gzip JavaScript and CSS files as part of the build process.
In a modern browser, requesting tenants.js will fetch and uncompress tenants.js.gz if tenants.js is not available. The ember-cli-gzip addon takes advantage of this and skips rewriting the filenames it modifies: the index.html file still references vendor-fingerprint.js, not vendor-fingerprint.js.gz. Cloudfront, on the other hand, only responds to requests that exactly match a filename available from its origin server. To work around this, I updated ember-cli-build.js to gzip the files without adding a .gz extension:
gzip: { appendSuffix: false }
This means vendor-fingerprint.js is actually a gzipped file, not a plain JavaScript file. The browser can handle this fine as long as the file comes with a Content-Encoding: gzip header; I add this metadata as part of the deploy.
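To spot-check this after a deploy, a quick HEAD request against a fingerprinted asset should show the header (a sketch; the URL reuses the example asset from earlier, so substitute a real filename):

# The asset should come back with Content-Encoding: gzip and a JavaScript content type
curl -sI https://d11x4o9j9cq6mf.cloudfront.net/assets/tenants-0bba37559567151d89bbc1bee6da0e54.js | grep -iE 'content-(encoding|type)'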
We use CircleCI to build our application on each push to GitHub. The build consists of these steps:
- check out code
- install dependencies
- run tests
- create a production build
- tar and gzip the dist directory containing the app’s assets
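In shell terms, the build boils down to something like this (a sketch, not our exact circle.yml; the dependency-install commands depend on the app):

# Install dependencies and run the test suite
npm install        # plus bower install, for apps that manage front-end deps with bower
ember test

# Build for production and package the dist directory as the build artifact
ember build --environment=production
tar -czf dist.tar.gz dist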
CircleCI saves the output as a build artifact, and makes it available through their REST API. Removing the steps that built and uploaded Docker containers brought our build times down from about 10 minutes to about 3 minutes.
Deployment
Deploying is simple: it’s just copying the build output to the bucket.
We deploy fresh versions of our applications through a Slack Hubot integration that talks to an internal ops API. The ops API has AWS keys that allow it to upload and delete content in the S3 bucket. Here’s what the deploy script does:
- get the latest successful build number from the CircleCI REST API
- download and untar the build’s distribution artifact containing the application files
- run aws s3 sync to copy non-gzipped assets and set cache headers (everything except css, js, and html)
- run aws s3 sync to copy css and js files, and set Content-Encoding: gzip and appropriate Content-Type headers
- run aws s3 sync to copy the index.html file, and set a no-cache header
- run aws s3 sync --delete to delete the old, no-longer-referenced asset files
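Concretely, the sync steps look roughly like this (a sketch rather than our exact deploy script; the cache lifetimes and include/exclude patterns here are illustrative):

# 1. Fingerprinted assets (images, fonts, etc.): cache for a long time
aws s3 sync dist/ s3://tikkit-buildings/ \
  --exclude "*.css" --exclude "*.js" --exclude "*.html" \
  --cache-control "max-age=31536000"

# 2. Gzipped js and css: mark the encoding so browsers decompress them
aws s3 sync dist/ s3://tikkit-buildings/ \
  --exclude "*" --include "*.js" \
  --content-encoding gzip --content-type "application/javascript" \
  --cache-control "max-age=31536000"
aws s3 sync dist/ s3://tikkit-buildings/ \
  --exclude "*" --include "*.css" \
  --content-encoding gzip --content-type "text/css" \
  --cache-control "max-age=31536000"

# 3. index.html last, uncached, so it always points at the newest assets
aws s3 sync dist/ s3://tikkit-buildings/ \
  --exclude "*" --include "index.html" \
  --cache-control "no-cache"

# 4. Remove assets that are no longer referenced
aws s3 sync dist/ s3://tikkit-buildings/ --delete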
Syncing the index.html last allows the site to continue to work during the deploy. Both the old and new asset files exist in the bucket, and the index.html tells the browser which files to use. Uploading a new (~4K) index.html file immediately updates the app. There’s no need to deploy to multiple instances or wait for containers to start up. The entire deploy takes less than a minute.
Talking to the API
Ready to flip the switch and point our DNS to the S3 bucket? Not quite. There’s more to the app than just static assets; it needs to talk to the API server to get its data.
Previously, the nginx-based Docker container for the app looked at the request URL and did one of three things:
- if it started with /api/, forward the request to the API service
- if it matched a static file, return that (for example, a css file)
- otherwise, return index.html and let Ember’s routing figure out what to do with it
One of the benefits of this setup is that both the Ember app and the API appear to be served from the same domain. An S3 bucket is great for serving static content, but doesn’t support conditional logic to serve some requests and forward others. I considered using full URLs to access the API, but it’s not ideal: browsers send an extra OPTIONS pre-flight request on most cross-domain API requests to make sure they’re allowed, and they don’t seem to want to cache these.
Another issue is that S3 and Cloudfront don’t know what to do with the virtual paths handled by Ember routing, such as /requests/123. It’s possible to configure the S3 bucket to redirect requests that don’t match an actual object to #/something. Then, S3 sees / and serves index.html, and Ember can find the path after the # and route it appropriately. This works, but the redirect is visible to the user, which is not a great experience.
Running an nginx container is still useful to solve these issues, but it’s not necessary to run one for each front-end application. I set up a single nginx instance and configured it to check the incoming URL and proxy requests either to the API service (for /api/* requests), or to the index.html in the app’s S3 bucket (for everything else).
Here’s a portion of the nginx.conf; there’s one server section like this for each front-end app:
server {
  listen 443;
  server_name *.tikkit.gridium.com;

  location /api {
    proxy_pass http://api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }

  # everything else to index.html in s3 bucket
  location /index.html {
    proxy_ignore_headers set-cookie;
    proxy_hide_header set-cookie;
    proxy_set_header cookie "";

    # avoid passing along amazon headers
    proxy_hide_header x-amz-delete-marker;
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-version-id;

    proxy_pass http://s3-us-west-1.amazonaws.com/tikkit-buildings/index.html;
  }
}
We only need to update or restart this nginx server when we add a new app, not every time we push new code.
Conclusion
When a user requests our app, they get an uncached copy of index.html from an S3 bucket. This file references static assets served from Cloudfront. Once the app starts, it makes API requests that appear to be from the same server, but are actually served from a separate API service running in a Docker container.
On our laptops, we use ember server to run a local server for development. On each push, we build and save a production distribution. To deploy, we sync updated files to an S3 bucket and we’re done.
Since we set this up, our development environments are simpler, and our builds and deploys are faster. Our users get the benefit of cached content, and we don’t need to run servers to serve static content.
Another benefit of hosting our app in an S3 bucket is that we can push different versions to different buckets. In a future post, I’ll write about how we set up a staging bucket to preview changes before we push to production.