
In a previous blog article, I described how we deploy Ember apps to S3 buckets, and promised a follow-up article about how we use a staging bucket to preview changes before pushing to production. In this article, I’ll describe how we added shared development and staging buckets to our Ember deployment setup.

Our Ember app content lives in S3 buckets. We use nginx to proxy requests for index.html to the bucket, and /api/* requests to the backend API service. Our production environment looks like this:

[Diagram: the production setup, with a single S3 bucket behind nginx and the backend API]

Out of the box, Ember comes with three environments:

  • development: with source maps; not minified
  • test: with Testem config; not minified
  • production: minified, with fingerprinting
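
For reference, these are the environments the standard Ember CLI commands select (stock Ember CLI behavior, shown here only to ground the terminology):

  ember serve                           # development
  ember test                            # test
  ember build --environment=production  # production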

Those are all useful, but I want more:

A shared dev server. We call this dev, and it runs on servers in our AWS account. This is distinct from Ember’s development environment, which we run on our laptops when developing. The dev server runs against a dev API, with a limited set of test data. It gets deployed automatically after every successful build from our CI system. This is where we do the first test of production-like config. We run manual UI tests on this instance, since it’s easy to drop and re-create test data.

But how do we set this up? If we build and deploy to our dev servers with --environment=production, the production fingerprint configuration will serve assets from our production Cloudfront distribution. If we build and deploy with --environment=development, we won’t be able to test production-like configuration, such as whether the Content Security Policy works for serving fonts via Cloudfront.

A stage server. This runs against the production API. It lets us preview new UI changes with production data. For example, does this UI approach scale to a list of 1,000 requests? It’s also useful for isolating and verifying tricky-to-reproduce, data-dependent bugs.

Our production environment already knows how to direct index.html to the production S3 bucket and /api requests to our backend API service. All we need to do to add another content source is add a new server section to the nginx configuration. As in dev, we need a way to create a build that points to a different Cloudfront distribution and S3 bucket.

[Diagram: the same setup with a second, staging S3 bucket added behind nginx]

There’s some discussion about how to create a stage environment on Ember CLI issue #3176, but no real resolution. Stefan Penner suggests “shim your staging and stuff into what ember cli perceives as production.” Here’s how I did that.

First, in config/environment.js, add subsets to the production environment, selected by environment variables:

  if (environment === 'production') {
    if (process.env['BUILD'] === 'dev') {
      // dev content bucket, dev Cloudfront, and dev API
      ENV.S3_BUCKET_NAME = 'user_content_dev';
      ENV.GOOGLE_ANALYTICS_ENABLED = false;
      ENV.CLOUDFRONT_PREFIX = '//dev_cloudfront.cloudfront.net/';
    } else if (process.env['BUILD'] === 'stage') {
      // production content bucket, staging Cloudfront, and production API
      ENV.S3_BUCKET_NAME = 'user_content_production';
      ENV.GOOGLE_ANALYTICS_ENABLED = false;
      ENV.CLOUDFRONT_PREFIX = '//stage_cloudfront.cloudfront.net/';
    } else {
      // production bucket, production Cloudfront, and production API
      ENV.S3_BUCKET_NAME = 'user_content_production';
      ENV.GOOGLE_ANALYTICS_ENABLED = true;
      ENV.CLOUDFRONT_PREFIX = '//production_cloudfront.cloudfront.net/';
    }
  }
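
With this shim in place, the BUILD environment variable selects which subset of production configuration goes into the build; all three variants use --environment=production. For example:

  BUILD=dev ember build --environment=production    # dev bucket, dev Cloudfront, dev API
  BUILD=stage ember build --environment=production  # production bucket, stage Cloudfront
  ember build --environment=production              # production defaults (no BUILD set)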

Then, in ember-cli-build.js, get the settings from config/environment.js and use them to set the fingerprint prefix to the correct Cloudfront distribution:

var EmberApp = require('ember-cli/lib/broccoli/ember-app');
var EmberAppConfig = require('./config/environment.js');

// get Cloudfront prefix from build environment
var CLOUDFRONT_PREFIX = EmberAppConfig(EmberApp.env()).CLOUDFRONT_PREFIX;

module.exports = function(defaults) {
    var app = new EmberApp(defaults, {
        fingerprint: {
            prepend: CLOUDFRONT_PREFIX
        },
    });

    return app.toTree();
};
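
A quick, informal way to confirm the prefix took effect (not part of our tooling): after a stage build, the fingerprinted asset URLs written into dist/index.html should point at the stage Cloudfront distribution.

  grep -c 'stage_cloudfront.cloudfront.net' dist/index.html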

Recall that nginx proxies requests for index.html to an S3 bucket, and /api/* requests to the backend API service. To add the stage environment, I added a server section that matches the stage hostname and serves content from the staging S3 bucket:

  server {
    listen 80;
    server_name *.gridium-stage.com;

    # /api to api container
    location /api {
      proxy_pass http://api;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
    }

    # everything else to index.html in s3 bucket
    location /index.html {
      proxy_ignore_headers set-cookie;
      proxy_hide_header set-cookie;
      proxy_set_header cookie "";

      proxy_pass http://s3-us-west-1.amazonaws.com/staging-bucket/index.html;
    }

    location / {
      rewrite ^ /index.html;
    }
  }
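
As an informal smoke test (the hostname here is just an example matching the wildcard server_name, not our real one), you can check that the stage host serves index.html from the staging bucket and proxies /api to the backend:

  # index.html comes from the staging S3 bucket
  curl -sI http://app.gridium-stage.com/ | head -n 1

  # /api requests are proxied to the backend API
  curl -sI http://app.gridium-stage.com/api/ | head -n 1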

Those are all of the required configuration changes. How we deploy varies based on the destination environment:

For production, we use a Slack Hubot integration that talks to an internal ops API. The ops API gets the saved build artifact from our CI system, and syncs it to the production S3 bucket.

For dev and stage, I wrote a simple script that sets the BUILD environment variable to dev or stage based on the command-line argument, builds with --environment=production, and runs aws s3 sync to push the build to the appropriate S3 bucket.
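
Here is a minimal sketch of that script. The bucket names are assumptions (staging-bucket matches the nginx proxy_pass above; the dev bucket name is a placeholder), and the real script may differ:

  #!/bin/bash
  # deploy.sh: build a production-like bundle and sync it to the dev or stage bucket
  set -euo pipefail

  TARGET="${1:?usage: ./deploy.sh dev|stage}"

  case "$TARGET" in
    dev)   APP_BUCKET="dev-bucket" ;;      # placeholder name
    stage) APP_BUCKET="staging-bucket" ;;  # matches the nginx proxy_pass above
    *)     echo "usage: ./deploy.sh dev|stage" >&2; exit 1 ;;
  esac

  # Select the production-environment subset in config/environment.js
  export BUILD="$TARGET"

  # Build with production settings (minified, fingerprinted)
  ember build --environment=production

  # Push the build output to the target bucket
  aws s3 sync dist/ "s3://$APP_BUCKET/"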

As part of our CI build, we deploy every passing build to the shared dev bucket. Developers can deploy to stage directly from their laptops as needed by running ./deploy.sh stage from the Ember app directory.

While this is fast and convenient for pre-release demos, we don’t do this for dev or production for a few reasons. First, these builds should be reproducible: deploying from a laptop allows any code to be deployed, even if it’s not checked in, and if there’s a problem, it might be impossible for another developer to reproduce or debug. Second, we want to make sure we always run the tests, and that all of the tests pass, before deploying to production. Finally, deploying via Slack lets the whole team know that production code has changed.

Eventually, Ember CLI will likely support more build environments. Until it does, it’s pretty simple to add subsets to the production environment and create builds that are production-like but can run in other environments.

About Kimberly Nicholls

Kimberly Nicholls is a full stack engineer at Gridium who loves to make data useful. She also enjoys reading books and playing outside, especially in the water.
