6.4 Build, Release, Run

Build processes have a few different aspects to them. At the highest level, there’s the shared logic where we install and compile our assets so that they can be consumed by our runtime application. This can mean anything from installing system or application dependencies and copying files over to a different directory, to compiling files into a different language or bundling them together, among a multitude of other requirements your application might have.
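In a JavaScript application, these steps are often expressed as npm scripts. The fragment below is only a sketch: the script names and tool choices (babel, browserify) are assumptions illustrating the shape of a build pipeline, not a prescription.

```json
{
  "scripts": {
    "build": "npm run clean && npm run compile && npm run bundle",
    "clean": "rm -rf dist",
    "compile": "babel src --out-dir dist",
    "bundle": "browserify dist/index.js -o dist/bundle.js"
  }
}
```

Composing the build out of smaller scripts like this keeps each step independently runnable, which comes in handy when debugging a single stage of the pipeline.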

Having clearly defined and delineated build processes is key when it comes to successfully managing an application across development, staging, and production environments. Each of these commonplace environments, and other environments you might encounter, is used for a specific purpose and benefits from being geared towards that purpose.

For development, we focus on enhanced debugging facilities: using development versions of libraries, source maps, and verbose logging levels; custom ways of overriding behavior, so that we can easily mimic what the production environment would look like; and, where possible, a real-time debugging server that takes care of restarting our application when code changes, applying CSS changes without refreshing the page, and so on.
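One common way to drive that kind of environment-specific behavior in Node.js is to branch on the `NODE_ENV` environment variable. The snippet below is a minimal sketch, assuming `NODE_ENV` is how your application signals its environment; `shouldLog` and the level names are hypothetical.

```javascript
// Pick a logging threshold based on the environment: verbose in
// development, quiet in production. (Names are hypothetical.)
const levels = ['debug', 'info', 'warn', 'error']
const threshold = process.env.NODE_ENV === 'production' ? 'error' : 'debug'

function shouldLog(severity) {
  return levels.indexOf(severity) >= levels.indexOf(threshold)
}

function log(severity, message) {
  if (shouldLog(severity)) {
    console.log(`[${severity}] ${message}`)
  }
}

log('debug', 'only emitted outside of production')
log('error', 'always emitted')
```

The same branching pattern extends to choosing development builds of libraries or enabling source maps, keeping a single codebase that adapts to each environment.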

In staging, we want an environment that closely resembles production, so we’ll avoid most debugging features, but we might still want source maps and verbose logging to be able to trace bugs with ease. Our primary goal with staging environments is generally to weed out as many bugs as possible before the production push, and thus it is vital that these environments strike this middle ground between debugging affordance and production resemblance.

Production focuses more heavily on minification, optimizing images statically to reduce their byte size, and advanced techniques like route-based bundle splitting, where we only serve modules that are actually used by the pages visited by a user; tree shaking, where we statically analyze our module graph and remove functions that aren’t being used; critical CSS inlining, where we precompute the most frequently used CSS styles so that we can inline them in the page and defer the rest of the styles to an asynchronous model that has a quicker time to interactive; and security features, such as a hardened Content-Security-Policy that mitigates attack vectors like XSS or CSRF.
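As a small illustration of the CSP piece, we could serialize a map of directives into the `Content-Security-Policy` header value our server would send. The `buildCsp` helper below is hypothetical: a sketch of the header format, not a hardened policy.

```javascript
// Serialize a directive map into the "name source source; name source"
// format browsers expect in the Content-Security-Policy header.
function buildCsp(directives) {
  return Object.keys(directives)
    .map(name => `${name} ${directives[name].join(' ')}`)
    .join('; ')
}

const csp = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'"],
  'object-src': ["'none'"]
})
// Send `csp` as the Content-Security-Policy response header.
```

In an HTTP server you’d then attach this value to every response, for example via `res.setHeader('Content-Security-Policy', csp)` in Node.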

Testing also plays a significant role when it comes to processes around an application. Testing is typically done in two different stages. Locally, developers test before a build, making sure linters don’t produce any errors and that tests aren’t failing. Then, before merging code into the mainline repository, we often run tests in a continuous integration (CI) environment to ensure we don’t merge broken code into our application. When it comes to CI, we start off by building our application and then run tests against that build, making sure the compiled application is in order.

For these processes to be effective, they must be consistent. Intermittent test failures feel worse than not having tests for the particular part of our application we’re having trouble testing, because these failures affect every single test job. When tests fail in this way, we can no longer feel confident that a passing build means everything is in order, and this translates directly into decreased morale and increased frustration across the team as well. When an intermittent test failure is identified, the best course of action is to eliminate the intermittence as soon as possible, either by fixing the source of the intermittence, or by removing the test entirely. If the test is removed, make sure to file a ticket so that a well-functioning test is added later on. Intermittence in test failures can be a symptom of bad design, and in our quest to fix these failures we might resolve architecture issues along the way.
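As an example of how intermittence creeps in, and how a design fix removes it, consider a test that measures wall-clock time. The functions below are hypothetical sketches; the fix is to inject the clock, so that tests control time instead of depending on machine load.

```javascript
// Flaky: asserts an operation finishes within a fixed wall-clock
// budget, so the result depends on how loaded the machine is.
function flakyCheck(work) {
  const start = Date.now()
  work()
  return Date.now() - start < 10
}

// Deterministic: the clock is a parameter, so a test can substitute
// a fake clock and get the same result on every run.
function timedCheck(work, clock, budget) {
  const start = clock()
  work()
  return clock() - start < budget
}
```

A test can now pass `timedCheck` a fake clock that advances by a fixed amount per call, turning a timing-dependent assertion into a repeatable one, and the injectable clock often improves the design of the code under test as well.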

As we’ll extensively discuss in the fourth book in the Modular JavaScript series, there are numerous services that can aid with the CI process. Travis[3] offers a quick way to get started with integration testing by connecting to your project’s git repository and running a command of your choosing, where an exit code of 0 means the CI job passes and any other exit code means it failed. Codecov[4] can help out on the code coverage side, ensuring most code paths in our application logic are covered by test cases. Solutions like WebPageTest[5], PageSpeed[6], and Lighthouse[7] can be integrated into the CI process we run on a platform like Travis to ensure that changes to our web applications don’t have a negative impact on performance. Running these hooks on every commit and even in Pull Request branches can help keep bugs and regressions out of the mainline of your applications, and thus out of staging and production environments.

Note how up until this point we have focused on how we build and test our assets, but not how we deploy them. These two processes, build and deployment, are closely related but they shouldn’t be intertwined. A clearly isolated build process where we end up with a packaged application we can easily deploy, and a deployment process that takes care of the specifics regardless of whether you’re deploying to your own local environment, or to a hosted staging or production environment, means that for the most part we won’t need to worry about environments during our build processes or at runtime.
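A minimal sketch of that separation follows, with hypothetical function names and artifact paths: `build` emits one environment-agnostic artifact, and `deploy` pairs that artifact with a target environment, so neither step needs to know about the other’s internals.

```javascript
// Build once, producing an artifact that contains no environment
// specifics. (The path is hypothetical.)
function build() {
  return { artifact: 'dist/app.tgz' }
}

// Deploy pairs the artifact with a target; the same package can go
// to local, staging, or production environments unchanged.
function deploy({ artifact }, target) {
  return `deploying ${artifact} to ${target}`
}

const pkg = build()
deploy(pkg, 'staging')
deploy(pkg, 'production')
```

Because the artifact is built once and reused, what we verified in staging is byte-for-byte what ships to production.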