1.4 Modular Granularity

We can apply modular design concepts at every level of a given system. If a project’s demands outgrow its initial scope, we should consider splitting it into several smaller projects, owned by smaller teams that are easier to manage. The same can be said of applications: when they become large or complex enough, we might want to split them into differentiated products.

When we want to make an application more maintainable, we should consider creating explicitly defined layers of code, so that we can grow each layer horizontally while preventing the complexity of those additions from spreading to other, unrelated layers. The same thought process can be applied to individual components: we can split one into two or more smaller components that are then tied together by yet another small component, which acts as a composition layer whose sole responsibility is knitting together the underlying components.
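
As a minimal sketch, consider a hypothetical user profile view split into two focused components, plus a third that acts purely as the composition layer. All of the names here (renderAvatar, renderBiography, renderProfile) are made up for illustration:

```ts
// Hypothetical example: two focused components and a composition layer.
interface User {
  name: string
  avatarUrl: string
  bio: string
}

// Each underlying component renders a single concern.
function renderAvatar(user: User): string {
  return `<img src="${user.avatarUrl}" alt="${user.name}" />`
}

function renderBiography(user: User): string {
  return `<p>${user.bio}</p>`
}

// The composition layer's sole responsibility is knitting the pieces together.
function renderProfile(user: User): string {
  return `<section>${renderAvatar(user)}${renderBiography(user)}</section>`
}
```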

At the module level, we should strive to keep functions simple and expressive, with descriptive names and few responsibilities. We might have a function dedicated exclusively to pulling together a group of tasks under a particular asynchronous flow, and separate functions for each task we need to perform within that control flow. The topmost flow-controlling function could be exposed as a public interface method for our module, but only the parameters it receives as inputs and the output it produces should be treated as the public interface. Everything else becomes an implementation detail and is, as such, to be considered swappable.
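
Here is one way that might look, sketched under assumed names (processOrder, fetchOrder, validateOrder, and persistOrder are all hypothetical): only the topmost flow-controlling function is exported, and its inputs and output form the module's public interface.

```ts
// Hypothetical module: only the topmost flow-controlling function is exported.
// Its inputs and output are the public interface; everything below it is an
// implementation detail we're free to swap out.
interface Order {
  id: string
  total: number
}

export async function processOrder(orderId: string): Promise<Order> {
  const order = await fetchOrder(orderId)
  const validated = validateOrder(order)
  await persistOrder(validated)
  return validated
}

// Internal tasks: each named step in the asynchronous flow above.
async function fetchOrder(orderId: string): Promise<Order> {
  // Stubbed for the sketch; a real module might hit a data store here.
  return { id: orderId, total: 100 }
}

function validateOrder(order: Order): Order {
  if (order.total < 0) {
    throw new Error(`invalid total for order ${order.id}`)
  }
  return order
}

async function persistOrder(order: Order): Promise<void> {
  // Stubbed for the sketch.
}
```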

The internal functions of a module won’t have as rigid an interface, either: as long as the public interface holds, we can change the implementation, including the interfaces of the functions that make up that implementation, however we want. This is not to say, however, that we should treat those interfaces any less deliberately. The key to proper modular design is having the utmost respect for all interfaces, and that includes the interfaces exposed by internal functions.
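
Continuing the hypothetical sketch above, we could later change fetchOrder to take a retry budget and read from a different data source, and consumers of processOrder would never notice, because the public interface is unchanged:

```ts
// Continuing the sketch: fetchOrder's internal signature changes, gaining a
// retry budget and a new (hypothetical) data source, while processOrder's
// public interface stays exactly the same.
async function fetchOrder(orderId: string, retries = 3): Promise<Order> {
  try {
    return await lookupOrder(orderId)
  } catch (error) {
    if (retries > 0) {
      return fetchOrder(orderId, retries - 1)
    }
    throw error
  }
}

// Stubbed replacement data source for the sketch.
async function lookupOrder(orderId: string): Promise<Order> {
  return { id: orderId, total: 100 }
}
```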

Within functions, we’ll also notice a need to componentize aspects of the implementation, giving those aspects names in the form of function calls, and deferring complexity that doesn’t need to be dealt with immediately in the main body of the function until later in the read-through of a given piece of code. We’re writing programs that are meant to be readable and writable by other humans, and even by ourselves in the future. Virtually everyone who has done any amount of programming has experienced the frustration of glancing at a piece of code they wrote a few months prior, only to realize, with a fresh pair of eyes, that the design they came up with back then wasn’t as solid as they originally intended.
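
As a small illustration, again with hypothetical names, we can give the date arithmetic a name in the form of a function call, so that the main body of isOverdue reads at a single level of abstraction and the details are deferred until later in the read-through:

```ts
// A sketch with hypothetical names: the date arithmetic is named and deferred.
interface Invoice {
  issuedOn: Date
  paidOn?: Date
}

// The main body reads at a single level of abstraction.
function isOverdue(invoice: Invoice, graceDays: number): boolean {
  if (invoice.paidOn) {
    return false
  }
  return daysSince(invoice.issuedOn) > graceDays
}

// The complexity is dealt with later in the read-through, where it belongs.
function daysSince(date: Date): number {
  const msPerDay = 24 * 60 * 60 * 1000
  return Math.floor((Date.now() - date.getTime()) / msPerDay)
}
```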

Remember, computer program development is largely a human and collaborative endeavor. We’re not optimizing for computers to run programs as fast as possible. If we were, we’d be writing binary or hard-coding logic into circuit boards. Instead, our focus is to empower an organization so that its developers can remain productive, able to quickly understand and even modify pieces of code they haven’t run across before. Working under the soft embrace of conventions and practices that place developers on an even keel closes that cycle, ensuring future development is consistent with how the application has taken shape up until the present.

Going back to performance: we should treat it as a feature, and for the most part we shouldn’t place a higher premium on it than on other features. Unless performance needs to be a defining feature of our system for business reasons, we shouldn’t worry about making the system run at top speed on all code paths. Doing so is bound to result in highly complex applications that are hard to maintain, debug, extend, and justify.

As developers, we often overdo architecture, too, and much of the reasoning about performance optimization applies here. Laying out an all-encompassing architecture that might save us trouble as we scale to billions of transactions per second can cost us considerable upfront time, and may lock us into a series of abstractions that will be hard to keep up with, for no foreseeable near-term gain. It’s a lot better to focus on problems we’re already running into, or will soon run into, than to plan for hockey-stick growth of infrastructure and throughput without any data to back up that anticipated growth.

When we don’t plan so far ahead, an interesting thing occurs: our systems grow more naturally, adapting to near-term needs and gradually progressing toward support for a larger application and a larger set of requirements. When that progression is gradual, we notice a corrective behavior in how abstractions are picked up or discarded as we grow. If we settle on abstractions too early, and they turn out to be the wrong ones, we pay dearly for the mistake. Bad abstractions force us to bend entire applications to their will, and once we realize an abstraction is bad and ought to be removed, we may be so heavily invested in it that pulling it out is costly. This, paired with the sunk cost fallacy, whereby we’re tempted to keep the abstraction just because we’ve spent a lot of time, sweat, and blood on it, can be very hazardous indeed.

We’ll devote an important part of this book to understanding how we can identify and leverage the right abstractions at the right time, so that the risk they incur is minimized.