2.1 Modular Design Essentials

Modularity tackles the complexity problem in program design by opting for small modules, each with a clear-cut, well-documented, and well-tested API. Defining a precise API attacks interconnection complexity, while small modules make programs easier to understand and work with.

2.1.1 Single Responsibility Principle

The single responsibility principle (SRP) is perhaps the most widely agreed upon principle of successful modular application design. Components are said to follow SRP when they have a single, narrow objective.

Modules which follow SRP do not necessarily have to export a single function as their API. As long as the methods and properties we export from a component are related, we wouldn’t be breaking SRP.

When thinking in terms of SRP, it’s important to figure out what the responsibility is. Consider, as an example, a component used to send emails through the SMTP protocol. The fact that we chose to send emails using SMTP could be considered an implementation detail. If we later want the ability to render the HTML to be sent in those emails using a template and a model, would that also pertain to the email sending responsibility?

Imagine we developed email sending and templating in the same component. The two concerns would be tightly coupled. Furthermore, if we later wanted to switch from SMTP to the API offered by a transactional email provider, we'd have to be careful not to interfere with the templating capability that lies in the same module.

The following code snippet represents a tightly coupled piece of code where we mix templating, sanitization, email API client instantiation, and email sending.

  import insane from 'insane'
  import mailApi from 'mail-api'
  import { mailApiSecret } from './secrets'

  function sanitize(template, ...expressions) {
    return template.reduce((result, part, i) =>
      result + insane(expressions[i - 1]) + part
    )
  }

  export default function send(options, done) {
    const {
      to,
      subject,
      model: { title, body, tags }
    } = options
    const html = sanitize`
      <h1>${ title }</h1>
      <div>${ body }</div>
      <div>
        ${
          tags
            .map(tag => `<span>${ tag }</span>`)
            .join(` `)
        }
      </div>
    `
    const client = mailApi({ mailApiSecret })
    client.send({
      from: `hello@mjavascript.com`,
      to,
      subject,
      html
    }, done)
  }

It might be better to create a separate component that's in charge of rendering HTML based on a template and a model, instead of adding templating directly to the email sending component. We could then add a dependency on the email module so that we can send that HTML, or we could create a third module where we're only concerned with the wiring.

Provided its consumer-facing interface remained the same, an independent SMTP email component would be interchangeable with a component that sent emails some other way such as via an API, logging to a data store, or writing to standard output. In this scenario, the way in which emails are sent would be an implementation detail, while the interface becomes more rigid as it’s adopted by more modules. An inflexible interface gives us flexibility in how the task is performed, while allowing implementations to be replaced with ease according to the use case at hand.

The following example shows an email component that's only concerned with configuring the API client and adhering to a thoughtful interface, which receives the to recipient, the email subject, and its html body, and then sends the email. This component has the sole purpose of sending email.

  import mailApi from 'mail-api'
  import { mailApiSecret } from './secrets'

  export default function send(options, done) {
    const { to, subject, html } = options
    const client = mailApi({ mailApiSecret })
    client.send({
      from: `hello@mjavascript.com`,
      to,
      subject,
      html
    }, done)
  }

It wouldn’t be hard to create a drop-in replacement by developing a module which adheres to the same send API but sends email in a different way. The following example uses a different mechanism, where we simply log to the console. Even though it doesn’t actually send any emails, this component could be useful for debugging purposes.

  export default function send(options, done) {
    const { to, subject, html } = options
    console.log(`
      Sending email.
      To: ${ to }
      Subject: ${ subject }
      ${ html }`
    )
    done()
  }

By the same token, a templating component could be developed orthogonally, with an implementation that’s not directly tied into email sending. The following example is extracted from our original, coupled implementation, but only concerned with producing a piece of sanitized HTML using a template and the user-provided model.

  import insane from 'insane'

  function sanitize(template, ...expressions) {
    return template.reduce((result, part, i) =>
      result + insane(expressions[i - 1]) + part
    )
  }

  export default function compile(model) {
    const { title, body, tags } = model
    const html = sanitize`
      <h1>${ title }</h1>
      <div>${ body }</div>
      <div>
        ${
          tags
            .map(tag => `<span>${ tag }</span>`)
            .join(` `)
        }
      </div>
    `
    return html
  }

Slightly modifying the API shouldn’t be an issue, as long as it remains consistent across the components we want to make interchangeable. For instance, a different implementation could take a template identifier, in addition to the model object, so that the template itself is also decoupled from the compile function.
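
As a sketch of that alternative, the following hypothetical compile function looks up a template by its identifier before applying the model. The templates map and its article entry are made up for illustration, and sanitization is left out for brevity.

  const templates = {
    article: ({ title, body }) => `<h1>${ title }</h1><div>${ body }</div>`
  }

  export default function compile(templateId, model) {
    const template = templates[templateId]
    return template(model)
  }

A consumer would call compile('article', model), without ever needing to know how templates are stored or resolved.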

When we keep the API consistent across implementations[2], using the same signature across every module, it’s easy to swap out implementations depending on context such as the execution environment (development vs. staging vs. production) or any other dynamic context that we need to rely upon.
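
For instance, a small wiring module along the lines of the following sketch could pick an implementation based on the execution environment. The module paths are hypothetical, but each implementation is assumed to share the send(options, done) signature we've been using.

  import sendViaApi from './email/api-provider'
  import sendViaLog from './email/log-provider'

  // use the real provider in production, log everywhere else
  export default process.env.NODE_ENV === 'production'
    ? sendViaApi
    : sendViaLog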

As we mentioned earlier, a third module could plumb together different components which handle separate concerns, such as templating and email sending. The following example leverages the logging email provider and the static templating function to join both concerns together. Interestingly, this module doesn’t break SRP either, as its only concern is to plumb other modules together.

  // the email provider's default export is renamed to avoid
  // clashing with the send function exported below
  import sendEmail from './email/log-provider'
  import compile from './templating/static'

  export default function send(options, done) {
    const { to, subject, model } = options
    const html = compile(model)
    sendEmail({ to, subject, html }, done)
  }

We’ve been discussing API design in terms of responsibility, but something equally interesting is that we’ve hardly worried about the implementation of those interfaces. Is there merit to designing an interface before digging into its implementation?

2.1.2 API First

A module is only as good as its public interface. A poor implementation may hide behind an excellent interface. More importantly, a great interface means we can swap out a poor implementation as soon as we find time to introduce a better one. Since the API remains the same, we can decide whether to replace the existing implementation altogether or if both should co-exist while we upgrade consumers to use the newer one.

A flawed API is a lot harder to repair. There may be several implementations that follow the interface we intend to modify, meaning we'd have to change the API calls in each consumer whenever we want to make changes to the API itself. The number of API calls that would potentially have to adapt increases with time, entrenching the API as the project grows.

Mindful design of public interfaces is paramount to developing maintainable component systems. Well-designed interfaces can stand the test of time, accommodating new implementations that conform to that same interface. A properly designed interface should make the most basic and common use cases simple to access, while being flexible enough to support other use cases as they arise.

An interface often doesn't need to support multiple implementations, but we must nonetheless think in terms of the public API first. Abstracting the implementation is only a small part of the puzzle. The answer to API design lies in figuring out which properties and methods consumers will need, while keeping the interface as small as possible.

When we need to implement a new component, a good rule of thumb is drawing up the API calls we’d need to make against that new component. For instance, we might want a component to interact with the Elasticsearch REST API. Elasticsearch is a database engine with advanced search and analytics capabilities, where documents are stored in indices and arranged by type.

In the following piece of code, we're imagining an ./elasticsearch component that has a public createClient binding, which returns an object with a client#get method that returns a Promise. Note how detailed the query is, making up what could be a real-world keyword search for blog articles tagged modularity and javascript.

  import { createClient } from './elasticsearch'
  import { elasticsearchHost } from './secrets'

  const client = createClient({
    host: elasticsearchHost
  })
  client
    .get({
      index: `blog`,
      type: `articles`,
      body: {
        query: {
          match: {
            tags: [`modularity`, `javascript`]
          }
        }
      }
    })
    .then(response => {
      // …
    })

Using the createClient method we could create a client, establishing a connection to an Elasticsearch server. If the connection is dropped, the component we’re envisioning will seamlessly reconnect to the server, but on the consumer side we don’t necessarily want to worry about that.

Configuration options passed to createClient might tweak how aggressively the client attempts to reconnect. A backoff setting could toggle whether an exponential backoff mechanism should be used, where the client waits for increasing periods of time if it’s unable to establish a connection.
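
As a rough sketch, assuming a backoff option along those lines existed, the reconnection delay could be computed like this.

  function reconnectionDelay(attempt, { backoff = true } = {}) {
    if (!backoff) {
      return 1000 // retry at a fixed one-second interval
    }
    // wait 1s, 2s, 4s, 8s, … capping the delay at 30 seconds
    return Math.min(1000 * 2 ** attempt, 30000)
  }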

An optimistic setting that’s enabled by default could prevent queries from settling in rejection when a server connection isn’t established, by having them wait until a connection is established before they can be made.
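
A minimal sketch of that optimistic behavior could queue queries while disconnected and flush them once a connection is established. Everything below, including the execute stand-in, is illustrative rather than a real client implementation.

  let connected = false
  const pending = []

  // stand-in for the code that actually queries the server
  const execute = query => Promise.resolve({ query })

  function get(query) {
    if (connected) {
      return execute(query)
    }
    // park the query until the connection is ready
    return new Promise((resolve, reject) => {
      pending.push({ query, resolve, reject })
    })
  }

  function onConnected() {
    connected = true
    // flush every parked query in order
    pending.splice(0).forEach(({ query, resolve, reject }) =>
      execute(query).then(resolve, reject)
    )
  }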

Even though the only setting explicitly outlined in our imagined API usage example is host, it would be simple for the implementation to support new ones in its API without breaking backward compatibility.

The client#get method returns a promise that’ll settle with the results of asking Elasticsearch about the provided index, type, and query. When the query results in an HTTP error or an Elasticsearch error, the promise is rejected. To construct the endpoint we use the index, type, and the host that the client was created with. For the request payload, we use the body field, which follows the Elasticsearch Query DSL[3]. Adding more client methods, such as put and delete, would be trivial.
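
If we went ahead with this interface, the implementation of client#get could look roughly like the following sketch, which relies on the standard fetch API and leaves out reconnection logic and fine-grained error handling.

  export function createClient({ host }) {
    async function get({ index, type, body }) {
      // the endpoint is constructed from the host, index, and type
      const response = await fetch(`${ host }/${ index }/${ type }/_search`, {
        method: `POST`,
        headers: { 'Content-Type': `application/json` },
        body: JSON.stringify(body)
      })
      if (!response.ok) {
        throw new Error(`HTTP ${ response.status }`)
      }
      return response.json()
    }
    return { get }
  }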

Following an API-first methodology is crucial in understanding how the API might be used. By placing our foremost focus on the interface, we are purposely avoiding the implementation until there’s a clear idea of what interface the component should have. Then, once we have a desired interface in mind, we can begin implementing the component. Always write code against an interface.

Note how the focus is not only on what the example at hand addresses directly, but also on what it doesn't address: room for improvement, corner cases, how the API might change going forward, and whether the existing API can accommodate more uses without breaking backward compatibility.

2.1.3 Revealing Pattern

When everything in a component is made public, nothing can be considered an implementation detail, and thus making changes becomes hard. Prefixing properties with an underscore isn't enough to stop consumers from relying on them: a better approach is not to reveal private properties in the first place.

By exposing only what's meant to be used by external consumers, a component avoids a world of trouble. Consumers can't come to depend on undocumented touchpoints meant for internal use, however tempting, because those touchpoints aren't exposed in the first place. And component makers don't need to worry about consumers relying on touchpoints that were meant to stay internal when the time comes to change them.

Consider the following piece of code, where we’re externalizing the entire implementation of a simple counter object. Even though it’s not meant to be part of the public API, as indicated by its underscore prefix, the _state property is still exposed.

  const counter = {
    _state: 0,
    increment() { counter._state++ },
    decrement() { counter._state-- },
    read() { return counter._state }
  }
  export default counter

It’d be better to explicitly expose the methods and properties we want to make public.

  const counter = {
    _state: 0,
    increment() { counter._state++ },
    decrement() { counter._state-- },
    read() { return counter._state }
  }
  const { increment, decrement, read } = counter
  const api = { increment, decrement, read }
  export default api

This is akin to how some libraries were written in the days before JavaScript had proper modules, where we would wrap everything in a closure so that it wouldn’t leak globals and our implementation would stay private, and then return a public API. For reference, the next code snippet shows an equivalent component using a closure instead.

  (function () {
    const counter = {
      _state: 0,
      increment() { counter._state++ },
      decrement() { counter._state-- },
      read() { return counter._state }
    }
    const { increment, decrement, read } = counter
    const api = { increment, decrement, read }
    return api
  })()

When exposing touchpoints on an interface, it's important to gauge whether consumers need each touchpoint at all, how it helps them, and whether it could be made simpler. For instance, instead of exposing several different touchpoints the user has to pick from, they'd often be better off with a single touchpoint that selects the appropriate code path based on the provided inputs; at the same time, the component would couple a smaller part of its implementation to its interface.
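
As a made-up illustration, rather than exposing separate sendText and sendHtml methods, a component could expose a single send function that picks a code path based on which fields are provided. The deliver helper below is a stand-in for whatever mechanism actually ships the message.

  // stand-in for the internal delivery mechanism
  const deliver = (payload, done) => done(null, payload)

  export default function send({ to, subject, text, html }, done) {
    // prefer the html body when one is provided
    const payload = html
      ? { to, subject, html }
      : { to, subject, text }
    deliver(payload, done)
  }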

Thinking in API-first terms can help, because then we have a decent idea of the kind of API surface we want, and armed with that we can decide how we want to allow consumers to interact with the component.

As new use cases arise and our component system grows, we should stick to an API-first mindset and the revealing pattern, so that the component doesn't suddenly become more complex. Gradually introducing complexity can help us design the right interface for our component: one where we don't offer every solution imaginable, but where we elegantly solve the consumer's use cases, provided they fall within the responsibility of our component.

2.1.4 Finding the Right Abstractions

Open-source software components often get feature requests that are overly specific to the needs of one particular user. Taking feature requests or requirements at face value is not enough; instead, we need to dive deeper and find commonalities between the feature being requested, features we may have planned for our roadmap, and features we might want to adapt our component to support in the future.

Granted, it’s important for a component to satisfy the needs of most of its consumers, but this doesn’t mean we should attempt to satisfy use cases one by one, or in isolation. Almost invariably, doing so results in duplicated logic, inconsistency at the API level, and several different ways of accomplishing the same goal, often with inconsistent observed results.

When a commonality can be found, abstractions involve less friction and help avoid the inconsistencies named earlier. Consider, for example, DOM event listeners, where there's an HTML attribute and a matching JavaScript DOM element property for each event handler, such as onclick, onchange, oninput, and so on. Each of these properties can be assigned a listener function that handles the event. Then there's EventTarget#addEventListener, with a signature like addEventListener(type, listener, options)[4], which centralizes all event handling logic in a single method that takes the type of event as a parameter. This API is better for a number of reasons. First off, EventTarget#addEventListener is a method, so its behavior is clearly defined. Meanwhile, on handlers are set through assignment, which isn't as clearly defined: When does the effect of assigning an event handler begin? How is the handler removed? Are we limited to a single event handler, or is there a way around that? Will assigning a non-function value raise an error right away, or only once the event fires and the non-function can't be invoked? Furthermore, new event types can be supported transparently by addEventListener, without changing the API surface, whereas the on technique would require yet another property for each new event type.
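
The following snippet, assuming button references a DOM element, illustrates the difference: assignment silently replaces any previous handler, while addEventListener accommodates any number of listeners and supports explicit removal.

  button.onclick = () => console.log(`first`)
  button.onclick = () => console.log(`second`) // the first handler is discarded

  const greet = () => console.log(`hello`)
  button.addEventListener(`click`, greet)
  button.addEventListener(`click`, () => console.log(`world`)) // both fire
  button.removeEventListener(`click`, greet) // explicit, unambiguous removal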

Another case where abstractions come in handy is when we're dealing with quirks in cross-browser DOM manipulation. Having a function like on(element, eventType, eventListener) is superior to testing whether addEventListener is supported and deciding which of the various event listening options is optimal for each case, every time: it drastically reduces code duplication while handling every case consistently, limiting complexity.
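
A sketch of such an abstraction follows. It hides the legacy feature test behind a single call, normalizing attachEvent, the event registration method in old versions of Internet Explorer, to the modern behavior.

  function on(element, eventType, eventListener) {
    if (element.addEventListener) {
      element.addEventListener(eventType, eventListener)
    } else if (element.attachEvent) {
      // legacy Internet Explorer expects the `on`-prefixed event name
      element.attachEvent(`on${ eventType }`, event =>
        eventListener.call(element, event)
      )
    }
  }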

The above are clear-cut examples of cases where an abstraction greatly improves a poor interface, but that's not always the end result. Abstractions can be a costly way of merging use cases when it's unclear whether those cases are naturally related in the first place. If we merge use cases too early, we might find that the complexity we're tucking away in an abstraction is quite small, and thus offset by the abstraction's own complexity. If we merge cases that weren't all that related to begin with, we'd effectively increase complexity and create tighter coupling than needed; instead of lowering complexity like we set out to achieve, we'd obtain the opposite result.

It is best to wait until a distinguishable pattern emerges and it becomes clear that introducing an abstraction would help keep complexity down. When such a pattern emerges, we can be confident that the use cases are indeed related, and we’ll have better information about whether an abstraction would simplify our code.

Abstractions can generate complexity by introducing new layers of indirection, chipping away at our ability to follow the different code flows around a program. On the other hand, state generates complexity by dynamically modifying the flow in our programs. Without state, programs would run in the same way from start to finish.

2.1.5 State Management

Applications wouldn’t do much of anything if we didn’t keep state. We need to keep track of things like user input or the page we’re currently on to determine what to display and how to help out the user. In this sense, state is a function of user input: as the user interacts with our application, state grows and mutates.

Application state comes from stores such as a persistent database or an API server’s memory cache. This kind of state can be affected by user interaction, such as when a user decides to write a comment.

Besides state for an individual user and application-wide state, there’s also the intermediate state which lies in our program’s code. This state is transient and is typically bound to a particular transaction: a server-side web request, a client-side browser tab, and — at a lower level — a class instance, a function call, or an object’s property.

We shall think of state as our program's internal entropy. When state reigns, entropy reigns, and the application becomes unbearably hard to debug. One of the goals in modular design is to keep state to the minimum possible. As an application grows larger, so does its state, and the possible state permutations grow with it. Modularity takes aim at this issue by chopping a state tree into manageable bits and pieces, where each branch of the tree deals with a particular subset of the state. This approach enables us to contain the growing application state as our codebase grows in size.

A function is deemed pure when its output depends solely on its input. Pure functions do not produce any side effects other than the output that’s returned. In the following example, the sum function receives a list of numbers and returns the sum of adding all of them together. It is a pure function because it doesn’t take into account any external state, and it doesn’t emit any side effects.

  function sum(numbers) {
    return numbers.reduce((a, b) => a + b, 0)
  }

Sometimes we have a requirement to keep state across function calls. For instance, a simple incremental counter might lead us to implement a module such as the following. The increment function isn't pure, given that count is external state.

  let count = 0
  const increment = () => count++
  export default increment

An artifact of this module exporting an impure function is that the outcome of invoking increment hinges upon understanding how increment is used elsewhere in the application, as each call to increment changes its expected output. As the amount of code in our program increases, so do the potential ways in which an impure function like increment may behave, making impure functions increasingly undesirable.

One potential solution would be to expose a factory that is itself pure, even when the objects returned by the factory aren't pure. In the following piece of code we're now returning a factory of counters. The factory isn't affected by external state, and is thus considered pure.

  const factory = () => {
    let count = 0
    const increment = () => count++
    return increment
  }
  export default factory

As long as we limit the usage of each counter produced by the factory to a given portion of the application, one where every piece of code touching the counter knows about the other usages, state becomes more manageable, as we end up with fewer moving parts involved. When we eliminate impurity in public interfaces, we're effectively circumscribing entropy to the calling code. The consumer receives a brand-new counter every time and is entirely responsible for managing its state. It can still pass the counter down to its dependents, but it remains in control of how dependents get to manipulate that state, if at all.
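
Assuming the factory module lived at ./counter, consuming it could look like the following, with each consumer owning its counter's state outright.

  import factory from './counter'

  const counter = factory()
  counter() // <- 0
  counter() // <- 1

  const another = factory() // a fresh counter, unaffected by the first
  another() // <- 0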

This is something we observe in the wild, with popular libraries such as the request package[5] in Node.js, which can be used to make HTTP requests. The request function relies largely on sensible defaults for the options you can pass to it. Sometimes, we want to make requests using a different set of defaults.

The library might've offered the ability to change the default values for every call to request. That would've been poor design, as it'd make option handling unpredictable: we'd have to take into account every corner of our codebase before we could be confident about which options we'd ultimately end up with when calling request.

Instead, request offers a request.defaults(options) method, which returns an API identical to that of request but with the new defaults applied on top of the existing ones. This avoids surprises, since usage of the modified request is constrained to the calling code and its dependents.
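
For example, we could derive a request-like function that has JSON handling enabled by default, leaving the original request function untouched. The URL below is, naturally, illustrative.

  import request from 'request'

  // every call through jsonRequest serializes and parses JSON bodies
  const jsonRequest = request.defaults({ json: true })

  jsonRequest(`https://mjavascript.com/api/articles`, (err, res, body) => {
    // body arrives already parsed as JSON
  })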