chapters/ch06.asciidoc
@@ -38,13 +38,13 @@ One concrete and effective way of accomplishing this in real-world environments
- `.env.production.json`, `.env.staging.json`, and others can be used for environment-specific settings, such as the various production connection strings for databases, cookie encoding secrets, API keys, and so on
- `.env.json` could be your local, machine-specific settings, useful for secrets or configuration changes that shouldn't be shared with other team members

Furthermore, you could also accept simple modifications to environment settings through environment variables, such as when executing `PORT=3000 node app`, which is convenient during development.
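As a minimal sketch of what honoring such an override might look like on the application side (the default of `3000` is an assumption for illustration):

```javascript
// Read the PORT override from the environment, falling back to a default
// when the variable isn't set or isn't numeric.
const port = Number(process.env.PORT) || 3000

console.log(`listening on port ${port}`)
```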

There's an npm package called `nconf` which we can use to handle reading and merging all of these sources of application settings with ease.

The following piece of code shows how you could configure `nconf` to do what we've just described.

We import the `nconf` package, and declare configuration sources from highest priority to lowest priority, while `nconf` will do the merging (higher priority settings will always take precedence). We then set the actual `NODE_ENV` environment variable, because libraries rely on this property to decide whether to instrument or optimize their output.

```
// env
@@ -85,7 +85,7 @@ Assuming we have an `.env.defaults.json` that looks like the following, we could
}
```
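The listing is truncated here, but the precedence behavior described can be illustrated with a small, self-contained sketch. Note this is not the `nconf` API itself; the merge function and the sample sources are hypothetical stand-ins for the merging `nconf` performs:

```javascript
// Hypothetical illustration of priority-based settings merging, similar in
// spirit to nconf: sources are listed from highest to lowest priority, and
// the highest-priority source that defines a key wins.
function mergeSettings (...sources) {
  // Object.assign copies left to right, so we reverse the list to let
  // higher-priority sources overwrite lower-priority ones.
  return Object.assign({}, ...sources.slice().reverse())
}

const envOverrides = { PORT: '3000' } // e.g. PORT=3000 node app
const localSettings = { NODE_ENV: 'development' } // e.g. .env.json
const defaults = { NODE_ENV: 'production', PORT: '8080', LOG_LEVEL: 'info' } // e.g. .env.defaults.json

const settings = mergeSettings(envOverrides, localSettings, defaults)
// settings.PORT === '3000', settings.NODE_ENV === 'development',
// settings.LOG_LEVEL === 'info'
```

This sketch only merges shallowly; `nconf` also handles nested keys, file parsing, and argument sources.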

We usually find ourselves needing to replicate this sort of logic on the client side. Naturally, we can't share server-side secrets with the client, as that'd leak them to anyone snooping through our JavaScript files in the browser. Still, we might want access to a few environment settings such as `NODE_ENV`, our application's domain or port, a Google Analytics tracking ID, and similarly safe-to-advertise configuration details.

When it comes to the browser, we could use the exact same files and environment variables, but include a dedicated browser-specific object field, like so:
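The original example is elided in this diff; a hypothetical `.env.defaults.json` along these lines shows the idea. The `client` field name and its contents are assumptions, with the intent being that only that subfield would ever be serialized for the browser:

```json
{
  "NODE_ENV": "development",
  "PORT": 8080,
  "SESSION_SECRET": "server-only-never-sent-to-clients",
  "client": {
    "NODE_ENV": "development",
    "GA_TRACKING_ID": "UA-0000000-0"
  }
}
```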

@@ -141,9 +141,9 @@ A secret service also takes care of encryption, secure storage, secret rotation

==== 6.2 Explicit Dependency Management

The reason we sometimes feel tempted to check our dependencies into source control is that we want the exact same versions across the dependency tree, every time, in every environment.

Including dependency trees in our repositories is not practical, however, given these are typically in the hundreds of megabytes and frequently include compiled assets that are built based on the target environment and operating system, meaning that the build process itself -- the act of `npm` executing a `rebuild` step after `npm install` ends -- is environment-dependent, and thus not suitable for a presumably platform-agnostic code repository.

During development, we want to make sure we get non-breaking upgrades to our dependencies, which can help us resolve upstream bugs, tighten our grip around security vulnerabilities, and leverage new features or improvements. For deployments however, we want reproducible builds, where installing our dependencies yields the same results every time.
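In `npm` terms, these two goals are commonly reconciled by combining semver ranges with a lockfile: a caret range in `package.json` accepts non-breaking upgrades during development, while `package-lock.json` records the exact resolved versions so that `npm ci` can reproduce the identical tree in deployment environments. The package and version below are just an example:

```json
{
  "dependencies": {
    "express": "^4.16.0"
  }
}
```

Here `^4.16.0` means "any 4.x at or above 4.16.0" when developing, whereas a clean `npm ci` install ignores ranges entirely and installs exactly what the lockfile pins.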

@@ -181,36 +181,44 @@ Always installing identical versions of our dependencies -- and identical versio

On a similar note to that of the last section, we should treat our own components no differently than how we treat third party libraries and modules. Granted, we can make changes to our own code a lot more quickly than we can effect change in third party code -- if that's at all possible, in some cases. However, when we treat all components and interfaces (including our own HTTP API) as if they were foreign to us, we can focus on consuming and testing against interfaces, while ignoring the underlying implementation.

One way to improve our interfaces is to write detailed documentation about the input an interface touchpoint expects, and how it affects the output it provides in each case. The process of writing documentation leads us to uncover limitations in how the interface is designed, and we might decide to change it as a result. Consumers love good documentation because it means less fumbling about with the implementation (or its implementors) to understand how the interface is meant to be consumed, and whether it can accomplish what they need.
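As a hypothetical illustration of how documenting a touchpoint surfaces design questions, consider the small function below. Writing out the contract forces a decision about what a negative input should even mean. The function and its behavior are invented for this sketch:

```javascript
/**
 * Returns a new date `days` days after `date`.
 * @param {Date} date - the starting point; never mutated.
 * @param {number} days - a non-negative integer number of days to add.
 * @returns {Date} a new Date instance.
 * @throws {TypeError} when `days` is negative or not an integer.
 */
function addDays (date, days) {
  if (!Number.isInteger(days) || days < 0) {
    throw new TypeError('days must be a non-negative integer')
  }
  const result = new Date(date.getTime())
  result.setDate(result.getDate() + days)
  return result
}
```

Documenting the `@throws` clause here is exactly the kind of step that might reveal the interface should instead support negative offsets, prompting a redesign before any consumer depends on it.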

Avoiding distinctions helps us write unit tests where we mock dependencies that aren't under test, regardless of whether they were developed in-house or by a third party. When writing tests we always assume that third party modules are generally well-tested enough that it's not our responsibility to include them in our test cases. The same thinking should apply to first party modules that just happen to be dependencies of the module we're currently writing tests for.

This same reasoning can be applied to security concerns such as input sanitization. Regardless of what kind of application we're developing, we can't trust user input unless it's sanitized. Malicious actors could be angling to take over our servers, our customers' data, or otherwise inject content onto our web pages. These users might be customers or even employees, so we shouldn't treat them differently depending on that, when it comes to input sanitization.
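A minimal sketch of output encoding for untrusted input before interpolating it into HTML follows. In a real application we'd typically lean on a well-tested library or a templating engine that escapes by default, rather than rolling our own:

```javascript
// Escape the five characters that are significant in HTML contexts.
// The ampersand must be replaced first to avoid double-escaping.
function escapeHtml (value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}

escapeHtml('<script>alert(1)</script>')
// → '&lt;script&gt;alert(1)&lt;/script&gt;'
```

The same escaping applies whether the value came from a customer or an employee: the origin of the input doesn't change its trustworthiness.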

Putting ourselves in the shoes of the consumer is the best tool to guard us against half-baked interfaces. When -- as a thought exercise -- you stop and think about how you'd want to consume an interface, and the different ways in which you might need to consume it, you end up with a much better interface as a result. This is not to say we want to enable consumers to be able to do just about everything, but we want to make affordances where consuming an interface becomes as straightforward as possible and doesn't feel like a chore. If consumers are all but required to include long blocks of business logic right after they consume an interface, we need to stop ourselves and ask: would that business logic belong behind the interface rather than at its doorstep?

==== 6.4 Build, Release, Run

Build processes have a few different aspects to them. At the highest level, there's the shared logic where we install and compile our assets so that they can be consumed by our runtime application. This can mean anything from installing system or application dependencies to copying files over to a different directory, compiling files into a different language, or bundling them together, among a multitude of other requirements your application might have.

Having clearly defined and delineated build processes is key when it comes to successfully managing an application across development, staging, and production environments. Each of these commonplace environments, and other environments you might encounter, is used for a specific purpose and benefits from being geared towards that purpose.

For development, we focus on enhanced debugging facilities: using development versions of libraries, source maps, and verbose logging levels; custom ways of overriding behavior, so that we can easily mimic what the production environment would look like; and, where possible, a real-time debugging server that takes care of restarting our application when code changes, applying CSS changes without refreshing the page, and so on.

In staging, we want an environment that closely resembles production, so we'll avoid most debugging features, but we might still want source maps and verbose logging to be able to trace bugs with ease. Our primary goal with staging environments is generally to weed out as many bugs as possible before the production push, and thus it is vital that these environments strike a middle ground between debugging affordances and production resemblance.

Production focuses more heavily on minification, optimizing images statically to reduce their byte size, and advanced techniques like route-based bundle splitting, where we only serve modules that are actually used by the pages visited by a user; tree shaking, where we statically analyze our module graph and remove functions that aren't being used; critical CSS inlining, where we precompute the most frequently used CSS styles so that we can inline them in the page and defer the rest of the styles to an asynchronous model that has a quicker time to interactive; and security features, such as a hardened `Content-Security-Policy` policy that mitigates attack vectors like XSS or CSRF.
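The contrast between the three environments can be condensed into a small sketch that derives build settings from the target environment. The flag names are hypothetical; in practice these values would feed a bundler or build tool configuration:

```javascript
// Derive per-environment build flags from a NODE_ENV-style string.
function buildSettings (env) {
  const development = env === 'development'
  const staging = env === 'staging'
  const production = env === 'production'
  return {
    minify: production, // smaller payloads for real users
    sourceMaps: development || staging, // traceable bugs outside production
    verboseLogging: development || staging,
    hotReload: development // debugging server only makes sense locally
  }
}
```

Centralizing these decisions in one place keeps the rest of the build logic shared across environments, which is the point of the section above.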

Testing also plays a significant role when it comes to processes around an application. Testing is typically done in two different stages. Locally, developers test before a build, making sure linters don't produce any errors or that tests aren't failing. Then, before merging code into the mainline repository, we often run tests in a continuous integration (CI) environment to ensure we don't merge broken code into our application. When it comes to CI, we start off by building our application, and then test against that, making sure the compiled application is in order.
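One common way of wiring these stages together is through npm scripts, so that the local flow and the CI flow invoke the same commands. The specific commands below are placeholders, not prescriptions:

```json
{
  "scripts": {
    "lint": "eslint .",
    "test": "node test/all.js",
    "build": "node build/all.js",
    "ci": "npm run lint && npm run build && npm test"
  }
}
```

Note how the hypothetical `ci` script builds before testing, matching the idea that CI should verify the compiled application rather than just the source tree.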

For these processes to be effective, they must be consistent. Intermittent test failures feel worse than not having tests for the particular part of our application we're having trouble testing, because these failures affect every single test job. When tests fail in this way, we can no longer feel confident that a passing build means everything is in order, and this translates directly into decreased morale and increased frustration across the team as well. When an intermittent test failure is identified, the best course of action is to eliminate the intermittence as soon as possible, either by fixing the source of the intermittence, or by removing the test entirely. If the test is removed, make sure to file a ticket so that a well-functioning test is added later on. Intermittence in test failures can be a symptom of bad design, and in our quest to fix these failures we might resolve architecture issues along the way.
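A frequent source of intermittence is an implicit dependency on the wall clock or on randomness. A sketch of one fix, under hypothetical names: accept the current time as a parameter with a sensible default, so tests can pin it instead of racing against real time.

```javascript
// Deterministic under test: `now` can be injected, while production callers
// simply omit it and get the real clock.
function isExpired (session, now = Date.now()) {
  return session.expiresAt <= now
}

// Tests pin `now` rather than sleeping or guessing:
isExpired({ expiresAt: 1000 }, 999) // → false
isExpired({ expiresAt: 1000 }, 1000) // → true
```

This is an example of how chasing down flakiness can improve design: the function is now easier to reason about for every caller, not just for tests.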

Note how up until this point we have focused on how we build and test our assets, but not how we deploy them. These two processes, build and deployment, are closely related but they shouldn't be intertwined. A clearly isolated build process where we end up with a packaged application we can easily deploy, and a deployment process that takes care of the specifics regardless of whether you're deploying to your own local environment, or to a hosted staging or production environment, means that for the most part we won't need to worry about environments during our build processes or at runtime.

==== 6.5 Statelessness

We've already explored how state, if left unchecked, can lead us straight to the heat death of our applications. Keeping state to a minimum translates directly into applications that are easier to debug. The less global state there is, the less unpredictable the current conditions of an application will be at any one point in time, and the fewer surprises we'll run into while debugging.

One particularly insidious form of state is caching. A cache is a great way to increase performance in an application by avoiding expensive lookups most of the time. When state management tools are used as a caching mechanism, however, we might fall into a trap where different pieces of derived application state were computed at different points in time, and different parts of the application end up rendering data from different moments.

Derived state should seldom be treated as state that's separate from the data it was derived from. When it's not, we might run into situations where the original data is updated, but the derived state is not, becoming stale and inaccurate. When, instead, we always compute derived state from the original data, we reduce the likelihood that this derived state will become stale.
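A small sketch with hypothetical data shapes makes the difference concrete: the total below is always computed from the source data, so there is no cached copy to invalidate when the data changes.

```javascript
const cart = {
  items: [
    { name: 'tea', price: 4, quantity: 2 },
    { name: 'mug', price: 9, quantity: 1 }
  ]
}

// Derived state, computed on demand from the original data.
function getTotal (cart) {
  return cart.items.reduce((total, item) => total + item.price * item.quantity, 0)
}

getTotal(cart) // → 17
cart.items[1].quantity = 2
getTotal(cart) // → 26; nothing stale to chase down
```

Had we stored `cart.total = 17` as separate state, the mutation above would have silently left it inaccurate.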

State is almost ubiquitous, and practically synonymous with applications. Applications without state aren't particularly useful, but how can we better manage state? If we look at applications such as your typical web server, their main job is to receive requests, process them, and send a response. Consequently, web servers associate state with each request, keeping it close to the request handlers, the most relevant party. There should be as little global state as possible when it comes to web servers, with the vast majority of state contained in each request/response cycle instead. In this way, we save ourselves from a world of trouble when setting up horizontal scaling with multiple server nodes that don't need to communicate with each other in order to maintain consistency across web server nodes, leaving that job to the data persistence layer instead, which is ultimately responsible for the state as its source of truth.
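The shape of a request-scoped handler can be sketched as follows. The handler and context shapes are hypothetical; the point is that everything the handler needs lives in its own scope, so any server node can process any request without coordination.

```javascript
// All state is local to the invocation: nothing is written to module scope,
// so horizontally scaled nodes stay interchangeable.
function handleRequest (request) {
  const context = {
    startedAt: Date.now(),
    user: request.user
  }
  const body = { greeting: `hello, ${context.user}` }
  return { status: 200, body }
}
```

Once the response is returned, the `context` object is garbage, and no other request ever observes it.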

When a request results in a long-running job (such as sending out an email campaign, modifying records in a persistent database, etc.), it's best to hand that off to a separate service that -- again -- mostly keeps state regarding said job. Separating services by their specific needs means we can keep web servers lean and stateless, and improve our flows by adding more servers, persistent queues (so that we don't drop jobs), and so on. When every task is tethered together through tight coupling and state, it becomes challenging to maintain, upgrade, and scale a service over time.
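The hand-off itself can be sketched with a tiny queue. This one is in-memory purely for illustration, whereas a real setup would use a persistent queue so jobs aren't dropped; the names are hypothetical:

```javascript
const jobs = []

// The web server only enqueues and responds immediately.
function enqueue (job) {
  jobs.push(job)
  return { accepted: true, pending: jobs.length }
}

// A worker -- typically a separate process or service -- drains jobs later,
// in FIFO order.
function drain (worker) {
  while (jobs.length > 0) {
    worker(jobs.shift())
  }
}
```

Because the server's only involvement is `enqueue`, it stays lean and stateless, and the worker can be scaled or upgraded independently.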

==== 6.6 Disposability

Whenever we hook up an event listener, regardless of whether we're listening for DOM events or those from an event emitter, we should also strongly consider disposing of the listener when the concerned parties are no longer interested in the event being raised. For instance, if we have a React component that, upon mount, starts listening for `resize` events on the `window` object, we should also make sure we remove those event listeners upon the component being unmounted.
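One convenient shape for this is having the subscription return its own disposer. The minimal emitter below is a stand-in for `window` or an `EventEmitter`, so the sketch stays self-contained; the names are hypothetical:

```javascript
function createEmitter () {
  const listeners = new Set()
  return {
    on (listener) {
      listeners.add(listener)
      // Returning a disposer makes cleanup symmetrical with subscription.
      return function dispose () {
        listeners.delete(listener)
      }
    },
    emit (event) {
      listeners.forEach(listener => listener(event))
    },
    size () {
      return listeners.size
    }
  }
}

const resize = createEmitter()
let renders = 0
const dispose = resize.on(() => renders++) // e.g. on component mount
resize.emit({ width: 1280 }) // triggers a re-render
dispose() // e.g. on component unmount
resize.emit({ width: 1024 }) // no lingering listener, no leak
```

In a React component, the same pairing maps naturally onto mounting and unmounting: subscribe on mount, call the disposer on unmount.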