// chapters/ch06.asciidoc
When it comes to sharing the secrets, given we're purposely excluding them from source version control, we can take many approaches, such as using environment variables, storing them in JSON files kept in an Amazon S3 bucket, or using an encrypted repository dedicated to our application secrets.
Using what's commonly referred to as "dot env" files is an effective way of securely managing secrets in Node.js applications, and there's a module called `nconf` that can aid us in setting these up. These files typically contain two types of data: secrets that mustn't be shared outside of execution environments, and configuration values that should be editable and which we don't want to hardcode.
One concrete and effective way of accomplishing this in real-world environments is using several "dot env" files, each with a clearly defined purpose. In order of precedence:
- `.env.defaults.json` can be used to define default values that aren't necessarily overwritten across environments, such as the application listening port, the `NODE_ENV` variable, and configurable options you don't want to hard-code into your application code. These default settings should be safe to check into source control
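To make the precedence rules concrete, here's a minimal sketch of how layered settings resolve. The inlined layers and their values are illustrative stand-ins for what would really be read from `process.env` and the dot env JSON files, and the environment-specific file name follows the pattern described above:

```javascript
// Layered settings, highest precedence first. In a real setup, the first
// layer would be process.env and the others would be parsed dot env files.
const layers = [
  { PORT: '5000' },                         // stand-in for process.env
  { NODE_ENV: 'production' },               // e.g. .env.production.json
  { NODE_ENV: 'development', PORT: '3000' } // e.g. .env.defaults.json
]

// Resolve a key against the highest-precedence layer that defines it,
// which is roughly the computation nconf performs on our behalf.
function env(key) {
  const layer = layers.find(layer => key in layer)
  return layer ? layer[key] : null
}

console.log(env('PORT'))     // '5000' — the environment wins
console.log(env('NODE_ENV')) // 'production' — the environment file wins
```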
export default accessor
```
The module also exposes an interface through which we can consume these application settings by making a function call such as `env('PORT')`. Whenever we need to access one of the configuration settings, we can import `env.js` and ask for the computed value of the relevant setting, and `nconf` takes care of the bulk of figuring out which settings take precedence over what, and what the value should be for the current environment.
[source,javascript]
----
import env from './env'

const port = env('PORT')
----
Assuming we have an `.env.defaults.json` that looks like the following, we could pass in the `NODE_ENV` flag when starting our staging, test, or production application and get the proper environment settings back, helping us simplify the process of loading up an environment.
}
```
Often, we find ourselves needing to replicate this sort of logic in the client-side. Naturally, we can't share server-side secrets with the client-side, as that would leak our secrets to anyone snooping through our JavaScript files in the browser. Still, we might want to be able to access a few environment settings such as the `NODE_ENV`, our application's domain or port, our Google Analytics tracking ID, and similarly safe-to-advertise configuration details.
When it comes to the browser, we could use the exact same files and environment variables, but include a dedicated browser-specific object field, like so:
```
Naturally, we don't want to mix server-side settings with browser settings, because browser settings are usually accessible to anyone with a user agent, the ability to visit our website, and basic programming skills, meaning we would do well not to bundle highly sensitive secrets with our client-side applications. To resolve the issue, we can have a build step that prints the settings for the appropriate environment to an `.env.browser.json` file, and then only use that file on the client-side.
We could incorporate this encapsulation into our build process, adding the following command-line call.
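The build step boils down to printing the browser-safe slice of our settings as JSON and redirecting it into `.env.browser.json`. The following is only a sketch of such a script: the `BROWSER_ENV` field name and the inlined settings are illustrative assumptions standing in for the values the `env.js` accessor would resolve.

```javascript
// print-browser-env — a sketch. In a real build this would import the env
// accessor from earlier in the chapter; here the resolved settings are
// inlined so the shape of the output is clear.
const settings = {
  NODE_ENV: 'production',
  SECRET_API_TOKEN: 'server-side only, never printed', // stays server-side
  BROWSER_ENV: {                                        // safe to advertise
    NODE_ENV: 'production',
    PORT: 8080
  }
}

const prettyJson = JSON.stringify(settings.BROWSER_ENV, null, 2)

console.log(prettyJson)
```

Redirecting its output, for example with a command along the lines of `node print-browser-env > .env.browser.json` in an npm script, yields a file that's safe to bundle with the client-side application.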
Note that in order for this pattern to work properly, we'll need to know the environment we're building for at the time when we compile the browser dot env file, as passing in a different `NODE_ENV` environment variable would produce different results depending on our target environment.
By compiling client-side configuration settings in this way, we avoid leaking server-side configuration secrets onto the client-side.
Furthermore, we should replicate the `env` file from the server-side in the client-side, so that application settings are consumed in much the same way on both sides of the wire.
```
// browser/env
import env from './env.browser.json'

export default function accessor(key) {
  if (typeof key !== 'string') {
    return env
  }
  return key in env ? env[key] : null
}
```
}
```
Using the information in a package lock file, which contains details about every package we depend upon and all of their dependencies as well, package managers can take steps to install the same bits every time, preserving our ability to quickly iterate and install package updates, while keeping our code safe.
Always installing identical versions of our dependencies -- and identical versions of our dependencies' dependencies -- brings us one step closer to having development environments that closely mirror what we do in production. This increases the likelihood that we can swiftly reproduce bugs that occurred in production in our local environments, while decreasing the odds that something that worked during development fails in staging.
==== 6.3 Interfaces as Black Boxes
On a similar note to that of the last section, we should treat our own components no differently than how we treat third party libraries and modules. Granted, we can make changes to our own code a lot more quickly than we can effect change in third party code -- if that's at all possible. However, when we treat all components and interfaces (including our own HTTP API) as if they were foreign to us, we can focus on consuming and testing against interfaces, while ignoring the underlying implementation.
One way to improve our interfaces is to write detailed documentation about the input an interface touchpoint expects, and how it affects the output it provides in each case. The process of writing documentation often leads us to uncover limitations in how the interface is designed, and we might decide to change it as a result. Consumers love good documentation because it means less fumbling about with the implementation (or, often, its implementors) to understand how the interface is meant to be consumed, and whether it can accomplish what they need.
Avoiding distinctions helps us write unit tests where we mock dependencies that aren't under test, regardless of whether they were developed in-house or by a third party. When writing tests we always assume that third party modules are generally well-tested enough that it's not our responsibility to include them in our test cases. The same thinking should apply to first party modules that just happen to be dependencies of the module we're currently writing tests for.
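As a minimal sketch of that idea (all names here are hypothetical), a unit test can replace a first party data layer with a hand-rolled mock, exercising only the unit under test:

```javascript
// Unit under test: builds a greeting, delegating persistence concerns to
// an injected data layer it treats as a black box.
function makeGreeter(userStore) {
  return function greet(id) {
    const user = userStore.findById(id)
    return `Hello, ${user.name}!`
  }
}

// In the test, the data layer is a mock: we assume the real store is
// well-tested on its own, so we only verify the unit's own logic.
const fakeStore = {
  findById: id => ({ id, name: 'Alice' })
}

const greet = makeGreeter(fakeStore)

console.log(greet('user-123')) // Hello, Alice!
```

Because the store is injected rather than imported directly, it makes no difference to the test whether the real implementation is in-house code or a third party module.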
This same reasoning can be applied to security concerns such as input sanitization. Regardless of what kind of application we're developing, we can't trust user input unless it's sanitized. Malicious actors could be angling to take over our servers or our customers' data, or to inject content onto our web pages. These users might be customers or even employees, so when it comes to input sanitization, we shouldn't treat them differently based on who they are.
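As a minimal sketch of the idea, untrusted input can be HTML-escaped before it's ever rendered, so user-provided text can't inject markup onto our pages. A real application should lean on a well-tested sanitization library rather than this illustrative helper:

```javascript
// Map each character with special meaning in HTML to its entity.
const escapes = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;'
}

// Escape untrusted input so it renders as inert text, not markup.
function escapeHtml(input) {
  return String(input).replace(/[&<>"']/g, match => escapes[match])
}

console.log(escapeHtml('<script>alert("pwned")</script>'))
// &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```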
As much as possible, we should strive to keep these kinds of divergences to a minimum, because if we don't, bugs might find their way to production, and a customer might end up reporting the bug to us. Merely being aware of discrepancies like this is not enough, because it's neither practical nor effective to keep these logic gates in your head, mentally going through the motions of how each change would differ if your code was running in production instead.
Proper integration testing might catch many of these kinds of mistakes, but that won't always be the case.
==== 6.8 Abstraction Matters
Eager abstraction can result in catastrophe. Conversely, failure to identify and abstract away sources of major complexity can be incredibly costly as well. When we consume complex interfaces directly, but don't necessarily take advantage of all the advanced configuration options that interface has to offer, we are missing out on a powerful abstraction we could be using. The alternative would be to create a middle layer in front of the complex interface, and have consumers go through that layer instead.
This intermediate layer would be in charge of calling the complex abstraction itself, while offering a simpler interface with fewer configuration options and improved ease of use for the use cases that matter to us. Often, complicated or legacy interfaces demand that we offer up data that could be derived from other parameters being passed into the function call. For example, we might be asked how many adults, how many children, and how many people in total are looking to make a flight booking, even though the latter can be derived from the former. Other examples include expecting fields to be in a particular string format (such as a date string that could be derived from a native JavaScript date instead), using nomenclature that's relevant to the implementation but not so much to the consumer, or a lack of sensible defaults (required fields which are rarely changed into anything other than a recommended value that isn't set by default).
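A thin adapter along these lines (the legacy API and all field names are hypothetical) can derive the redundant fields and produce the rigid formats on the consumer's behalf:

```javascript
// A sketch of a middle layer in front of a hypothetical legacy booking
// API that demands redundant, rigidly formatted fields.
function toLegacyBookingQuery({ adults, children, departure }) {
  return {
    adults,
    children,
    total: adults + children,                           // derived, yet required
    departureDate: departure.toISOString().slice(0, 10) // 'YYYY-MM-DD' string
  }
}

const query = toLegacyBookingQuery({
  adults: 2,
  children: 1,
  departure: new Date('2024-07-15T00:00:00Z') // consumer passes a native Date
})

console.log(JSON.stringify(query))
// {"adults":2,"children":1,"total":3,"departureDate":"2024-07-15"}
```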
When we're building out a web application which consumes a highly parametrized API in order to search for the cheapest hassle-free flights -- to give an example -- and we anticipate consuming this API in a few different ways, it would cost us dearly not to abstract away most of the parameters demanded by the API which do not fit our use case. This middle layer can take care of establishing sensible default values and of converting reasonable data structures such as native JavaScript dates or case-insensitive airport codes into the formats demanded by the API we're using.
In addition, our abstraction could also take care of any follow-up API calls that need to be made in order to hydrate data. For example, a flight search API might return an airline code for each different flight, such as AA for American Airlines, but a UI consumer would also need to hydrate AA into a display name for the airline, accompanied by a logo to embed on the user interface, and perhaps even a quick link to their check-in page.
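Sketched out, that hydration step might look like the following, where the inline airline table is a hypothetical stand-in for a follow-up API call or a bundled dataset, and the logo path and check-in URL are illustrative:

```javascript
// A sketch: hydrating an airline code from a search result into the
// display details a UI consumer needs.
const airlines = {
  AA: {
    name: 'American Airlines',
    logo: '/logos/aa.svg',               // illustrative asset path
    checkIn: 'https://example.com/aa/checkin' // illustrative URL
  }
}

function hydrateFlight(flight) {
  return { ...flight, airline: airlines[flight.airlineCode] }
}

const flight = hydrateFlight({ airlineCode: 'AA', price: 199 })

console.log(flight.airline.name) // American Airlines
```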
When we call into the backing API every time, with the full query, appeasing its quirks and shortcomings instead of taking the abstracted approach, it will not only be difficult to maintain an application that consumes those endpoints in more than one place, but it will also become a challenge down the road, when we want to include results from a different provider, which would of course have its own set of quirks and shortcomings. At that point we would have two separate sets of API calls, one for each provider, each massaging the data to accommodate provider-specific quirks in a module which shouldn't be concerned with such matters, but only with the results themselves.
A middle layer could accept a normalized query from the consumer, such as the one where we took a native date and formatted it when calling the flight search API, and then adapt that query into either of the backing services that actually produce flight search results. This way, the consumer only has to deal with a single, simplified interface, while having the ability to seamlessly interact with two similar backing services that offer different interfaces.
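In a sketch (both providers and their request formats are hypothetical), that adaptation step amounts to fanning a single normalized query out to per-provider translators:

```javascript
// One normalized query, adapted to two hypothetical flight search
// providers that each expect a different request format.
const providers = {
  alpha: query => ({ from: query.origin, to: query.destination }),
  beta: query => ({ route: `${query.origin}-${query.destination}` })
}

function adaptQuery(provider, query) {
  return providers[provider](query)
}

const query = { origin: 'SFO', destination: 'JFK' }

console.log(JSON.stringify(adaptQuery('beta', query))) // {"route":"SFO-JFK"}
```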
The same case could, and should, be made for the data structures returned from either of these backing services. By normalizing the data into a structure that only contains information that's relevant to our consumers, and augmenting it with the derived information they need (such as the airline name and details as explained earlier), the consumer can focus on their own concerns while leveraging a data structure that's close to their needs. At the same time, this normalization empowers our abstraction to merge results from both backing services and treat them as if they came from a single source: the abstraction itself, leaving the backing services as mere implementation details.
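A sketch of that normalization (provider names and response shapes are hypothetical) shows how results from two backing services can be merged into a single shape, leaving the providers as implementation details:

```javascript
// Normalize results from two hypothetical providers into one shape.
function fromProviderAlpha(result) {
  return { airline: result.carrier, price: result.fare }
}

function fromProviderBeta(result) {
  return { airline: result.airlineCode, price: result.totalCents / 100 }
}

// Merged, the consumer can't tell which backing service produced what.
const results = [
  ...[{ carrier: 'AA', fare: 199 }].map(fromProviderAlpha),
  ...[{ airlineCode: 'UA', totalCents: 24900 }].map(fromProviderBeta)
]

console.log(JSON.stringify(results))
// [{"airline":"AA","price":199},{"airline":"UA","price":249}]
```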
When we rely directly on the original responses, we may find ourselves writing view components that are more verbose than they need be, containing logic to pull together the different bits of metadata needed to render our views, mapping data from the API representation into what we actually want to display, and then mapping user input back into what the API expects. With a layer in between, we can keep this mapping logic contained in a single place, and leave the rest of our application unencumbered by it.