Best Practices for Resource Design

This chapter is intended to put some context around the previous two. While they focused on how to write resources, this chapter will focus on what to put in a resource. As you've seen, the physical act of creating a resource ain't hard. What's hard is deciding what code to put into it.

Now, I should point out that Microsoft's resources aren't necessarily a universally great example of good resources. Many of the original Microsoft "DSC Resource Kit" resources were cranked out with great rapidity, before "practices" really came into question. And while the practices I'm going to present here are the ones I've developed, I can toss out a litany of names: Murawski. Helmick. Snover. Gates. Clinton. Eastwood. All of those names have either endorsed my design ideas, or have no idea whatsoever that this stuff even exists. Heh.

Principle One: Resources are an Interface

There's a guiding concept in software development that implementation and presentation should be as separate as possible. There are, in fact, entire dev frameworks dedicated to making that separation easier, such as Model-View-Controller (MVC) frameworks. The idea is that all of your functional code should live in one place, and the code that users interact with - the "presentation" - should be entirely distinct. The only code in the presentation piece is the code that makes the presentation actually work.

Take the Exchange Management Console as an example. It's a GUI, so it's an interface, or presentation element. It has relatively little code in it. When you ask the EMC to create a new mailbox, none of the code that creates the mailbox lives in the EMC itself. Instead, the EMC is simply calling New-Mailbox. The EMC takes the results of that command and creates some kind of graphical display - updating a list, or displaying an icon, or whatever it does. The actual functional code lives in the command, which lives in a PowerShell module. You can, of course, run the same command from a different interface - such as the PowerShell console. That demonstrates true independence between the functional code and the interface, and it's a hallmark of solid PowerShell design.

DSC resources work the same way, or they should. Resources should contain as little code as humanly possible, and what code they contain should be focused entirely on making the resource itself work. You should be able to look at almost every line of code in a resource and say, "yup, that's only there because DSC needs it to be there." The rest of your code should live in a traditional PowerShell module, and the resource should simply call that module's commands.

Thinking About Design

The reason Principle One (and there are almost no other principles here) can be a little hard for people to understand is that we're not always accustomed to designing software from the end state, and a DSC resource is definitely the end state.

Let me explain that.

When you build a house, you tend to start from the end state. That is, you hire an architect to draw pictures of what the house will look like. You nearly always start out with conceptual drawings - sketches, and so on - that show the completed house from various angles. You might have a rough floor plan, a sketch of the front elevation, and so on. But you don't hand those to the general contractor and start swinging hammers. You take that end state, and start designing the bits that will lead to it. You design the foundation. You figure out where plumbing is going to go through the foundation. You architect and engineer the walls and the roof. You design openings for doors and windows. So you begin with the end state in mind, but then you go all the way back to basics - to, literally, the foundation. A house is made up of all those foundational bits - cement, walls, wires, pipes, and so on.

Microsoft did that with DSC resources, but they did it over the course of an entire decade, so you don't really focus on it. In The Monad Manifesto, written before PowerShell existed, inventor Jeffrey Snover envisioned DSC. The book is on GitBook and LeanPub; go review it if you don't believe me. Way back before PowerShell even existed as a product name, Snover sketched out what DSC would look like, and it was the "end state" he envisioned. But what we got in 2006, when PowerShell was first released to the world, wasn't DSC. It was a concrete foundation. Version 2 of PowerShell added some walls and roofs. Version 3, some plumbing. It wasn't until version 4 that we got DSC - some ten years later.

Microsoft started, in version 1, by creating cmdlets for us. Individually, these cmdlets were fairly useful, but also pretty basic. Version 2 of PowerShell gave us the ability to create our own "script cmdlets" in the form of advanced functions. We also got Remoting, a key communications technology. It wasn't until version 4 that we could start combining cmdlets (script or otherwise) into DSC resources. So we were walked through that progression - foundation, walls, roof - pretty slowly. Slowly enough that now, looking at the product we have today, it's easy to forget that the progression even happened.

But when you design a DSC resource, you need to have that progression firmly in mind. You need to start not with the resource, but with the commands that the resource will use.

For Example

Suppose you have a Sales Order Entry application, and you want to instrument it for DSC management. The first thing to do is start thinking about your nouns. What are the elements of the system that you will need to administer? Perhaps you have users which will need to be maintained. Perhaps there are servers that need to be configured, with each server having multiple configuration elements. These nouns, these "things you configure," become the nouns of your commands. Now, what do you do with them? You add users. You delete users. You set the properties of a user. Right there, you've got three commands: New-AppUser, Remove-AppUser, Set-AppUser. So you write those commands. You test them.
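One of those commands might be sketched like this, in a normal PowerShell script module. The `Invoke-AppApi` helper and its parameters are hypothetical stand-ins for however your application actually exposes user management; the point is that the real logic lives here, where it can be tested with Pester, entirely outside DSC.

```powershell
# AppUserModule.psm1 -- a sketch, not a finished implementation.
function New-AppUser {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$UserName,

        [string]$FullName
    )
    # Hypothetical internal helper; replace with whatever your
    # application's API actually requires.
    Invoke-AppApi -Action Create -User $UserName -FullName $FullName
}

function Get-AppUser    { <# query the application for a user #> }
function Set-AppUser    { <# change an existing user's properties #> }
function Remove-AppUser { <# delete a user #> }
```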

When the time comes to use all of that in a DSC resource, it should be easy. Your "AppUser" resource, for example, simply needs to implement its Test/Set/Get functions or methods, and in those it simply calls the commands you've already written. The resource isn't doing anything novel; it's just reusing commands that you've already written and tested. This is how Microsoft approaches most of its own resources. The ADUser resource doesn't contain code to talk to Active Directory; it just calls New-ADUser or Remove-ADUser or Set-ADUser, commands which already existed.
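A function-based version of that resource might look something like the sketch below. The AppUser command names are assumed from the example above; notice how little logic the resource itself contains - each function is essentially a pass-through.

```powershell
# AppUser resource (function-based) -- a sketch. Every function just
# delegates to commands from the functional module.
function Get-TargetResource {
    param([Parameter(Mandatory)][string]$UserName)
    $user = Get-AppUser -UserName $UserName
    @{
        UserName = $UserName
        Ensure   = if ($user) { 'Present' } else { 'Absent' }
    }
}

function Test-TargetResource {
    param(
        [Parameter(Mandatory)][string]$UserName,
        [ValidateSet('Present','Absent')][string]$Ensure = 'Present'
    )
    $current = Get-TargetResource -UserName $UserName
    $current.Ensure -eq $Ensure
}

function Set-TargetResource {
    param(
        [Parameter(Mandatory)][string]$UserName,
        [ValidateSet('Present','Absent')][string]$Ensure = 'Present'
    )
    if ($Ensure -eq 'Present') { New-AppUser -UserName $UserName }
    else { Remove-AppUser -UserName $UserName }
}
```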

Advantages of the Approach

The main advantage here is the separation of presentation and implementation. By having your commands stand alone in a normal PowerShell module, they can be written by someone who has domain expertise in whatever is being managed, but who might know nothing about DSC. The commands can be tested independently. They can be verified independently. Each command is a small, standalone, testable unit - and "small code" equals "easier to maintain code."

Once you start using those commands in a resource, you know that the commands themselves already work. Resources, because they run remotely on a target node, are already amongst the hardest elements to test and debug. But using this approach, you've eliminated almost all of the uncertainty and opportunity for bugs, because you've tested the actual functionality. The resource, with its minimal code, introduces few new moving parts, and therefore fewer opportunities for new bugs.

Another advantage is that you can flip between function-based resources and class-based resources pretty easily. I mean, literally a few keyword changes in the script file, and you're more or less done. That's because all the actual code lives elsewhere, and the resource - whether function-based or class-based - is just an "interface" between the LCM and your functional code. Should Microsoft change the way resources need to be written, you should be able to update quickly and easily, because none of it will impact the functional code you wrote.
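To illustrate how small that flip is, here's the same sketch as a class-based resource. The Get/Test/Set bodies still just call the (assumed) AppUser commands; only the wrapper syntax changes.

```powershell
# AppUser resource (class-based) -- a sketch of the same interface.
[DscResource()]
class AppUser {
    [DscProperty(Key)]
    [string]$UserName

    [DscProperty()]
    [string]$Ensure = 'Present'

    [AppUser] Get() {
        $user = Get-AppUser -UserName $this.UserName
        $this.Ensure = if ($user) { 'Present' } else { 'Absent' }
        return $this
    }

    [bool] Test() {
        $user = Get-AppUser -UserName $this.UserName
        if ($this.Ensure -eq 'Present') { return [bool]$user }
        return (-not $user)
    }

    [void] Set() {
        if ($this.Ensure -eq 'Present') { New-AppUser -UserName $this.UserName }
        else { Remove-AppUser -UserName $this.UserName }
    }
}
```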

Disadvantage of the Approach

A potential disadvantage is that, in addition to distributing your DSC resource to target nodes - something the pull server is more than happy to do for you, if you're using one - you also have to distribute your "functional module" that contains all the commands the DSC resource uses. I don't regard this as an enormous problem. Most organizations should already have some kind of NuGet repository, or a private "PowerShell Gallery," if you will. If you don't, set one up. It's relatively easy. Then, your DSC configurations can simply start by ensuring all dependent modules are installed from that repository. Anything relying on a module should contain a DependsOn, to make sure it runs after the dependent module is installed. This isn't difficult, and it's a more structured and reliable way to run your infrastructure.
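A configuration that handles the distribution might be sketched like this. PSModule is the DSC resource that ships with the PowerShellGet module; the module, resource, repository, and node names here are hypothetical, carried over from the earlier example.

```powershell
# A sketch: install the functional module from a private repository
# first, then apply a resource that depends on it.
Configuration SalesOrderEntry {
    Import-DscResource -ModuleName PowerShellGet
    Import-DscResource -ModuleName AppDscResources   # hypothetical

    Node 'APP01' {
        PSModule FunctionalModule {
            Name       = 'AppUserModule'
            Repository = 'InternalGallery'   # your private repository
            Ensure     = 'Present'
        }

        AppUser SalesClerk {
            UserName  = 'sclerk'
            Ensure    = 'Present'
            DependsOn = '[PSModule]FunctionalModule'
        }
    }
}
```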