Pros & Cons: IaC Modules

This post explores the pros and cons of designing and using Infrastructure-as-Code (IaC) modules.

Why Modules?

Anyone who has done even a small amount of template development knows that ARM templates can rapidly become very large. Even a simple deployment, such as a hub-and-spoke network architecture, can grow to several hundred lines with little effort. When Microsoft first published the Cloud Adoption Framework "Enterprise Scale" example architecture, one of the ARM/JSON files contained approximately 20,000 lines!

The length of a template file can generate a variety of problems, including but not limited to:

  • It gets difficult to find anything.
  • Big code becomes hard to update; one change can have many unexpected consequences.
  • Collaboration becomes almost impossible.
  • Agility is lost.

One of my colleagues was particularly irritated by the fact that "big code" frequently devolves into non-standardised code; this becomes a major issue when a "service organisation" supports various customers.

Modularisation

The principle behind modularisation is that commonly used code is written only once, as a module. When the module's functionality is required, other code references it. This is not a new concept; "include" files and DLLs have a long history in computing.

For instance, I can build a Bicep/ARM/Terraform module for an Azure App Service. My module can deploy App Services in the manner that I consider appropriate for my "clients" and coworkers. It may even incorporate some governance, such as a naming standard, by automating the naming of the new resource based on a predetermined naming pattern. Any resource customisations are passed in as parameters, while any values required for inter-module dependencies can be passed out as outputs.
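As a sketch of that idea in Bicep (the file name, parameter names, naming pattern, and API version here are my own illustrative assumptions, not a definitive module):

    // appService.bicep: a hypothetical App Service module (a sketch, not a definitive implementation).

    @description('Short workload name used to build the resource name, e.g. "sales".')
    param workloadName string

    @description('Environment code used in the resource name, e.g. "dev" or "prd".')
    param environment string

    param location string = resourceGroup().location

    @description('Resource ID of an existing App Service plan.')
    param appServicePlanId string

    // Governance baked in: the resource name follows a predetermined pattern.
    var appServiceName = 'app-${workloadName}-${environment}'

    resource appService 'Microsoft.Web/sites@2022-09-01' = {
      name: appServiceName
      location: location
      properties: {
        serverFarmId: appServicePlanId
        httpsOnly: true
      }
    }

    // Outputs feed inter-module dependencies in the calling code.
    output appServiceId string = appService.id
    output defaultHostName string = appService.properties.defaultHostName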

I can quickly create a library of modules, each deploying a distinct resource type. All I need then is code to call the modules, model dependencies, pass in parameters, and take outputs from one module and use them as parameters in others.
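A calling file might look something like this sketch, assuming the hypothetical module file names and parameters from the example above:

    // main.bicep: a hypothetical orchestration file that composes modules from the library.

    param location string = resourceGroup().location

    module plan 'modules/appServicePlan.bicep' = {
      name: 'appServicePlan'
      params: {
        workloadName: 'sales'
        environment: 'dev'
        location: location
      }
    }

    module app 'modules/appService.bicep' = {
      name: 'appService'
      params: {
        workloadName: 'sales'
        environment: 'dev'
        location: location
        // The output of one module becomes a parameter of another.
        appServicePlanId: plan.outputs.planId
      }
    }

Bicep infers the deployment order from the plan.outputs reference, so the dependency does not need to be declared separately.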

The Benefits

The advantages emerge quickly:

  • You write less code because it is written once and then reused.
  • The code is standardised. You can switch between workloads or clients and still understand how the code works.
  • Governance is integrated into the code. Naming conventions, for example, are implemented as code rather than applied by hand.
  • You can take advantage of newer Azure capabilities, such as Template Specs (see the sketch after this list).
  • Smaller code is easier to troubleshoot.
  • Organising your code into smaller sections facilitates collaboration.
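On the Template Specs point, a module library could be published as Template Specs and then consumed directly from Bicep using the ts: reference scheme. This is only a sketch; the subscription ID, resource group, spec name, version, and parameters are placeholders:

    // Consuming a module that has been published as a Template Spec.
    // The reference format is ts:<subscriptionId>/<resourceGroup>/<templateSpecName>:<version>.

    module app 'ts:00000000-0000-0000-0000-000000000000/rg-templatespecs/appService:1.0' = {
      name: 'appServiceFromSpec'
      params: {
        workloadName: 'sales'
        environment: 'dev'
        // ...plus whatever other parameters the published module requires.
      }
    }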

The Issues

Most of the concerns stem from the fact that you have now created a software product that must be versioned and maintained. Few people outside the development world have the knowledge to achieve this. And, quite simply, the work is time-consuming and takes away from the tasks that we should be doing.

  • Regardless of how effectively you write a module, it will always require updates. There is always a new feature or a previously unforeseen use case that necessitates additional code in the module.
  • New code means new versions. No matter how well you plan, new versions will alter how parameters are used and cause breaking changes for some or all existing consumers of the module.
  • Trying to design a one-size-fits-all module is difficult. Azure App Services are an excellent example because they offer dozens, if not hundreds, of distinct configuration possibilities. Your code will become longer.
  • That flexibility also adds complexity and length. Many values, including null, need some form of handling, and you'll quickly have if-then-else logic all over your code (see the sketch after this list).
  • You will need to develop a code release and versioning mechanism, and then maintain it. These are skills that most Ops people lack.
  • Changes to the code will now be slower. If a project requires a previously unwritten module or feature, the new code cannot be used until it has gone through the software release process. You've now lost one of the Cloud's primary benefits: agility.
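To illustrate the conditional sprawl mentioned above, here is a sketch of how a "one-size-fits-all" module starts to look once optional values need handling; the optional parameters and properties are invented for illustration:

    // A flavour of the conditional logic that creeps into a "generic" module.

    @description('Optional resource ID of a subnet for VNet integration; leave empty to skip it.')
    param vnetIntegrationSubnetId string = ''

    @description('Optional explicit name; leave empty to fall back to the naming convention.')
    param overrideName string = ''

    param workloadName string
    param environment string
    param location string = resourceGroup().location
    param appServicePlanId string

    // Every optional value needs an empty/null check somewhere.
    var appServiceName = empty(overrideName) ? 'app-${workloadName}-${environment}' : overrideName

    resource appService 'Microsoft.Web/sites@2022-09-01' = {
      name: appServiceName
      location: location
      properties: {
        serverFarmId: appServicePlanId
        httpsOnly: true
        virtualNetworkSubnetId: empty(vnetIntegrationSubnetId) ? null : vnetIntegrationSubnetId
      }
    }

Multiply that by dozens of optional settings and the module soon rivals the "big code" it was meant to replace.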

So, What Is Correct?

The answer is that I do not know. I understand that "big code" without some optimisation is not the way ahead. I believe that the form of micro-modularisation (one module per resource type) that is commonly associated with "IaC Modules" does not work either.

One of the reasons I've been working on and blogging about Bicep/Azure Firewall/DevSecOps is to experiment with concepts like modularisation. I'm starting to believe that, while the modularisation notion is necessary, the way we've done it is incorrect.

My main problem with the micro-module method is that it slowed me down. I ended up spending more time trying to get the modules to work properly than I would have spent writing the code myself.

A module could still be a smaller piece of code, but it should not be read-only. Perhaps it should be an example that I can take and adapt to my own needs. That's the approach I took in my DevSecOps project: my Bicep code is divided into smaller files, each handling a subset of the tasks. A "cloud centre of excellence" could simply publish that code as a reference library, and a "standard workload" repo could be made available as a starting point for new projects.
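To make that idea concrete, a layout along these lines could work; every name below is invented for illustration rather than taken from my actual repos:

    reference-library/          published and maintained by the cloud centre of excellence
      network/
        hubNetwork.bicep
        spokeNetwork.bicep
      firewall/
        azureFirewall.bicep
      appService/
        appService.bicep

    standard-workload/          copied as the starting point for each new project
      main.bicep                calls the smaller files and models the dependencies
      network.bicep
      appService.bicep
      dev.parameters.json
      prd.parameters.json

The point is that the project team copies and adapts the files rather than consuming them as locked, versioned artefacts.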