How Terraform Modules Work¶
Creating a Terraform module involves defining input variables, output values, and one or more resource definitions that can be reused in other Terraform configurations.
A Terraform module should be specific to the resource it creates. By keeping modules narrowly scoped, you can more easily combine them in a configuration file that builds the infrastructure you want. If you make a module too broad in scope, you risk creating more resources than you actually need at run time. Of course, more modules mean more files and potentially more management overhead, but on the other hand, having fewer lines of code to read, understand, and troubleshoot can pay dividends.
The file structure of a module¶
A Terraform module has the following files. They could all live in a single file, because when Terraform executes it simply loads the content of every file in the directory and runs them as one big configuration. But to make life easier and follow good practice, creating a file dedicated to each function is cleaner and easier to work with; as you can see below, these files can become quite large even for relatively simple resources like an Azure Resource Group.
The files must exist in the same working directory, and Terraform will not traverse a folder structure.
main.tf - where you define the resource the module creates
variables.tf - where you define any input variables required at runtime
outputs.tf - where you define any output variables that can be used at runtime
readme.md - not essential at runtime, but good practice to explain how your module works
What each file does¶
As mentioned earlier, each file serves a function, and the contents of each file should be primarily dedicated to that function. I say primarily because sometimes there is a case for not creating a separate locals.tf file if you only need a couple of lines to cover your needs; on the other hand, if you find it easier to manage the code by separating locals and resource blocks into locals.tf and main.tf, that is up to you. There are few hard and fast rules, which is why modules that create the same resources can look quite different from each other. So long as the code works, does not introduce security vulnerabilities, and is reusable by others, the code is good.
main.tf - resource code¶
You define your infrastructure and supporting resources in this file. In the example code, a supporting resource block called resource "random_id" generates a random string used as part of the storage account name, and the resource block resource "azurerm_storage_account" describes how the storage account is created.
To make the module as reusable as possible, avoid hard-coding parameter values here and instead use input variables. This allows settings to be defined at runtime, depending on the needs of the project. The place where you do want to hard-code parameter values is for settings that should not be a matter of choice, such as security-sensitive values. For example, setting the minimum TLS version to 1.2 could be a security requirement, and that is not something users should be able to change.
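As a sketch of what this could look like, a minimal main.tf along these lines might be (the variable names here are illustrative assumptions, not taken from the original module):

```hcl
# main.tf - minimal sketch of the storage account module.
# Variable names (storage_account_name, resource_group_name, location, etc.)
# are assumptions for illustration.

resource "random_id" "suffix" {
  byte_length = 4
}

resource "azurerm_storage_account" "this" {
  name                     = "${var.storage_account_name}${random_id.suffix.hex}"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = var.account_tier
  account_replication_type = var.account_replication_type

  # Hard-coded deliberately: a security requirement, not a runtime choice.
  min_tls_version = "TLS1_2"

  tags = var.tags
}
```

Note how everything a project might legitimately want to vary is a var.* reference, while the security-sensitive TLS setting is fixed in the module.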
variables.tf - input parameter values¶
Each parameter value defined as a variable in main.tf is described in this file. This file can appear complex, but in its basic form it's a key: value pair structure. The variable name is the key, and whatever data is provided at runtime is the value.
The complexity comes from the description and validation blocks. While these are not technical necessities, it is good practice to document the variable choices and control what values are allowed. For example, you can define validation for the environment tag by limiting the accepted values to prod, test, and dev, resulting in improved governance of your cloud resources.
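A minimal variables.tf illustrating the description and validation blocks might look like this (the variable names and default are assumptions for illustration):

```hcl
# variables.tf - illustrative input variables with a validation block.

variable "location" {
  type        = string
  description = "Azure region in which to create the storage account."
  default     = "uksouth" # placeholder default, an assumption
}

variable "environment" {
  type        = string
  description = "Environment tag applied to the resource."

  validation {
    condition     = contains(["prod", "test", "dev"], var.environment)
    error_message = "The environment value must be one of: prod, test, dev."
  }
}
```

If a caller supplies environment = "staging", terraform plan fails with the error message above, which is how the validation block enforces governance.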
outputs.tf - output parameter values¶
You would define outputs for two main reasons. Firstly, to enhance the user experience: you can choose to expose, for example, the IP address or URL of a resource when it is built, so the user does not need to hunt for the value in the portal. Secondly, these output values can be used downstream in your project as input variables when building additional resources. Say you create a VM that stores sensitive data in a key vault. In your project's configuration code you would define two modules: first you create the key vault and output its resource ID, which is then used by the VM module.
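As a sketch, an outputs.tf for the storage account module could expose values like these (the output names and the resource address are illustrative assumptions):

```hcl
# outputs.tf - expose values for the user and for downstream modules.

output "storage_account_id" {
  description = "Resource ID of the storage account, usable as input to other modules."
  value       = azurerm_storage_account.this.id
}

output "primary_blob_endpoint" {
  description = "Blob endpoint URL, surfaced so the user need not hunt for it in the portal."
  value       = azurerm_storage_account.this.primary_blob_endpoint
}
```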
readme.md - technical documentation¶
This file is a nice-to-have but highly recommended: module code is, by nature, meant for sharing amongst multiple people, and documenting how the code functions and what is required to create a resource successfully is essential. If you want to save time, automation tools such as terraform-docs can generate the documentation from the code itself.
An example of how the modules are used¶
Let's say your module library is stored in a local folder repository and the user executing the code has access to both directories. The tree view below shows the module and project directories plus the files within.
+---modules
| +---azure_resource_group
| | main.tf
| | outputs.tf
| | readme.md
| | variables.tf
| |
| \---azure_storage_account
| main.tf
| outputs.tf
| readme.md
| variables.tf
|
\---projects
\---project00
locals.tf
providers.tf
resource_group.tf
storage_account.tf
Each module directory is configured as described earlier. In the projects directory you create a working directory called project00, where you define your resource requirements in files named locals.tf, providers.tf, resource_group.tf, and storage_account.tf. Each of these files plays a specific role in the project build.
Info
It is not the only way of doing this, but I prefer it for some use cases as I can maintain a single project working directory and have multiple Terraform files executed "as one". So I can have one file called resource_group.tf and one called storage_account.tf that work with each other when created and updated, making for a unified infrastructure deployment. If you choose to separate your resources into folders, then each resource becomes distinct and, in my opinion, more challenging to manage. But again, there are few hard rules; choose whichever method works best for you.
locals.tf - common values¶
A handy way of defining values used by all the resources. Locals are often used to define tag values or resource names: you write them once here, and each resource then refers to them, such as local.prefix.
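A minimal locals.tf along these lines might be (all the values here are placeholder assumptions):

```hcl
# locals.tf - values shared by every resource in the project.
# All values are placeholders for illustration.

locals {
  prefix   = "proj00"
  location = "uksouth"

  tags = {
    environment = "dev"
    project     = "project00"
  }
}
```

Every other file in the project can then reference local.prefix, local.location, and local.tags instead of repeating literal strings.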
providers.tf - provider configuration¶
You need to inform Terraform which providers it should use to execute the files; in this example we're using Azure. This is also where you can define any authentication parameters, but this example relies on local environment variables, so nothing is needed in this file, including the target Azure subscription.
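A sketch of such a providers.tf, assuming the azurerm and random providers and authentication via local environment variables:

```hcl
# providers.tf - declare the providers Terraform needs.
# Version constraints are illustrative assumptions.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

# The empty features block is required by the azurerm provider.
# Authentication and the target subscription come from local
# environment variables (e.g. ARM_SUBSCRIPTION_ID), so nothing
# else is configured here.
provider "azurerm" {
  features {}
}
```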
resource_group.tf - describes the resource group¶
A simple resource that requires no direct editing, as it only needs the values from the locals.tf file as input variables. But it produces outputs that are handy for the remaining resources to call upon when they are created, as in Azure every resource requires a resource group. Coding the module for each resource to take the output of the resource group created in the same project working directory means you never need to define it manually at runtime. The essence of automation!
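A hypothetical resource_group.tf calling the module from the library could look like this (the module's input names and the relative source path are assumptions about the library layout):

```hcl
# resource_group.tf - call the resource group module using only locals.
# Input names and the source path are illustrative assumptions.

module "resource_group" {
  source = "../../modules/azure_resource_group"

  name     = "${local.prefix}-rg"
  location = local.location
  tags     = local.tags
}
```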
storage_account.tf - describes the storage account¶
A relatively complex resource when you look at the module code, but at run time you are required to provide just three input variables yourself; all the others are taken from locals.tf or from the resource_group.tf outputs, such as resource_group_name = module.resource_group.resource_group_name.
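Putting that together, a sketch of storage_account.tf might be (the three directly supplied values and the module's output names are illustrative assumptions):

```hcl
# storage_account.tf - chain the resource group outputs into the next module.
# Input and output names are assumptions for illustration.

module "storage_account" {
  source = "../../modules/azure_storage_account"

  # Taken from the resource_group module's outputs, so nothing
  # needs to be typed in manually at runtime.
  resource_group_name = module.resource_group.resource_group_name
  location            = module.resource_group.location

  # The three values supplied directly by the project.
  storage_account_name     = "${local.prefix}st"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = local.tags
}
```

Because the resource group's name flows in as a module output, creating or renaming the resource group automatically propagates to the storage account on the next apply.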
Warning
This code does not go into authentication, as that's not the document's goal. Nor does it go into state file storage, for the same reason.
Build the resources¶
To execute the code, navigate to the project working directory project00 and run the standard Terraform commands:
terraform init
terraform plan
terraform apply