Guest Blog: Henrique Rodrigues - Infrastructure Orchestration of Multiple Cloud Accounts in a Single Step


Too many accounts, are they all up to date?

Many organisations use multiple accounts in their cloud provider as a way to enforce another layer of separation between different sets of infrastructure. One example is separating development and production environments; another is separating the accounts of different customers. This usually results in multiple infrastructure runs to reach the desired state, which can leave those environments feeling disconnected from one another.

This might not be an issue for you, or you might use other tools to glue it all together. For everyone else, let's dive deeper into specific ways of achieving this with AWS and some well-known orchestration tools: Terraform, Ansible and CloudFormation. The same pattern can be applied to other tools and cloud providers, even mixing them.

The authentication problem

If you have multiple accounts, then you must have a mechanism to authenticate against all of them. Tools deal with this in different ways, which we'll cover further on.

On AWS, the two most common authentication patterns are having an IAM user per account, which doesn't scale very well, or using a hub-and-spoke model in which the user authenticates against a landing zone account and then assumes a role within a different account.

So far the easiest way to deal with this is to use AWS Control Tower when setting up a new AWS Organization. Not only does it take care of provisioning sub-accounts in an integrated way, but you can also manage all access out of the box via AWS SSO, which itself can integrate with other SSO systems such as Okta, G Suite or Azure AD.

For simplicity's sake, we'll focus on the hub-and-spoke model in the examples below and assume that you have configured your environment to be able to assume account roles via the landing zone account. This also simplifies CI/CD, since there's a single set of credentials to deal with. Change the assumed role names as you see fit.
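As a sketch, such a setup might look like the following in ~/.aws/config. The profile names, account IDs and role name below are placeholders; OrganizationAccountAccessRole is simply a common convention for cross-account roles.

```ini
# ~/.aws/config -- one profile per sub-account, each assuming a role
# using the landing zone account's credentials (values are placeholders)
[profile landing-zone]
region = eu-west-1

[profile dev]
role_arn = arn:aws:iam::111111111111:role/OrganizationAccountAccessRole
source_profile = landing-zone

[profile prod]
role_arn = arn:aws:iam::222222222222:role/OrganizationAccountAccessRole
source_profile = landing-zone
```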

Terraform
Let's say we have two different AWS accounts that we want to keep identical. For that purpose, we'll create a module named provision_account that creates an S3 bucket. Since S3 bucket names are globally unique, we'll append the account ID to the name of the bucket.
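A minimal sketch of such a module; the bucket name prefix is a placeholder:

```hcl
# modules/provision_account/main.tf -- a minimal sketch of the module

# Look up the account ID of whichever provider this module is given
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "this" {
  # S3 bucket names are global, so append the account ID to keep them unique
  bucket = "my-shared-bucket-${data.aws_caller_identity.current.account_id}"
}
```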

On the main Terraform code we'll define two providers, one for each account we want to manage:
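A sketch of what that might look like; the account IDs, region and role name are placeholders:

```hcl
# main.tf -- two aliased providers, each assuming a role in a different account

provider "aws" {
  alias  = "dev"
  region = "eu-west-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  alias  = "prod"
  region = "eu-west-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/OrganizationAccountAccessRole"
  }
}

# Instantiate the same module once per account, passing the right provider
module "dev_account" {
  source    = "./modules/provision_account"
  providers = { aws = aws.dev }
}

module "prod_account" {
  source    = "./modules/provision_account"
  providers = { aws = aws.prod }
}
```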

Running terraform plan will show you the changes across all accounts in a single step, and terraform apply will then create one S3 bucket in each account, as expected.

Not all accounts need to be the same. You can have different versions of the same module, different modules for different accounts or even not have any modules at all and simply create the target resources.

A consequence of managing multiple accounts this way is that you end up with a single state file. It's up to you to decide whether this is a good thing or a bad thing, depending on your Terraform experience and the size of your team.

Ansible
Continuing with the goal of creating two S3 buckets on two AWS accounts, let's first create a role named provision_account with the following tasks:
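A minimal sketch of those tasks, assuming the amazon.aws collection is installed (in older versions sts_assume_role lived in community.aws) and that the playbook supplies a variable named account_role_arn; the bucket name prefix is a placeholder:

```yaml
# roles/provision_account/tasks/main.yml -- a sketch, not a full role

- name: Assume the role in the target account
  amazon.aws.sts_assume_role:
    role_arn: "{{ account_role_arn }}"
    role_session_name: provision_account
  register: assumed_role

- name: Look up the target account's ID
  amazon.aws.aws_caller_info:
    access_key: "{{ assumed_role.sts_creds.access_key }}"
    secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    session_token: "{{ assumed_role.sts_creds.session_token }}"
  register: caller

- name: Create the S3 bucket, appending the account ID to the name
  amazon.aws.s3_bucket:
    name: "my-shared-bucket-{{ caller.account }}"
    state: present
    access_key: "{{ assumed_role.sts_creds.access_key }}"
    secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    session_token: "{{ assumed_role.sts_creds.session_token }}"
```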

The playbook can then contain the following:
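A sketch under the same assumptions; the role ARNs are placeholders:

```yaml
# site.yml -- loop over the target accounts' role ARNs with include_role
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Provision each account
      ansible.builtin.include_role:
        name: provision_account
      vars:
        account_role_arn: "{{ item }}"
      loop:
        - arn:aws:iam::111111111111:role/OrganizationAccountAccessRole
        - arn:aws:iam::222222222222:role/OrganizationAccountAccessRole
```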

Of course, there are many different ways to write this in Ansible; this is just one example.

CloudFormation
CloudFormation stacks are applied to a single account, but since February 2020 StackSets can deploy a CloudFormation template to multiple accounts across your AWS Organization using service-managed permissions.

In a hub-and-spoke account model, you create a StackSet in the hub account, define your CloudFormation template and select which accounts it should be deployed to. CloudFormation then creates a stack in each of the target accounts.
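The template itself can stay account-agnostic by using the AWS::AccountId pseudo parameter; as a sketch (the bucket name prefix is a placeholder):

```yaml
# template.yml -- deployed by the StackSet to every target account
Resources:
  SharedBucket:
    Type: AWS::S3::Bucket
    Properties:
      # S3 bucket names are global, so embed the target account's ID
      BucketName: !Sub "my-shared-bucket-${AWS::AccountId}"
```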

In order for this to work properly, you must have the correct roles created in the target accounts so that CloudFormation can assume them. If you're using AWS Control Tower, this is already done by default, since it's what AWS Control Tower itself uses to provision accounts.

Drift detection is possible on a StackSet, which gives you a unified view of the state of each account. It uses the same drift detection mechanism as individual CloudFormation stacks and takes a while to run. There are ways to run it periodically, but they might require a Lambda function to trigger it, which feels like an unfortunate afterthought. There is also no way to get the outputs of target stacks from the hub account, making cross-account dependencies impossible.

Still, even with all these caveats, if you're a pure CloudFormation user then StackSets can prove to be very useful.

Do we really want this?

This approach has both pros and cons. There's a simplicity in having a single provisioning step, but the risk of failure and the consequences thereof are also greater. The tools used here also have different capabilities that might mitigate or exacerbate failures.

In the end, this is just another pattern to enrich your toolkit.

This Guest Blog was written by Henrique Rodrigues, a DevOps Engineer at Lytt.
17 years ago his family held an intervention because he was "using Linux too much". It failed. Considering his main system was Gentoo running on PowerPC, they might have had a valid point, though.
Henrique recently spoke at our November DevOps Exchange event on the subject of Pulumi - you can check out his full talk in our event recap here. 

Don't forget to follow DevOps Exchange on LinkedIn and Twitter to keep up to date with all things DOX.