Trigger policy
Create complex workflows using trigger policies
Frequently, your infrastructure consists of a number of projects (stacks, in Spacelift parlance) that are connected in some way: they either depend logically on one another, or must be deployed in a particular order for some other reason - for example, a rolling deploy across multiple regions.
Enter trigger policies. Trigger policies are evaluated at the end of each stack-blocking run (which includes tracked runs and tasks) and allow you to decide whether any tracked runs should be triggered. This is a very powerful feature, effectively turning Spacelift into a Turing machine.
Note that in order to support various use cases, this policy type is currently evaluated every time a blocking run reaches a terminal state - which includes states like canceled, discarded, and failed in addition to the more obvious finished. This allows for very interesting and complex workflows (e.g. automated retry logic), but please be aware of it when writing your own policies.
All runs triggered - directly or indirectly - by trigger policies as a result of the same initial run are grouped into a so-called workflow. In the trigger policy you can access all other runs in the same workflow as the currently finished run, regardless of their stack. This lets you coordinate executions of multiple stacks and build workflows that require multiple runs to finish before proceeding to the next stage (and triggering another stack).
This is the schema of the data input that each policy request will receive:
The purpose here is to create a complex workflow that spans multiple Stacks. We will want to trigger a predefined list of Stacks when a Run finishes successfully. Here's our first take:
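A sketch of such a policy in Rego might look like this. The stack IDs `stack-one` and `stack-two` are placeholders, and the input field names (`input.run.type`, `input.run.state`) follow the conventions used in Spacelift's policy examples:

```rego
package spacelift

# Trigger two hardcoded downstream stacks whenever a tracked run
# on the current stack finishes successfully.
trigger["stack-one"] { finished }
trigger["stack-two"] { finished }

finished {
  input.run.type == "TRACKED"
  input.run.state == "FINISHED"
}
```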
So how does this first take hold up?
Can we do better? Sure, we can even have stacks use labels to decide which types of runs or state changes they care about. Here's a mind-bending example:
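One way to sketch this subscription mechanism - note that the `on:` label format here is an illustrative assumption, not a Spacelift convention:

```rego
package spacelift

# Trigger every stack that "subscribes" to this state change with a
# label of the form on:<source-stack-id>=<run-state>,
# e.g. on:core-infra=FINISHED.
trigger[stack.id] {
  stack := input.stacks[_]
  label := stack.labels[_]
  label == concat("", ["on:", input.stack.id, "=", input.run.state])
}
```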
Here's another use case - sometimes Terraform or Pulumi deployments fail for a reason that has nothing to do with the code - think eventual consistency between various cloud subsystems, transient API errors etc. It would be great if you could restart the failed run. Oh, and let's make sure new runs are not created in a crazy loop - since policy-triggered runs trigger another policy evaluation:
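A hedged sketch of such a retry policy, assuming `triggered_by` is null for runs created by Git pushes and set for runs created by users or policies - which is what breaks the retry loop:

```rego
package spacelift

# Retrigger the same stack when a tracked run fails, but only if the
# failed run came from a Git push (triggered_by is null). Runs created
# by this very policy have triggered_by set, so they are not retried.
trigger[stack.id] {
  stack := input.stack
  input.run.type == "TRACKED"
  input.run.state == "FAILED"
  is_null(input.run.triggered_by)
}
```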
The diamond problem happens when your stacks and their dependencies form a shape like in the following diagram:
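In text form, the shape looks like this:

```
        Stack 1
       /       \
  Stack 2a   Stack 2b
       \       /
        Stack 3
```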
Which means that Stack 1 triggers both Stack 2a and 2b, and we only want to trigger Stack 3 when both predecessors finish. This can be elegantly solved using workflows.
First we'll have to create a trigger policy for Stack 1:
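A minimal sketch of that fan-out policy, using the stack IDs from the diagram as placeholders:

```rego
package spacelift

# Fan out: a successful tracked run on Stack 1 triggers both
# second-level stacks.
trigger["stack-2a"] { finished }
trigger["stack-2b"] { finished }

finished {
  input.run.type == "TRACKED"
  input.run.state == "FINISHED"
}
```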
This will trigger both Stack 2a and 2b whenever a run finishes on Stack 1.
Now onto a trigger policy for Stack 2a and 2b:
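A sketch of the fan-in policy, assuming `input.workflow` lists the other runs in the current workflow with `stack_id` and `state` fields:

```rego
package spacelift

# Fan in: trigger Stack 3 only when both predecessors have a
# successfully finished run in the current workflow.
trigger["stack-3"] {
  input.run.state == "FINISHED"
  stack_finished("stack-2a")
  stack_finished("stack-2b")
}

# The stack the current run just finished on counts as finished...
stack_finished(id) {
  input.stack.id == id
}

# ...and so does any stack with a finished run in this workflow.
stack_finished(id) {
  run := input.workflow[_]
  run.stack_id == id
  run.state == "FINISHED"
}
```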
Here we trigger Stack 3 whenever the runs on Stack 2a and Stack 2b have both finished.
You can also easily extend this to a label-based approach, so that you define Stack 3's dependencies by attaching a `depends-on:stack-2a,stack-2b` label to it:
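A sketch of that generalization, under the same assumption about the shape of `input.workflow` - the `depends-on:` label is parsed into a list of stack IDs, all of which must have finished:

```rego
package spacelift

# Read each stack's dependencies from a label of the form
#   depends-on:stack-2a,stack-2b
# and trigger it once every dependency has finished in this workflow.
trigger[stack.id] {
  input.run.state == "FINISHED"
  stack := input.stacks[_]
  label := stack.labels[_]
  startswith(label, "depends-on:")
  dependencies := split(trim_prefix(label, "depends-on:"), ",")
  all_finished(dependencies)
}

all_finished(ids) {
  finished := {id | id := ids[_]; stack_finished(id)}
  count(finished) == count(ids)
}

stack_finished(id) { input.stack.id == id }

stack_finished(id) {
  run := input.workflow[_]
  run.stack_id == id
  run.state == "FINISHED"
}
```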
Since trigger policies turn Spacelift into a Turing machine, you could probably use them to implement Conway's Game of Life, but there are a few more obvious use cases. Let's have a look at two of them: interdependent stacks and automated retries.
But this first take is far from ideal. There's no guarantee that stacks with these IDs still exist in the account - Spacelift will handle that just fine, but you'll likely find it confusing. Also, for any new stack that appears, you will need to explicitly add it to the list. That's annoying.
We can do better, and to do that, we'll use stack labels. Labels are completely arbitrary strings that you can attach to individual stacks, and we can use them to do something magical: have "client" stacks "subscribe" to "parent" ones.
The benefit of this label-based policy is that you can attach it to all your stacks, and it will just work for your entire organization.
Now, how cool is that?