External modules

Those of our customers who are not yet using our private module registry may want to pull modules from various external sources supported by Terraform. This article discusses a few of the most popular types of module sources and how to use them in Spacelift.

Cloud storage

The easiest ones to handle are cloud sources - S3 and GCS buckets. Access to these can be granted using our AWS and GCP integrations - or, if you're using private Spacelift workers hosted on either of these clouds, you may not require any authentication at all!
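
For instance, a module packaged as an archive in an S3 or GCS bucket can be referenced directly from your Terraform code - the bucket names and object paths below are just placeholders:

module "vpc" {
  # Module package stored in an S3 bucket (placeholder bucket and key)
  source = "s3::https://s3-eu-west-1.amazonaws.com/examplecorp-terraform-modules/vpc.zip"
}

module "network" {
  # Module package stored in a GCS bucket (placeholder object path)
  source = "gcs::https://www.googleapis.com/storage/v1/modules/foomodule.zip"
}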

Git repositories

Git is by far the most popular external module source. This example will focus on GitHub as the most popular one, but the advice applies to other VCS providers as well. In general, Terraform retrieves Git-based modules using one of the two supported transports - HTTPS or SSH. Assuming your repository is private, you will need to give Spacelift the credentials required to access it.

Using HTTPS

Git with HTTPS is slightly simpler than SSH - all you need is a personal access token, and you need to make sure that it ends up in the ~/.netrc file, which Terraform will use to log in to the host that stores your source code.

Assuming you already have a token you can use, create a file like this:

machine github.com
login $yourLogin
password $yourToken

Then, upload this file to your stack's Spacelift environment as a mounted file. In this example, we called that file github.netrc. Add the following commands as "before init" hooks to append the content of this file to the ~/.netrc proper:

cat /mnt/workspace/github.netrc >> ~/.netrc
chmod 600 ~/.netrc
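
With the token appended to ~/.netrc, private module sources that use the HTTPS transport should check out cleanly - for example (the organization, repository, subdirectory and tag below are placeholders):

module "vpc" {
  # Private module fetched over HTTPS using the token from ~/.netrc
  source = "git::https://github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"
}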

Using SSH

Using SSH isn't much more complex, but it requires a bit more preparation. Once you have a public-private key pair (whether it's a personal SSH key or a single-repo deploy key), you will need to pass it to Spacelift and make sure it's used to access your VCS provider. Once again, we're going to use the mounted file functionality to pass the private key called id_ed25519 to your stack's environment. Then, add the following commands as "before init" hooks to "teach" our SSH agent to use this key for GitHub:

mkdir -p ~/.ssh
cp /mnt/workspace/id_ed25519 ~/.ssh/id_ed25519
chmod 400 ~/.ssh/id_ed25519
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts

The above example warrants a little explanation. First, we're making sure that the ~/.ssh directory exists - otherwise, we won't be able to put anything in there. Then we copy the private key file mounted in our workspace to the SSH configuration directory and give it proper permissions. Last but not least, we're using the ssh-keyscan utility to retrieve the public SSH host key for github.com and add it to the list of known hosts - this will avoid your code checkout failing due to what would otherwise be an interactive prompt asking you whether to trust that key.
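
With the key and known_hosts entry in place, module sources using the SSH transport should now work - for example (again, the organization, repository, subdirectory and tag are placeholders):

module "vpc" {
  # Private module fetched over SSH using the mounted key
  source = "git::ssh://git@github.com/example-org/terraform-modules.git//vpc?ref=v1.2.0"
}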

Dedicated third-party registries

For users storing their modules in dedicated external private registries, like Terraform Cloud's one, you will need to supply credentials in the .terraformrc file - this approach is documented in the official Terraform documentation.

In order to facilitate that, we've introduced a special mechanism for extending the CLI configuration that does not even require using before_init hooks. You can read more about it on the CLI Configuration page.
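
For reference, a minimal credentials block in .terraformrc for Terraform Cloud's registry looks like this - the token is, of course, a placeholder:

credentials "app.terraform.io" {
  # Placeholder token - generate a real one in your registry
  token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
}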

To mount or not to mount?

That is the question. And there isn't a single right answer. Instead, there is a list of questions to consider. By mounting a file, you're giving us access to its content. No, we're not going to read it, and yes, we have it encrypted using a fancy multi-layered mechanism, but still - we have it. So the main question is how sensitive the credentials are. Read-only deploy keys are probably the least sensitive - they only give read access to a single repository, so these are the ones where convenience may outweigh other concerns. On the other hand, personal access tokens may be pretty powerful, even if you generate them from service users. The same thing goes for personal SSH keys. Guard these well.

So if you don't want to mount these credentials, what are your options? First, you can put these credentials directly into your private runner image. But that means that anyone in your organization who uses the private runner image gets access to your credentials - and that may or may not be what you wanted.

The other option is to store the credentials externally in one of the secrets stores - like AWS Secrets Manager or HashiCorp Vault - and retrieve them in one of your before_init scripts before putting them in the right place (~/.netrc file, ~/.ssh directory, etc.).

If you decide to mount, we advise that you store credentials in contexts and attach these to the stacks that need them. This way you can avoid credential sprawl and leaks.
