Authenticating


The Kubernetes integration relies on kubectl's native authentication to connect to your cluster. You can use the $KUBECONFIG environment variable to find the location of the Kubernetes configuration file and to configure any credentials required.

You should perform any custom authentication as part of a before init hook to make sure that kubectl is configured correctly before any commands are run.
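
As a minimal sketch, such a hook could assemble the workspace kubeconfig directly with kubectl. The $CLUSTER_ENDPOINT and $CLUSTER_TOKEN variables (and the cluster, user, and context names) are hypothetical stand-ins for your own credential source:

# Write the cluster, credentials, and context into the kubeconfig at $KUBECONFIG
kubectl config set-cluster my-cluster --server="$CLUSTER_ENDPOINT"
kubectl config set-credentials my-user --token="$CLUSTER_TOKEN"
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context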

The following sections provide examples of how to configure the integration manually, as well as with cloud-specific tooling.

Manual Configuration

Manual configuration allows you to connect to any Kubernetes cluster accessible by your Spacelift workers, regardless of whether your cluster is on-prem or hosted by a cloud provider. The Kubernetes integration automatically sets the $KUBECONFIG environment variable to point at /mnt/workspace/.kube/config, giving you a number of options:

  • You can use a before init hook to create a kubeconfig file, or to download it from a trusted location, as shown in the sketch after this list.

  • You can use a mounted file to mount a pre-prepared config file into your workspace at /mnt/workspace/.kube/config.

Please refer to the Kubernetes documentation for more information on configuring kubectl.
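
For example, a before init hook could download a pre-prepared kubeconfig from a trusted location; the $KUBECONFIG_URL variable here is hypothetical:

# Fetch a pre-prepared kubeconfig into the location the integration expects
curl -fsSL -o /mnt/workspace/.kube/config "$KUBECONFIG_URL"
chmod 600 /mnt/workspace/.kube/config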

AWS

The simplest way to connect to an AWS EKS cluster is using the AWS CLI tool. To do this, add the following before init hook to your Stack:

aws eks update-kubeconfig --region $REGION_NAME --name $CLUSTER_NAME
  • The $REGION_NAME and $CLUSTER_NAME environment variables must be defined in your Stack's environment.

This relies on either using the Spacelift AWS Integration, or ensuring that your workers have permission to access the EKS cluster.
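
If you rely on worker permissions rather than the AWS Integration, the worker's IAM role also needs to be mapped inside the cluster. As a sketch, assuming you manage the cluster with eksctl (the ARN, names, and region below are placeholders):

# Map the worker's IAM role to a Kubernetes group via the aws-auth ConfigMap.
# system:masters grants full admin; scope the group down in practice.
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::123456789012:role/spacelift-worker-role \
  --username spacelift-worker \
  --group system:masters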

Azure

The simplest way to connect to an AKS cluster in Azure is to use the Azure CLI to automatically add credentials to your kubeconfig. To do this, your stack needs to use a custom runner image with the Azure CLI and kubelogin installed, and needs to run some before init hooks to authenticate with your cluster. Depending on your exact use case, you may need slightly different commands; this guide outlines two main scenarios.

Please note that both examples assume your stack has $AKS_CLUSTER_NAME and $AKS_RESOURCE_GROUP environment variables configured, containing the name of the AKS cluster and the resource group of the cluster respectively.

Using the Spacelift Azure Integration

When using our Azure integration, you can use the computed $ARM_* environment variables to log in as the integration's Service Principal:

# Log in as the integration's Service Principal and fetch AKS credentials
az login --service-principal -u "$ARM_CLIENT_ID" -t "$ARM_TENANT_ID" -p "$ARM_CLIENT_SECRET"
az aks get-credentials --name "$AKS_CLUSTER_NAME" --resource-group "$AKS_RESOURCE_GROUP"
# Convert the kubeconfig to authenticate via kubelogin's Service Principal login
kubelogin convert-kubeconfig -l spn
# kubelogin reads the Service Principal credentials from these variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID="$ARM_CLIENT_ID"
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET="$ARM_CLIENT_SECRET"

Using private workers with Managed Identities

When using private workers with a managed identity, you can use the identity of that worker to log in:

# Log in with the worker's managed identity and fetch AKS credentials
az login --identity
az aks get-credentials --name "$AKS_CLUSTER_NAME" --resource-group "$AKS_RESOURCE_GROUP"
# Convert the kubeconfig to authenticate via kubelogin's managed identity (MSI) login
kubelogin convert-kubeconfig -l msi

GCP

You can use the gcloud CLI to authenticate with a GKE cluster when using the Spacelift GCP integration, via the gcloud container clusters get-credentials command. For this to work, you need to use a custom runner image that has the gcloud CLI and kubectl installed.

The Spacelift GCP integration automatically generates an access token for your GCP service account, and this token can be used for getting your cluster credentials as well as accessing the cluster. To do this, add the following before init hooks to your Stack:

# Output the token into a temporary file, use gcloud to get
# the cluster credentials, then remove the tmp file
echo "$GOOGLE_OAUTH_ACCESS_TOKEN" > /mnt/workspace/gcloud-access-token
gcloud container clusters get-credentials $GKE_CLUSTER_NAME \
  --region $GKE_CLUSTER_REGION \
  --project $GCP_PROJECT_NAME \
  --access-token-file /mnt/workspace/gcloud-access-token
rm /mnt/workspace/gcloud-access-token

# Remove and re-create the user, using the automatically generated access token
kubectl config delete-user $(kubectl config current-context)
kubectl config set-credentials $(kubectl config current-context) --token=$GOOGLE_OAUTH_ACCESS_TOKEN

Please note that your Stack needs the following environment variables set for this script to work:

  • GKE_CLUSTER_NAME - the name of your cluster.

  • GKE_CLUSTER_REGION - the region the cluster is deployed to.

  • GCP_PROJECT_NAME - the name of your GCP project.

The get-credentials command configures your kubeconfig file to use the gcloud config config-helper command for token refresh. Unfortunately, that helper will not work when only an access token is available. The script above works around this by manually removing and re-creating the user entry in the config file.
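
If you want authentication problems to surface before any Terraform commands run, a before init hook could end with a cheap read-only call against the cluster (a sketch; any inexpensive API call works):

# Sanity check: confirm the re-created credentials are accepted by the cluster
kubectl get nodes --request-timeout=15s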

Single Zone Deployment

If your cluster is deployed to a single zone, you can use the --zone flag instead of the --region flag in the gcloud container clusters get-credentials command:

gcloud container clusters get-credentials $GKE_CLUSTER_NAME \
  --zone $GKE_CLUSTER_ZONE \
  --project $GCP_PROJECT_NAME \
  --access-token-file /mnt/workspace/gcloud-access-token
