This example creates the following:
- a VPC and related resources including a NAT Gateway
- an EKS cluster with a managed node group
- a Kubernetes namespace for the Tailscale operator
- the Tailscale Kubernetes Operator deployed via Helm
- a high availability API server proxy
- The EKS cluster is configured with both public and private API server access for flexibility
- The Tailscale operator is deployed in a dedicated `tailscale` namespace
- The operator will create a Tailscale device for API server proxy access
- Any additional Tailscale resources (like ingress controllers) created by the operator will appear in your Tailnet
- Follow the Kubernetes Operator prerequisites.
- For the high availability API server proxy:
  - The configuration as-is currently only works on macOS or Linux clients. To run from other platforms, remove or comment out the `null_resource` provisioners that deploy `tailscale-api-server-ha-proxy.yaml`.
  - Requires the kubectl CLI and AWS CLI.
Follow the documentation to configure the Terraform providers:
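As a rough sketch of that provider wiring (an assumption for illustration — it presumes the cluster comes from the `terraform-aws-modules/eks` module and its usual outputs, which may differ from this example's actual configuration):

```hcl
provider "tailscale" {
  oauth_client_id     = var.tailscale_oauth_client_id
  oauth_client_secret = var.tailscale_oauth_client_secret
}

provider "helm" {
  kubernetes {
    # Assumes an EKS module named "eks" exposing these outputs.
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    exec {
      # Fetch short-lived credentials via the AWS CLI instead of a static token.
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}
```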
Create a `terraform.tfvars` file with your Tailscale OAuth credentials:

```hcl
tailscale_oauth_client_id     = "your-oauth-client-id"
tailscale_oauth_client_secret = "your-oauth-client-secret"
```

Initialize the working directory:

```sh
terraform init
```
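The two `terraform.tfvars` values need matching variable declarations. A sketch (assuming they live in a `variables.tf`, which this example may organize differently) that marks both as sensitive so they are redacted from plan output:

```hcl
variable "tailscale_oauth_client_id" {
  description = "Tailscale OAuth client ID used by the operator"
  type        = string
  sensitive   = true
}

variable "tailscale_oauth_client_secret" {
  description = "Tailscale OAuth client secret used by the operator"
  type        = string
  sensitive   = true
}
```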
Apply the configuration:

```sh
terraform apply
```

After deployment, configure kubectl to access your cluster:
```sh
aws eks update-kubeconfig --region $AWS_REGION --name $(terraform output -raw cluster_name)
```

Check that the Tailscale operator is running:
```sh
kubectl get pods -n tailscale
kubectl logs -n tailscale -l app.kubernetes.io/name=$(terraform output -raw operator_name)
```

Verify connectivity via the API server proxy
After deployment, configure kubectl to access your cluster using Tailscale:

```sh
tailscale configure kubeconfig $(terraform output -raw operator_name)
kubectl get pods -n tailscale
```

To clean up:

```sh
terraform destroy
# remove leftover Tailscale devices at https://login.tailscale.com/admin/machines and services at https://login.tailscale.com/admin/services
```

- The HA API server proxy is deployed using a Terraform `null_resource` instead of `kubernetes_manifest` due to a Terraform limitation that results in `cannot create REST client: no client config` errors on first run.
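A minimal sketch of that workaround (the resource and release names here are assumptions, not taken from this example): a `null_resource` with a `local-exec` provisioner applies the manifest with kubectl, sidestepping `kubernetes_manifest`'s need for a live client during planning.

```hcl
resource "null_resource" "api_server_ha_proxy" {
  # Re-run the provisioner whenever the manifest changes;
  # assumes the YAML sits alongside this configuration.
  triggers = {
    manifest_sha = filesha256("${path.module}/tailscale-api-server-ha-proxy.yaml")
  }

  provisioner "local-exec" {
    # kubectl is why this only works where the kubectl CLI is installed
    # (macOS/Linux as noted in the prerequisites).
    command = "kubectl apply -f ${path.module}/tailscale-api-server-ha-proxy.yaml"
  }

  # Assumed name of the Helm release that installs the operator.
  depends_on = [helm_release.tailscale_operator]
}
```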