This example creates the following:
- a VPC and related resources including a NAT Gateway
- an EKS cluster with a managed node group
- a Kubernetes namespace for the Tailscale operator
- the Tailscale Kubernetes Operator deployed via Helm
- The EKS cluster is configured with both public and private API server access for flexibility
- The Tailscale operator is deployed in a dedicated `tailscale` namespace
- The operator will create a Tailscale device for API server proxy access
- Any additional Tailscale resources (like ingress controllers) created by the operator will appear in your tailnet
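The Helm deployment described above could be wired up in Terraform roughly as follows. This is a sketch, not this example's actual code: the chart repository, chart name, and `oauth.*` value keys are assumptions based on Tailscale's published Helm chart.

```hcl
# Sketch: namespace plus Helm release for the Tailscale operator.
# Resource names, chart details, and value keys are assumptions.
resource "kubernetes_namespace" "tailscale" {
  metadata {
    name = "tailscale"
  }
}

resource "helm_release" "tailscale_operator" {
  name       = "tailscale-operator"
  repository = "https://pkgs.tailscale.com/helmcharts"
  chart      = "tailscale-operator"
  namespace  = kubernetes_namespace.tailscale.metadata[0].name

  # Pass the OAuth client credentials without writing them to state in plain view.
  set_sensitive {
    name  = "oauth.clientId"
    value = var.tailscale_oauth_client_id
  }
  set_sensitive {
    name  = "oauth.clientSecret"
    value = var.tailscale_oauth_client_secret
  }
}
```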
- Create a Tailscale OAuth Client with appropriate scopes
- Ensure you have AWS CLI configured with appropriate permissions for EKS
- Install `kubectl` for cluster access after deployment
Follow the documentation to configure the Terraform providers:
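A minimal sketch of that provider wiring might look like the following; the variable names match the `terraform.tfvars` entries below, while the rest is an assumption about how this example is structured.

```hcl
# Sketch: variables and Tailscale provider configuration (illustrative).
variable "tailscale_oauth_client_id" {
  type      = string
  sensitive = true
}

variable "tailscale_oauth_client_secret" {
  type      = string
  sensitive = true
}

provider "tailscale" {
  oauth_client_id     = var.tailscale_oauth_client_id
  oauth_client_secret = var.tailscale_oauth_client_secret
}
```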
Create a terraform.tfvars file with your Tailscale OAuth credentials:
```hcl
tailscale_oauth_client_id     = "your-oauth-client-id"
tailscale_oauth_client_secret = "your-oauth-client-secret"
```

Then initialize and apply:

```shell
terraform init
terraform apply
```

After deployment, configure kubectl to access your cluster:

```shell
aws eks update-kubeconfig --region $AWS_REGION --name $(terraform output -raw cluster_name)
```

Check that the Tailscale operator is running:
```shell
kubectl get pods -n tailscale
kubectl logs -n tailscale -l app.kubernetes.io/name=tailscale-operator
```

Verify connectivity via the API server proxy.
After deployment, configure kubectl to access your cluster using Tailscale:

```shell
tailscale configure kubeconfig $(terraform output -raw operator_name)
kubectl get pods -n tailscale
```

To tear down the example:

```shell
terraform destroy
# remove leftover Tailscale devices at https://login.tailscale.com/admin/machines and services at https://login.tailscale.com/admin/services
```

- The HA API server proxy is deployed using a Terraform `null_resource` instead of `kubernetes_manifest` due to a Terraform limitation that results in `cannot create REST client: no client config` errors on the first run.
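A `null_resource` workaround of the kind described might look roughly like this; the manifest path, trigger, and resource name are illustrative assumptions, not this example's actual code.

```hcl
# Sketch: apply the HA API server proxy manifest with kubectl via local-exec,
# sidestepping kubernetes_manifest, which needs a reachable cluster at plan
# time and otherwise fails with "cannot create REST client: no client config".
# The file path and trigger below are hypothetical.
resource "null_resource" "ha_apiserver_proxy" {
  triggers = {
    manifest_sha = filesha256("${path.module}/manifests/apiserver-proxy.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/manifests/apiserver-proxy.yaml"
  }
}
```

The trigger re-runs the provisioner whenever the manifest file changes, which is the usual way to make a `local-exec` workaround behave a little more like a managed resource.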