NixMox uses a phased deployment approach with Terraform and NixOS:
Phase 1: Infrastructure Foundation
├── PostgreSQL (VMID 902) - Database backend
├── Caddy (VMID 901) - Reverse proxy & load balancer
└── DNS (VMID 904) - Internal DNS resolution
Phase 2: Authentication Foundation
└── Authentik (VMID 903) - Identity & access management
(Required by all other services)
Phase 3: Application Services
├── Vaultwarden (VMID 905) - Password manager
├── Nextcloud (VMID 906) - File storage
├── Guacamole (VMID 907) - Remote desktop gateway
├── Media (VMID 908) - Media server
├── Monitoring (VMID 909) - System monitoring
└── Mail (VMID 910) - Email services
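The phases must be applied in order, since each phase depends on the one before it. As a minimal sketch (assuming the `just deploy-phaseN` recipes shown later in this document), a dry-run wrapper that prints the deploy commands in dependency order could look like:

```bash
# Hypothetical dry-run wrapper — the `just deploy-phaseN` recipes are an
# assumption here; swap `echo` for the real invocation once verified.
ENV="${ENV:-dev}"
for phase in 1 2 3; do
  # Phase 1: infrastructure, Phase 2: authentication, Phase 3: applications
  echo "just deploy-phase${phase} env=${ENV}"
done
```

Running it with `ENV` unset prints the three commands for the `dev` environment, one per line.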
- Proxmox VE server running
- Terraform installed
- SOPS for secrets management
- SSH key configured in `nixos/modules/common/default.nix`
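For reference, the SSH key setting in `nixos/modules/common/default.nix` typically looks something like the following sketch (the attribute path assumes the standard NixOS `users` options; the user name and key below are placeholders, not values from this repository):

```nix
# Sketch only — placeholder user and key; the actual module may differ.
{
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... user@workstation"
  ];
}
```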
```bash
# From project root
just deploy-phase1 env=dev

# Or manually:
cd terraform/phases
terraform apply \
  -var="environment=dev" \
  -var="deployment_phase=1" \
  -var="secrets_file=../environments/dev/secrets.sops.yaml" \
  -auto-approve
```

```bash
just deploy-phase2 env=dev

# Or manually:
terraform apply \
  -var="environment=dev" \
  -var="deployment_phase=2" \
  -var="secrets_file=../environments/dev/secrets.sops.yaml" \
  -auto-approve
```

```bash
just deploy-phase3 env=dev

# Or manually:
terraform apply \
  -var="environment=dev" \
  -var="deployment_phase=3" \
  -var="secrets_file=../environments/dev/secrets.sops.yaml" \
  -auto-approve
```

After the containers are running, deploy the NixOS configurations:
```bash
# Deploy to a specific service
nix run nixpkgs#nixos-rebuild -- switch \
  --flake .#caddy \
  --target-host root@caddy.nixmox.lan \
  --fast

# Deploy to all services
nix run nixpkgs#colmena -- apply
```

```bash
# Deploy Authentik (includes database setup)
just deploy-authentik

# Deploy with custom configuration
just deploy-authentik config=production
```

```
nixmox/
├── nixos/
│   ├── hosts/            # Host-specific configurations
│   ├── modules/          # Reusable NixOS modules
│   └── flake.nix         # NixOS flake configuration
├── terraform/
│   ├── phases/           # Phased deployment configuration
│   ├── modules/          # Terraform modules
│   └── environments/     # Environment-specific settings
├── scripts/              # Deployment scripts
└── Justfile              # Deployment commands
```
- `terraform/phases/main.tf` - Main deployment configuration
- `terraform/environments/dev/secrets.sops.yaml` - Encrypted secrets
- `terraform/environments/dev/terraform.tfvars.example` - Configuration template
- `nixos/hosts/*.nix` - Host-specific configurations
- `nixos/modules/*/default.nix` - Service modules
- `nixos/modules/common/default.nix` - Common configuration (SSH, users)
```bash
# Check container status
just deployment-status env=dev

# SSH to container
ssh root@caddy.nixmox.lan

# Access via Proxmox console:
# Proxmox UI → Container → Console
```

```bash
# Check Terraform state
cd terraform/phases
terraform state list

# Destroy and recreate a specific container
terraform destroy -target='module.lxc.proxmox_lxc.container["caddy"]'

# Redeploy the phase
just deploy-phase1 env=dev
```

```bash
# Check configuration syntax
nix run nixpkgs#nixos-rebuild -- build --flake .#caddy

# View generated configuration
nix run nixpkgs#nixos-rebuild -- build-vm --flake .#caddy
```

- Add container definition to `terraform/phases/main.tf`
- Create NixOS host configuration in `nixos/hosts/`
- Create service module in `nixos/modules/`
- Deploy container with Terraform
- Deploy NixOS configuration
- Modify NixOS module
- Deploy configuration: `nixos-rebuild switch --flake .#service`
- Test service functionality
- Modify Terraform configuration
- Plan changes: `terraform plan -var="environment=dev"`
- Apply changes: `terraform apply -var="environment=dev"`
- DNS is deployed in Phase 1 so containers can resolve each other's hostnames
- SSH keys are configured in `nixos/modules/common/default.nix`
- Secrets are managed with SOPS and stored in `secrets.sops.yaml`
- Container networking uses VLAN 99 with IP range 192.168.99.10-19
- Hostname mappings should be added to your local `/etc/hosts` file for development
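For local development, the `/etc/hosts` entries might look like the following (the specific IP assignments are illustrative — verify the actual addresses in your Terraform configuration within the stated 192.168.99.10-19 range):

```
# Illustrative only — confirm IPs against your Terraform state
192.168.99.10  caddy.nixmox.lan
192.168.99.11  postgresql.nixmox.lan
192.168.99.12  authentik.nixmox.lan
```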
- Check `just --list` for available commands
- Review Terraform logs for infrastructure issues
- Check NixOS logs: `journalctl -u service-name`
- Use the Proxmox console for container access when SSH fails