I replaced $489/mo in AWS Client VPN with a $3 t4g.nano running Headscale
A finops sprint surfaced $489/mo in AWS Client VPN charges. Three endpoints across two accounts, plus connection-hour fees. For a VPN that four people used. I had provisioned two of them.
At the time, they felt indispensable — secure customer access, familiar tooling, predictable behavior. In reality, they were architectural inertia.
I replaced all three with a single t4g.nano running Headscale — the open-source Tailscale coordination server. Total cost: ~$3/mo.
I genericized the Terraform and open-sourced the module.
Why WireGuard changes the baseline
Headscale and Tailscale are built on WireGuard, which has been in the Linux kernel since 5.6. The entire codebase is ~4,000 lines; OpenVPN is over 100,000. There is no connection state to manage: your laptop sleeps, wakes up on a different network, and the tunnel is already there, because WireGuard handles roaming transparently instead of forcing a reconnect. One UDP port, modern cryptography (Curve25519, ChaCha20-Poly1305), and throughput that saturates a gigabit link without pegging the CPU.
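A WireGuard peer is just a key pair and a handful of config lines. You never write this by hand with Tailscale or Headscale (the client manages it for you), but it shows how small the surface area is. Everything below is a placeholder, not real keys or addresses:

```ini
# /etc/wireguard/wg0.conf -- illustrative only; with Tailscale/Headscale
# this state is generated and rotated for you.
[Interface]
PrivateKey = <base64 Curve25519 private key>
Address = 100.64.0.2/32
ListenPort = 51820

[Peer]
PublicKey = <peer's base64 public key>
AllowedIPs = 100.64.0.1/32, 10.0.0.0/16   # what we route to this peer
Endpoint = peer.example.com:51820
PersistentKeepalive = 25                  # keep NAT mappings warm
```

That whole file is the protocol's configuration model; everything Tailscale adds (NAT traversal, key distribution, ACLs) lives above this layer.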
Tailscale built a coordination layer on top of WireGuard — NAT traversal, key exchange, access control. Headscale replaces Tailscale’s SaaS control plane with a self-hosted server, so you own the entire stack.
What Headscale does
Headscale is a self-hosted implementation of the Tailscale coordination server. It handles device registration, key exchange, and route coordination. Actual traffic flows directly between peers whenever NAT traversal succeeds, and falls back to Tailscale’s DERP relay network when it doesn’t. Either way, the coordination server is never in the data path.
This means the server can be tiny. A t4g.nano (2 vCPU, 0.5 GB RAM) handles it without breaking a sweat.
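The server-side configuration is correspondingly small. An abbreviated sketch of Headscale’s config.yaml, assuming a recent release (key names have shifted between versions, so check the reference config shipped with yours; hostname and paths are placeholders):

```yaml
# config.yaml (abbreviated sketch -- not a complete config)
server_url: https://vpn.example.com:443     # what clients dial
listen_addr: 0.0.0.0:443
tls_letsencrypt_hostname: vpn.example.com
tls_letsencrypt_challenge_type: HTTP-01     # needs port 80 reachable
database:
  type: sqlite
  sqlite:
    path: /var/lib/headscale/db.sqlite      # lives on the persistent EBS volume
```

Keeping the SQLite file and the Let’s Encrypt state on a volume that outlives the instance is what makes the box disposable.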
The architecture
One EC2 instance runs both Headscale (the coordinator) and a Tailscale client (as a subnet router). This single box does three things:
- Coordinates the mesh network (device auth, key distribution, route advertisements)
- Routes traffic to the local VPC via Tailscale subnet routing
- Terminates TLS with Let’s Encrypt (HTTP-01 challenge, automatic renewal)
For a single-account setup, that’s everything. One box, one terraform apply.
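A module invocation for the single-account case might look like this. The input names here are illustrative, not the module’s actual variable names, and the source path is a guess at the repo layout:

```hcl
# Hypothetical invocation -- check the module's variables.tf for the
# real input names before copying this.
module "headscale" {
  source = "github.com/fizz/terraform-aws-headscale//modules/coordination-server"

  domain           = "vpn.example.com"     # Route53 record + Let's Encrypt cert
  oidc_issuer      = "https://accounts.google.com"
  vpc_id           = "vpc-0123456789abcdef0"
  subnet_id        = "subnet-0123456789abcdef0"
  advertised_cidrs = ["10.0.0.0/16"]       # routes this box offers the mesh
}
```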
For multi-account setups, you add subnet router instances in other VPCs. These are Tailscale-only — they connect outbound to the coordination server and advertise their local routes. No Headscale installed, no inbound ports, no EIP.
┌─────────────────────────┐ ┌──────────────────────────┐
│ Dev account │ │ Prod account │
│ │ │ │
│ coordination-server │◀────│ subnet-router │
│ (Headscale + Tailscale) │ │ (Tailscale only) │
│ │ │ │
│ Advertises: │ │ Advertises: │
│ 10.0.0.0/16 (EKS VPC) │ │ 172.31.5.32/32 (RDS) │
│ 172.31.0.0/16 (default)│ │ 10.0.57.47/32 (MySQL) │
└─────────────────────────┘ └──────────────────────────┘
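One operational detail the diagram hides: advertised routes are not trusted automatically. An admin approves them on the coordination server before clients can use them. The subcommand names have moved around across Headscale releases (newer versions fold route management under the nodes subcommand), so treat this as the classic form, not gospel:

```shell
# On the coordination server
headscale routes list          # show advertised-but-unapproved routes
headscale routes enable -r 1   # approve a route by its listed ID
```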
What Client VPN cost vs what Headscale costs
AWS Client VPN charges two fees: a per-endpoint association fee ($0.10/hr = $72/mo per subnet association) and a per-connection-hour fee ($0.05/hr per connected client). With three endpoints and regular use, we were paying $489/mo.
Headscale on a t4g.nano: $3.02/mo. Plus a few cents for the EBS cache volume and CloudWatch logs.
That’s a 99.4% reduction.
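The arithmetic is easy to sanity-check. The connection-hour figure below is backed out from the bill, since the post doesn’t state it directly:

```python
# Reproduce the cost comparison: AWS's published Client VPN rates,
# a 720-hour month, and the observed $489/mo bill.
ASSOC_RATE = 0.10   # $/hr per subnet association
CONN_RATE = 0.05    # $/hr per connected client

endpoint_associations = 3
assoc_monthly = ASSOC_RATE * 720 * endpoint_associations   # fixed fees
connection_hours = (489 - assoc_monthly) / CONN_RATE       # implied usage
headscale_monthly = 3.02                                   # t4g.nano
reduction = 1 - headscale_monthly / 489

print(f"association fees:        ${assoc_monthly:.0f}/mo")
print(f"implied connection-hours: {connection_hours:.0f}/mo")
print(f"cost reduction:           {reduction:.1%}")
```

Three associations alone are $216/mo before anyone connects, which is why the fixed per-endpoint fee, not usage, dominates small-team Client VPN bills.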
What you lose
Managed HA. Client VPN is a managed service; AWS handles availability. Headscale is a single EC2 instance. If it dies, the coordination server is down until you rebuild it, though established tunnels keep flowing in the meantime, since the coordinator is not in the data path; only new device registrations and route changes stall. The mitigation: Let’s Encrypt state and the SQLite database live on a persistent EBS volume, so a terraform taint + apply (or terraform apply -replace on Terraform 0.15.2+) rebuilds the instance in minutes, reattaching the existing data.
SAML SSO. Client VPN integrates with SAML providers. Headscale uses OIDC instead — Google Workspace, Okta, or any OIDC provider. Different protocol, same outcome.
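The OIDC wiring is a short block in Headscale’s config.yaml. A sketch with placeholder values, assuming Google Workspace as the provider (key names may differ slightly between Headscale versions):

```yaml
# OIDC section of config.yaml (sketch; values are placeholders)
oidc:
  issuer: https://accounts.google.com
  client_id: your-oauth-client-id
  client_secret: replace-me          # better: inject from SSM at boot
  allowed_domains:
    - example.com                    # restrict logins to your workspace
```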
Split tunneling config. Client VPN configures split tunneling server-side. With Tailscale, the client controls this. In practice, Tailscale’s approach is more flexible.
The Terraform modules
Two modules:
coordination-server — Headscale + Tailscale on one EC2 instance. Creates the EIP, security group, IAM role, EBS cache volume, CloudWatch log group, and optionally a Route53 record. The userdata installs Headscale from GitHub releases, configures OIDC, sets up Let’s Encrypt, installs Tailscale, creates a preauth key, and joins itself as a subnet router.
subnet-router — Tailscale only, for additional accounts/VPCs. Creates an EC2 instance with egress-only security group. The userdata installs Tailscale, reads an auth key from SSM, and advertises local routes.
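The subnet-router userdata reduces to a few lines. This is an illustrative sketch, not the module’s actual script; the SSM parameter name and advertised routes are placeholders:

```shell
# Sketch of the subnet-router boot sequence (illustrative)
sysctl -w net.ipv4.ip_forward=1   # required to forward VPC traffic

AUTH_KEY=$(aws ssm get-parameter \
  --name /headscale/preauth-key \
  --with-decryption \
  --query 'Parameter.Value' --output text)

tailscale up \
  --login-server=https://vpn.example.com \
  --authkey="$AUTH_KEY" \
  --advertise-routes=172.31.5.32/32,10.0.57.47/32
```

Because the connection is outbound-only, the security group needs no ingress rules at all.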
Both modules run Amazon Linux 2023 on ARM64 and are parameterized for any domain, OIDC provider, VPC, and route configuration. No org-specific values baked in.
A reference GitHub Actions workflow does directory-based environment discovery — add an account by adding a directory, not by editing the workflow.
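Directory-based discovery can be done with a two-job workflow: one job lists the environment directories and emits them as a matrix, the next fans out over it. A sketch, assuming an environments/ layout (the reference workflow’s actual names and paths may differ):

```yaml
# Illustrative workflow -- one directory per environment/account
jobs:
  discover:
    runs-on: ubuntu-latest
    outputs:
      environments: ${{ steps.list.outputs.environments }}
    steps:
      - uses: actions/checkout@v4
      - id: list
        run: |
          envs=$(ls -d environments/*/ | xargs -n1 basename | jq -R . | jq -cs .)
          echo "environments=$envs" >> "$GITHUB_OUTPUT"

  plan:
    needs: discover
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: ${{ fromJSON(needs.discover.outputs.environments) }}
    steps:
      - uses: actions/checkout@v4
      - run: |
          terraform -chdir="environments/${{ matrix.environment }}" init
          terraform -chdir="environments/${{ matrix.environment }}" plan
```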
The repo
github.com/fizz/terraform-aws-headscale
Apache 2.0. The examples/single-account/ directory gets you running with one terraform apply.