Introduction
In this article, we will set up a HashiCorp Vault cluster using the Raft backend, accessed via an Nginx reverse proxy, and ensure high availability for the Nginx layer with Keepalived.
My Commentary: This introduction clearly states the objective: building a highly available (HA) and resilient HashiCorp Vault setup.
This combination of technologies is a common and robust pattern for providing HA to critical services like Vault.
Prerequisites
My Commentary: These prerequisites suggest a local development or testing environment.
For a production environment, one would typically move beyond simple Docker Compose and consider an orchestrator (e.g., Kubernetes or Nomad), TLS enabled end to end, and KMS-backed auto-unseal.
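Based on the volume mounts used in the Docker Compose file later in the article, the working directory would look roughly like this (a sketch inferred from the mounts, not shown in the original):

.
├── docker-compose.yml
├── vault/
│   ├── config/          # vault1.hcl, vault2.hcl, vault3.hcl
│   ├── data1/
│   ├── data2/
│   └── data3/
├── nginx/
│   ├── nginx.conf
│   └── ssl/             # vault.crt, vault.key
└── keepalived/
    └── keepalived.conf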
Vault Server Configuration (vault.hcl)
Vault servers are configured using an HCL (HashiCorp Configuration Language) file. Here’s an example vault.hcl:
storage "raft" {
path = "/vault/data"
node_id = "node1" # This should be unique for each node
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = "true" # IMPORTANT: Only for development/testing!
}
cluster_addr = "http://node1:8201" # This should be unique for each node
api_addr = "http://node1:8200" # This should be unique for each node
My Commentary: This is the core configuration for a Vault server in a Raft cluster.
storage "raft"
:listener "tcp"
:cluster_addr
: The address Vault uses to communicate with other Vault nodes in the Raft cluster. This is essential for inter-node communication and Raft consensus. It's often on a dedicated "cluster" port (e.g., 8201). Again, for each node, this should point to its own unique address.api_addr
: The address where the Vault API is exposed. Clients (and the Nginx proxy) will connect to this address. Also, for each node, this should point to its own unique address.The node_id
, cluster_addr
, and api_addr
will need to be dynamically set for each Vault container, which Docker Compose can help with.
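For illustration, a vault2.hcl for the second node (an assumption consistent with the Docker Compose service names used later) would differ only in the per-node values:

storage "raft" {
  path    = "/vault/data"
  node_id = "node2" # Unique per node
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true" # Development/testing only
}

cluster_addr = "http://vault2:8201" # This node's own cluster address
api_addr     = "http://vault2:8200" # This node's own API address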
Vault Client Configuration for Auto-Unseal (client_vault.hcl)
Vault can be configured for auto-unseal using cloud-native Key Management Services (KMS) like AWS KMS, Azure Key Vault, GCP KMS, or HashiCorp's own Transit Secrets Engine. This removes the manual unsealing step, crucial for automated deployments and recovery.
# This section demonstrates AWS KMS for auto-unseal
seal "awskms" {
  region     = "eu-west-1"
  kms_key_id = "your-kms-key-id" # Replace with your actual KMS key ID
}
My Commentary:
seal "awskms"
: The example shows AWS KMS. You'd need to configure the Vault server's IAM role (or credentials) to allow it to interact with the specified KMS key.This client_vault.hcl
snippet would be merged into the main vault.hcl
or provided as an additional configuration snippet to the Vault server.
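One operational detail worth noting: when a KMS seal is configured, initialization generates recovery keys rather than unseal keys, so the init command uses the recovery-key flags instead. A sketch (the share counts are illustrative):

docker exec vault1 vault operator init -recovery-shares=3 -recovery-threshold=2 -format=json > cluster_keys.json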
Docker Compose Setup
Here's the docker-compose.yml file to orchestrate the services:
version: '3.8'

services:
  vault1:
    image: hashicorp/vault:1.15.2
    container_name: vault1
    cap_add:
      - IPC_LOCK
    ports:
      - "8200:8200"
      - "8201:8201"
    environment:
      VAULT_ADDR: "http://0.0.0.0:8200"
      VAULT_API_ADDR: "http://vault1:8200"
      VAULT_CLUSTER_ADDR: "http://vault1:8201"
      VAULT_LOG_LEVEL: "info"
    volumes:
      - ./vault/config/vault1.hcl:/vault/config/vault.hcl # Mount config for each node
      - ./vault/data1:/vault/data # Mount persistent data volume for each node
    networks:
      - vault_network
    command: "server -config=/vault/config/vault.hcl"

  # vault2 and vault3 are similar, with unique node_id, data paths, and container names/hostnames
  vault2:
    image: hashicorp/vault:1.15.2
    container_name: vault2
    cap_add:
      - IPC_LOCK
    ports:
      - "8202:8200" # Exposed on a different host port for local access if needed
      - "8203:8201"
    environment:
      VAULT_ADDR: "http://0.0.0.0:8200"
      VAULT_API_ADDR: "http://vault2:8200"
      VAULT_CLUSTER_ADDR: "http://vault2:8201"
      VAULT_LOG_LEVEL: "info"
    volumes:
      - ./vault/config/vault2.hcl:/vault/config/vault.hcl
      - ./vault/data2:/vault/data
    networks:
      - vault_network
    command: "server -config=/vault/config/vault.hcl"
    depends_on:
      - vault1 # Simple startup ordering, not for HA

  vault3:
    image: hashicorp/vault:1.15.2
    container_name: vault3
    cap_add:
      - IPC_LOCK
    ports:
      - "8204:8200"
      - "8205:8201"
    environment:
      VAULT_ADDR: "http://0.0.0.0:8200"
      VAULT_API_ADDR: "http://vault3:8200"
      VAULT_CLUSTER_ADDR: "http://vault3:8201"
      VAULT_LOG_LEVEL: "info"
    volumes:
      - ./vault/config/vault3.hcl:/vault/config/vault.hcl
      - ./vault/data3:/vault/data
    networks:
      - vault_network
    command: "server -config=/vault/config/vault.hcl"
    depends_on:
      - vault1 # Simple startup ordering, not for HA

  nginx1:
    image: nginx:latest
    container_name: nginx1
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro # For SSL certificates
    networks:
      - vault_network
    depends_on:
      - vault1 # Nginx depends on at least one Vault node to start

  nginx2:
    image: nginx:latest
    container_name: nginx2
    # No host port mappings: Keepalived handles the VIP. If you need direct
    # access for testing, uncomment and map alternate ports, e.g.:
    # ports:
    #   - "81:80"
    #   - "444:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    networks:
      - vault_network
    depends_on:
      - vault1

  keepalived:
    image: osixia/keepalived:latest
    container_name: keepalived
    cap_add:
      - NET_ADMIN # Required for VIP management
      - NET_BROADCAST
      - NET_RAW
    environment:
      KEEPALIVED_STATE: MASTER # For the first instance; the other would be BACKUP
      KEEPALIVED_INTERFACE: eth0 # Or the correct network interface inside the container
      KEEPALIVED_VIRTUAL_IPS: "172.18.0.100/24" # Example VIP, adjust subnet
      KEEPALIVED_UNICAST_PEERS: "172.18.0.x,172.18.0.y" # IPs of other keepalived containers
      KEEPALIVED_PASSWORD: "your_vrrp_password" # Important for security
      KEEPALIVED_PRIORITY: "101" # Higher for MASTER
      KEEPALIVED_VIRTUAL_ROUTER_ID: "51" # Unique ID for VRRP instance
    volumes:
      - ./keepalived/keepalived.conf:/etc/keepalived/keepalived.conf:ro # Custom config if needed
    networks:
      - vault_network
    sysctls:
      - net.ipv4.ip_nonlocal_bind=1 # Allow binding to non-local IP (VIP)
    depends_on:
      - nginx1
      - nginx2

networks:
  vault_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/24 # Example subnet
My Commentary: This docker-compose.yml provides a comprehensive blueprint.
- networks: Defining a custom bridge network provides better isolation and allows using service names for internal communication.

This setup creates a robust, containerized environment for demonstration. For production, consider an orchestrator with real scheduling and restart semantics, TLS on the Vault listeners, pinned image digests, and secrets (such as the VRRP password) injected from a secret store rather than hard-coded.
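With all configuration files in place, the stack comes up with a single command (a minimal sketch; the Compose v2 CLI is assumed):

docker compose up -d
docker compose ps            # all six containers should be running
docker compose logs vault1   # each Vault node reports itself sealed until initialized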
Vault Initialization & Unseal
Once the Vault containers are running, you need to initialize the cluster. This is typically done from one of the Vault containers:
docker exec vault1 vault operator init -key-shares=3 -key-threshold=2 -format=json > cluster_keys.json
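After initialization, the other two nodes still need to join the Raft cluster and be unsealed before HA works. A minimal sketch of those steps (placeholder key values; the Shamir seal from the example is assumed):

docker exec vault2 vault operator raft join http://vault1:8200
docker exec vault3 vault operator raft join http://vault1:8200
# Unseal every node with any 2 of the 3 key shares from cluster_keys.json:
docker exec vault1 vault operator unseal <unseal_key_1>
docker exec vault1 vault operator unseal <unseal_key_2>
# Repeat the two unseal commands for vault2 and vault3.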
My Commentary:
- vault operator init: This command performs the initial setup of the Vault cluster, generating the 3 key shares (threshold 2) and the initial root token, written here to cluster_keys.json. Protect that file carefully.

Nginx Configuration (nginx.conf)
This configuration enables Nginx to act as a reverse proxy for the Vault cluster, handling SSL/TLS.
events {} # Required top-level block for a complete nginx.conf

http {
  upstream vault_servers {
    server vault1:8200;
    server vault2:8200;
    server vault3:8200;
    # You can add load balancing algorithms here, e.g., least_conn, ip_hash
  }

  server {
    listen 80;
    server_name your.vault.domain.com; # Replace with your domain
    return 301 https://$host$request_uri; # Redirect HTTP to HTTPS
  }

  server {
    listen 443 ssl;
    server_name your.vault.domain.com; # Replace with your domain

    ssl_certificate     /etc/nginx/ssl/vault.crt; # Your SSL certificate
    ssl_certificate_key /etc/nginx/ssl/vault.key; # Your SSL private key
    ssl_protocols TLSv1.2 TLSv1.3; # Enforce strong protocols
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; # Strong ciphers
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
      proxy_pass http://vault_servers; # Proxy to the upstream Vault cluster
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_connect_timeout 600;
      proxy_send_timeout 600;
      proxy_read_timeout 600;
      send_timeout 600;
    }
  }
}
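If you want Nginx to steer around failed nodes, a hedged variant of the upstream block adds passive health checks and least-connections balancing (the parameters here are illustrative, not from the original article):

upstream vault_servers {
    least_conn;                                       # Prefer the least-busy backend
    server vault1:8200 max_fails=2 fail_timeout=30s;  # Mark down after 2 failures
    server vault2:8200 max_fails=2 fail_timeout=30s;
    server vault3:8200 max_fails=2 fail_timeout=30s;
}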
My Commentary: This nginx.conf is a solid starting point for proxying Vault.
- upstream vault_servers: Defines the pool of Vault nodes Nginx load-balances across; by default requests are distributed round-robin.
- HTTP redirect (listen 80 block): Excellent practice for security. All HTTP traffic is forced to HTTPS.
- TLS termination (listen 443 ssl block): Terminates TLS at Nginx with modern protocols and a strong cipher suite.
- location / block: Forwards all requests to the upstream cluster while preserving the original host and client IP headers, with generous timeouts for long-running operations.

Keepalived Configuration (keepalived.conf)
Keepalived provides high availability for the Nginx instances by using VRRP to manage a floating IP address.
vrrp_script check_nginx {
  script "killall -0 nginx" # Checks if the nginx process is running
  interval 2 # Check every 2 seconds
  weight -50 # If the script fails, priority decreases by 50 (negative weight required for this)
}

vrrp_instance VI_1 {
  state MASTER # For the primary nginx instance; set to BACKUP for the secondary
  interface eth0 # The network interface Keepalived will monitor
  virtual_router_id 51 # Unique ID for this VRRP instance
  priority 101 # Higher priority for MASTER, e.g., 100 for BACKUP
  advert_int 1 # Advertisement interval in seconds

  authentication {
    auth_type PASS
    auth_pass your_vrrp_password # Must match across all Keepalived instances
  }

  virtual_ipaddress {
    172.18.0.100/24 # The Virtual IP address
  }

  track_script {
    check_nginx # Link to the script defined above
  }

  notify_master "/etc/keepalived/notify.sh master"
  notify_backup "/etc/keepalived/notify.sh backup"
  notify_fault  "/etc/keepalived/notify.sh fault"
}
My Commentary: This Keepalived configuration is standard for a simple active-passive (or active-backup) HA setup.
- vrrp_script check_nginx: The health check. If Nginx stops running, the node's effective priority drops by the configured weight, triggering failover to the backup.
- vrrp_instance VI_1: The VRRP instance itself: state, priority, the monitored interface, and the virtual IP that floats between the two Nginx hosts.

Overall for Keepalived: the BACKUP instance needs the mirrored configuration (state BACKUP, lower priority) while keeping the same virtual_router_id, password, and virtual IP.
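The configuration references /etc/keepalived/notify.sh, which the article does not show. A minimal hypothetical version that simply logs state transitions (the filename and log path are assumptions):

#!/bin/sh
# notify.sh: invoked by Keepalived with "master", "backup", or "fault"
STATE="$1"
echo "$(date) keepalived entered ${STATE} state" >> /var/log/keepalived-notify.log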
Testing
To verify the setup, you can check the status of Vault and Nginx.
Vault Status:
docker exec vault1 vault status
You should see output similar to the following, indicating whether the cluster is initialized and sealed, and which node currently holds the active (leader) role.
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    3
Threshold       2
Version         1.15.2
Build Date      2023-11-20T12:35:48Z
Storage Type    raft
Cluster Name    vault-cluster-d6d7e0d7
Cluster ID      2430ae1c-2234-7a32-1b1a-8252277d0180
HA Enabled      true
HA Cluster      https://vault1:8201 # This will vary based on your env
HA Mode         active
Active Since    2023-12-01T10:00:00Z
Nginx Status (via VIP): Access your configured your.vault.domain.com (or the VIP directly) in your browser or with curl.
curl -k https://your.vault.domain.com/v1/sys/health
You should get a JSON response indicating Vault's health status. The -k flag is important if you're using self-signed certificates for testing.
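Each node can also be probed directly, since the health endpoint encodes the node's role in its HTTP status code (Vault's documented defaults: 200 active, 429 standby, 501 not initialized, 503 sealed). Using the host port mappings from the Compose file:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8200/v1/sys/health # vault1
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8202/v1/sys/health # vault2
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8204/v1/sys/health # vault3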
My Commentary:
- vault status: This is a fundamental check. Pay attention to Initialized and Sealed (both must be in the expected state before the cluster is usable), HA Enabled, and HA Mode (active on the leader, standby elsewhere).

Conclusion
By following these steps, you can set up a highly available HashiCorp Vault cluster using the Raft backend, accessed securely via an Nginx reverse proxy with Keepalived ensuring Nginx's high availability. This robust architecture provides a solid foundation for managing your secrets in a resilient manner.
My Final Commentary: This article provides a solid, practical guide for setting up a high-availability Vault cluster using a common pattern. It's excellent for understanding the components and their interaction.
Beyond this setup, for full production readiness, consider end-to-end TLS (Nginx to Vault as well as client to Nginx), KMS-backed auto-unseal, monitoring and alerting on seal status and leadership changes, and regular backups of the Raft storage.
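For the backup point in particular, the Raft backend supports online snapshots. A sketch, assuming a valid token is available (the token value and snapshot path are placeholders):

docker exec -e VAULT_ADDR=http://127.0.0.1:8200 -e VAULT_TOKEN=<your_token> vault1 \
  vault operator raft snapshot save /vault/data/backup.snap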