
HashiCorp Vault Cluster Setup with Raft Backend, Nginx Reverse Proxy with Keepalived

Serdarcan Buyukdereli
2025-06-02

Introduction

In this article, we will set up a HashiCorp Vault cluster using the Raft backend, accessed via an Nginx reverse proxy, and ensure high availability for the Nginx layer with Keepalived.

My Commentary: This introduction clearly states the objective: building a highly available (HA) and resilient HashiCorp Vault setup.

  • HashiCorp Vault: For those unfamiliar, Vault is a tool for securely storing, managing, and accessing secrets (API keys, passwords, certificates, etc.). It's crucial for modern, secure application environments.
  • Raft Backend: This is Vault's built-in consensus mechanism for high availability, eliminating the need for external dependencies like Consul or PostgreSQL for the storage backend. This simplifies the architecture for HA considerably.
  • Nginx Reverse Proxy: Nginx will act as the public-facing endpoint, forwarding requests to the Vault cluster. This allows for SSL termination, basic load balancing, and can add an extra layer of security.
  • Keepalived: This is key for Nginx's high availability. Keepalived implements VRRP (Virtual Router Redundancy Protocol) to provide a floating IP (Virtual IP or VIP). If the primary Nginx server fails, Keepalived automatically moves the VIP to a healthy backup Nginx server, ensuring continuous service.
  • This combination of technologies is a common and robust pattern for providing HA to critical services like Vault.


    Prerequisites

  • Docker
  • Docker Compose
  • Git
  • Make

    My Commentary: These prerequisites suggest a local development or testing environment.

  • Docker & Docker Compose: Essential for containerizing Vault, Nginx, and Keepalived, making the setup reproducible and isolated. This is excellent for demonstration and rapid prototyping.
  • Git & Make: Used for cloning the repository and automating build/run processes, typical for DevOps workflows.
  • For a production environment, one would typically move beyond simple Docker Compose. Consider:

  • Kubernetes/OpenShift: For orchestrating containers at scale, providing built-in HA, self-healing, and service discovery.
  • Infrastructure as Code (IaC): Tools like Terraform for provisioning underlying infrastructure (VMs, networks, load balancers).
  • Cloud-Native Solutions: Utilizing cloud-specific load balancers (AWS ELB/ALB, Azure Load Balancer, GCP Load Balancer) for the Nginx layer, which offer managed HA and scalability out-of-the-box.
  • Secrets Management for Vault itself: How will the initial root token and unseal keys be handled securely?

    Vault Server Configuration (vault.hcl)

    Vault servers are configured using an HCL (HashiCorp Configuration Language) file. Here’s an example vault.hcl:

    storage "raft" {
      path    = "/vault/data"
      node_id = "node1" # This should be unique for each node
    }
    
    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = "true" # IMPORTANT: Only for development/testing!
    }
    
    cluster_addr = "http://node1:8201" # This should be unique for each node
    api_addr     = "http://node1:8200" # This should be unique for each node

    My Commentary: This is the core configuration for a Vault server in a Raft cluster.

  • storage "raft":
  • listener "tcp":
  • cluster_addr: The address Vault uses to communicate with other Vault nodes in the Raft cluster. This is essential for inter-node communication and Raft consensus. It's often on a dedicated "cluster" port (e.g., 8201). Again, for each node, this should point to its own unique address.
  • api_addr: The address where the Vault API is exposed. Clients (and the Nginx proxy) will connect to this address. Also, for each node, this should point to its own unique address.
  • The node_idcluster_addr, and api_addr will need to be dynamically set for each Vault container, which Docker Compose can help with.
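
    Since only the node ID and the two addresses differ between nodes, the three per-node files that the Docker Compose file below mounts can be generated from a single template. A minimal sketch, assuming a hypothetical template file vault/config/vault.hcl.tmpl that looks like the example above:

    # Sketch: stamp out vault1.hcl, vault2.hcl and vault3.hcl from one template.
    # The template path and its node1/vault1 placeholders are assumptions.
    for i in 1 2 3; do
      sed -e "s/node1/node${i}/g" -e "s/vault1/vault${i}/g" \
          vault/config/vault.hcl.tmpl > "vault/config/vault${i}.hcl"
    done

    Any templating approach (envsubst, a Makefile target, etc.) works just as well; the point is only that node_id, cluster_addr, and api_addr must never be shared between nodes.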

    Vault Client Configuration for Auto-Unseal (client_vault.hcl)

    Vault can be configured for auto-unseal using cloud-native Key Management Services (KMS) like AWS KMS, Azure Key Vault, GCP KMS, or HashiCorp's own Transit Secrets Engine. This removes the manual unsealing step, crucial for automated deployments and recovery.

    # This section demonstrates AWS KMS for auto-unseal
    seal "awskms" {
      region     = "eu-west-1"
      kms_key_id = "your-kms-key-id" # Replace with your actual KMS key ID
    }

    My Commentary:

  • Auto-Unseal: This is a fantastic feature for production Vault deployments. When Vault starts, it's in a "sealed" state, meaning it cannot access its data. Manual unsealing requires providing a threshold of unseal keys. Auto-unseal offloads this to a trusted KMS service, making the process seamless and automated, especially after restarts or outages.
  • seal "awskms": The example shows AWS KMS. You'd need to configure the Vault server's IAM role (or credentials) to allow it to interact with the specified KMS key.
  • Alternatives: Azure Key Vault (seal "azurekeyvault"), GCP Cloud KMS (seal "gcpckms"), and HashiCorp's own Transit Secrets Engine (seal "transit") are configured with analogous seal stanzas.
  • This client_vault.hcl snippet would be merged into the main vault.hcl or provided as an additional configuration snippet to the Vault server.
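
    If you prefer to keep the seal stanza in its own file rather than merging it into vault.hcl, note that vault server -config also accepts a directory, in which case every .hcl and .json file inside it is loaded. A small illustration, using the config path from the Compose file below:

    # Load every configuration file in /vault/config, so vault.hcl and a separate
    # seal.hcl (containing the awskms stanza above) are both applied.
    vault server -config=/vault/config/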

    Docker Compose Setup

    Here's the docker-compose.yml file to orchestrate the services:

    version: '3.8'
    
    services:
      vault1:
        image: hashicorp/vault:1.15.2
        container_name: vault1
        cap_add:
          - IPC_LOCK
        ports:
          - "8200:8200"
          - "8201:8201"
        environment:
          VAULT_ADDR: "http://0.0.0.0:8200"
          VAULT_API_ADDR: "http://vault1:8200"
          VAULT_CLUSTER_ADDR: "http://vault1:8201"
          VAULT_LOG_LEVEL: "info"
        volumes:
          - ./vault/config/vault1.hcl:/vault/config/vault.hcl # Mount config for each node
          - ./vault/data1:/vault/data # Mount persistent data volume for each node
        networks:
          - vault_network
        command: "server -config=/vault/config/vault.hcl"
    
      # vault2 and vault3 would be similar, with unique node_id, data paths, and container names/hostnames
      vault2:
        image: hashicorp/vault:1.15.2
        container_name: vault2
        cap_add:
          - IPC_LOCK
        ports:
          - "8202:8200" # Exposing on different host port for local access if needed
          - "8203:8201"
        environment:
          VAULT_ADDR: "http://0.0.0.0:8200"
          VAULT_API_ADDR: "http://vault2:8200"
          VAULT_CLUSTER_ADDR: "http://vault2:8201"
          VAULT_LOG_LEVEL: "info"
        volumes:
          - ./vault/config/vault2.hcl:/vault/config/vault.hcl
          - ./vault/data2:/vault/data
        networks:
          - vault_network
        command: "server -config=/vault/config/vault.hcl"
        depends_on:
          - vault1 # Simple dependency, not for HA
    
      vault3:
        image: hashicorp/vault:1.15.2
        container_name: vault3
        cap_add:
          - IPC_LOCK
        ports:
          - "8204:8200"
          - "8205:8201"
        environment:
          VAULT_ADDR: "http://0.0.0.0:8200"
          VAULT_API_ADDR: "http://vault3:8200"
          VAULT_CLUSTER_ADDR: "http://vault3:8201"
          VAULT_LOG_LEVEL: "info"
        volumes:
          - ./vault/config/vault3.hcl:/vault/config/vault.hcl
          - ./vault/data3:/vault/data
        networks:
          - vault_network
        command: "server -config=/vault/config/vault.hcl"
        depends_on:
          - vault1 # Simple dependency, not for HA
    
      nginx1:
        image: nginx:latest
        container_name: nginx1
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
          - ./nginx/ssl:/etc/nginx/ssl:ro # For SSL certificates
        networks:
          - vault_network
        depends_on:
          - vault1 # Nginx depends on at least one Vault node to start
    
      nginx2:
        image: nginx:latest
        container_name: nginx2
        # Nginx2 does not need 80/443 published on the host; Keepalived manages the VIP.
        # If you want to reach it directly for testing, uncomment and map different ports:
        # ports:
        #   - "81:80"
        #   - "444:443"
        volumes:
          - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
          - ./nginx/ssl:/etc/nginx/ssl:ro
        networks:
          - vault_network
        depends_on:
          - vault1
    
      keepalived:
        image: osixia/keepalived:latest
        container_name: keepalived
        cap_add:
          - NET_ADMIN # Required for VIP management
          - NET_BROADCAST
          - NET_RAW
        environment:
          KEEPALIVED_STATE: MASTER # For the first instance, the other would be BACKUP
          KEEPALIVED_INTERFACE: eth0 # Or the correct network interface inside the container
          KEEPALIVED_VIRTUAL_IPS: "172.18.0.100/24" # Example VIP, adjust subnet
          KEEPALIVED_UNICAST_PEERS: "172.18.0.x,172.18.0.y" # IPs of other keepalived containers
          KEEPALIVED_PASSWORD: "your_vrrp_password" # Important for security
          KEEPALIVED_PRIORITY: "101" # Higher for MASTER
          KEEPALIVED_VIRTUAL_ROUTER_ID: "51" # Unique ID for VRRP instance
        volumes:
          - ./keepalived/keepalived.conf:/etc/keepalived/keepalived.conf:ro # Custom config if needed
        networks:
          - vault_network
        sysctls:
          - net.ipv4.ip_nonlocal_bind=1 # Allow binding to non-local IP (VIP)
        depends_on:
          - nginx1
          - nginx2
    
    networks:
      vault_network:
        driver: bridge
        ipam:
          config:
            - subnet: 172.18.0.0/24 # Example subnet

    My Commentary: This docker-compose.yml provides a comprehensive blueprint.

  • Vault Services (vault1, vault2, vault3): Three identical nodes, each with its own config file, data volume, and unique API/cluster addresses. The IPC_LOCK capability lets Vault lock memory so that secrets are not swapped to disk.
  • Nginx Services (nginx1, nginx2): Two proxies sharing the same nginx.conf and certificates; only nginx1 publishes 80/443 on the host here, since external traffic is expected to arrive through the Keepalived VIP.
  • Keepalived Service: Manages the floating VIP via VRRP. It needs NET_ADMIN (and the other listed capabilities) plus the net.ipv4.ip_nonlocal_bind sysctl so it can claim an address that is not yet bound locally. In practice you would run one Keepalived instance per Nginx node.
  • networks: Defining a custom bridge network provides better isolation and allows using service names for internal communication.
  • This setup creates a robust, containerized environment for demonstration. For production, consider:

  • Persistent Storage: More robust solutions than host mounts (e.g., Docker volumes managed by a volume plugin, NFS, cloud block storage).
  • Networking: Dedicated internal networks, possibly without port mapping to the host for Vault, relying solely on Nginx as the gateway.
  • Security: Stronger firewalls, network ACLs, TLS everywhere.
  • Monitoring & Alerting: Integration with Prometheus, Grafana, Alertmanager to track the health of Vault, Nginx, and Keepalived.
  • Secrets Management for Setup: How will the KMS credentials for auto-unseal be provided securely to the Vault containers?
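
    Before initializing, the stack has to be running. With the file above this is the standard Compose workflow (the Makefile mentioned in the prerequisites would typically just wrap these commands):

    # Start all services in the background, then confirm the containers are up
    docker compose up -d
    docker compose ps

    # Follow the logs of the first Vault node while it starts
    docker compose logs -f vault1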

    Vault Initialization & Unseal

    Once the Vault containers are running, you need to initialize the cluster. This is typically done from one of the Vault containers:

    docker exec vault1 vault operator init -key-shares=3 -key-threshold=2 -format=json > cluster_keys.json

    My Commentary:

  • vault operator init: This command performs the initial setup of the Vault cluster.
  • Unsealing (Manual - if not using auto-unseal): If auto-unseal is not configured, you would manually unseal each Vault node after initialization, as sketched below:
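
    A minimal sketch, assuming the key shares were written to cluster_keys.json by the init command above, that jq is available on the host, and that the other nodes have not been configured to auto-join:

    # Two of the three key shares (threshold = 2) unseal a node
    UNSEAL_KEY_1=$(jq -r '.unseal_keys_b64[0]' cluster_keys.json)
    UNSEAL_KEY_2=$(jq -r '.unseal_keys_b64[1]' cluster_keys.json)

    docker exec vault1 vault operator unseal "$UNSEAL_KEY_1"
    docker exec vault1 vault operator unseal "$UNSEAL_KEY_2"

    # The remaining nodes first join the Raft cluster, then are unsealed the same way
    docker exec vault2 vault operator raft join http://vault1:8200
    docker exec vault2 vault operator unseal "$UNSEAL_KEY_1"
    docker exec vault2 vault operator unseal "$UNSEAL_KEY_2"

    docker exec vault3 vault operator raft join http://vault1:8200
    docker exec vault3 vault operator unseal "$UNSEAL_KEY_1"
    docker exec vault3 vault operator unseal "$UNSEAL_KEY_2"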

    Nginx Configuration (nginx.conf)

    This configuration enables Nginx to act as a reverse proxy for the Vault cluster, handling SSL/TLS.

    # nginx requires an events block when this file is used as the main configuration
    events {}

    http {
        upstream vault_servers {
            server vault1:8200;
            server vault2:8200;
            server vault3:8200;
            # You can add load balancing algorithms here, e.g., least_conn, ip_hash
        }
    
        server {
            listen 80;
            server_name your.vault.domain.com; # Replace with your domain
            return 301 https://$host$request_uri; # Redirect HTTP to HTTPS
        }
    
        server {
            listen 443 ssl;
            server_name your.vault.domain.com; # Replace with your domain
    
            ssl_certificate /etc/nginx/ssl/vault.crt; # Your SSL certificate
            ssl_certificate_key /etc/nginx/ssl/vault.key; # Your SSL private key
            ssl_protocols TLSv1.2 TLSv1.3; # Enforce strong protocols
            ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"; # Strong ciphers
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 10m;
    
            location / {
                proxy_pass http://vault_servers; # Proxy to the upstream Vault cluster
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_connect_timeout 600;
                proxy_send_timeout 600;
                proxy_read_timeout 600;
                send_timeout 600;
            }
        }
    }

    My Commentary: This nginx.conf is a solid starting point for proxying Vault.

  • upstream vault_servers: Lists all three Vault nodes; Nginx balances requests round-robin by default, and least_conn or ip_hash can be enabled if different behaviour is needed. Keep in mind that only the active Vault node services requests, with standbys forwarding to it.
  • HTTP to HTTPS Redirect (listen 80 block): Excellent practice for security. All HTTP traffic is forced to HTTPS.
  • HTTPS Server Block (listen 443 ssl block): Terminates TLS with your certificate and key, restricts protocols to TLSv1.2/1.3, and uses a strong cipher list with session caching.
  • location / block: Proxies every request to the upstream cluster and forwards the original Host, client IP, and scheme headers; the generous timeouts accommodate long-running Vault operations.
  • Security Considerations for Nginx: Keep the certificate and key readable only by Nginx, consider IP allow-lists or rate limiting in front of sensitive paths, and avoid publishing the Vault nodes' ports on the host so that Nginx remains the single gateway.
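
    Before relying on the proxy, it is worth validating the mounted configuration; a quick check using the containers defined above:

    # Validate the syntax of the mounted nginx.conf, then reload without downtime
    docker exec nginx1 nginx -t
    docker exec nginx1 nginx -s reload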

    Keepalived Configuration (keepalived.conf)

    Keepalived provides high availability for the Nginx instances by using VRRP to manage a floating IP address.

    vrrp_script check_nginx {
        script "killall -0 nginx" # Checks if nginx process is running
        interval 2 # Check every 2 seconds
        weight 50 # If script fails, priority decreases by 50
    }
    
    vrrp_instance VI_1 {
        state MASTER # For the primary nginx instance, set to BACKUP for the secondary
        interface eth0 # The network interface Keepalived will monitor
        virtual_router_id 51 # Unique ID for this VRRP instance
        priority 101 # Higher priority for MASTER, e.g., 100 for BACKUP
        advert_int 1 # Advertisement interval in seconds
        authentication {
            auth_type PASS
            auth_pass your_vrrp_password # Must match across all Keepalived instances
        }
        virtual_ipaddress {
            172.18.0.100/24 # The Virtual IP address
        }
        track_script {
            check_nginx # Link to the script defined above
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }

    My Commentary: This Keepalived configuration is standard for a simple active-passive (or active-backup) HA setup.

  • vrrp_script check_nginx: Runs killall -0 nginx every 2 seconds to verify that an Nginx process is alive; on failure the node's effective priority drops, which can demote the MASTER and trigger failover. Note that the check only works if Keepalived can actually see the Nginx process (same host or shared namespace).
  • vrrp_instance VI_1: Defines the VRRP instance itself: the MASTER/BACKUP role, the monitored interface, a virtual_router_id and auth_pass that must match on all peers, the priorities that decide the election, the VIP, and the notify_* hooks that run on every state change (a sketch of the notify script follows this commentary).
  • Overall for Keepalived:

  • This setup ensures that if the primary Nginx server (or its Nginx process) goes down, the VIP will automatically move to the backup Nginx, providing seamless failover.
  • Consider a 3-node Nginx/Keepalived setup for true fault tolerance: A 2-node setup (master/backup) works, but if the master goes down and the backup also fails before the master recovers, you're out of service. A 3-node (or more) setup with proper health checks and priority management can provide more robust resilience.
  • Placement: In a VM environment, ensure Nginx/Keepalived pairs are on different physical hosts for true HA. In Docker Compose, they're on the same host unless you deploy them across multiple Docker Swarm/Kubernetes nodes.
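
    The keepalived.conf above references /etc/keepalived/notify.sh, but the script itself is not shown. A minimal, hypothetical version that only logs state transitions could look like this:

    #!/bin/sh
    # /etc/keepalived/notify.sh - Keepalived calls it with "master", "backup" or "fault"
    STATE="$1"
    echo "$(date) keepalived transitioned to ${STATE}" >> /var/log/keepalived-notify.log

    In practice this hook is a good place to reload Nginx, emit a metric, or send an alert on failover.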

    Testing

    To verify the setup, you can check the status of Vault and Nginx.

    Vault Status:

    docker exec vault1 vault status

    You should see output similar to this, showing whether the cluster is initialized and unsealed, the storage type, and whether this node is currently the active (leader) node.

    Key                         Value
    ---                         -----
    Seal Type                   shamir
    Initialized                 true
    Sealed                      false
    Total Shares                3
    Threshold                   2
    Version                     1.15.2
    Build Date                  2023-11-20T12:35:48Z
    Storage Type                raft
    Cluster Name                vault-cluster-d6d7e0d7
    Cluster ID                  2430ae1c-2234-7a32-1b1a-8252277d0180
    HA Enabled                  true
    HA Cluster                  https://vault1:8201 # This will vary based on your env
    HA Mode                     active
    Active Since                2023-12-01T10:00:00Z

    Nginx Status (via VIP): Access your configured your.vault.domain.com (or the VIP directly) in your browser or with curl.

    curl -k https://your.vault.domain.com/v1/sys/health

    You should get a JSON response indicating Vault's health status. The -k flag is important if you're using self-signed certificates for testing.
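
    Because the backend is Raft, it is also worth confirming that all three nodes have joined the cluster. Listing the peers requires a valid token; for a test environment, the initial root token from cluster_keys.json will do:

    # List the Raft peers and their roles (leader/follower)
    docker exec -e VAULT_TOKEN="$(jq -r '.root_token' cluster_keys.json)" \
      vault1 vault operator raft list-peers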

    My Commentary:

  • vault status: This is a fundamental check. Pay attention to Initialized, Sealed, HA Enabled, HA Mode (active vs. standby), and Storage Type (raft); running it on each node should show exactly one active node.
  • Nginx/VIP Testing: Hitting /v1/sys/health through the domain (or the VIP) verifies TLS termination, proxying, and upstream health end to end; the endpoint returns different HTTP status codes for active, standby, and sealed nodes, which is also handy for load-balancer health checks. A simple failover exercise is sketched below.
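
    In a deployment where each Nginx instance has its own Keepalived peer (see the placement note above), failover can be exercised by stopping the primary proxy and repeating the health check; a sketch, assuming the container names from the Compose file:

    docker stop nginx1     # Simulate failure of the primary proxy
    sleep 5                # Give VRRP time to promote the backup
    curl -k https://your.vault.domain.com/v1/sys/health
    docker start nginx1    # Restore the original topology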

    Conclusion

    By following these steps, you can set up a highly available HashiCorp Vault cluster using the Raft backend, accessed securely via an Nginx reverse proxy with Keepalived ensuring Nginx's high availability. This robust architecture provides a solid foundation for managing your secrets in a resilient manner.

    My Final Commentary: This article provides a solid, practical guide for setting up a high-availability Vault cluster using a common pattern. It's excellent for understanding the components and their interaction.

    Beyond this setup, for full production readiness, consider:

  • Observability: Export Vault telemetry, Nginx access/error metrics, and Keepalived state transitions to a monitoring stack (e.g., the Prometheus/Grafana/Alertmanager combination mentioned above), and alert on seal-state changes, Raft leadership changes, and VIP failovers.