Overview
This page provides production-ready configuration examples for common deployment scenarios. Copy and adapt these configurations for your infrastructure.

- Microservices: load balance across microservice instances
- API Gateway: API gateway configuration
- High Availability: multi-region high-availability setup
- Development: local development configuration
Common Scenarios
Simple Web Application
Basic setup for a web application with three backend servers:
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "round-robin"

backends:
  - host: "web1.internal"
    port: 8080
  - host: "web2.internal"
    port: 8080
  - host: "web3.internal"
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

logging:
  level: "info"
  format: "json"
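Round-robin simply cycles through the backend list in order, so each server receives the same share of requests. A minimal sketch of the selection logic (illustrative Python, not UltraBalancer's actual implementation):

```python
from itertools import cycle

# Hosts mirroring the config above (illustrative)
backends = ["web1.internal:8080", "web2.internal:8080", "web3.internal:8080"]

picker = cycle(backends)  # round-robin: next backend on every request

requests = [next(picker) for _ in range(4)]
# The fourth request wraps around to the first backend
assert requests == [
    "web1.internal:8080",
    "web2.internal:8080",
    "web3.internal:8080",
    "web1.internal:8080",
]
```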
API Gateway
Load balancer as an API gateway with multiple services (config-api-gateway.yaml):
listen_address: "0.0.0.0"
listen_port: 443
algorithm: "least-connections"
max_connections: 50000

# User service backends
backends:
  - host: "user-service-1.prod.internal"
    port: 8080
    weight: 100
  - host: "user-service-2.prod.internal"
    port: 8080
    weight: 100

health_check:
  enabled: true
  interval_ms: 3000
  timeout_ms: 1500
  max_failures: 3
  path: "/api/health"

timeout:
  connect_ms: 3000
  request_ms: 30000
  idle_ms: 60000

tls:
  enabled: true
  cert_path: "/etc/ssl/certs/api.example.com.crt"
  key_path: "/etc/ssl/private/api.example.com.key"
  min_version: "1.3"

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/api-gateway.log"

metrics:
  enabled: true
  endpoint: "/metrics"
  prometheus_endpoint: "/prometheus"
Microservices Architecture
Configuration for microservices with different instance weights (config-microservices.yaml):
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "weighted"
workers: auto

# Mix of different instance types
backends:
  # High-performance instances
  - host: "ms-large-1.internal"
    port: 8080
    weight: 300            # 3x capacity
    max_connections: 3000
  - host: "ms-large-2.internal"
    port: 8080
    weight: 300
    max_connections: 3000
  # Standard instances
  - host: "ms-medium-1.internal"
    port: 8080
    weight: 200            # 2x capacity
    max_connections: 2000
  - host: "ms-medium-2.internal"
    port: 8080
    weight: 200
    max_connections: 2000
  # Small instances
  - host: "ms-small-1.internal"
    port: 8080
    weight: 100            # 1x capacity
    max_connections: 1000

health_check:
  enabled: true
  interval_ms: 5000
  max_failures: 3
  path: "/actuator/health"

circuit_breaker:
  enabled: true
  failure_threshold: 5
  success_threshold: 2
  timeout_seconds: 30

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/microservices.log"
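With the weighted algorithm, each backend's share of traffic is its weight divided by the sum of all weights, so a weight-300 instance receives three times the traffic of a weight-100 one. A rough sketch of proportional selection (hypothetical Python, one instance per tier for brevity):

```python
import random

# Weights mirroring the instance tiers above (one per tier, illustrative)
weights = {
    "ms-large-1.internal": 300,
    "ms-medium-1.internal": 200,
    "ms-small-1.internal": 100,
}

def pick_backend() -> str:
    # random.choices draws hosts proportionally to their weights
    hosts = list(weights)
    return random.choices(hosts, weights=[weights[h] for h in hosts])[0]

# Expected traffic share per tier: weight / total
total = sum(weights.values())                         # 600
assert weights["ms-large-1.internal"] / total == 0.5  # half the traffic
assert pick_backend() in weights
```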
Session Affinity (Sticky Sessions)
Use IP hash for session persistence (config-sticky-sessions.yaml):
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "ip-hash"   # Same client → same backend

backends:
  - host: "app1.internal"
    port: 8080
  - host: "app2.internal"
    port: 8080
  - host: "app3.internal"
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

timeout:
  connect_ms: 5000
  request_ms: 60000    # Longer for sessions
  idle_ms: 120000

logging:
  level: "info"
  format: "json"
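IP hash derives the backend index from a hash of the client address, so a given client is consistently routed to the same backend as long as the backend set is stable. A simplified sketch (the actual hash function and mapping used by UltraBalancer may differ):

```python
import hashlib

backends = ["app1.internal", "app2.internal", "app3.internal"]

def pick_backend(client_ip: str) -> str:
    # Stable hash of the client IP -> stable backend index
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Same client IP always lands on the same backend
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```

Note that removing a backend from the list reshuffles the mapping for most clients; consistent hashing reduces that churn but is beyond this sketch.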
High-Availability Setup
Multi-region HA configuration with a circuit breaker (config-ha.yaml):
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "least-connections"
workers: 16
max_connections: 100000

backends:
  # Primary region (us-east-1)
  - host: "app-useast1a-1.prod.internal"
    port: 8080
    weight: 150
    max_connections: 5000
  - host: "app-useast1a-2.prod.internal"
    port: 8080
    weight: 150
    max_connections: 5000
  - host: "app-useast1b-1.prod.internal"
    port: 8080
    weight: 150
    max_connections: 5000
  # Failover region (us-west-2)
  - host: "app-uswest2a-1.prod.internal"
    port: 8080
    weight: 100
    max_connections: 3000
  - host: "app-uswest2a-2.prod.internal"
    port: 8080
    weight: 100
    max_connections: 3000

health_check:
  enabled: true
  interval_ms: 2000    # Fast detection
  timeout_ms: 1000
  max_failures: 2

circuit_breaker:
  enabled: true
  failure_threshold: 5
  success_threshold: 3
  timeout_seconds: 60
  half_open_requests: 5

timeout:
  connect_ms: 3000
  request_ms: 30000
  idle_ms: 60000

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/ha.log"
  max_size_mb: 500
  max_files: 30

metrics:
  enabled: true
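The circuit breaker settings drive a closed → open → half-open state machine: after failure_threshold consecutive failures the backend is cut off, after timeout_seconds trial requests are allowed through, and success_threshold consecutive successes close the circuit again. A minimal sketch of that cycle (illustrative Python; half_open_requests limiting is omitted for brevity):

```python
import time

class CircuitBreaker:
    """Sketch of the closed -> open -> half-open cycle configured above."""

    def __init__(self, failure_threshold=5, success_threshold=3, timeout_s=60):
        self.failure_threshold = failure_threshold
        self.success_threshold = success_threshold
        self.timeout_s = timeout_s
        self.state = "closed"
        self.failures = 0
        self.successes = 0
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.state == "open":
            # After timeout_s, let trial requests through (half-open)
            if time.monotonic() - self.opened_at >= self.timeout_s:
                self.state = "half-open"
                self.successes = 0
                return True
            return False
        return True

    def record(self, ok: bool) -> None:
        if ok:
            if self.state == "half-open":
                self.successes += 1
                if self.successes >= self.success_threshold:
                    self.state = "closed"   # backend recovered
                    self.failures = 0
            else:
                self.failures = 0
        else:
            self.failures += 1
            # Any failure while half-open, or too many while closed, opens it
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()

cb = CircuitBreaker()
for _ in range(5):
    cb.record(ok=False)
assert cb.state == "open"          # backend cut off after 5 failures
assert not cb.allow_request()      # traffic blocked until the timeout elapses
```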
Environment-Specific Configurations
Development
config-dev.yaml
listen_address: "127.0.0.1"
listen_port: 8080
algorithm: "round-robin"

backends:
  - host: "localhost"
    port: 3001
  - host: "localhost"
    port: 3002

health_check:
  enabled: true
  interval_ms: 10000   # Less aggressive
  max_failures: 5      # More tolerant

timeout:
  connect_ms: 10000    # Generous timeouts
  request_ms: 60000

logging:
  level: "debug"       # Verbose logging
  format: "text"       # Human-readable
  output: "stdout"
Staging
config-staging.yaml
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "least-connections"
workers: 4

backends:
  - host: "backend1.staging.internal"
    port: 8080
  - host: "backend2.staging.internal"
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

timeout:
  connect_ms: 5000
  request_ms: 30000

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/staging.log"

metrics:
  enabled: true
Production
config-prod.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "least-connections"
workers: auto
max_connections: 50000

backends:
  - host: "backend1.prod.internal"
    port: 8080
    weight: 100
    max_connections: 2000
  - host: "backend2.prod.internal"
    port: 8080
    weight: 100
    max_connections: 2000
  - host: "backend3.prod.internal"
    port: 8080
    weight: 100
    max_connections: 2000

health_check:
  enabled: true
  interval_ms: 3000
  timeout_ms: 1500
  max_failures: 3
  path: "/health"

circuit_breaker:
  enabled: true
  failure_threshold: 5
  success_threshold: 2
  timeout_seconds: 60

timeout:
  connect_ms: 5000
  request_ms: 30000
  idle_ms: 60000

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/prod.log"
  max_size_mb: 100
  max_files: 30

metrics:
  enabled: true
Cloud Provider Configurations
AWS
- EC2 Instances
- ECS Tasks
- Lambda Integration
config-aws-ec2.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "least-connections"

backends:
  # Private subnet instances
  - host: "10.0.1.10"   # us-east-1a
    port: 8080
    weight: 100
  - host: "10.0.1.11"   # us-east-1a
    port: 8080
    weight: 100
  - host: "10.0.2.10"   # us-east-1b
    port: 8080
    weight: 100

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/access.log"
config-aws-ecs.yaml
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "least-connections"

# Service discovery backends
backends:
  - host: "service.prod.local"   # ECS Service Discovery
    port: 8080
    weight: 100

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

logging:
  level: "info"
  format: "json"
  output: "stdout"   # CloudWatch Logs
config-aws-lambda.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "round-robin"

backends:
  # Lambda Function URLs
  - host: "abc123.lambda-url.us-east-1.on.aws"
    port: 443
  - host: "def456.lambda-url.us-east-1.on.aws"
    port: 443

health_check:
  enabled: true
  interval_ms: 10000
  path: "/health"

tls:
  enabled: true   # HTTPS to Lambda
Google Cloud Platform
config-gcp.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "least-connections"

backends:
  # Compute Engine instances
  - host: "10.128.0.2"   # us-central1-a
    port: 8080
  - host: "10.128.0.3"   # us-central1-a
    port: 8080
  - host: "10.132.0.2"   # us-central1-b
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

logging:
  level: "info"
  format: "json"
  output: "stdout"   # Cloud Logging
Azure
config-azure.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "least-connections"

backends:
  # VM Scale Set instances
  - host: "10.0.1.4"   # Availability Zone 1
    port: 8080
  - host: "10.0.1.5"   # Availability Zone 1
    port: 8080
  - host: "10.0.2.4"   # Availability Zone 2
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/api/health"

logging:
  level: "info"
  format: "json"
  output: "/var/log/ultrabalancer/access.log"
DigitalOcean
config-digitalocean.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "round-robin"

backends:
  # Droplets in same VPC
  - host: "10.116.0.2"
    port: 8080
  - host: "10.116.0.3"
    port: 8080
  - host: "10.116.0.4"
    port: 8080

health_check:
  enabled: true
  interval_ms: 5000
  path: "/health"

logging:
  level: "info"
  format: "json"
Special Use Cases
WebSocket Support
config-websocket.yaml
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "ip-hash"   # Sticky sessions for WebSocket

backends:
  - host: "ws1.internal"
    port: 8080
  - host: "ws2.internal"
    port: 8080

timeout:
  connect_ms: 5000
  request_ms: 300000   # 5 minutes for long-lived connections
  idle_ms: 600000      # 10 minutes idle timeout

health_check:
  enabled: true
  interval_ms: 10000
  path: "/ws/health"
gRPC Services
config-grpc.yaml
listen_address: "0.0.0.0"
listen_port: 50051
algorithm: "least-connections"

backends:
  - host: "grpc-service-1.internal"
    port: 50051
  - host: "grpc-service-2.internal"
    port: 50051

health_check:
  enabled: true
  interval_ms: 5000
  path: "/grpc.health.v1.Health/Check"

timeout:
  connect_ms: 3000
  request_ms: 30000
Container Orchestration
- Docker Compose
- Kubernetes
- Nomad
docker-compose.yml
version: '3.8'

services:
  ultrabalancer:
    image: ultrabalancer/ultrabalancer:latest
    ports:
      - "80:8080"
    volumes:
      - ./config.yaml:/etc/ultrabalancer/config.yaml
    command: --config /etc/ultrabalancer/config.yaml

  backend1:
    image: myapp:latest
    expose:
      - "8080"

  backend2:
    image: myapp:latest
    expose:
      - "8080"

  backend3:
    image: myapp:latest
    expose:
      - "8080"
config.yaml
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "round-robin"

backends:
  - host: "backend1"
    port: 8080
  - host: "backend2"
    port: 8080
  - host: "backend3"
    port: 8080
k8s-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ultrabalancer-config
data:
  config.yaml: |
    listen_address: "0.0.0.0"
    listen_port: 8080
    algorithm: "least-connections"
    backends:
      - host: "backend-service"
        port: 8080
    health_check:
      enabled: true
      interval_ms: 5000
      path: "/health"
    logging:
      level: "info"
      format: "json"
nomad-job.hcl
job "ultrabalancer" {
  datacenters = ["dc1"]

  group "lb" {
    count = 1

    task "ultrabalancer" {
      driver = "docker"

      config {
        image = "ultrabalancer/ultrabalancer:latest"
        port_map {
          http = 8080
        }
      }

      template {
        data = <<EOF
listen_address: "0.0.0.0"
listen_port: 8080
algorithm: "round-robin"
backends:
{{- range service "backend" }}
  - host: "{{ .Address }}"
    port: {{ .Port }}
{{- end }}
EOF
        destination = "local/config.yaml"
      }

      env {
        ULTRA_CONFIG = "local/config.yaml"
      }
    }
  }
}
Blue-Green Deployment
config-blue-green.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "weighted"

backends:
  # Blue environment (current production)
  - host: "blue-1.internal"
    port: 8080
    weight: 100   # 100% traffic
  - host: "blue-2.internal"
    port: 8080
    weight: 100
  # Green environment (new version, no traffic)
  - host: "green-1.internal"
    port: 8080
    weight: 0     # 0% traffic initially
  - host: "green-2.internal"
    port: 8080
    weight: 0

health_check:
  enabled: true
  interval_ms: 3000
To cut over to green, swap the weights and reload:
# Update weights and reload
backends:
  - host: "blue-1.internal"
    weight: 0     # 0% traffic
  - host: "blue-2.internal"
    weight: 0
  - host: "green-1.internal"
    weight: 100   # 100% traffic
  - host: "green-2.internal"
    weight: 100
# Reload configuration
kill -HUP $(pidof ultrabalancer)
Canary Deployment
config-canary.yaml
listen_address: "0.0.0.0"
listen_port: 80
algorithm: "weighted"

backends:
  # Stable version (95% traffic)
  - host: "stable-1.internal"
    port: 8080
    weight: 475
  - host: "stable-2.internal"
    port: 8080
    weight: 475
  # Canary version (5% traffic)
  - host: "canary-1.internal"
    port: 8080
    weight: 50

health_check:
  enabled: true
  interval_ms: 3000
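The weights above sum to 1,000, so each backend's traffic share is its weight divided by that total: each stable instance gets 47.5% and the canary gets 5%. Verifying the arithmetic:

```python
# Weights from config-canary.yaml above
weights = {"stable-1.internal": 475, "stable-2.internal": 475, "canary-1.internal": 50}

total = sum(weights.values())
assert total == 1000
assert weights["canary-1.internal"] / total == 0.05    # canary: 5%
assert weights["stable-1.internal"] / total == 0.475   # each stable: 47.5%
```

To ramp the canary, raise its weight and reload, just as in the blue-green example.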