EServer Docker Deployment Guide
Overview
Run Circuitry EServer in Docker containers for portable, isolated deployment. Perfect for cloud servers, Kubernetes clusters, and scalable infrastructure.
Table of Contents
- Quick Start
- Docker Compose
- Environment Variables
- Volume Mounting
- GPU Passthrough
- Cloud Deployment
- Kubernetes
- Configuration
- Networking
- Monitoring
- Troubleshooting
Quick Start
Pull and Run
# Pull the latest image
docker pull johnwylie/eserver:latest
# Run with default settings
docker run -d -p 3030:3030 --name eserver johnwylie/eserver:latest
# Get the initial access key
docker exec eserver cat /etc/eserver/initial-key.txt
Build from Source
# Clone the repository
git clone https://github.com/victorum/circuitry.git
cd circuitry
# Build the image
docker build -f Dockerfile.eserver -t johnwylie/eserver:latest .
# Run the container
docker run -d -p 3030:3030 --name eserver johnwylie/eserver:latest
Docker Compose
The easiest way to run EServer with proper configuration:
# docker-compose.yml
version: '3.8'

services:
  eserver:
    image: johnwylie/eserver:latest
    container_name: circuitry-eserver
    ports:
      - "3030:3030"
    volumes:
      - eserver-data:/var/lib/eserver
      - eserver-config:/etc/eserver
    environment:
      - NODE_ENV=production
      - MAX_CONCURRENT_EXECUTIONS=10
      - MAX_EXECUTION_TIME=600000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3030/ping"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 5s
    networks:
      - eserver-network

volumes:
  eserver-data:
    driver: local
  eserver-config:
    driver: local

networks:
  eserver-network:
    driver: bridge
Start the service:
docker-compose up -d
Environment Variables
Configure EServer using environment variables:
| Variable | Description | Default |
|---|---|---|
| NODE_ENV | Node environment | production |
| SERVER_PORT | Server port | 3030 |
| SERVER_HOST | Bind address | 0.0.0.0 |
| MAX_CONCURRENT_EXECUTIONS | Max parallel executions | 5 |
| MAX_EXECUTION_TIME | Max execution time (ms) | 300000 |
| ALLOW_NETWORK_ACCESS | Allow network access | true |
| CORS_ALLOW_ALL_ORIGINS | Allow all CORS origins | true |
| PYTHON_COMMAND | Python executable | python3 |
| ENABLE_DEBUG_LOGGING | Enable debug logs | false |
Example:
docker run -d \
-p 3030:3030 \
-e MAX_CONCURRENT_EXECUTIONS=20 \
-e MAX_EXECUTION_TIME=600000 \
-e ENABLE_DEBUG_LOGGING=true \
--name eserver \
johnwylie/eserver:latest
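MAX_CONCURRENT_EXECUTIONS is a hard cap on parallel executions: requests beyond the cap are held or rejected until a slot frees up. The sketch below illustrates the semaphore behavior such a cap implies; the class and method names are illustrative, not EServer's actual internals.

```python
import threading

class ExecutionGate:
    """Illustrative cap on parallel executions, mirroring the
    MAX_CONCURRENT_EXECUTIONS setting (hypothetical helper, not
    EServer's real implementation)."""

    def __init__(self, max_concurrent: int):
        self._sem = threading.BoundedSemaphore(max_concurrent)

    def try_acquire(self) -> bool:
        # Non-blocking: returns False once the cap is reached, which
        # a server would typically surface as HTTP 429 or 503.
        return self._sem.acquire(blocking=False)

    def release(self) -> None:
        self._sem.release()

gate = ExecutionGate(max_concurrent=2)
slots = [gate.try_acquire() for _ in range(3)]
print(slots)  # → [True, True, False]
```

Raising the cap trades isolation for throughput: each in-flight execution holds memory and a Python process, so size it against the container's memory limit.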
Volume Mounting
Persistent Data
Mount volumes to persist data and configuration:
docker run -d \
-p 3030:3030 \
-v $(pwd)/eserver-data:/var/lib/eserver \
-v $(pwd)/eserver-config:/etc/eserver \
--name eserver \
johnwylie/eserver:latest
File Access
Mount additional directories for workflow file access:
docker run -d \
-p 3030:3030 \
-v $(pwd)/eserver-data:/var/lib/eserver \
-v $(pwd)/workflows:/workflows:ro \
-v $(pwd)/output:/output \
--name eserver \
johnwylie/eserver:latest
Custom Python Packages
Mount a custom Python environment:
docker run -d \
-p 3030:3030 \
-v $(pwd)/python-packages:/usr/local/lib/python3.11/site-packages \
--name eserver \
johnwylie/eserver:latest
GPU Passthrough
Enable GPU access for AI/ML workflows using NVIDIA GPUs.
Requirements
- CUDA-capable NVIDIA GPU on the host machine
- NVIDIA driver installed on the host
- NVIDIA Container Toolkit installed
Install NVIDIA Container Toolkit
# Ubuntu/Debian
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
Run with GPU
# Using --gpus flag
docker run -d \
--gpus all \
-p 3030:3030 \
--name eserver \
johnwylie/eserver:latest
# Specify GPU device
docker run -d \
--gpus '"device=0"' \
-p 3030:3030 \
--name eserver \
johnwylie/eserver:latest
Docker Compose with GPU
version: '3.8'

services:
  eserver:
    image: johnwylie/eserver:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - "3030:3030"
    restart: unless-stopped
Install GPU Libraries in Container
Create a custom Dockerfile:
FROM johnwylie/eserver:latest
# Install CUDA-enabled PyTorch
RUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Install TensorFlow with GPU support
RUN pip3 install tensorflow[and-cuda]
# Install other GPU libraries
RUN pip3 install cupy-cuda11x jax[cuda11_local]
Build and run:
docker build -t johnwylie/eserver:gpu .
docker run -d --gpus all -p 3030:3030 johnwylie/eserver:gpu
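Once the container is up with `--gpus`, passthrough can be confirmed with `docker exec eserver nvidia-smi -L`. Below is a small parser for that listing, assuming the usual `GPU <index>: <name> (UUID: GPU-...)` line format; the sample UUID is made up.

```python
import re

def parse_gpu_list(listing: str) -> list[dict]:
    # Each `nvidia-smi -L` line looks like:
    #   GPU 0: NVIDIA GeForce RTX 3090 (UUID: GPU-xxxxxxxx-...)
    pattern = re.compile(r"GPU (\d+): (.+?) \(UUID: (GPU-[0-9a-fA-F-]+)\)")
    return [
        {"index": int(m.group(1)), "name": m.group(2), "uuid": m.group(3)}
        for line in listing.splitlines()
        if (m := pattern.match(line.strip()))
    ]

sample = "GPU 0: NVIDIA A100-SXM4-40GB (UUID: GPU-11111111-2222-3333-4444-555555555555)"
print(parse_gpu_list(sample)[0]["name"])  # → NVIDIA A100-SXM4-40GB
```

An empty result from inside the container usually means the `--gpus` flag (or the toolkit install) is missing, not a library problem.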
Cloud Deployment
AWS ECS
1. Create Task Definition
{
  "family": "circuitry-eserver",
  "containerDefinitions": [
    {
      "name": "eserver",
      "image": "johnwylie/eserver:latest",
      "cpu": 1024,
      "memory": 2048,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 3030,
          "hostPort": 3030,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        {
          "name": "MAX_CONCURRENT_EXECUTIONS",
          "value": "10"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "eserver-data",
          "containerPath": "/var/lib/eserver"
        }
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:3030/ping || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 10
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/circuitry-eserver",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "eserver-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-1234567",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048"
}
2. Create Service
aws ecs create-service \
--cluster my-cluster \
--service-name eserver \
--task-definition circuitry-eserver \
--desired-count 2 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-12345],assignPublicIp=ENABLED}"
Google Cloud Run
# Build and push to GCR
gcloud builds submit --tag gcr.io/PROJECT_ID/eserver
# Deploy to Cloud Run
gcloud run deploy eserver \
--image gcr.io/PROJECT_ID/eserver \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--port 3030 \
--memory 2Gi \
--cpu 2 \
--max-instances 10 \
--set-env-vars "MAX_CONCURRENT_EXECUTIONS=10,MAX_EXECUTION_TIME=600000"
Azure Container Instances
az container create \
--resource-group myResourceGroup \
--name eserver \
--image johnwylie/eserver:latest \
--dns-name-label eserver \
--ports 3030 \
--cpu 2 \
--memory 4 \
--environment-variables \
NODE_ENV=production \
MAX_CONCURRENT_EXECUTIONS=10 \
--restart-policy Always
DigitalOcean
# Create App Platform spec
cat > app.yaml <<EOF
name: circuitry-eserver
services:
  - name: eserver
    image:
      registry_type: DOCKER_HUB
      repository: johnwylie/eserver
      tag: latest
    http_port: 3030
    instance_count: 2
    instance_size_slug: professional-xs
    envs:
      - key: NODE_ENV
        value: "production"
      - key: MAX_CONCURRENT_EXECUTIONS
        value: "10"
    health_check:
      http_path: /ping
      initial_delay_seconds: 10
      period_seconds: 30
EOF
# Deploy
doctl apps create --spec app.yaml
Kubernetes
Deployment
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eserver
  labels:
    app: eserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: eserver
  template:
    metadata:
      labels:
        app: eserver
    spec:
      containers:
        - name: eserver
          image: johnwylie/eserver:latest
          ports:
            - containerPort: 3030
          env:
            - name: NODE_ENV
              value: "production"
            - name: MAX_CONCURRENT_EXECUTIONS
              value: "20"
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          volumeMounts:
            - name: eserver-data
              mountPath: /var/lib/eserver
            - name: eserver-config
              mountPath: /etc/eserver
          livenessProbe:
            httpGet:
              path: /ping
              port: 3030
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /ping
              port: 3030
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: eserver-data
          persistentVolumeClaim:
            claimName: eserver-data-pvc
        - name: eserver-config
          configMap:
            name: eserver-config
---
apiVersion: v1
kind: Service
metadata:
  name: eserver
spec:
  type: LoadBalancer
  ports:
    - port: 3030
      targetPort: 3030
      protocol: TCP
  selector:
    app: eserver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eserver-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Apply:
kubectl apply -f deployment.yaml
Horizontal Pod Autoscaling
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: eserver-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: eserver
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
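The autoscaler's core rule is a simple ratio: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to minReplicas/maxReplicas. A quick check of how the manifest above behaves:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int = 2, max_r: int = 10) -> int:
    # Kubernetes HPA scaling rule:
    #   desired = ceil(current * currentMetric / targetMetric)
    # then clamped to the minReplicas/maxReplicas bounds
    raw = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, raw))

# With the CPU target of 70% from the manifest:
print(desired_replicas(3, 140, 70))  # → 6 (double the load, double the pods)
print(desired_replicas(3, 35, 70))   # → 2 (scales down toward minReplicas)
```

With multiple metrics (CPU and memory here), the HPA computes a desired count per metric and takes the largest, so whichever resource is most saturated drives the scale-out.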
Configuration
Custom Config File
Mount a custom configuration:
# Create config.json
cat > config.json <<EOF
{
  "version": "1.0",
  "server": {
    "port": 3030,
    "host": "0.0.0.0",
    "allowNetworkAccess": true,
    "maxConcurrentExecutions": 20,
    "requestTimeout": 600000
  },
  "cors": {
    "enabled": true,
    "origins": ["https://circuitry.dev"],
    "allowAllOrigins": false
  },
  "security": {
    "allowedIPs": ["192.168.1.0/24"],
    "rateLimit": {
      "enabled": true,
      "maxRequests": 200,
      "windowMs": 60000
    }
  },
  "execution": {
    "maxExecutionTime": 600000,
    "saveHistory": true,
    "maxHistorySize": 1000
  }
}
EOF
# Run with custom config
docker run -d \
-p 3030:3030 \
-v $(pwd)/config.json:/etc/eserver/config.json \
johnwylie/eserver:latest
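A malformed config file is an easy way to get a container that starts and then misbehaves, so it is worth sanity-checking config.json before mounting it. A minimal validator sketch, assuming only the fields shown in the example above (the real schema may accept more, and the allowAllOrigins/origins interaction is an assumption):

```python
import json

def validate_config(text: str) -> list[str]:
    """Return a list of problems found in an EServer config.json.
    Only fields from the documented example are checked."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    server = cfg.get("server", {})
    port = server.get("port")
    if not isinstance(port, int) or not 1 <= port <= 65535:
        problems.append("server.port must be an integer in 1-65535")
    mce = server.get("maxConcurrentExecutions", 1)
    if not isinstance(mce, int) or mce < 1:
        problems.append("server.maxConcurrentExecutions must be a positive integer")
    cors = cfg.get("cors", {})
    if cors.get("allowAllOrigins") and cors.get("origins"):
        # Assumption: an explicit origins list is redundant when
        # allowAllOrigins is true
        problems.append("cors.origins is redundant when allowAllOrigins is true")
    return problems

print(validate_config('{"server": {"port": 3030}}'))  # → []
```

Running this in CI before a deploy catches typos (a string port, a missing brace) earlier than a crashed container would.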
Networking
Bridge Network
# Create custom network
docker network create --driver bridge eserver-network
# Run container on network
docker run -d \
--network eserver-network \
--name eserver \
-p 3030:3030 \
johnwylie/eserver:latest
Host Network (for maximum performance)
Host networking bypasses Docker's NAT layer; -p mappings are ignored and the server listens directly on host port 3030.
docker run -d \
--network host \
johnwylie/eserver:latest
Reverse Proxy with Nginx
# nginx.conf
server {
    listen 80;
    server_name eserver.example.com;

    location / {
        proxy_pass http://localhost:3030;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Monitoring
Prometheus Metrics
EServer exposes metrics at /metrics:
# prometheus.yml
scrape_configs:
  - job_name: 'eserver'
    static_configs:
      - targets: ['eserver:3030']
    metrics_path: '/metrics'
Health Checks
# Check container health
docker inspect --format='{{.State.Health.Status}}' eserver
# Check endpoint directly
curl http://localhost:3030/ping
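The same /ping probe can be driven from a deployment script, for example to block until the container is ready before running smoke tests. A sketch with illustrative retry settings; `probe` is injectable so the loop can be tested without a live server:

```python
import time
import urllib.request

def wait_until_healthy(url: str = "http://localhost:3030/ping",
                       retries: int = 5, delay: float = 2.0,
                       probe=None) -> bool:
    """Poll /ping until it answers 200, mirroring the container
    healthcheck. Returns False if the server never comes up."""
    if probe is None:
        def probe(u):
            with urllib.request.urlopen(u, timeout=3) as resp:
                return resp.status == 200
    for attempt in range(retries):
        try:
            if probe(url):
                return True
        except OSError:
            pass  # connection refused while the server boots; retry
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```

Usage: `wait_until_healthy() or sys.exit("eserver did not become healthy")` after `docker run`.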
Logging
# View logs
docker logs eserver
# Follow logs
docker logs -f eserver
# Last 100 lines
docker logs --tail 100 eserver
Troubleshooting
Container Won't Start
# Check logs
docker logs eserver
# Inspect container
docker inspect eserver
# Check port conflicts
lsof -i :3030
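The `lsof` check above can also be done portably from Python by attempting a TCP connection; a successful connect means another process already holds the port:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Rough equivalent of `lsof -i :<port>`: connect_ex returns 0
    only when something is already listening on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

If port 3030 is taken, remap the host side instead of fighting the other process, e.g. `-p 3131:3030`.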
Permission Issues
# Run as specific user
docker run -d \
--user $(id -u):$(id -g) \
-p 3030:3030 \
johnwylie/eserver:latest
Network Access Issues
# Allow all CORS origins
docker run -d \
-p 3030:3030 \
-e CORS_ALLOW_ALL_ORIGINS=true \
johnwylie/eserver:latest
GPU Not Detected
# Verify NVIDIA runtime
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
# Check Docker daemon config
cat /etc/docker/daemon.json
# The "runtimes" section should include "nvidia"; setting
# "default-runtime": "nvidia" is only needed when containers
# are started without the --gpus flag
Performance Tuning
Resource Limits
docker run -d \
-p 3030:3030 \
--memory="4g" \
--cpus="2.0" \
--memory-swap="8g" \
johnwylie/eserver:latest
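Docker's memory flags take suffixed strings, and it is easy to mis-size `--memory-swap` relative to `--memory` (the swap value must be at least the memory limit, since it is a total). A small helper to convert these strings to bytes for comparison; Docker interprets the suffixes as binary multiples:

```python
def parse_memory(value: str) -> int:
    """Convert a Docker-style memory string ("4g", "512m") to bytes.
    Docker uses binary units: k=2**10, m=2**20, g=2**30."""
    units = {"b": 1, "k": 2**10, "m": 2**20, "g": 2**30}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)  # bare number = bytes

print(parse_memory("4g"))    # → 4294967296
print(parse_memory("512m"))  # → 536870912
```

With the flags above, `parse_memory("8g") >= parse_memory("4g")` holds, so the combination is valid; an `--memory-swap` smaller than `--memory` makes `docker run` fail.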
Production Best Practices
- Use volume mounts for persistent data
- Set resource limits to prevent resource exhaustion
- Enable health checks for automatic recovery
- Use restart policies for high availability
- Monitor metrics with Prometheus/Grafana
- Enable logging to external systems
- Use secrets management for access keys
- Implement rate limiting at reverse proxy level
- Use CDN for static assets
- Enable SSL/TLS at reverse proxy
Security
Secrets Management
# Create Docker secret (requires Swarm mode; printf avoids
# embedding a trailing newline in the secret)
printf '%s' "your-access-key" | docker secret create eserver_key -
# Use secret in container
docker service create \
--name eserver \
--secret eserver_key \
-p 3030:3030 \
johnwylie/eserver:latest
Network Security
# Run on private network only
docker run -d \
--network private-network \
--name eserver \
johnwylie/eserver:latest