Deployment Guide#
Complete guide to deploying applications with podlift.
Basic Deployment Flow#
Standard deployment workflow:
# 1. Make changes
vim app/views.py
# 2. Test locally
docker build -t myapp .
docker run -p 8000:8000 myapp
# 3. Commit changes
git add -A
git commit -m "Fix bug in views"
# 4. Deploy
podlift deploy
That’s it. podlift handles the rest.
Zero-Downtime Deployment#
How podlift ensures no requests are dropped.
The Process#
Current state: web-v1 running, serving traffic
1. Start web-v2 containers (old still running)
2. Wait for web-v2 health checks
3. Update nginx to route to web-v2
4. Wait 30s for connection draining
5. Stop web-v1 containers
New state: web-v2 running, serving traffic
During Deployment#
$ podlift deploy
[5/7] Starting new containers...
Starting myapp-web-abc123-1...
Health check passed (5s)
Starting myapp-web-abc123-2...
Health check passed (4s)
[6/7] Updating nginx configuration...
Traffic switching to new version...
✓ 100% of traffic on new version
[7/7] Connection draining...
Waiting 30s for in-flight requests to complete...
✓ Old containers stopped
What nginx Does#
Before:
upstream myapp_web {
    server 127.0.0.1:8001;  # web-v1
    server 127.0.0.1:8002;  # web-v1
}
After:
upstream myapp_web {
    server 127.0.0.1:9001;  # web-v2
    server 127.0.0.1:9002;  # web-v2
}
nginx reloads gracefully. In-flight requests to v1 complete. New requests go to v2.
Failure Handling#
If new version fails health checks:
[5/7] Starting new containers...
Starting myapp-web-abc123-1...
✗ Health check failed (timeout 30s)
ERROR: Deployment failed
Container started but failed health checks.
Old containers still running. No downtime occurred.
Check logs: podlift logs web
Old version keeps running. No traffic switches. No downtime.
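The safety property here is that the traffic switch is gated on health checks, so a failed check can never cause downtime. A sketch of that gate (names and data shapes are illustrative, not podlift internals):

```python
def deploy_step(new_containers, healthy):
    """Gate the traffic switch on health checks.

    `healthy` is a callable returning True once a container passes its
    health check.
    """
    for c in new_containers:
        if not healthy(c):
            # Old containers were never touched, so this is downtime-free.
            return {"status": "failed", "failed": c, "traffic": "old"}
    # Only now is nginx rewritten and the old version stopped.
    return {"status": "deployed", "traffic": "new"}

result = deploy_step(["web-1", "web-2"], healthy=lambda c: c != "web-2")
```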
Rollback#
Reverting to previous version.
Quick Rollback#
podlift rollback
Reverts to the last successful deployment.
Rollback to Specific Version#
# By git tag
podlift rollback --to v1.2.3
# By git commit
podlift rollback --to a1b2c3d
# By looking at deployment history
podlift ps --all
podlift rollback --to x9y8z7w
How It Works#
$ podlift rollback
Finding previous deployment...
Found: x9y8z7w "Working version" (deployed 2h ago)
[1/4] Starting old containers...
✓ myapp-web-x9y8z7w-1 started
✓ Health check passed (3s)
[2/4] Updating nginx...
✓ Traffic routing to x9y8z7w
[3/4] Stopping current containers...
✓ myapp-web-abc123-1 stopped
[4/4] Cleanup...
✓ Complete
✓ Rollback successful!
Time: 34s
Same zero-downtime process, just in reverse.
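Picking the rollback target follows the rules above: a specific version if `--to` is given, otherwise the last successful deployment before the current one. A sketch of that selection (the history entry shape is an assumption for illustration):

```python
def rollback_target(history, to=None):
    """Pick the deployment to roll back to.

    `history` is newest-first; each entry records a version and whether
    that deploy succeeded.
    """
    if to is not None:
        for entry in history:
            if entry["version"] == to:
                return entry
        raise ValueError(f"no deployment found for {to}")
    # Default: skip the current deployment, take the last successful one.
    for entry in history[1:]:
        if entry["ok"]:
            return entry
    raise ValueError("no previous successful deployment")

history = [
    {"version": "abc123", "ok": True},
    {"version": "x9y8z7w", "ok": True},
    {"version": "deadbee", "ok": False},
]
```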
Rollback Failed Deployment#
If deployment fails, you don’t need to roll back: the old version is still running:
$ podlift deploy
# Deployment fails
$ podlift ps
SERVICE   VERSION   STATUS
web       x9y8z7w   healthy   ← Still running
# Fix the issue, redeploy
$ podlift deploy
Multi-Server Deployment#
Deploying to multiple servers.
Serial Deployment (Default)#
servers:
  web:
    - host: 192.168.1.10
    - host: 192.168.1.11
    - host: 192.168.1.12
$ podlift deploy
Deploying to 192.168.1.10...
[1/7] Validate...
[2/7] Build (local)...
[3/7] Transfer...
[4/7] Load...
[5/7] Start containers...
[6/7] Update nginx...
[7/7] Cleanup...
✓ Complete
Deploying to 192.168.1.11...
[3/7] Transfer...
[4/7] Load...
[5/7] Start containers...
[6/7] Update nginx...
[7/7] Cleanup...
✓ Complete
Deploying to 192.168.1.12...
✓ Complete
✓ All servers deployed successfully!
Benefits:
- If first server fails, stop before touching others
- Clear progress tracking
- Predictable order
When to use: Production deployments where safety matters.
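The serial strategy above reduces to a loop that stops at the first failure, leaving later servers untouched. A minimal sketch (the callback stands in for the full per-server deploy pipeline):

```python
def deploy_serial(hosts, deploy_one):
    """Deploy to hosts one at a time, stopping at the first failure.

    `deploy_one` returns True on success. A failure leaves the remaining
    hosts untouched -- the safety property described above.
    """
    done, remaining = [], list(hosts)
    while remaining:
        host = remaining[0]
        if not deploy_one(host):
            return {"deployed": done, "failed": host, "untouched": remaining[1:]}
        done.append(remaining.pop(0))
    return {"deployed": done, "failed": None, "untouched": []}

hosts = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
result = deploy_serial(hosts, deploy_one=lambda h: h != "192.168.1.11")
```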
Parallel Deployment#
podlift deploy --parallel
All servers deploy simultaneously:
Deploying to all servers in parallel...
192.168.1.10: [=====> ] 50%
192.168.1.11: [=======> ] 70%
192.168.1.12: [=========>] 90%
Benefits:
- Faster (roughly 3x with 3 servers)
Drawbacks:
- If one fails, others continue (potential inconsistency)
- Less clear error reporting
When to use: Staging environments or when speed matters more than safety.
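In parallel mode each host finishes independently, which is where the mixed-version risk comes from: one host can fail while the others complete. A sketch of that behavior (the callback stands in for the per-server deploy):

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_parallel(hosts, deploy_one):
    """Deploy to all hosts at once; each host finishes regardless of the others."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        results = dict(zip(hosts, pool.map(deploy_one, hosts)))
    # Unlike serial mode, a failure doesn't stop the other hosts, so the
    # fleet can end up on mixed versions -- the inconsistency noted above.
    return {h: ("ok" if ok else "failed") for h, ok in results.items()}

hosts = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
result = deploy_parallel(hosts, deploy_one=lambda h: h != "192.168.1.11")
```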
Load Balancing#
Automatic with 2+ servers. podlift sets up nginx load balancing automatically when you deploy to multiple servers - no extra configuration needed.
How It Works#
When deploying to multiple servers, podlift:
- Detects multiple servers in configuration
- Deploys your application to all of them
- Automatically configures nginx on the primary server to load balance across all servers
- Uses the least_conn algorithm for optimal distribution
- Sets up health checks and connection pooling
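The idea behind least_conn is simple: route each new request to the backend with the fewest active connections. A sketch of the core selection (real nginx also supports server weights and tracks connection counts internally):

```python
def least_conn(active):
    """Pick the backend with the fewest active connections.

    `active` maps backend address -> current connection count.
    """
    return min(active, key=active.get)

active = {
    "192.168.1.10:8001": 12,
    "192.168.1.10:8002": 3,
    "192.168.1.11:8001": 7,
}
```

This favors fast backends automatically: a slow server accumulates connections and receives less new traffic.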
Configuration#
servers:
  web:
    - host: 192.168.1.10
    - host: 192.168.1.11

services:
  web:
    replicas: 2  # 2 containers per server = 4 total
nginx Configuration#
podlift generates:
upstream myapp_web {
    # Server 1
    server 192.168.1.10:8001;
    server 192.168.1.10:8002;
    # Server 2
    server 192.168.1.11:8001;
    server 192.168.1.11:8002;
}

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_pass http://myapp_web;
    }
}
nginx distributes requests across all containers on all servers.
Adding Load Balancer#
For production, put a load balancer in front:
                 ┌─> Server 1 (192.168.1.10)
Client → LB ─────┼─> Server 2 (192.168.1.11)
                 └─> Server 3 (192.168.1.12)
Use:
- DigitalOcean Load Balancer
- AWS ALB
- Cloudflare
- HAProxy
Point LB to all server IPs on port 80/443.
Worker Servers#
Separate servers for background jobs.
Configuration#
servers:
  web:
    - host: 192.168.1.10
      labels: [primary]  # Dependencies run here
    - host: 192.168.1.11
  worker:
    - host: 192.168.1.20
    - host: 192.168.1.21

dependencies:
  postgres:
    image: postgres:16
  redis:
    image: redis:7

services:
  web:
    port: 8000
    healthcheck:
      path: /health
  worker:
    command: celery -A myapp worker
    replicas: 2
    healthcheck: false  # Workers don't have HTTP endpoints
What Happens#
Server 192.168.1.10 (primary):
- postgres
- redis
- web (2 containers)
Server 192.168.1.11:
- web (2 containers)
Server 192.168.1.20:
- worker (2 containers)
Server 192.168.1.21:
- worker (2 containers)
Workers connect to postgres/redis on the primary server.
Environment Variables#
services:
  web:
    env:
      DATABASE_URL: postgres://postgres:${DB_PASSWORD}@primary:5432/myapp
      REDIS_URL: redis://primary:6379
  worker:
    env:
      DATABASE_URL: postgres://postgres:${DB_PASSWORD}@192.168.1.10:5432/myapp
      REDIS_URL: redis://192.168.1.10:6379
primary resolves to the server with labels: [primary].
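Resolving primary is a lookup for the host carrying that label across all server groups. A sketch of that lookup (the servers data shape mirrors the YAML above but is an assumption for illustration):

```python
def resolve_primary(servers):
    """Find the host carrying the `primary` label.

    `servers` is a mapping of role -> list of {host, labels} entries.
    """
    for role_hosts in servers.values():
        for entry in role_hosts:
            if "primary" in entry.get("labels", []):
                return entry["host"]
    raise ValueError("no server labelled primary")

servers = {
    "web": [
        {"host": "192.168.1.10", "labels": ["primary"]},
        {"host": "192.168.1.11"},
    ],
    "worker": [{"host": "192.168.1.20"}],
}
```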
Deployment Strategies#
Strategy 1: Simple Single Server#
Best for: Side projects, MVPs, small apps
service: myapp
servers:
  - host: 192.168.1.10
Pros:
- Simple
- Cheap ($5-10/month)
- Easy to debug
Cons:
- Single point of failure
- Limited scale
Strategy 2: Multi-Server Web#
Best for: Growing apps with traffic spikes
servers:
  web:
    - host: 192.168.1.10
      labels: [primary]
    - host: 192.168.1.11
    - host: 192.168.1.12
Pros:
- Handles more traffic
- Redundancy
- Can scale by adding servers
Cons:
- More expensive
- Dependencies still single point of failure
Strategy 3: Separate Web + Workers#
Best for: Apps with background jobs
servers:
  web:
    - host: 192.168.1.10
      labels: [primary]
    - host: 192.168.1.11
  worker:
    - host: 192.168.1.20
    - host: 192.168.1.21
Pros:
- Workers don’t affect web performance
- Can scale web and workers independently
Cons:
- More servers = more cost
- More complex
Strategy 4: Separate Database Server#
Best for: Apps with heavy database load
Not directly supported by podlift (use a managed database):
services:
  web:
    env:
      DATABASE_URL: postgres://user:pass@db-server.example.com:5432/myapp
Use DigitalOcean Managed Database, AWS RDS, etc.
Deployment Hooks#
Run commands after deployment.
Configuration#
hooks:
  after_deploy:
    - docker exec myapp-web-1 python manage.py migrate
    - docker exec myapp-web-1 python manage.py collectstatic --noinput
    - docker exec myapp-web-1 python manage.py clearsessions
When Hooks Run#
$ podlift deploy
[1/7] Validate...
[2/7] Build...
[3/7] Transfer...
[4/7] Load...
[5/7] Start containers...
[6/7] Update nginx...
[7/7] Cleanup...
Running post-deploy hooks...
✓ python manage.py migrate (2s)
✓ python manage.py collectstatic (3s)
✓ python manage.py clearsessions (1s)
✓ Deployment successful!
Hook Types#
hooks:
  before_deploy:
    - echo "Deployment starting"
  after_deploy:
    - docker exec myapp-web-1 python manage.py migrate
  after_rollback:
    - echo "Rolled back to previous version"
Hooks run on the primary server via SSH.
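Hooks run in the order listed, and a failing command stops the rest, as the deploy output above suggests. A sketch of that sequencing (`run` stands in for executing a command on the primary server over SSH and returning True on exit code 0; all names are illustrative):

```python
def run_hooks(commands, run):
    """Run hook commands in order, stopping at the first failure."""
    completed = []
    for cmd in commands:
        if not run(cmd):
            return {"completed": completed, "failed": cmd}
        completed.append(cmd)
    return {"completed": completed, "failed": None}

hooks = [
    "python manage.py migrate",
    "python manage.py collectstatic --noinput",
]
result = run_hooks(hooks, run=lambda cmd: "migrate" in cmd)
```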
Environment-Specific Deploys#
Different configs for staging vs production.
Option 1: Separate Config Files#
podlift.staging.yml
podlift.production.yml
Deploy:
# Staging
podlift deploy --config podlift.staging.yml
# Production
podlift deploy --config podlift.production.yml
Option 2: Environment Variables#
# podlift.yml
servers:
  - host: ${SERVER_HOST}

services:
  web:
    env:
      ENVIRONMENT: ${ENVIRONMENT}
      DEBUG: ${DEBUG}
Deploy:
# Staging
ENVIRONMENT=staging SERVER_HOST=staging.server.com podlift deploy
# Production
ENVIRONMENT=production SERVER_HOST=prod.server.com podlift deploy
Option 3: Git Branches#
# Staging (deploy from staging branch)
git checkout staging
podlift deploy --config podlift.yml
# Production (deploy from main branch)
git checkout main
podlift deploy --config podlift.yml
Use same config, different branches = different code versions.
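Variable substitution like ${SERVER_HOST} above is straightforward to sketch. This version fails loudly on unset variables rather than silently deploying to the wrong host (a design choice for this sketch, not documented podlift behaviour):

```python
import os
import re

def substitute(text, env=None):
    """Expand ${VAR} placeholders using the given environment mapping."""
    env = env if env is not None else os.environ

    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]

    return re.sub(r"\$\{(\w+)\}", repl, text)

config = "servers:\n  - host: ${SERVER_HOST}"
rendered = substitute(config, env={"SERVER_HOST": "staging.server.com"})
```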
Database Migrations#
Handling schema changes.
Strategy 1: Post-Deploy Hook (Recommended)#
hooks:
  after_deploy:
    - docker exec myapp-web-1 python manage.py migrate
Flow:
- Deploy new code
- New containers start
- Run migrations
- Traffic switches to new version
Safe for:
- Adding columns
- Adding tables
- Adding indexes (with CREATE INDEX CONCURRENTLY)
Unsafe for:
- Removing columns (old code still running)
- Renaming columns
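The safe/unsafe split above can be expressed as a pre-deploy check. A sketch using the categories from the lists (the operation names and the check itself are illustrative, not a podlift feature):

```python
# Backward compatible: old code keeps working while these land.
SAFE = {"add_column", "add_table", "add_index_concurrently"}
# Old code may still reference what these remove or rename.
UNSAFE = {"drop_column", "rename_column"}

def migration_is_safe(operations):
    """True only if every operation is backward compatible with the old code."""
    return all(op in SAFE for op in operations)

# Adding a nullable column and a new table is fine during a
# zero-downtime deploy; dropping a column is not.
```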
Strategy 2: Manual Migrations#
# Deploy without traffic switch
podlift deploy --skip-healthcheck
# Run migrations
podlift exec web python manage.py migrate
# Manually test
curl http://server-ip:port/health
# If good, update nginx manually
ssh root@server 'systemctl reload nginx'
Strategy 3: Two-Phase Deploy#
For breaking schema changes:
Phase 1: Make column optional
-- Migration: make column nullable
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
Deploy and run migration.
Phase 2: Remove column
-- Migration: remove column
ALTER TABLE users DROP COLUMN email;
Deploy and run migration.
Best Practices#
- Make migrations backward compatible
- Test migrations on staging first
- Backup database before risky migrations
- Use --skip-healthcheck for manual control
Monitoring Deployments#
Track deployment success.
During Deployment#
# Watch logs during deploy
podlift deploy --verbose
# In another terminal
podlift logs web --follow
After Deployment#
# Check status
podlift ps
# View logs
podlift logs web --tail 100
# Test endpoint
curl https://myapp.com/health
Automated Monitoring#
Use external monitoring:
# After deployment, ping healthcheck
hooks:
  after_deploy:
    - curl -fsS https://hc-ping.com/your-uuid
Services:
- Healthchecks.io
- UptimeRobot
- Pingdom
Get alerted if deployments fail.
Troubleshooting Deployments#
Deployment Fails at Health Check#
# Check logs
podlift logs web
# Common issues:
# - Missing env vars
# - Database not ready
# - Wrong healthcheck path
# Debug interactively
podlift exec web bash
curl http://localhost:8000/health
Deployment Succeeds but App Broken#
# Rollback immediately
podlift rollback
# Debug locally
git checkout <deployed-commit>
docker build -t myapp .
docker run -p 8000:8000 myapp
Slow Deployments#
# Skip build if image unchanged
podlift deploy --skip-build
# Use parallel for multi-server
podlift deploy --parallel
# Use registry instead of SCP
# (faster for multiple servers)
Best Practices#
1. Always Commit Before Deploy#
git status # Check for uncommitted changes
git add -A
git commit -m "Descriptive message"
podlift deploy
2. Test Locally First#
docker build -t myapp .
docker run -p 8000:8000 myapp
curl http://localhost:8000/health
3. Deploy to Staging First#
# Staging
podlift deploy --config podlift.staging.yml
# Test staging
curl https://staging.myapp.com
# Production
podlift deploy --config podlift.production.yml
4. Use Health Checks#
services:
  web:
    healthcheck:
      path: /health
      expect: [200]
      timeout: 30s
Don’t deploy without health checks.
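The path/expect/timeout settings above describe a polling loop: hit the endpoint until it returns an expected status or the timeout expires. A sketch of that loop (`check` stands in for an HTTP GET against the configured path; the injected clock/sleep keep the sketch testable and are not part of any real API):

```python
import time

def wait_healthy(check, expect=(200,), timeout=30.0, interval=1.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll a health endpoint until it returns an expected status or time runs out."""
    deadline = clock() + timeout
    while True:
        if check() in expect:
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# A container that fails its first two checks, then recovers:
codes = iter([503, 503, 200])
ok = wait_healthy(lambda: next(codes), sleep=lambda s: None)
```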
5. Monitor After Deploy#
# Deploy
podlift deploy
# Watch for errors
podlift logs web --follow
# Check metrics
# (CPU, memory, error rate)
Give it 5-10 minutes before considering it stable.
6. Keep Rollback Ready#
# If anything looks wrong
podlift rollback
Don’t hesitate to rollback. Debug offline.
7. Backup Before Risky Changes#
# Before schema migrations
ssh root@server
docker exec postgres pg_dump myapp > backup.sql
# Then deploy
podlift deploy
Next Steps#
- Commands Reference - All available commands
- Configuration Reference - All config options
- Troubleshooting - Common issues
- How It Works - Architecture details