If you are building an automation business or running production workflows for clients, the single best infrastructure decision you can make is self-hosting n8n. I say this as someone who has done both — we started on n8n Cloud, hit execution limits within the first month, and moved to self-hosted within two weeks. We have not looked back.
Self-hosting n8n gives you unlimited executions, full data sovereignty, and total control over your automation stack. The tradeoff is that you own the infrastructure — updates, backups, security, uptime. For any team with basic DevOps capacity, this tradeoff is heavily in your favor.
This guide walks you through setting up a production-ready n8n instance on either DigitalOcean or AWS EC2. Not a toy setup — a real production deployment with PostgreSQL, SSL, automated backups, and proper security. The same setup we run for our own AI automation workflows and for client projects.
What You Will Learn
- How to deploy n8n on DigitalOcean (Droplet) or AWS EC2 with Docker Compose
- How to configure PostgreSQL instead of SQLite for production reliability
- How to set up Nginx reverse proxy with free SSL via Certbot
- How to automate database backups to S3
- Real cost comparison: self-hosted vs n8n Cloud at different execution volumes
- Every production gotcha we have encountered and how to fix it
Prerequisites
- A DigitalOcean account or AWS account
- A registered domain name with DNS access
- Basic comfort with SSH and the Linux command line
- About 30-45 minutes of uninterrupted time
Option A: DigitalOcean Droplet Setup
DigitalOcean is the simpler path. If you do not have a strong preference, start here.
Step 1: Create the Droplet
Log into DigitalOcean. Create a new Droplet with these settings:
- Image: Ubuntu 24.04 LTS
- Plan: Basic Shared CPU, 2 GB RAM / 1 vCPU ($12/month). This handles most workloads comfortably. For heavy automation (50+ active workflows with large payloads), go with 4 GB RAM ($24/month).
- Region: Choose the region closest to your primary API targets. For India-based workflows, Singapore or Bangalore. For Middle East clients, Amsterdam or Frankfurt.
- Authentication: SSH key (not password). If you do not have an SSH key, generate one with ssh-keygen -t ed25519.
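If you need to create that key first, the full non-interactive form looks like this. The file path and comment are illustrative choices, not required names:

```shell
# Sketch: generate a dedicated ed25519 key pair for this server.
# -N '' sets an empty passphrase for the demo; use a real passphrase in practice.
KEYFILE="$(mktemp -d)/id_ed25519_n8n"
ssh-keygen -t ed25519 -N '' -C "n8n-droplet" -f "$KEYFILE"
# Paste the public key into DigitalOcean's SSH-key field:
cat "$KEYFILE.pub"
```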
Step 2: Initial Server Setup
SSH into your new Droplet:
ssh root@your-droplet-ip
Create a non-root user and set up basic security:
# Create a new user
adduser n8nuser
usermod -aG sudo n8nuser
# Copy SSH keys to the new user
rsync --archive --chown=n8nuser:n8nuser ~/.ssh /home/n8nuser
# Set up firewall
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
# Log out and log back in as the new user
exit
Log in as the new user:
ssh n8nuser@your-droplet-ip
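Before moving on, it is worth disabling SSH password authentication entirely — the security checklist at the end of this guide assumes you have. On Ubuntu 24.04, sshd reads drop-in files from /etc/ssh/sshd_config.d/, so a small fragment like this (the filename is our choice) does the job:

```
# /etc/ssh/sshd_config.d/99-hardening.conf
PasswordAuthentication no
PermitRootLogin no
```

Apply it with sudo systemctl restart ssh, and confirm key-based login still works from a second terminal before closing your current session.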
Step 3: Install Docker and Docker Compose
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Add your user to the docker group
sudo usermod -aG docker $USER
# Install Docker Compose plugin
sudo apt install docker-compose-plugin -y
# Verify installation
docker --version
docker compose version
# IMPORTANT: Log out and back in for group changes to take effect
exit
SSH back in after logging out.
Step 4: Create the Docker Compose Configuration
This is the docker-compose.yml we use in production. Copy it exactly:
mkdir -p ~/n8n-docker && cd ~/n8n-docker
Create the .env file first:
cat > .env << 'EOF'
# Domain configuration
N8N_HOST=n8n.yourdomain.com
N8N_PORT=5678
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.yourdomain.com/
# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your-strong-password-here
POSTGRES_DB=n8n
# Security
N8N_ENCRYPTION_KEY=your-random-encryption-key-here
# Timezone - CRITICAL: set this or you will get scheduling bugs
GENERIC_TIMEZONE=Asia/Kolkata
TZ=Asia/Kolkata
# Performance
EXECUTIONS_PROCESS=main
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EOF
Generate a strong encryption key:
openssl rand -hex 32
Paste the output as your N8N_ENCRYPTION_KEY value. This key encrypts your credentials stored in n8n. If you lose it, you lose access to all saved credentials.
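To avoid copy-paste mistakes, you can generate the key and write it into .env in one step. A sketch, demonstrated on a scratch file so nothing is overwritten by accident — point ENV_FILE at ~/n8n-docker/.env when running it for real:

```shell
# Sketch: generate the encryption key and substitute it into .env directly.
# ENV_FILE points at a throwaway copy here; use ~/n8n-docker/.env in practice.
ENV_FILE=$(mktemp)
printf 'N8N_ENCRYPTION_KEY=your-random-encryption-key-here\n' > "$ENV_FILE"
KEY=$(openssl rand -hex 32)
sed -i "s|^N8N_ENCRYPTION_KEY=.*|N8N_ENCRYPTION_KEY=$KEY|" "$ENV_FILE"
# Confirm the placeholder was replaced with a 64-char hex key:
grep '^N8N_ENCRYPTION_KEY=' "$ENV_FILE"
```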
Now create the docker-compose.yml:
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: n8n-db
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: always
    ports:
      # Bind to loopback only: Nginx proxies from 443, and Docker-published
      # ports bypass UFW, so 0.0.0.0:5678 would be exposed to the internet.
      - '127.0.0.1:5678:5678'
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=${N8N_PORT}
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - NODE_ENV=production
      - WEBHOOK_URL=${WEBHOOK_URL}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${TZ}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_PROCESS=${EXECUTIONS_PROCESS}
      - EXECUTIONS_DATA_PRUNE=${EXECUTIONS_DATA_PRUNE}
      - EXECUTIONS_DATA_MAX_AGE=${EXECUTIONS_DATA_MAX_AGE}
    volumes:
      - n8n_data:/home/node/.n8n
      - n8n_files:/files
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - n8n-network

volumes:
  postgres_data:
  n8n_data:
  n8n_files:

networks:
  n8n-network:
Why PostgreSQL instead of SQLite? SQLite is fine for testing. In production, it becomes a bottleneck the moment you have concurrent workflow executions. We learned this when a client's workflow that processed incoming webhooks started dropping events under load. PostgreSQL handles concurrent writes without breaking a sweat.
Step 5: Set Up Nginx and SSL
Install Nginx and Certbot:
sudo apt install nginx certbot python3-certbot-nginx -y
Create the Nginx configuration:
sudo nano /etc/nginx/sites-available/n8n
Paste this configuration:
server {
    listen 80;
    server_name n8n.yourdomain.com;

    location / {
        proxy_pass http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Increase timeouts for long-running workflows
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_send_timeout 300s;

        # Increase max body size for file uploads
        client_max_body_size 50M;
    }
}
Enable the site and get SSL:
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
# Get SSL certificate (make sure your DNS A record points to the Droplet IP first)
sudo certbot --nginx -d n8n.yourdomain.com
Certbot will automatically configure SSL and set up auto-renewal.
Step 6: Start n8n
cd ~/n8n-docker
docker compose up -d
Wait 30-60 seconds, then visit https://n8n.yourdomain.com. You should see the n8n setup screen. Create your owner account.
Option B: AWS EC2 Setup
As an AWS Partner, we run several client automation stacks on EC2. The setup is similar to DigitalOcean but with a few AWS-specific steps.
Step 1: Launch EC2 Instance
- AMI: Ubuntu 24.04 LTS
- Instance type: t3.small (2 GB RAM, 2 vCPU). For heavier workloads, use t3.medium (4 GB RAM).
- Storage: 30 GB gp3 EBS volume (default 8 GB is not enough for production)
- Security Group: Allow SSH (22), HTTP (80), HTTPS (443) from anywhere. Restrict SSH to your IP if possible.
- Key pair: Create or select an existing key pair
Step 2: Allocate Elastic IP
This is critical. Without an Elastic IP, your instance IP changes on every stop/start, breaking your DNS.
Go to EC2 > Elastic IPs > Allocate > Associate with your instance.
Step 3: DNS Configuration
If you use Route 53, create an A record pointing n8n.yourdomain.com to your Elastic IP. If you use an external DNS provider, create the A record there.
Step 4: Install and Configure
SSH into your instance and follow Steps 2-6 from the DigitalOcean guide above. The Docker Compose configuration is identical.
ssh -i your-key.pem ubuntu@your-elastic-ip
The only difference: use ubuntu as the initial user instead of root.
Automated Backups
This is the part most tutorials skip and most people regret later. Set up automated backups from day one.
Create a backup script:
mkdir -p ~/backups
nano ~/backup-n8n.sh
#!/bin/bash
set -euo pipefail
BACKUP_DIR="$HOME/backups"
DATE=$(date +%Y%m%d-%H%M%S)
# Backup PostgreSQL database
docker exec n8n-db pg_dump -U n8n n8n | gzip > "$BACKUP_DIR/n8n-db-$DATE.sql.gz"
# Keep only last 14 days of local backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +14 -delete
# Optional: Upload to S3 (uncomment and configure)
# aws s3 cp "$BACKUP_DIR/n8n-db-$DATE.sql.gz" s3://your-bucket/n8n-backups/
echo "Backup completed: n8n-db-$DATE.sql.gz"
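A backup you have never test-read is a hope, not a backup. Here is a sketch of an integrity check — it builds a tiny sample dump so the snippet is self-contained; in practice, point BACKUP_DIR at ~/backups:

```shell
# Sketch: test-decompress the newest dump so corrupt backups are caught early.
# A sample file is created here for demonstration; use BACKUP_DIR="$HOME/backups"
# against your real backups.
BACKUP_DIR=$(mktemp -d)
echo "SELECT 1;" | gzip > "$BACKUP_DIR/n8n-db-20251124-020000.sql.gz"

LATEST=$(ls -t "$BACKUP_DIR"/n8n-db-*.sql.gz | head -n1)
if gunzip -t "$LATEST"; then
  echo "backup OK: $(basename "$LATEST")"
else
  echo "backup CORRUPT: $(basename "$LATEST")" >&2
fi
```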
Make it executable and schedule it:
chmod +x ~/backup-n8n.sh
# Run daily at 2 AM
crontab -e
# Add this line:
0 2 * * * /home/n8nuser/backup-n8n.sh >> /home/n8nuser/backups/backup.log 2>&1
For S3 backups on AWS, install the AWS CLI and configure it with an IAM user that has S3 write permissions. This gives you offsite backups that survive even if your server is completely destroyed.
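If you go the S3 route, the IAM user only needs write access to the backup prefix. A minimal policy sketch — the bucket name and prefix here match the commented line in the script above; substitute your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket/n8n-backups/*"
    }
  ]
}
```

Scoping the user to PutObject on one prefix means that even if the server is compromised, the attacker cannot read or delete your existing backups.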
How to restore from backup:
# Stop n8n first
cd ~/n8n-docker && docker compose stop n8n
# Restore the database
gunzip < ~/backups/n8n-db-20251124-020000.sql.gz | docker exec -i n8n-db psql -U n8n n8n
# Start n8n again
docker compose start n8n
Updating n8n
n8n releases updates frequently. Here is how to update safely:
cd ~/n8n-docker
# Always backup before updating
~/backup-n8n.sh
# Pull the latest image
docker compose pull
# Restart with the new version
docker compose up -d
# Check logs for errors
docker compose logs -f n8n
We recommend pinning to a specific version in production instead of using latest. Change the image line in docker-compose.yml to something like n8nio/n8n:1.72.1 and update deliberately after testing.
Real Cost Comparison
Here is the math that convinced us to self-host. These numbers are from 2025-2026 pricing.
n8n Cloud pricing:
- Starter: €24/month (2,500 executions)
- Pro: €60/month (10,000 executions)
- Business: €800/month (40,000 executions)
Self-hosted on DigitalOcean:
- 2 GB Droplet: $12/month
- Executions: Unlimited
- Backups (DigitalOcean automated): $2.40/month
- Total: ~$15/month
Self-hosted on AWS EC2:
- t3.small (on-demand): ~$15/month
- 30 GB EBS gp3: ~$2.40/month
- Elastic IP (while associated): Free
- S3 backups: ~$0.50/month
- Total: ~$18/month
Break-even analysis: If you run more than 2,500 executions per month (which any serious automation setup will), self-hosting saves money immediately. At 10,000+ executions, you are saving €45-785/month depending on the Cloud plan you would need. Over a year, that is €540-9,420 saved.
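The tier math above can be sketched as a tiny helper. The prices are the 2025-2026 figures quoted in this section, so treat them as illustrative rather than official n8n pricing:

```shell
# Sketch: which n8n Cloud tier would a given monthly execution volume need?
# Tier limits and prices are taken from the comparison above (illustrative).
cloud_tier() {
  n=$1
  if   [ "$n" -le 2500 ];  then echo "Starter, EUR 24/month"
  elif [ "$n" -le 10000 ]; then echo "Pro, EUR 60/month"
  elif [ "$n" -le 40000 ]; then echo "Business, EUR 800/month"
  else                          echo "beyond Business - custom pricing"
  fi
}

cloud_tier 150000   # the volume we run, per the next paragraph
```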
We run over 150,000 executions per month across our client workflows. On n8n Cloud, that volume would cost thousands per month. Self-hosted, it runs on a single $24/month DigitalOcean Droplet. The math is not even close.
Common Gotchas We Have Hit in Production
Timezone issues cause scheduling bugs: If you do not set the GENERIC_TIMEZONE and TZ environment variables, cron-triggered workflows will run at unexpected times. Always set both variables to your local timezone. This has burned us on client projects where a "daily 9 AM report" was firing at 3:30 AM.
Webhooks require HTTPS: n8n webhooks will not work over plain HTTP in production. Many external services (Shopify, Stripe, GitHub) reject webhook URLs that are not HTTPS. Get SSL set up before configuring any webhook-based workflows.
Memory spikes on large JSON payloads: If your workflows process large datasets (bulk API responses, CSV imports), the n8n container can hit memory limits. Two fixes: increase the Droplet/instance RAM, or add NODE_OPTIONS=--max-old-space-size=2048 to the n8n environment variables to increase the Node.js heap.
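For the second fix, the variable goes in the n8n service's environment block. A sketch of the relevant docker-compose.yml excerpt — the value is in megabytes, and the right number depends on your instance size:

```yaml
# docker-compose.yml excerpt: raise the Node.js heap for the n8n service
  n8n:
    environment:
      # heap limit in MB; leave headroom for PostgreSQL and the OS
      - NODE_OPTIONS=--max-old-space-size=2048
```

As a rough rule, keep the heap at no more than half to three-quarters of total RAM so PostgreSQL and the OS are not starved.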
Community nodes may not work on Cloud: One advantage of self-hosting is access to community nodes. Some of the most useful n8n integrations are community-built and only available on self-hosted instances. We use several community nodes for Indian payment gateway integrations that are not available on n8n Cloud.
Credential encryption key is irreplaceable: The N8N_ENCRYPTION_KEY in your .env file encrypts all stored credentials. If you lose this key, you lose access to every API key, OAuth token, and database password stored in n8n. Back up this key separately, ideally in a password manager or a secure vault.
Docker logs can fill your disk: By default, Docker keeps all container logs indefinitely. On a small Droplet, this can fill your disk within weeks of heavy usage. Add log rotation to your Docker daemon configuration:
sudo nano /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
Restart Docker with sudo systemctl restart docker, then recreate the containers — the log options only apply to containers created after the change:
cd ~/n8n-docker && docker compose up -d --force-recreate
When Self-Hosting is NOT the Right Call
We are opinionated about self-hosting, but it is not for everyone:
- If you have zero DevOps experience and no one on your team can SSH into a server, use n8n Cloud. The managed experience is worth the premium.
- If your execution volume is genuinely low (under 2,500/month), n8n Cloud Starter at €24/month is simpler and comparable in cost.
- If you need SSO, audit logs, or enterprise compliance out of the box, you will need n8n's Business or Enterprise plans (self-hosted with a license key or cloud).
- If uptime is mission-critical and you do not have monitoring set up, n8n Cloud handles availability for you.
For everyone else — automation agencies, technical founders, development teams running client workflows — self-hosting is the right call. You cannot build a reliable automation business on execution-limited cloud plans.
Security Checklist
Before considering your setup production-ready, verify all of these:
- SSH key authentication enabled (password auth disabled)
- UFW firewall active with only ports 22, 80, 443 open
- n8n running behind Nginx reverse proxy with SSL
- PostgreSQL not exposed to the internet (only accessible within Docker network)
- Strong passwords for PostgreSQL and n8n admin account
- Encryption key backed up securely
- Automated database backups running and verified
- Docker log rotation configured
- Fail2Ban installed for SSH brute-force protection
- System auto-updates enabled:
sudo apt install unattended-upgrades
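Installing the package is not quite enough — periodic runs also need to be switched on. On Ubuntu this is controlled by /etc/apt/apt.conf.d/20auto-upgrades, which should contain:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Running sudo dpkg-reconfigure --priority=low unattended-upgrades will create this file for you interactively.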
Written by
Rishabh Sethia, Founder & CEO
Rishabh Sethia is the founder and CEO of Innovatrix Infotech, a Kolkata-based digital engineering agency. He leads a team that delivers web development, mobile apps, Shopify stores, and AI automation for startups and SMBs across India and beyond.