TL;DR

Running n8n with Docker Compose gives you a production-ready automation platform for AI workflows without managing complex dependencies. This guide walks through setting up n8n with persistent storage, environment configuration, and AI integrations using OpenAI, Anthropic, and local LLMs.

Docker Compose handles multi-container orchestration, making it straightforward to add PostgreSQL for workflow history, Redis for queue management, and reverse proxies for SSL termination. The setup takes under 10 minutes and provides a stable foundation for building AI-powered workflows that process documents, generate content, and orchestrate multi-step automations.

You’ll deploy n8n with a docker-compose.yml file that includes volume mounts for workflow persistence, environment variables for AI API keys, and network configuration for connecting to external services. The setup supports AI Agent nodes for conversational workflows, AI Chain nodes for sequential processing, and HTTP Request nodes for custom AI API calls.

Common AI workflow patterns include document processing pipelines that extract text with Tesseract OCR, summarize with Claude or GPT-4, and store results in Airtable or Notion. Another frequent use case involves monitoring Slack channels, analyzing sentiment with AI models, and triggering automated responses based on classification results.

Key Configuration Points

The N8N_EDITOR_BASE_URL environment variable configures external access for webhook callbacks and OAuth redirects. Port 5678 exposes the web interface. Volume mounts preserve workflows and credentials across container restarts. AI nodes require API keys stored as n8n credentials, never hardcoded in workflows.

Caution: Always validate AI-generated Docker commands and environment variables before deploying to production. Review security settings, especially when exposing n8n to the internet. Use strong passwords and consider placing n8n behind a VPN or authentication proxy for sensitive workflows.

Why Docker Compose for n8n AI Workflows

Docker Compose removes much of the complexity of running n8n alongside the databases, vector stores, and AI services that modern workflows require. When you build AI-powered automation, you typically need n8n connected to PostgreSQL for workflow persistence, Redis for queue management, and often a vector database like Qdrant or Weaviate for semantic search capabilities.

A typical AI workflow setup involves multiple containers working together. Your n8n instance connects to OpenAI or Anthropic APIs through AI Agent nodes, stores conversation history in PostgreSQL, caches responses in Redis, and queries embedded documents in a vector database. Managing these dependencies manually with individual docker run commands becomes error-prone and difficult to reproduce across environments.

Docker Compose defines all services in a single docker-compose.yml file. You specify network connections, volume mounts, and environment variables once. When you run docker-compose up, all containers start with correct networking automatically configured. The n8n container can reference your PostgreSQL service as postgres:5432 instead of managing IP addresses.

Environment Consistency

AI workflows often require specific API keys, model endpoints, and configuration values. Docker Compose lets you define these in a .env file that stays out of version control. Your N8N_EDITOR_BASE_URL, database credentials, and AI service keys load automatically when containers start. This approach prevents the common mistake of hardcoding secrets or forgetting required variables when deploying to a new server.
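As a sketch, a .env file for this kind of setup might look like the following; every value shown is a placeholder you would replace with your own:

```shell
# .env -- keep this file out of version control (add it to .gitignore)
N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com
N8N_ENCRYPTION_KEY=replace-with-output-of-openssl-rand
POSTGRES_PASSWORD=replace-with-a-strong-password
OPENAI_API_KEY=sk-placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
```

Docker Compose reads this file automatically when it sits next to docker-compose.yml, so `${OPENAI_API_KEY}`-style references in the compose file resolve without any extra flags.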

Simplified Backup and Migration

With Docker Compose, your entire AI automation stack (workflows, credentials, execution history) lives in defined volumes. You can back up the PostgreSQL data volume and your n8n user directory, then restore the complete system on another machine by running docker-compose up with the same configuration.
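One way to back up the named volumes is with a throwaway container running tar; this sketch assumes the volume names used later in this guide, and stops the stack first so the database files are not mid-write:

```shell
# Stop the stack so no writes happen during the backup
docker-compose stop

# Archive each named volume into the current directory
docker run --rm -v postgres_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/postgres_data.tar.gz -C /data .
docker run --rm -v n8n_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/n8n_data.tar.gz -C /data .

docker-compose start
```

Note that Compose prefixes volume names with the project directory by default (for example myproject_postgres_data); run `docker volume ls` to confirm the actual names before archiving.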

Caution: Always validate AI-generated Docker commands before running them in production. Review volume mount paths, exposed ports, and environment variables to ensure they match your security requirements and infrastructure setup.

Essential Docker Compose Configuration for n8n

A production-ready Docker Compose setup for n8n requires careful attention to persistence, networking, and environment configuration. The minimal configuration below establishes a stable foundation for AI workflow automation.

Create a docker-compose.yml file in your project directory:

version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_EDITOR_BASE_URL=http://localhost:5678
      - WEBHOOK_URL=http://localhost:5678/
      - GENERIC_TIMEZONE=America/New_York
      - N8N_LOG_LEVEL=info
    volumes:
      - n8n_data:/home/node/.n8n
      - ./workflows:/home/node/.n8n/workflows

volumes:
  n8n_data:
    driver: local

Critical Environment Variables

For AI workflows using OpenAI, Anthropic, or other LLM providers, you can pass API keys into the container as environment variables. Note that n8n does not read provider keys from the environment automatically (AI nodes use credentials created in the editor), but workflows can access these variables through expressions:

environment:
  - OPENAI_API_KEY=${OPENAI_API_KEY}
  - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}

Store sensitive values in a .env file alongside your compose configuration. Never commit API keys to version control.

Volume Persistence

The n8n_data volume preserves workflow definitions, credentials, and execution history across container restarts; by default, n8n stores all of this in a SQLite database inside that volume rather than as individual JSON files. The ./workflows bind mount provides a host directory for workflow JSON exported with the n8n CLI, which you can keep under version control for team collaboration and disaster recovery.
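Because n8n keeps workflows in its database rather than as loose files, the bind-mounted directory is populated by exporting. A hedged example using the n8n CLI inside the running container (check `n8n export:workflow --help` for the exact flags on your version):

```shell
# Export every workflow as a separate JSON file into the bind-mounted directory
docker-compose exec n8n n8n export:workflow --all --separate \
  --output=/home/node/.n8n/workflows/
```

The files then appear under ./workflows on the host, ready to commit to version control.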

Caution: When using AI nodes to generate Docker configurations or environment variable suggestions, always validate the output against official n8n documentation. AI-generated commands may reference deprecated variables or incorrect syntax that causes silent failures in production environments.

Integrating AI Services with Dockerized n8n

Once your Dockerized n8n instance is running, connecting AI services requires configuring credentials and using the appropriate AI nodes. n8n provides dedicated nodes for OpenAI, Anthropic, Google Gemini, Hugging Face, and other providers rather than a single generic AI API.

Add AI service API keys through the n8n credentials manager. Navigate to Settings > Credentials in the n8n editor, then create a new credential for your chosen provider. For OpenAI, you’ll need an API key from platform.openai.com. Store these credentials in n8n’s encrypted credential store rather than hardcoding them in workflows.

For production deployments, you can also inject API keys into the container via environment variables in your docker-compose.yml, making them available to workflow expressions:

services:
  n8n:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}

Define the actual values in your .env file and keep that file out of version control.
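With the keys exported into the container, workflows can read them through n8n's $env object, for example in an HTTP Request node header (this requires that environment access is not blocked via the N8N_BLOCK_ENV_ACCESS_IN_NODE setting):

```
Authorization: Bearer {{ $env.OPENAI_API_KEY }}
```

This keeps the key out of the workflow JSON entirely, since only the expression is saved.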

Building AI Workflows with Dedicated Nodes

Use the AI Agent node for conversational workflows that require memory and tool-calling capabilities. The AI Chain node handles sequential AI operations like summarization followed by translation. For simple completions, the OpenAI node provides direct access to GPT models.

A practical example: trigger a workflow with a webhook, pass user input to an AI Agent node configured with OpenAI credentials, then route the response to Slack or email. The AI Agent node supports function calling, allowing the AI to trigger other n8n nodes like database queries or API calls.
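Once such a workflow is active, the webhook can be exercised with curl; the path segment below is a hypothetical example, since n8n assigns or lets you set the path on the Webhook node:

```shell
curl -X POST http://localhost:5678/webhook/ask-agent \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize the latest support ticket"}'
```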

Validation and Safety

Always validate AI-generated outputs before executing system commands or database operations. Use the IF node to check response formats and the Code node to sanitize inputs. AI models can produce unexpected outputs, so implement error handling with the Error Trigger node to catch failures in production workflows. Test thoroughly in a development environment before deploying AI-powered automations that modify data or interact with external systems.

Advanced Multi-Container Setup

Production n8n deployments benefit from PostgreSQL instead of the default SQLite database. This setup adds a dedicated database container with persistent storage and a healthcheck, so n8n only starts once the database is ready to accept connections.

version: '3.8'

services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - '5678:5678'
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_EDITOR_BASE_URL: ${N8N_EDITOR_BASE_URL}
      EXECUTIONS_DATA_PRUNE: 'true'
      EXECUTIONS_DATA_MAX_AGE: 168
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
  n8n_data:

Integrating Redis for Queue Management

For AI workflows processing large volumes of requests, Redis enables queue mode, which hands executions off to dedicated worker processes. This keeps the main instance responsive and prevents pile-ups when workflows wait on slow external AI APIs like OpenAI or Anthropic.

Add Redis to your compose file, and remember to declare redis_data under the top-level volumes key:

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis_data:/data

Update n8n environment variables:

  n8n:
    environment:
      QUEUE_BULL_REDIS_HOST: redis
      EXECUTIONS_MODE: queue
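In queue mode the main n8n instance only enqueues jobs; at least one worker container must be running to execute them. A sketch of a worker service, assuming the postgres and redis services from this guide (workers need the same database settings and the same N8N_ENCRYPTION_KEY so they can decrypt credentials):

```yaml
  n8n-worker:
    image: n8nio/n8n:latest
    restart: unless-stopped
    command: worker
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
    depends_on:
      - redis
      - postgres
```

You can scale workers horizontally (for example with `docker-compose up -d --scale n8n-worker=3`) as AI workload grows.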

Securing Your n8n Docker Deployment

Security becomes critical when running n8n in production, especially for AI workflows that process sensitive data or connect to external APIs. Start by configuring proper authentication and network isolation in your docker-compose.yml.

Set strong credentials and configure external access properly:

environment:
  - N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com
  - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
  - WEBHOOK_URL=https://n8n.yourdomain.com/
  - GENERIC_TIMEZONE=America/New_York

Generate a secure encryption key with:

openssl rand -base64 32

Store this in a .env file alongside docker-compose.yml and never commit it to version control. The encryption key protects credentials for AI services like OpenAI, Anthropic, or Hugging Face that your workflows use.

Network Isolation

Restrict n8n’s network exposure by binding only to localhost:

ports:
  - "127.0.0.1:5678:5678"

Use a reverse proxy like Nginx or Caddy to handle SSL termination and external access. This prevents direct exposure of the n8n interface to the internet.
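As one option, a minimal Caddyfile for this looks like the following; Caddy obtains and renews the TLS certificate automatically. The domain is a placeholder, and this assumes Caddy runs on the same host so it can reach the localhost-bound port:

```
n8n.yourdomain.com {
    reverse_proxy localhost:5678
}
```

An equivalent Nginx setup needs an explicit server block plus certificate management (for example via certbot), which is why Caddy is often the quicker choice here.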

Secrets Management for AI Workflows

When building AI workflows, store API keys as n8n credentials rather than hardcoding them in workflow nodes. Navigate to Settings > Credentials in the n8n interface to add credentials for OpenAI, Anthropic, or other AI services. These credentials are encrypted using your N8N_ENCRYPTION_KEY.

For Docker secrets management, consider using Docker Swarm secrets or external vaults like HashiCorp Vault for production deployments handling multiple AI service integrations.
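n8n supports a _FILE suffix on many of its configuration variables, which pairs naturally with Docker secrets: the variable points at a file containing the value instead of holding the value itself. A sketch under Docker Swarm, assuming a pre-created secret named postgres_password:

```yaml
services:
  n8n:
    environment:
      # n8n reads the password from the secret file at startup
      DB_POSTGRESDB_PASSWORD_FILE: /run/secrets/postgres_password
    secrets:
      - postgres_password

secrets:
  postgres_password:
    external: true
```

Check n8n's file-based configuration documentation to confirm which variables accept the _FILE form on your version.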

Caution: AI-generated Docker configurations or security commands should always be reviewed by someone familiar with container security before production deployment. Validate that environment variables match n8n’s current documentation, as deprecated variables can create security gaps.

Step-by-Step Setup: Production-Ready n8n with Docker Compose

Start by creating a docker-compose.yml file in your project directory. This configuration includes persistent storage, environment variables, and network settings for production use:

version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - GENERIC_TIMEZONE=America/New_York
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files

volumes:
  n8n_data:

Configure Environment Variables

Create a .env file in the same directory to store sensitive credentials:

N8N_ENCRYPTION_KEY=your-secure-random-key-here

Generate a secure encryption key using:

openssl rand -base64 32

The N8N_ENCRYPTION_KEY encrypts credentials for AI services like OpenAI, Anthropic, and Google AI within your workflows. Never commit this key to version control.

Launch and Verify

Start your n8n instance:

docker-compose up -d

Verify the container is running:

docker-compose ps
docker-compose logs -f n8n

Access the editor at http://localhost:5678. For AI workflows, configure API credentials through the n8n interface, then build on dedicated AI nodes like AI Agent or AI Chain rather than hand-rolled API calls.
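For a quick liveness check from the host, n8n exposes a /healthz endpoint that returns a small JSON status once the instance is up:

```shell
curl -s http://localhost:5678/healthz
```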

Caution: When using AI-generated Docker configurations or workflow code, always review the output before deploying to production. AI models may suggest deprecated environment variables or non-existent APIs. Validate all configuration against official n8n documentation and test in a staging environment first.

The local-files volume mount allows workflows to read and write files for AI processing tasks like document analysis or batch operations.