TL;DR
Installing n8n with Docker gives you a self-hosted workflow automation platform that runs on port 5678 by default. The quickest test deployment is a single command: docker run -it --rm -p 5678:5678 n8nio/n8n. This launches n8n without persistence: workflows disappear when the container stops. For production AI workflows, you need Docker Compose with volume mounts for data persistence and proper environment configuration.
A production-ready setup requires a docker-compose.yml file that defines persistent storage, sets N8N_EDITOR_BASE_URL for external access, and configures encryption keys. The self-hosted version is free and open-source, giving you full control over data and unlimited workflow executions. You can integrate AI capabilities through dedicated nodes like AI Agent and AI Chain, which connect to OpenAI, Anthropic, or local LLM endpoints.
Common AI workflow patterns include document processing pipelines that extract text with OCR tools, send content to language models for analysis, and route results to databases or notification channels. Another frequent use case involves AI Agents that query vector databases, retrieve context, and generate responses based on custom knowledge bases. These workflows combine HTTP Request nodes, AI nodes, and data transformation logic in JavaScript or Python code nodes.
The Docker approach works well for teams running workflows on internal servers or cloud VMs. You avoid vendor lock-in and can scale horizontally by adding worker containers. However, you must handle updates, backups, and security patches yourself. The n8n Cloud hosted service removes operational overhead but limits customization options.
Why Docker for n8n Deployment
Docker simplifies n8n deployment by packaging the application with all dependencies into a single container. This eliminates version conflicts between Node.js, npm packages, and system libraries that often plague traditional installations. You get a consistent environment whether you’re running n8n on Ubuntu, macOS, or Windows.
Docker containers isolate n8n from your host system. When you build AI workflows that process large datasets through OpenAI or Anthropic APIs, you can set memory limits and CPU quotas to prevent runaway processes from affecting other services. This matters when running multiple automation tools on the same server – n8n in one container, PostgreSQL in another, Redis for caching in a third.
Version Control and Rollbacks
Docker images are versioned. If an n8n update breaks your AI Agent workflows, you can roll back to the previous image with a single command. The docker-compose.yml file becomes your infrastructure documentation, showing exactly which n8n version, database version, and environment variables your production setup uses.
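For example, pinning an explicit image tag in docker-compose.yml makes a rollback a one-line change (the tag below is illustrative, not a version recommendation):

```yaml
services:
  n8n:
    # Pin a specific tag instead of :latest; to roll back after a bad
    # upgrade, restore the previous tag and run `docker compose up -d`.
    image: n8nio/n8n:1.64.0
```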
Simplified AI Integration Setup
AI workflow automation often requires additional services. A typical setup might include n8n connected to a vector database like Qdrant for semantic search, a PostgreSQL database for workflow execution history, and Redis for queue management. Docker Compose orchestrates all these services with defined networking and volume mounts.
```shell
docker run -it --rm -p 5678:5678 n8nio/n8n
```
This single command starts n8n with the web interface accessible at localhost:5678. No npm installation, no Node.js version management, no global package conflicts.
Prerequisites and Environment Setup
Before installing n8n with Docker, verify your system meets the minimum requirements and prepare your environment for AI workflow automation. You need a Linux, macOS, or Windows machine with Docker and Docker Compose installed. Most modern systems handle n8n workloads without issue, but AI-heavy workflows benefit from additional memory allocation.
Install Docker Engine version 20.10 or later and Docker Compose v2. On Ubuntu or Debian systems, run:
```shell
sudo apt-get update
sudo apt-get install docker.io docker-compose-plugin
sudo systemctl enable --now docker
```
For macOS or Windows, download Docker Desktop from the official Docker website. Verify installation with:
```shell
docker --version
docker compose version
```
Add your user to the docker group to avoid permission issues:
```shell
sudo usermod -aG docker $USER
newgrp docker
```
Directory Structure
Create a dedicated directory for n8n configuration and persistent data:
```shell
mkdir -p ~/n8n-docker/{data,config}
cd ~/n8n-docker
```
This structure separates workflow data from configuration files, simplifying backups and version control. The data directory stores workflow definitions, credentials, and execution history. The config directory holds environment variables and custom settings.
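Because everything lives under one directory, a backup reduces to archiving it. A minimal sketch, assuming the ~/n8n-docker layout above (the archive name is just a convention):

```shell
#!/bin/sh
# Sketch: archive the n8n data and config directories with a timestamp.
# Assumes the ~/n8n-docker layout created above.
set -eu
BASE="$HOME/n8n-docker"
mkdir -p "$BASE/data" "$BASE/config"   # idempotent, matches the mkdir above
STAMP=$(date +%Y%m%d-%H%M%S)
tar -czf "$BASE/n8n-backup-$STAMP.tar.gz" -C "$BASE" data config
echo "Wrote $BASE/n8n-backup-$STAMP.tar.gz"
```

For a consistent snapshot, stop the n8n container before archiving so credential and database files are not mid-write.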
API Keys for AI Integration
n8n AI workflows require API credentials for services like OpenAI, Anthropic, or Google AI. Obtain API keys before starting your installation. Store these securely – never commit them to version control. You will reference them as environment variables in your Docker Compose configuration.
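A common pattern is a local .env file next to docker-compose.yml, which Docker Compose reads automatically; the key names below are examples and the values are placeholders you must replace:

```ini
# .env : keep this file out of version control (add it to .gitignore)
OPENAI_API_KEY=replace-with-your-key
ANTHROPIC_API_KEY=replace-with-your-key
```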
Caution: When using AI tools to generate Docker commands or configuration files, always review the output before execution. AI-generated configurations may include outdated environment variables or incorrect port mappings. Validate against official n8n documentation, especially for security-related settings like N8N_EDITOR_BASE_URL for external access.
Docker Installation Methods: Single Container vs Docker Compose
Docker offers two primary approaches for running n8n: single container deployment and Docker Compose orchestration. Each method suits different use cases and complexity levels.
The single container method launches n8n with a single command, ideal for testing or simple production setups. This approach requires minimal configuration and starts immediately:
```shell
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```
This command mounts a local directory for workflow persistence and exposes port 5678. For AI workflows requiring external API access, add environment variables:
```shell
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -e N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```
Single container deployments work well for development and small-scale automation but lack built-in database persistence and service orchestration.
Docker Compose for Production
Docker Compose manages multi-container setups through a declarative YAML file. This method supports PostgreSQL databases, reverse proxies, and coordinated service startup – essential for production AI workflows processing sensitive data or requiring high availability.
A basic docker-compose.yml for n8n with PostgreSQL:
```yaml
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: n8n_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: n8n_password
      N8N_EDITOR_BASE_URL: https://n8n.yourdomain.com
    depends_on:
      - postgres
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  postgres_data:
  n8n_data:
```
Start with docker compose up -d. This configuration ensures workflow data persists across container restarts and scales better for teams building complex AI agent workflows with AI Chain and AI Agent nodes.
Caution: Always validate AI-generated Docker commands before production deployment. Review environment variables, volume mounts, and network configurations manually to prevent security misconfigurations.
Configuring n8n for AI Workflows
Once your n8n Docker container is running, configure it for AI-powered workflows by setting up API credentials and testing AI nodes. The platform uses dedicated AI nodes rather than a generic chat API, so you’ll connect specific AI services through their respective integrations.
Navigate to Settings > Credentials in the n8n editor. Add credentials for your chosen AI provider. For OpenAI integration, create a new OpenAI credential and paste your API key. For Anthropic Claude, add an Anthropic credential with your API key. These credentials authenticate requests from AI Agent, AI Chain, and other AI-specific nodes.
Store API keys as environment variables in your docker-compose.yml for production deployments:
```yaml
environment:
  - OPENAI_API_KEY=${OPENAI_API_KEY}
  - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```
Reference these in credential configurations using expressions like {{$env.OPENAI_API_KEY}}.
Testing AI Nodes
Create a test workflow with an AI Agent node. Configure it to use your OpenAI credential and set a simple prompt like “Summarize this text in one sentence.” Connect a Manual Trigger node as input and add a test payload. Execute the workflow to verify the AI integration works correctly.
Common AI nodes include:
- AI Agent: Autonomous decision-making with tool access
- AI Chain: Sequential AI operations with memory
- OpenAI Chat Model: Direct GPT model access
- Embeddings nodes: Vector generation for semantic search
Security Considerations
Always validate AI-generated outputs before using them in production workflows. AI models can produce unexpected results, especially when processing user input or external data. Add validation nodes after AI operations to check output format, content safety, and business logic compliance. Never execute AI-generated shell commands or database queries without human review in production environments.
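As a sketch, validation like the following could run in a Code node after an AI step. validateAiOutput, its length limit, and the blocklist patterns are all hypothetical; adapt them to your workflow's actual output shape and risk profile:

```javascript
// Hypothetical validation helper for an n8n Code node placed after an AI node.
// Returns a verdict object rather than throwing, so the workflow can route
// failures to a review branch with an IF node.
function validateAiOutput(output) {
  if (typeof output !== "string" || output.trim() === "") {
    return { valid: false, reason: "empty or non-string output" };
  }
  if (output.length > 2000) {
    return { valid: false, reason: "output exceeds length limit" };
  }
  // Reject obvious command-like content before anything downstream executes it.
  if (/\b(rm\s+-rf|sudo|curl\s+.*\|\s*sh)\b/.test(output)) {
    return { valid: false, reason: "suspicious command-like content" };
  }
  return { valid: true, reason: "ok" };
}

console.log(validateAiOutput("A one-sentence summary.").valid); // true
console.log(validateAiOutput("please run rm -rf / now").valid); // false
```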
Set N8N_EDITOR_BASE_URL to your public domain if accessing n8n remotely to ensure webhook URLs generate correctly for AI-triggered workflows.
Integrating AI Nodes and External Services
Once your n8n Docker container is running, you can connect AI services through dedicated nodes in the workflow editor. Navigate to http://localhost:5678 and create a new workflow to begin adding AI capabilities.
Search for “OpenAI” in the node panel and drag the OpenAI node onto the canvas. You’ll need an API key from OpenAI’s platform. Store this key in n8n’s credentials manager rather than hardcoding it in workflows. The OpenAI node supports chat completions, embeddings, and image generation endpoints.
For a basic chat workflow, connect a Manual Trigger node to an OpenAI Chat Model node. Configure the model parameter (gpt-4, gpt-3.5-turbo) and add your prompt in the messages field. Connect the output to a Code node if you need to parse or transform the AI response before sending it to another service.
Using AI Agent Nodes
The AI Agent node orchestrates multi-step reasoning tasks. Add an AI Agent node and connect it to a Chat Model node (OpenAI, Anthropic, or compatible providers). The agent can use tools you define – for example, an HTTP Request node configured as a tool lets the AI fetch external data during execution.
Connect a Vector Store node if you’re building retrieval-augmented generation workflows. Pinecone, Qdrant, and Supabase integrations work well for storing document embeddings that the AI Agent retrieves during conversations.
External Service Connections
Most AI workflows combine multiple services. Add HTTP Request nodes to call custom APIs, Webhook nodes to receive data from external systems, or database nodes (PostgreSQL, MongoDB) to store conversation history. The Code node accepts JavaScript or Python for custom logic between AI calls.
Caution: Always validate AI-generated commands or code before deploying to production environments. Use the Execute Workflow button to test each node’s output manually. Set up error handling with IF nodes to catch API failures or unexpected AI responses.
Step-by-Step Setup: Production-Ready n8n with Docker Compose
Create a new directory for your n8n installation and add a docker-compose.yml file. This configuration includes PostgreSQL for persistent storage and proper volume management for production use:
```yaml
version: '3.8'
services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: your_secure_password_here
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: your_secure_password_here
      N8N_EDITOR_BASE_URL: http://your-domain.com:5678
      WEBHOOK_URL: http://your-domain.com:5678/
      GENERIC_TIMEZONE: America/New_York
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
volumes:
  postgres_data:
  n8n_data:
```
Replace your_secure_password_here with a strong password and update N8N_EDITOR_BASE_URL with your actual domain or IP address. The WEBHOOK_URL setting ensures AI Agent nodes and external integrations can reach your workflows correctly.
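One way to avoid hardcoding the password is to generate secrets into a .env file, which Docker Compose reads automatically from the project directory. A sketch, assuming openssl is installed; N8N_ENCRYPTION_KEY is a real n8n setting, while the POSTGRES_PASSWORD variable name is just the convention used here:

```shell
#!/bin/sh
# Sketch: generate a database password and an n8n encryption key into .env.
# docker compose substitutes ${POSTGRES_PASSWORD} from this file.
set -eu
PASS=$(openssl rand -base64 24)
KEY=$(openssl rand -hex 32)
printf 'POSTGRES_PASSWORD=%s\nN8N_ENCRYPTION_KEY=%s\n' "$PASS" "$KEY" > .env
chmod 600 .env   # readable only by the owner
```

You would then reference ${POSTGRES_PASSWORD} in both services' environment sections and pass N8N_ENCRYPTION_KEY into the n8n service so credentials survive container recreation.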
Launch and Verify
Start the containers with:
```shell
docker compose up -d
```
Check container status:
```shell
docker compose ps
docker compose logs n8n
```
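n8n exposes a /healthz endpoint, so a Compose healthcheck can flag a wedged container before you notice it in the UI. A sketch, assuming wget is available in the image (it is in the Alpine-based official image):

```yaml
# Fragment to add under the n8n service in docker-compose.yml
healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
  interval: 30s
  timeout: 5s
  retries: 5
```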
Access n8n at http://localhost:5678 and complete the initial setup wizard. Create your owner account immediately; recent n8n versions replaced the old basic-auth environment variables with built-in user management.
Caution: When building AI workflows that generate Docker commands or system configurations, always review the output before executing. AI Agent nodes can produce syntactically correct but contextually inappropriate commands. Test AI-generated automation logic in a non-production environment first, especially when workflows modify infrastructure or execute shell commands.