Migrating n8n from SQLite to Postgres
n8n database migration in under 20 minutes
A lot of Reddit and community posts about migrating an existing n8n instance from SQLite to Postgres are relatively old and depend on manually fixing tables after the move. n8n's export:entities and import:entities commands avoid that in most cases: they are the CLI workflow intended for moving between database types.
Migration method
This guide assumes you are running n8n (CE) in Docker. Use n8n export:entities against your current SQLite setup, then n8n import:entities into the new Postgres database. Requirements for a safe move:
- Use the same n8n version for export and import (do not use the :latest image).
- Keep the same encryption key, or credentials will not decrypt.
- Stop n8n while exporting, so the SQLite file is not changing during export.
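The version and key requirements above can be sanity-checked before you start. A minimal pre-flight sketch; the function names are illustrative, not part of n8n:

```shell
#!/bin/sh
# Pre-flight checks for the requirements above.

# Fail unless the image reference is pinned to an explicit, non-latest tag.
check_pinned_tag() {
  tag="${1##*:}"
  if [ "$tag" = "$1" ] || [ "$tag" = "latest" ]; then
    echo "image '$1' is not pinned to a version" >&2
    return 1
  fi
}

# Fail if the encryption key is empty.
check_encryption_key() {
  [ -n "$1" ] || { echo "N8N_ENCRYPTION_KEY is empty" >&2; return 1; }
}

check_pinned_tag "n8nio/n8n:2.1.4" && echo "tag is pinned"
```

Run both checks against your compose file values before touching the database.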
Placeholders used in this guide
Replace these with your values:
- <N8N_VERSION>: e.g. 2.1.4
- <N8N_CONTAINER>: your running container name
- <N8N_VOLUME>: your Docker volume name, e.g. n8n_data
- <EXPORT_DIR>: local folder path for exported entities
- <PGHOST>, <PGPORT>, <PGDATABASE>, <PGUSER>, <PGPASSWORD>: your Postgres connection details
- <N8N_ENCRYPTION_KEY>: your existing key
- <WEBHOOK_URL>: your existing webhook URL
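To cut down on copy-paste errors, you can set the placeholders once as shell variables and substitute them into the commands that follow. The values here are examples only, not defaults:

```shell
# Example placeholder values -- replace every one with your own before use.
export N8N_VERSION="2.1.4"
export N8N_CONTAINER="n8n"
export N8N_VOLUME="n8n_data"
export EXPORT_DIR="$HOME/n8n_export"
export PGHOST="db.example.com"
export PGPORT="5432"
export PGDATABASE="n8n"
export PGUSER="n8n"
export PGPASSWORD="change-me"
export N8N_ENCRYPTION_KEY="your-existing-key"
export WEBHOOK_URL="https://n8n.example.com/"

echo "Migrating n8nio/n8n:${N8N_VERSION} data from volume ${N8N_VOLUME} to ${PGHOST}:${PGPORT}/${PGDATABASE}"
```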
Step by step
0) Freeze your n8n image version
If you run n8nio/n8n:latest, pin it to a specific version before you start.
Example:
services:
  n8n:
    image: n8nio/n8n:<N8N_VERSION>
1) Disable workflows, then stop n8n
This avoids triggers firing while you export.
docker compose ps
docker exec -u node -it <N8N_CONTAINER> n8n unpublish:workflow --all
docker compose down
Note: unpublish:workflow updates the database, and n8n needs a restart afterwards to pick up the change. We do this as a control measure, so you do not end up with a pile of failed runs while n8n is down.
2) Take a full backup of the n8n volume
This backs up the SQLite database file and the n8n config folder.
mkdir -p ./n8n_migrate_backup
docker run --rm \
-v <N8N_VOLUME>:/data \
-v "$(pwd)/n8n_migrate_backup:/backup" \
alpine sh -lc 'cd /data && tar -czf /backup/n8n_volume_full.tgz .'
Verify the tarball:
ls -lh ./n8n_migrate_backup/n8n_volume_full.tgz
tar -tzf ./n8n_migrate_backup/n8n_volume_full.tgz | head -n 30
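Listing the first entries proves the tarball opens, but not that the whole archive is intact. A fuller check, sketched below, walks the entire archive and records a checksum you can compare after copying the file anywhere (sha256sum is GNU coreutils; on macOS use shasum -a 256):

```shell
# Verify a gzip tarball end-to-end and record its SHA-256 alongside it.
verify_backup() {
  archive="$1"
  # Must exist and be non-empty.
  [ -s "$archive" ] || { echo "missing or empty: $archive" >&2; return 1; }
  # Walk every entry, not just the first ones; this catches truncation.
  tar -tzf "$archive" > /dev/null 2>&1 || { echo "corrupt archive: $archive" >&2; return 1; }
  sha256sum "$archive" > "$archive.sha256"
  echo "OK: $archive ($(tar -tzf "$archive" | wc -l) entries)"
}

# verify_backup ./n8n_migrate_backup/n8n_volume_full.tgz
```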
3) Extract and record your current encryption key
You want to reuse the same key via N8N_ENCRYPTION_KEY. In Docker volume setups, the key is stored in the n8n config file inside the volume.
docker run --rm \
-v <N8N_VOLUME>:/data \
alpine sh -lc 'sed -n "1,200p" /data/config'
If you see an encryptionKey value, copy it and store it safely.
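The config file is a small JSON document, so you can pull the key out directly rather than reading the whole file. A sketch, assuming the usual {"encryptionKey": "..."} shape; it prefers jq when available, with a best-effort sed fallback:

```shell
# Extract encryptionKey from an n8n config file (JSON).
extract_key() {
  config="$1"
  if command -v jq > /dev/null 2>&1; then
    jq -r '.encryptionKey // empty' "$config"
  else
    # Naive fallback parse; adequate for the flat config n8n writes.
    sed -n 's/.*"encryptionKey"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$config"
  fi
}

# Example, against a config file restored from the step 2 tarball:
# extract_key ./n8n_migrate_backup/config
```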
4) Export entities from SQLite
Create an export folder on the host and make it writable by UID 1000 (the node user in the container is commonly UID 1000).
mkdir -p <EXPORT_DIR>
chown -R 1000:1000 <EXPORT_DIR>
Run the export using the same n8n version and mounting your existing volume.
docker run --rm -u node \
--entrypoint n8n \
-v <N8N_VOLUME>:/home/node/.n8n \
-v "<EXPORT_DIR>:/export" \
n8nio/n8n:<N8N_VERSION> \
export:entities --outputDir=/export
Execution history tables are excluded by default because they can be large. If you need them, add --includeExecutionHistoryDataTables=true.
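A silent, empty export is easy to miss, so it is worth confirming that files actually landed before tearing anything down. A minimal check:

```shell
# Fail if the export directory contains no files at all.
check_export_dir() {
  dir="$1"
  count=$(find "$dir" -type f 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "no files exported to $dir" >&2
    return 1
  fi
  echo "exported $count file(s) under $dir"
}

# check_export_dir <EXPORT_DIR>
```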
Optional extra backups (not required for the entity migration, but can be useful as a fallback):
mkdir -p ./n8n_migrate/workflows ./n8n_migrate/credentials
chown -R 1000:1000 ./n8n_migrate
docker run --rm -u node \
--entrypoint n8n \
-v <N8N_VOLUME>:/home/node/.n8n \
-v "$(pwd)/n8n_migrate/workflows:/out" \
n8nio/n8n:<N8N_VERSION> \
export:workflow --backup --output=/out/
docker run --rm -u node \
--entrypoint n8n \
-v <N8N_VOLUME>:/home/node/.n8n \
-v "$(pwd)/n8n_migrate/credentials:/out" \
n8nio/n8n:<N8N_VERSION> \
export:credentials --backup --output=/out/
5) Update Docker Compose for Postgres
Update your compose stack with the new Postgres connection details:
- DB_TYPE=postgresdb
- the DB_POSTGRESDB_* variables
- N8N_ENCRYPTION_KEY, set to your existing key
Managed Postgres often requires TLS. If you can provide a CA certificate, prefer that over disabling verification.
Connectivity check from your VPS:
docker run --rm -it postgres:16-alpine sh -lc '
apk add --no-cache ca-certificates >/dev/null
PGPASSWORD="<PGPASSWORD>" psql "host=<PGHOST> port=<PGPORT> dbname=<PGDATABASE> user=<PGUSER> sslmode=require" -c "select 1;"
'
Example docker-compose.yml service:
services:
  n8n:
    image: n8nio/n8n:<N8N_VERSION>
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=Europe/London
      - TZ=Europe/London
      - WEBHOOK_URL=<WEBHOOK_URL>
      # key from your existing n8n config
      - N8N_ENCRYPTION_KEY=<N8N_ENCRYPTION_KEY>
      # Postgres
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=<PGHOST>
      - DB_POSTGRESDB_PORT=<PGPORT>
      - DB_POSTGRESDB_DATABASE=<PGDATABASE>
      - DB_POSTGRESDB_USER=<PGUSER>
      - DB_POSTGRESDB_PASSWORD=<PGPASSWORD>
      - DB_POSTGRESDB_SSL_ENABLED=true
      # Prefer a CA when possible
      # - DB_POSTGRESDB_SSL_CA_FILE=/path/in/container/ca.crt
      # Optional execution pruning
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=72
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - N8N_RUNNERS_ENABLED=true
    volumes:
      - <N8N_VOLUME>:/home/node/.n8n
    networks:
      - n8n
    restart: always

volumes:
  <N8N_VOLUME>:

networks:
  n8n:

5a) Postgres permissions
This assumes your database is <PGDATABASE> and your app user is <PGUSER>.
GRANT CONNECT, TEMPORARY ON DATABASE <PGDATABASE> TO "<PGUSER>";
GRANT USAGE, CREATE ON SCHEMA public TO "<PGUSER>";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "<PGUSER>";
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO "<PGUSER>";
GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA public TO "<PGUSER>";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO "<PGUSER>";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO "<PGUSER>";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT EXECUTE ON FUNCTIONS TO "<PGUSER>";
If you hit errors during import related to replication role, you may need:
GRANT SET ON PARAMETER session_replication_role TO "<PGUSER>";
6) Start n8n once to create the schema, then stop it
This ensures the destination DB schema matches your exact n8n version.
docker compose up -d
docker compose logs -f n8n
After you confirm it started and created tables, stop it:
docker compose down
7) Import entities into Postgres
Run the import using the same n8n version and the same encryption key.
docker run --rm -u node \
--entrypoint n8n \
-e N8N_ENCRYPTION_KEY="<N8N_ENCRYPTION_KEY>" \
-e DB_TYPE=postgresdb \
-e DB_POSTGRESDB_HOST="<PGHOST>" \
-e DB_POSTGRESDB_PORT="<PGPORT>" \
-e DB_POSTGRESDB_DATABASE="<PGDATABASE>" \
-e DB_POSTGRESDB_USER="<PGUSER>" \
-e DB_POSTGRESDB_PASSWORD="<PGPASSWORD>" \
-e DB_POSTGRESDB_SSL_ENABLED=true \
-v "<EXPORT_DIR>:/import" \
n8nio/n8n:<N8N_VERSION> \
import:entities --inputDir=/import --truncateTables
import:entities expects the destination tables to be empty; --truncateTables clears them before the import.
7a) Fallback path if you cannot use import:entities
If import:entities fails and you want a fallback, use the earlier workflow and credentials backups.
Select the user ID in the new Postgres DB:
SELECT id, email, "createdAt"
FROM "user"
ORDER BY "createdAt";
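If you prefer to capture the id in a shell variable for the import commands below, psql's unaligned tuples-only output (-At) is easy to script against. A sketch; first_field is an illustrative helper, and the connection string reuses the placeholders from step 5:

```shell
# Take the first |-separated column of the first output row.
first_field() {
  head -n 1 | cut -d'|' -f1
}

# Example: capture the oldest user's id from the new database.
# N8N_USER_ID=$(PGPASSWORD="<PGPASSWORD>" \
#   psql "host=<PGHOST> port=<PGPORT> dbname=<PGDATABASE> user=<PGUSER> sslmode=require" \
#   -At -c 'SELECT id FROM "user" ORDER BY "createdAt" LIMIT 1')

# With -At, a row looks like: 0a1b2c3d|owner@example.com|2024-01-01
echo '0a1b2c3d|owner@example.com|2024-01-01' | first_field   # prints 0a1b2c3d
```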
Import credentials:
docker run --rm -u node \
--entrypoint n8n \
-e N8N_ENCRYPTION_KEY="<N8N_ENCRYPTION_KEY>" \
-e DB_TYPE=postgresdb \
-e DB_POSTGRESDB_HOST="<PGHOST>" \
-e DB_POSTGRESDB_PORT="<PGPORT>" \
-e DB_POSTGRESDB_DATABASE="<PGDATABASE>" \
-e DB_POSTGRESDB_USER="<PGUSER>" \
-e DB_POSTGRESDB_PASSWORD="<PGPASSWORD>" \
-e DB_POSTGRESDB_SSL_ENABLED=true \
-v "$(pwd)/n8n_migrate/credentials:/in" \
n8nio/n8n:<N8N_VERSION> \
import:credentials --separate --input=/in --userId="<N8N_USER_ID>"
Import workflows:
docker run --rm -u node \
--entrypoint n8n \
-e N8N_ENCRYPTION_KEY="<N8N_ENCRYPTION_KEY>" \
-e DB_TYPE=postgresdb \
-e DB_POSTGRESDB_HOST="<PGHOST>" \
-e DB_POSTGRESDB_PORT="<PGPORT>" \
-e DB_POSTGRESDB_DATABASE="<PGDATABASE>" \
-e DB_POSTGRESDB_USER="<PGUSER>" \
-e DB_POSTGRESDB_PASSWORD="<PGPASSWORD>" \
-e DB_POSTGRESDB_SSL_ENABLED=true \
-v "$(pwd)/n8n_migrate/workflows:/in" \
n8nio/n8n:<N8N_VERSION> \
import:workflow --separate --input=/in --userId="<N8N_USER_ID>"
8) Start n8n normally and validate
docker compose up -d
docker compose logs -f n8n
Log in and check your workflows and credentials, then open a workflow and confirm that credential fields load without decryption errors.
Re-enable all workflows at once:
docker exec -u node -it <N8N_CONTAINER> n8n update:workflow --all --active=true
docker compose restart n8n
In case something fails
Encryption key mismatch
- If N8N_ENCRYPTION_KEY differs from the key used previously, credentials will not decrypt.
- Check the saved key in the n8n config file inside your old volume, then set it explicitly in Compose.
TLS connection problems
- Start with DB_POSTGRESDB_SSL_ENABLED=true.
- If validation fails, provide the CA certificate using DB_POSTGRESDB_SSL_CA or DB_POSTGRESDB_SSL_CA_FILE.
- Avoid disabling verification unless you have no alternative.
Version mismatch between export and import
Keep the image tag identical for export, schema creation, import, and first startup.
Rollback
If the migration fails, restore the original volume tarball into a fresh volume and start n8n again on SQLite.
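The restore mirrors the backup command from step 2, with tar -x instead of tar -c. Sketched below as a local helper (restore_tree is an illustrative name) plus the docker equivalent targeting a fresh volume:

```shell
# Extract a volume backup into an empty target directory, refusing to
# overwrite existing data.
restore_tree() {
  archive="$1"; target="$2"
  [ -s "$archive" ] || { echo "missing archive: $archive" >&2; return 1; }
  if [ -d "$target" ] && [ -n "$(ls -A "$target" 2>/dev/null)" ]; then
    echo "target $target is not empty" >&2
    return 1
  fi
  mkdir -p "$target" && tar -xzf "$archive" -C "$target"
}

# Docker equivalent, restoring into a fresh named volume:
# docker run --rm \
#   -v <N8N_VOLUME>_restored:/data \
#   -v "$(pwd)/n8n_migrate_backup:/backup" \
#   alpine sh -lc 'cd /data && tar -xzf /backup/n8n_volume_full.tgz'
```

Point your compose volume at the restored copy (or the original volume) and start n8n on SQLite as before.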