Containerizing Python Apps
How to containerize Python scripts with Docker, deploy via Portainer, and add uptime monitoring.
I have an extensive Python script that I currently run in a dev environment. Normally, I’d use n8n to automate workflows like this, but n8n’s Python node runs Pyodide, a WebAssembly-based Python interpreter. That’s fine for light tasks, but it doesn’t support many critical features, including:
- File system access
- Custom dependencies (e.g. `psycopg2`, `pandas`, etc.)
- Making raw `requests` calls with custom headers
- Multithreading or subprocess usage
My script relies on all of these. Because of these limitations, I decided to containerize the script, deploy it through Portainer, and monitor it with Uptime Kuma. It’s a clean, repeatable way to deploy Python apps on any Linux box or VM.
1. Project Structure
Create a project folder on the server; in this case we’ll call it `python-script`:
mkdir python-script && cd python-script
Inside this folder, create:
python-script/
├── Dockerfile
├── requirements.txt
└── app.py
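For reference, a minimal `app.py` might look like the sketch below. The `do_work` function is a placeholder for whatever your real script does; the loop-and-sleep structure is just one common shape for a long-running container:

```python
import time
from datetime import datetime, timezone


def do_work() -> str:
    """Placeholder for the real job; returns a status string."""
    return f"[{datetime.now(timezone.utc).isoformat()}] tick"


if __name__ == "__main__":
    # Run forever; the restart policy in the stack handles crashes.
    while True:
        print(do_work(), flush=True)
        time.sleep(60)
```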
2. Dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the script
COPY app.py .
# Run it
CMD ["python", "app.py"]
Change the Python version tag if needed (e.g., `3.10-slim` or `3.12-slim`).
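For completeness, `requirements.txt` is just the pip-installable dependencies, one per line — for example, the packages mentioned earlier (pin exact versions if you need reproducible builds):

```text
requests
psycopg2-binary
pandas
```

Note that `psycopg2-binary` ships prebuilt wheels, which avoids needing a C toolchain inside the slim image.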
3. Build the Docker image
From inside the project folder:
docker build -t python-script:latest .
4. Deploy it with Portainer
You can absolutely run this with Docker Compose locally or as a standalone container. But I used Portainer so I could monitor logs more easily from the web UI.
In Portainer:
- Go to Stacks > Add Stack
- Give it a name like `python-script`
- Paste in the following YAML:
version: '3.8'
services:
  python_script:
    image: python-script:latest
    container_name: python-script
    restart: always
    command: python -u app.py  # -u forces unbuffered output so logs show up live in Portainer
Without `-u`, Python buffers stdout, so your `print()` logs won’t show up in Portainer until the buffer flushes (which might never happen if your script runs indefinitely).
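If you’d rather not override the command, setting the `PYTHONUNBUFFERED` environment variable in the stack has the same effect as `-u`:

```yaml
services:
  python_script:
    image: python-script:latest
    container_name: python-script
    restart: always
    environment:
      - PYTHONUNBUFFERED=1  # equivalent to running python -u
```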
5. Monitor with Uptime Kuma
If your script runs continuously or at regular intervals, you can:
- Add a heartbeat call to Healthchecks.io or a similar service
- Or expose a simple `/health` endpoint via Flask or FastAPI inside the container and point Uptime Kuma at it
- Or monitor the container itself with Uptime Kuma's Docker Container monitor
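A `/health` endpoint doesn’t strictly need Flask or FastAPI; here’s a dependency-free sketch using Python’s stdlib `http.server` instead (the port and path are assumptions — pick whatever fits your setup):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep per-request access logs out of the container logs
        pass


def start_health_server(port: int = 8000) -> HTTPServer:
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    # Daemon thread so the server exits together with the main script
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Call `start_health_server()` at the top of your script, and remember to publish the port in the stack (e.g. `ports: ["8000:8000"]`) so Uptime Kuma can reach it.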
For example, to use a basic heartbeat with Healthchecks.io:
import requests

# Optional: signal the start of a run so Healthchecks.io can measure its duration
requests.get("https://hc-ping.com/YOUR-UUID/start", timeout=10)

# your logic...

# Signal success; Healthchecks.io alerts you if this ping stops arriving
requests.get("https://hc-ping.com/YOUR-UUID", timeout=10)
Wrap-up
This method is lightweight and works great for:
- Data ETL jobs
- API polling scripts
- Scheduled sync tasks
- Anything you’d otherwise try to cram into cron or n8n