Building a Pi Network Monitor with n8n & Postgres

Monitoring WiFi stability over time using a Raspberry Pi, shell script, and a webhook pipeline.

I’ve been testing a lightweight way to log network stats from a Raspberry Pi across a dozen different WiFi networks, pushing them into a PostgreSQL database via n8n, to see how reliable each network is and whether latency or speed degrades over time.

The setup records the connected SSID, speedtest results, and ping stats every 10 minutes, and posts them to a webhook where n8n handles storage.


Tools Installed on the Pi

Basic stuff:

sudo apt-get install arp-scan
sudo apt-get install nmap
sudo apt-get install wireshark
sudo apt install dnsutils
sudo apt install speedtest-cli
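
Or install the lot in one go:

sudo apt-get install -y arp-scan nmap wireshark dnsutils speedtest-cli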

Pre-Configuring WiFi Networks

If the Pi's going to move between locations, it helps to preload the SSIDs and passwords using:

sudo nmtui

That way it connects automatically.
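
nmtui is interactive, so if you'd rather preload networks from a script, nmcli can add them non-interactively. A rough sketch, with the connection name, SSID, password and wlan0 interface as placeholders for your own values:

sudo nmcli connection add type wifi ifname wlan0 con-name "OfficeWiFi" ssid "OfficeWiFi"
sudo nmcli connection modify "OfficeWiFi" wifi-sec.key-mgmt wpa-psk wifi-sec.psk "your-password"
sudo nmcli connection modify "OfficeWiFi" connection.autoconnect yes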


Logging Script

This runs speed tests, pings Google DNS, grabs the SSID, and posts a small JSON payload to a webhook.

nano ~/wifi_speedtest_logger.sh

Script contents (replace the webhook URL and auth details):

#!/usr/bin/env bash
set -euo pipefail

WEBHOOK_URL="https://<your-n8n-instance>/webhook/network-logger"
AUTH_USER="user"
AUTH_PASS="pass"

HOSTNAME=$(hostname)
DATE=$(date --utc '+%Y-%m-%d')
TIME=$(date --utc '+%H:%M:%SZ')
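
# Connected SSID: try iwgetid first, fall back to nmcli, else report N/A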
SSID=$(iwgetid -r 2>/dev/null || nmcli -t -f active,ssid dev wifi | grep '^yes:' | cut -d: -f2 || echo "N/A")

# Speedtest (Mbit/s); awk exits 0 even with no match, so apply fallbacks explicitly
SPEEDTEST=$(timeout 30 speedtest-cli --simple 2>/dev/null || true)
DOWNLOAD=$(awk '/Download:/ {print $2}' <<<"$SPEEDTEST")
UPLOAD=$(awk '/Upload:/ {print $2}' <<<"$SPEEDTEST")
DOWNLOAD=${DOWNLOAD:-N/A}
UPLOAD=${UPLOAD:-N/A}

# Average RTT (5th '/'-separated field of the rtt line) and the packet-loss field ending in '%'
PING=$(ping -c 10 8.8.8.8 2>/dev/null || true)
LATENCY=$(awk -F'/' '/rtt/ {print $5 " ms"}' <<<"$PING")
PACKET_LOSS=$(awk '/packet loss/ {for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i}' <<<"$PING")
LATENCY=${LATENCY:-N/A}
PACKET_LOSS=${PACKET_LOSS:-100%}

read -r -d '' PAYLOAD <<EOF || true
{
  "hostname": "$HOSTNAME",
  "timestamp": "${DATE}T${TIME}",
  "ssid": "$SSID",
  "download": "$DOWNLOAD",
  "upload": "$UPLOAD",
  "latency": "$LATENCY",
  "packet_loss": "$PACKET_LOSS"
}
EOF

curl --fail --silent \
  --user "$AUTH_USER:$AUTH_PASS" \
  --header "Content-Type: application/json" \
  --data "$PAYLOAD" \
  "$WEBHOOK_URL" \
  && echo "✔ Sent at $DATE $TIME" \
  || echo "✘ Failed to send"

Cron

Runs every 10 minutes:

crontab -e
*/10 * * * * /home/pi/wifi_speedtest_logger.sh
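
If you want a local record of failed runs, appending the script's output to a log file is one option (the path here is just an example):

*/10 * * * * /home/pi/wifi_speedtest_logger.sh >> /home/pi/wifi_logger.log 2>&1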

n8n Flow

The n8n workflow just takes the webhook payload, flattens it, strips units (like ms), and inserts it into a Postgres table.

The "code" node parses values like this:

// Coerce strings like "23.45", "14.5 ms" or "0%" to numbers, or null if unparseable.
// (A plain parseFloat(x) || null would turn a legitimate 0, such as 0% packet loss, into null.)
const toNumber = (value) => {
  const n = parseFloat(value);
  return Number.isNaN(n) ? null : n;
};

for (const item of $input.all()) {
  const body = item.json.body || {};

  item.json.hostname = body.hostname;
  item.json.timestamp = body.timestamp;
  item.json.ssid = body.ssid;
  item.json.download = toNumber(body.download);
  item.json.upload = toNumber(body.upload);
  item.json.latency_ms = toNumber(body.latency);
  item.json.packet_loss = toNumber(body.packet_loss);

  delete item.json.body;
}
return $input.all();

The Postgres node maps those fields directly into a Postgres table (it_network_monitoring in my case), which is then visualised in Grafana or another BI tool.
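
For reference, a table along these lines matches the fields coming out of the code node. This is a minimal sketch (assuming numeric columns and the timestamp stored as timestamptz), so adjust names and types to whatever your Postgres node is actually mapped to:

psql -h <postgres-host> -U <user> -d <database> <<'SQL'
CREATE TABLE IF NOT EXISTS it_network_monitoring (
    id          SERIAL PRIMARY KEY,
    hostname    TEXT,
    "timestamp" TIMESTAMPTZ,
    ssid        TEXT,
    download    NUMERIC,
    upload      NUMERIC,
    latency_ms  NUMERIC,
    packet_loss NUMERIC
);
SQL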


Notes

This setup's been solid so far. Speedtest results come back reliably, SSID detection works with a quick fallback, and the Pi handles everything quietly in the background. It’s not fancy, but it does exactly what I wanted: a hands-off way to log network behaviour over time without extra services or overhead.