Orange Pi - Setting Up a Secure Web Server on a Single Board Computer

Transform an Orange Pi Zero 3 into a secure, efficient web server using Docker and Cloudflare Tunnels.

I've been fascinated by SBCs since I got my first Raspberry Pi nearly 10 years ago. They are conveniently sized and powerful devices that have come in handy for numerous projects. I have multiple Pis around the house running various services, and curiosity led me to explore other SBC options like the Rock Pi, Banana Pi, Orange Pi, and Potato Pi. Among these, I was intrigued by the Orange Pi and decided it would be fun to set it up as a self-hosted web server to host multiple websites.

A word of warning: This walkthrough is rather extensive and places significant emphasis on securing the device prior to any hosting activities.


High-level Overview

Outlining the process:

  1. Flash Debian.
  2. Secure the device and the network.
  3. Set up Docker and Docker Compose.
  4. Host a website.
  5. Configure Cloudflare tunnel.
  6. Set up backups.

Requirements

The following is required for this project:

  • Orange Pi: This project uses an Orange Pi Zero 3. I do not have any prior experience with other Orange Pi models, so I cannot comment on whether the specific commands in this post will work on them. The Pi will run headless, so an additional keyboard and display are not required.
  • microSD card: I went with a 64GB card to make sure there is enough space for multiple sites.
  • Network cable: For reliability, the Pi will be hardwired to the network.
  • Client PC: For controlling the Pi and securing the network.
  • Domain Name: I purchased one via Cloudflare; since Cloudflare Tunnels will be used, this makes managing both the domain and the tunnel simpler.

Preparation

Set up a dedicated VLAN on your network:

  • This highly depends on your network infrastructure, but at the minimum, prepare the following:
    • Set up a dedicated network segment for this project.
    • Ensure firewall rules are set up to block traffic from this VLAN to the main network (a generic sketch of such rules follows this list):
      • Allow traffic to and from your Client PC for configuration purposes.
      • Optional: Allow traffic to and from your NAS for backup purposes.
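As a rough illustration only, since the exact syntax depends entirely on your router or firewall vendor, the intent of these rules could be expressed in iptables form roughly as follows, assuming 10.11.1.0/24 is the dedicated VLAN, 10.1.1.0/24 is the main network, and 10.1.1.19 is the Client PC (addresses reused from the UFW section later in this post):

#allow the Client PC to reach the VLAN for configuration
iptables -A FORWARD -s 10.1.1.19 -d 10.11.1.0/24 -j ACCEPT
iptables -A FORWARD -s 10.11.1.0/24 -d 10.1.1.19 -j ACCEPT
#block everything else between the VLAN and the main network
iptables -A FORWARD -s 10.11.1.0/24 -d 10.1.1.0/24 -j DROP
iptables -A FORWARD -s 10.1.1.0/24 -d 10.11.1.0/24 -j DROP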

Preparing the Orange Pi Zero 3:

  • Insert the microSD card into an adapter and connect it to your computer.
  • Download the latest Debian Bookworm image for the Orange Pi Zero 3 and install Etcher to flash it onto your microSD card.
  • Flash the SD card.

Configuring the basics - hostname and services

  • The default account is orangepi with the password orangepi. Use these credentials to SSH into the Pi from the client PC.
  • When logged in, run:
sudo orangepi-config 
  • Disable Bluetooth under Network > Remove BT
  • Set the IP to static under Network > IP > Static
  • Update the hostname under Personal > Host Name
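The changes can be double-checked after a reboot; the network interface may be named differently on your image, so adjust eth0 if needed:

hostnamectl
ip -4 addr show eth0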

Changing package repositories

By default, the Orange Pi image ships with a list of Chinese mirrors. China - and other countries - are blocked by my firewall for security reasons, so the existing package repositories are replaced with a local mirror using the following:

sudo nano /etc/apt/sources.list
deb http://ftp.uk.debian.org/debian/ bookworm main contrib  
deb-src http://ftp.uk.debian.org/debian/ bookworm main contrib

deb http://ftp.uk.debian.org/debian/ bookworm-updates main contrib  
deb-src http://ftp.uk.debian.org/debian/ bookworm-updates main contrib

deb http://deb.debian.org/debian-security/ bookworm-security main non-free-firmware  
deb-src http://deb.debian.org/debian-security/ bookworm-security main non-free-firmware
  • Update the Docker repository list:
sudo nano /etc/apt/sources.list.d/docker.list
  • Replace the existing contents with the following:
deb [arch=arm64] https://download.docker.com/linux/debian bookworm stable
  • Remove Docker's GPG key from the legacy trusted.gpg keyring:
sudo apt-key --keyring /etc/apt/trusted.gpg del 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
  • Create Docker's Keyring File:
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  • Download and Install Docker's GPG Key:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
  • After making changes to the repository configuration, update the system's package lists:
sudo apt update && sudo apt upgrade -y

Secure the Pi and the Network

Creating a new non-root user account and disabling orangepi

Replace newuser with an actual username:

sudo adduser newuser
  • Change the root password. This command will prompt you to enter and confirm the new root password:
sudo passwd root
  • Disable the orangepi account. This command sets the account's expiration date to a date in the past (day 1 of the Unix epoch), effectively disabling it:
su root  
sudo usermod --expiredate 1 orangepi
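To confirm the orangepi account is now expired, the account ageing information can be inspected with:

sudo chage -l orangepi #the 'Account expires' line should show a date in the past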

Setting up SSH key authentication for newuser

The goal is to allow access to the Pi exclusively through newuser with an SSH keypair and to disable root SSH login. This ensures that in order to reach root, newuser must be authenticated first. The next section lists the steps required for setting this up.

Generating an SSH keypair

  • Install PuTTY and launch the PuTTY Key Generator (PuTTYGen.exe)
  • Generate a new key pair with a strong passphrase, then save both private and public keys on your local machine.
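If your Client PC runs Linux or macOS rather than Windows, the same keypair can be generated with OpenSSH instead of PuTTYGen; a minimal sketch (the file name is arbitrary):

ssh-keygen -t ed25519 -a 100 -f ~/.ssh/orangepi_key -C "newuser@orangepi"
cat ~/.ssh/orangepi_key.pub #this is the public key to copy over in the next step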

Appending the public key for newuser

  • Create a new authorized_keys file for newuser.
su newuser  
mkdir ~/.ssh  
chmod 700 ~/.ssh  
nano ~/.ssh/authorized_keys
  • Paste in the public key, then save and exit with Ctrl + X.
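It is also worth tightening the permissions on the new file itself, as sshd's StrictModes check will ignore an authorized_keys file that is writable by anyone other than its owner:

chmod 600 ~/.ssh/authorized_keys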

Changing the SSH port, disabling root login and forcing key authentication

  • Switch to root and edit sshd_config with nano:
su root
sudo nano /etc/ssh/sshd_config
  • Change and uncomment the SSH port. Pick something non-standard; this example uses port 22041.
  • Change and uncomment PermitRootLogin to no.
  • Set PasswordAuthentication to no to force key authentication.
  • Change KbdInteractiveAuthentication to yes. This will be used for setting up 2FA for the account on login later.
Port 22041
PermitRootLogin no
PasswordAuthentication no  
KbdInteractiveAuthentication yes
ChallengeResponseAuthentication yes
  • Close and save with Ctrl+X.
  • Restart the SSH service.
sudo systemctl restart ssh
  • Log out and start a new SSH session on port 22041 using newuser and the SSH keypair.
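For reference, connecting from an OpenSSH client would look like the line below, assuming the key generated earlier; PuTTY users set the port and private key in the session settings instead:

ssh -p 22041 -i ~/.ssh/orangepi_key newuser@<pi-ip-address>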

Updating NTP to ensure TOTP 2FA will work for newuser

  • Check whether the time synchronisation service is installed and running:
su root
systemctl status systemd-timesyncd
  • If it's not installed or active, you may need to install or enable it.
sudo apt-get install systemd-timesyncd
sudo systemctl enable systemd-timesyncd  
sudo systemctl start systemd-timesyncd
  • Enable NTP synchronisation and check its status:
timedatectl set-ntp true
sudo systemctl status systemd-timesyncd

Updating DNS servers to Cloudflare's servers

If you have an existing DNS server set for this VLAN, make sure it is updated not to force a local DNS resolver (Pi-hole in my case), then update the DNS servers on the Orange Pi:

sudo nano /etc/resolv.conf  
search localdomain  
nameserver 1.1.1.1  
nameserver 1.0.0.1
sudo service networking restart

Configuring the local firewall

As the device will be exposed, it is a good idea to lock it down as much as possible to reduce the attack surface. Start by installing UFW:

sudo apt update  
sudo apt install ufw  
sudo ufw --force reset

You will need to customise the following UFW rules to suit your needs. The goals are the following:

  • Block all incoming traffic, except from the Client PC and your network drive. The network drive will be used later to store full backups.
  • Block access to all other local VLANs.
  • Allow traffic to and from Cloudflare.
#set the default rules
sudo ufw default deny incoming  
sudo ufw default allow outgoing  
sudo ufw default deny routed

#allow outgoing traffic to control PC and NAS
sudo ufw allow out to 10.1.1.19  
sudo ufw allow out to 10.11.1.23

#block access to all local VLANs (repeat for multiple IP ranges)
sudo ufw deny out to 10.1.1.0/24
sudo ufw deny out to 10.11.1.0/24
...

#allow traffic from the control PC
sudo ufw allow from 10.1.1.19

#allow traffic from and to Cloudflare
sudo ufw allow 7844/tcp
sudo ufw allow from 173.245.48.0/20  
sudo ufw allow from 103.21.244.0/22  
sudo ufw allow from 103.22.200.0/22  
sudo ufw allow from 103.31.4.0/22  
sudo ufw allow from 141.101.64.0/18  
sudo ufw allow from 108.162.192.0/18  
sudo ufw allow from 190.93.240.0/20  
sudo ufw allow from 188.114.96.0/20  
sudo ufw allow from 197.234.240.0/22  
sudo ufw allow from 198.41.128.0/17  
sudo ufw allow from 162.158.0.0/15  
sudo ufw allow from 104.16.0.0/13  
sudo ufw allow from 104.24.0.0/14  
sudo ufw allow from 172.64.0.0/13  
sudo ufw allow from 131.0.72.0/22
  • Enable the UFW service and restart UFW to apply the changes
sudo ufw enable  
sudo ufw reload
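The resulting ruleset can be reviewed with:

sudo ufw status verbose
sudo ufw status numbered #numbered view, handy when deleting individual rules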

Install Fail2Ban and automatic updates

  • Update packages and upgrade to the latest version.
  • Install Fail2Ban.
  • Enable Automatic Updates.
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
  • Enable Fail2Ban jails:
sudo nano /etc/fail2ban/jail.local
  • Increase bantime from the default 1h to 2h.
  • Whitelist your Client PC's static IP to avoid locking yourself out after failed attempts, e.g. ignoreip = 127.0.0.1/8 ::1 your_client_ip (replace your_client_ip with the actual address).
  • Reduce maxretry from 5 to 3.
  • Enable jails by adding enabled = true under each jail section, defined by '[]'. If not sure which ones to enable, turn it on for SSH and Apache for now; a minimal example follows below.
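A minimal sketch of the relevant parts of jail.local with these changes applied, assuming the custom SSH port from earlier (adjust to your own values):

[DEFAULT]
bantime = 2h
maxretry = 3
ignoreip = 127.0.0.1/8 ::1 your_client_ip

[sshd]
enabled = true
port = 22041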

Install CrowdSec

Installing CrowdSec requires a CrowdSec account. CrowdSec is an open-source security monitoring and response tool that analyses user behaviour and traffic on your server to identify and block potential security threats. It is highly recommended for this use case; please read the official CrowdSec documentation for setting it up, assigning block lists, and adding bouncers.

curl -s https://packagecloud.io/install/repositories/crowdsec/crowdsec/script.deb.sh | sudo bash
sudo apt install crowdsec
sudo cscli console enroll <your ID> --overwrite #get the ID from your CrowdSec console's Security Engine
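Once enrolled, it is worth checking that the engine is actually picking up logs and installing a bouncer so that decisions are enforced locally; a sketch using the iptables bouncer, which is one of several options available from the repository added above:

sudo cscli metrics #confirm log sources are being read
sudo cscli decisions list #show currently banned IPs
sudo apt install crowdsec-firewall-bouncer-iptables #enforces decisions via the local firewall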

Install and Configure Endlessh

Endlessh is an SSH tarpit: a security mechanism that deliberately slows down incoming SSH connections, typically to frustrate attackers and bots. By drastically reducing the speed of the SSH handshake process, it effectively hinders malicious attempts to access the server, as automated scripts and brute-force attacks become exceedingly time-consuming and resource-intensive.

git clone https://github.com/skeeto/endlessh
cd endlessh
make
sudo make install
sudo cp util/endlessh.service /etc/systemd/system
sudo systemctl enable endlessh
sudo mkdir -p /etc/endlessh
cd /etc/endlessh
sudo nano config
# The port on which to listen for new SSH connections.
Port 22

# The endless banner is sent one line at a time. This is the delay
# in milliseconds between individual lines.
Delay 10000

# The length of each line is randomized. This controls the maximum
# length of each line. Shorter lines may keep clients on for longer if
# they give up after a certain number of bytes.
MaxLineLength 32

# Maximum number of connections to accept at a time. Connections beyond
# this are not immediately rejected, but will wait in the queue.
MaxClients 4096

# Set the detail level for the log.
#   0 = Quiet
#   1 = Standard, useful log messages
#   2 = Very noisy debugging information
LogLevel 0

# Set the family of the listening socket
#   0 = Use IPv4 Mapped IPv6 (Both v4 and v6, default)
#   4 = Use IPv4 only
#   6 = Use IPv6 only
BindFamily 0
sudo systemctl start endlessh
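To confirm the tarpit is working, connect to port 22 from the whitelisted Client PC; the connection should hang while random banner lines trickle in every ten seconds:

nc -v <pi-ip-address> 22 #press Ctrl+C to give up; real SSH access remains on port 22041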

Install and Configure ClamAV

ClamAV is an open-source antivirus software solution that employs a database of known malware signatures to identify and protect systems from various types of malicious software. Operating as a daemon, the ClamAV engine listens for incoming connections from clients and scans files as they are transmitted.

sudo apt install clamav  
sudo service clamav-freshclam stop  
sudo freshclam  
sudo service clamav-freshclam start  

Run the first scan:

clamscan -r

Schedule automatic scans with a script:

To automate recurring AV scans, create a script and add it to cron:

sudo nano /root/dailyscan.sh
clamscan -r --remove &> "/var/log/clamav/scan-$(date +'%Y-%m-%d').log"
sudo crontab -e
0 0,12 * * * /bin/bash /root/dailyscan.sh #scan every 12 hours

Save and exit with CTRL + X
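The results of the scheduled scans end up in dated log files, which can be reviewed with something like:

ls -lt /var/log/clamav/ | head
tail -n 20 "/var/log/clamav/scan-$(date +'%Y-%m-%d').log" #summary of today's scan, if one has run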

Install and Configure 2FA

To secure the non-root account with 2FA, we will use Google's PAM module and set up TOTP (Time-based One-Time Password). The 2FA challenge will be presented when logging into the newuser account for an additional layer of security. TOTP is designed to protect against interception and reuse of passwords, as each OTP can only be used once in the designated time frame.

sudo apt install libpam-google-authenticator
su newuser

Call Google Authenticator with the following command:

google-authenticator
  • To create a TOTP, answer the Do you want authentication tokens to be time-based (y/n) question with y.
  • Scan the QR code or copy-paste the secret key (CTRL + Right Click > Copy in the terminal) to your password manager or authenticator app.
  • Save all backup keys to a secure place.
  • Verify the 2FA code and update the .google_authenticator file by answering y on the next question.
  • Disallow multiple uses of the same token for extra security.
  • Increase the allowed time window by 30 seconds. I enabled this as the Pi does not have a battery-backed hardware clock and has to call out to an NTP server to update its time. If the connection can't be made in time, the TOTP is still valid for another 30 seconds (1 minute in total).
  • Enable rate limiting. With rate limiting enabled, a user is restricted to making a maximum of three login attempts within a 30-second timeframe.
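For reference, the same answers can be supplied non-interactively; a sketch using the flags of the google-authenticator CLI (using -w 17 mirrors answering 'y' to the time-skew question):

google-authenticator -t -d -f -r 3 -R 30 -w 17
#-t time-based, -d disallow token reuse, -f write the file without prompting,
#-r 3 -R 30 rate-limit to 3 attempts per 30 seconds, -w 17 widen the accepted time window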

Update SSH to use 2FA:

su root
sudo nano /etc/pam.d/sshd

Add this to the beginning of the file:

auth required pam_google_authenticator.so

Restart SSH:

sudo systemctl restart sshd

Now, in order to access the root account, the login procedure is updated to the following:

  1. Start an SSH session for newuser over port 22041.
  2. Enter the passphrase for the SSH key.
  3. Enter the 2FA code.
  4. To switch to root, enter the root password.

Run a Security Audit:

Lynis is a security auditing tool designed for Linux, macOS, and UNIX-based systems. It aids in compliance testing for standards like HIPAA, ISO 27001, and PCI DSS, and contributes to system hardening. The tool is agentless, meaning it doesn't require an agent installed on the system it's auditing, which provides flexibility in its use.

To run Lynis:

git clone https://github.com/CISOfy/lynis  
cd lynis  
./lynis audit system

A recurring warning I noticed was Permissions for directory: /etc/sudoers.d WARNING. By default, the sudoers.d directory is expected to be inaccessible to "others". To fix this warning, run:

chmod 750 /etc/sudoers.d

Review the results of the audit and act on them where you can.

The "Hardening Index" is a Lynis-specific indicator that does not mean how safe the system is. Play around with restricting the Pi as much as you can.

Docker and Docker Compose

Installing Docker and Compose:

sudo apt update  
sudo apt install apt-transport-https ca-certificates curl software-properties-common  
sudo apt install docker-ce #the Docker repository and GPG key were already configured earlier  
sudo apt-get install docker-compose-plugin  
sudo apt-get install docker-compose

Enable the Docker service:

sudo systemctl enable docker  
sudo systemctl status docker
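Before moving on, it is worth confirming that Docker can actually run containers, and optionally adding newuser to the docker group so that docker commands do not require sudo (a convenience with a security trade-off, as the docker group is effectively root-equivalent):

sudo docker run --rm hello-world
sudo usermod -aG docker newuser #log out and back in for the group change to take effect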

Hosting a Website with Nginx:

This is an example using Nginx; you can host whatever container you wish. For this example, we are not doing any web development, only hosting the base configuration of Nginx.

  • Create a folder for the Docker Compose file.
  • Create the compose.yml file.
  • Spin up Docker Compose.
  • Create a basic HTML index page inside the Nginx volume.
  • Verify that the container is running on port 8080.
cd ~
mkdir docker
cd docker
nano compose.yml
version: '3'

services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - nginx:/usr/share/nginx/html
    restart: always
volumes:
  nginx:
docker-compose up -d #start the stack first so the named volume is created
sudo nano /var/lib/docker/volumes/docker_nginx/_data/index.html #Compose prefixes the volume name with the project (folder) name, 'docker' in this example
<!DOCTYPE html>
<html>
<head>
    <title>My Simple Page</title>
</head>
<body>
    <h1>Welcome to My Webpage</h1>
    <p>This is a paragraph of text in my simple HTML page.</p>
    <a href="https://www.ambientnode.uk">Click here to visit ambientnode.uk</a>
</body>
</html>
cd ~/docker
docker-compose ps
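The page can also be checked locally from the Pi before involving Cloudflare (curl may need to be installed first with sudo apt install curl):

curl -s http://localhost:8080 | head #should return the HTML created above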

Set up the Cloudflare Tunnel

A Cloudflare Tunnel creates a secure, encrypted, outbound-only connection between your web server and the Cloudflare network, so no inbound ports need to be opened. This enhances security and performance by minimizing the exposure of your server's IP address and optimizing content delivery.

Zero Trust

  • Sign in to Cloudflare and select Zero Trust from the left-side menu.
  • Expand Networks > Tunnels and click Create a tunnel.
  • Use the Cloudflared connector, give it a descriptive name, then select Debian and the CPU architecture.
  • Run the generated command on the server to connect it.
  • Select Public Hostname, add the domain, and enter the local service URL for the Nginx container:
http://0.0.0.0:8080

At this point, the site should be reachable externally through your domain.
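If the site does not load, the connector is the first thing to check; the tunnel should also show as HEALTHY in the Zero Trust dashboard:

sudo systemctl status cloudflared
sudo journalctl -u cloudflared --since "10 minutes ago" #recent connector logs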

Configure Backups

Backups are always important, especially when dealing with non-server-grade equipment such as the Orange Pi. The following steps detail setting up a 1:1 backup of the system using dd and saving it as a compressed .img.gz file. This captures the entire card, including unused space, while compressing it down as far as possible. In this example, backups are scheduled once a week and the image file is saved to a central NAS. Ensure there is a separate, dedicated account on your NAS for this purpose. The Pi also has a USB port; I suppose this could be used for storing backups on a USB drive instead, but I did not explore this option.

To restore the backup, Balena Etcher can be used to burn it to a new SD card that is the same size as or larger than the existing card in the Pi.

Creating a backup script:

sudo apt install gzip cifs-utils #cifs-utils provides mount.cifs for the NAS share
sudo mkdir -p /mnt/backup
cd ~
echo 'NASACCOUNTPASSWORD' > ~/pass
chmod 600 ~/pass
nano backup.sh
#!/bin/bash

if ping -c 1 10.11.1.23 &> /dev/null; then
    PASSWORD=$(cat ~/pass) #read the NAS password from the file created earlier
    sudo mount -t cifs -o username=USERNAME,password=$PASSWORD,uid=1000,gid=1000 //10.11.1.23/backup /mnt/backup #mount the network drive to /mnt/backup
    if mountpoint -q /mnt/backup; then
        timestamp=$(date +"%Y%m%d-%H%M%S")
        sudo dd if=/dev/mmcblk0 bs=1M | gzip > /mnt/backup/backup-$timestamp.img.gz #create a copy of the SD card and save it to the NAS
        if mountpoint -q /mnt/backup; then
            sudo umount /mnt/backup #unmount the NAS
        fi
    else
        echo "Mount failed, backup not started."
    fi
else
    echo "Network is unreachable, backup not started."
fi
chmod +x ~/backup.sh
./backup.sh

Automating a weekly backup:

sudo crontab -e
0 0 * * 0 /bin/bash ~/backup.sh #create a backup every Sunday

Remember, it is not enough to create a backup. Test the backup.
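At a minimum, the compressed image can be integrity-checked without restoring it; a simple sketch:

gzip -t /mnt/backup/backup-*.img.gz && echo "archive OK" #run with the NAS share mounted at /mnt/backup, or run the equivalent directly on the NAS

A full test means writing the image to a spare SD card with Balena Etcher and booting the Pi from it.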

Conclusion

This guide has taken you through the detailed process of setting up an Orange Pi Zero 3 as a web server. Starting with installing Debian, securing the device, and configuring Docker, we have touched upon essential aspects necessary for a robust setup. The project highlights the capabilities of SBCs like the Orange Pi in web hosting and emphasizes the importance of security in such environments.

Using these instructions, you can transform a simple, cost-effective SBC into a functioning web server, showcasing the practicality and potential of these devices. Staying updated, experimenting with new tools, and refining your system are key to maintaining a secure and efficient server.