---
title: Setting up a Forgejo Actions runner for self-hosted CI/CD
description: How I replaced manual SSH deploys with a push-to-deploy pipeline using a self-hosted Forgejo Actions runner on the same VPS.
pubDate: Apr 22 2026
heroImage: ../../../assets/blog-placeholder-2.jpg
category: en/tech
translationKey: forgejo-actions-runner
---
After moving my Git repositories from GitHub to a self-hosted Forgejo instance, the next logical step was to move deployment off my laptop. Instead of running `./scripts/deploy.sh` locally and hoping nothing was uncommitted, I wanted `git push` to trigger the build and roll out the container automatically.
This post documents the full setup: installing a Forgejo Actions runner on the same VPS that runs Forgejo, wiring it to a workflow, and keeping secrets out of the repo.
## The setup

- VPS: a single Debian machine hosting both Forgejo (rootless Podman container) and the Astro website (`/opt/websites/adrian-altner.de`, managed by a `podman-compose@` systemd service).
- Forgejo: v11 LTS, rootless, running under a dedicated `git` system user.
- Goal: on every push to `main`, rebuild the production image and restart the service, all on the same box.
## Why a dedicated runner user

The runner executes arbitrary code defined in workflow files. Running it as the `git` user (which has access to Forgejo's database and every repo) would be a bad idea. I created a separate system user with a locked-down home directory:
```bash
sudo useradd --system --create-home \
  --home-dir /var/lib/forgejo-runner \
  --shell /bin/bash forgejo-runner
```
That user gets no sudo by default — we'll grant targeted privileges later only for the specific commands the deploy needs.
## Installing the runner binary

The runner is distributed as a single static binary from Forgejo's own registry. I grabbed the latest release programmatically:
```bash
LATEST=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases \
  | grep -oE '"tag_name":"[^"]+"' | head -1 | cut -d'"' -f4)
VER="${LATEST#v}"
cd /tmp
curl -L -o forgejo-runner \
  "https://code.forgejo.org/forgejo/runner/releases/download/${LATEST}/forgejo-runner-${VER}-linux-amd64"
chmod +x forgejo-runner
sudo mv forgejo-runner /usr/local/bin/
```
A quick `forgejo-runner --version` confirmed v12.9.0 was in place: the current major release, compatible with Forgejo v10, v11, and beyond.
## Enabling Actions in Forgejo

Actions are off by default on Forgejo instances. I added the minimal configuration to `app.ini` (found inside the rootless container's volume at `/home/git/forgejo-data/custom/conf/app.ini`):
```ini
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://code.forgejo.org
```
`DEFAULT_ACTIONS_URL` matters because GitHub's Actions marketplace isn't reachable as-is; Forgejo maintains its own mirrors of common actions like `actions/checkout` at `code.forgejo.org/actions/*`. After a container restart, the `actions_artifacts` storage directory appeared in the logs.
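If the default resolution ever gets in the way, a workflow step can also reference an action by its full URL instead of relying on `DEFAULT_ACTIONS_URL`. A sketch of both forms:

```yaml
steps:
  # Short form, resolved via DEFAULT_ACTIONS_URL (https://code.forgejo.org):
  - uses: actions/checkout@v4
  # Fully qualified form that bypasses the default entirely:
  - uses: https://code.forgejo.org/actions/checkout@v4
```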
## Registering the runner

For a single repo, repo-scoped runners are the cleanest option. The registration token came from Settings → Actions → Runners → Create new Runner in the Forgejo UI:
```bash
sudo -iu forgejo-runner /usr/local/bin/forgejo-runner register \
  --no-interactive \
  --instance https://git.altner.cloud \
  --token <REGISTRATION_TOKEN> \
  --name arcturus-runner \
  --labels "self-hosted:host"
```
The label `self-hosted:host` means "jobs labelled `self-hosted` run directly on the host". No container runtime is required for the runner itself; we already have Podman for the application.
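Before wiring up the real deploy, a throwaway workflow is a quick way to confirm that label matching works. A minimal sketch (the filename `.forgejo/workflows/smoke.yml` is hypothetical):

```yaml
name: Smoke test
on: workflow_dispatch

jobs:
  hello:
    # Matches the runner registered with the self-hosted label above
    runs-on: self-hosted
    steps:
      - run: echo "runner is alive on $(hostname)"
```

Triggering it manually from the repo's Actions tab confirms the runner picks up jobs before anything destructive is automated.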
## Making it not need Docker

On first boot, the runner refused to start with:

```
Error: daemon Docker Engine socket not found and docker_host config was invalid
```

Even when using only the `host` label, the runner checks for a Docker socket on startup. Since the server only has rootless Podman, I generated a config file and explicitly disabled the Docker check:
```bash
sudo -iu forgejo-runner /usr/local/bin/forgejo-runner generate-config \
  > /tmp/runner-config.yaml
sudo mv /tmp/runner-config.yaml /var/lib/forgejo-runner/config.yaml
sudo chown forgejo-runner:forgejo-runner /var/lib/forgejo-runner/config.yaml
sudo -iu forgejo-runner sed -i \
  -e 's|docker_host: .*|docker_host: "-"|' \
  -e 's| labels: \[\]| labels: ["self-hosted:host"]|' \
  /var/lib/forgejo-runner/config.yaml
```
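For reference, the relevant part of `config.yaml` after those edits should look roughly like this (abridged; the many other generated keys are omitted):

```yaml
runner:
  # Jobs labelled self-hosted run directly on the host
  labels: ["self-hosted:host"]

container:
  # "-" disables the Docker/Podman socket check entirely
  docker_host: "-"
```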
## Systemd service

The unit file, installed as `/etc/systemd/system/forgejo-runner.service` to match the service name used below:
```ini
[Unit]
Description=Forgejo Actions Runner
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=forgejo-runner
Group=forgejo-runner
WorkingDirectory=/var/lib/forgejo-runner
ExecStart=/usr/local/bin/forgejo-runner --config /var/lib/forgejo-runner/config.yaml daemon
Restart=on-failure
RestartSec=5s
NoNewPrivileges=false
ProtectSystem=full
ProtectHome=read-only
ReadWritePaths=/var/lib/forgejo-runner

[Install]
WantedBy=multi-user.target
```
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now forgejo-runner
```
## Granting just enough sudo

The deploy step needs to build a Podman image and restart the systemd service that runs it. Both require root. Instead of giving the runner user broad sudo, I created a narrow allowlist in `/etc/sudoers.d/forgejo-runner-deploy`:
```
forgejo-runner ALL=(root) NOPASSWD: /usr/bin/podman build *, \
    /usr/bin/podman container prune *, \
    /usr/bin/podman image prune *, \
    /usr/bin/podman builder prune *, \
    /usr/bin/systemctl restart podman-compose@adrian-altner.de.service, \
    /usr/bin/rsync *
```
Running `visudo -cf /etc/sudoers.d/forgejo-runner-deploy` checks the file for syntax errors before a typo accidentally locks you out of sudo entirely.
## The workflow

Workflows live under `.forgejo/workflows/*.yml`. The deploy flow mirrors what my old shell script did, minus the SSH:
```yaml
name: Deploy

on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: self-hosted
    env:
      DEPLOY_DIR: /opt/websites/adrian-altner.de
    steps:
      - uses: actions/checkout@v4

      - name: Sync to deploy directory
        run: |
          sudo rsync -a --delete \
            --exclude='.env' \
            --exclude='.env.production' \
            --exclude='.git/' \
            --exclude='node_modules/' \
            ./ "${DEPLOY_DIR}/"

      - name: Build image
        run: |
          cd "${DEPLOY_DIR}"
          sudo podman build \
            --build-arg WEBMENTION_TOKEN="${{ secrets.WEBMENTION_TOKEN }}" \
            -t localhost/adrian-altner.de:latest .

      - name: Restart service
        run: sudo systemctl restart podman-compose@adrian-altner.de.service

      - name: Prune
        run: |
          sudo podman container prune -f 2>/dev/null || true
          sudo podman image prune --external -f 2>/dev/null || true
          sudo podman image prune -f 2>/dev/null || true
          sudo podman builder prune -af 2>/dev/null || true
```
## Secrets stay in Forgejo

Anything sensitive (API tokens for webmention.io and webmention.app in my case) lives in Settings → Actions → Secrets and is injected into the job as `${{ secrets.NAME }}`. Forgejo stores them encrypted, and the workflow logs automatically mask the values. The tokens are referenced from exactly two places: the CI workflow file (committed) and Forgejo's encrypted store (never in the repo).
The build-time token is passed into the container as an `ARG`, used only during the build stage, and not present in the final runtime image; a quick `podman run --rm <image> env | grep -i webmention` confirms it's gone.
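My actual Containerfile isn't shown here, but the pattern is the standard multi-stage one: the `ARG` is declared only in the build stage, so it never reaches the runtime layer. A sketch (stage names, base images, and paths are illustrative, not my real file):

```dockerfile
# --- build stage: the token exists only here ---
FROM docker.io/library/node:22 AS build
ARG WEBMENTION_TOKEN
WORKDIR /app
COPY . .
# Token consumed at build time (e.g. baked into the static output)
RUN WEBMENTION_TOKEN="${WEBMENTION_TOKEN}" npm ci && npm run build

# --- runtime stage: no ARG declared, so nothing to leak ---
FROM docker.io/library/nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Because the runtime stage never declares the `ARG`, neither the environment nor the final image's layer history carries the token.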
## The one gotcha: Node on the host

The first real workflow run failed immediately with:

```
Cannot find: node in PATH
```
`actions/checkout@v4` is a JavaScript-based action. On a runner using the `host` label, it runs directly on the VPS and needs a Node interpreter available in `PATH`. One apt install later and the runner was happy:
```bash
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo systemctl restart forgejo-runner
```
## Result

From a cold `git push origin main`, the whole pipeline (checkout, rsync, Podman build, systemd restart, prune, Webmention pings) completes in about 1 minute 15 seconds. No SSH keys to rotate, no laptop involved, no mystery about which version of the code is live.
The runner itself uses about 5 MB of RAM while idle, polling Forgejo every two seconds for new jobs. Resource overhead is negligible compared to the convenience of push-to-deploy on infrastructure I fully control.
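That poll frequency is the runner's fetch interval, which is tunable in `config.yaml` if even two seconds feels wasteful (the key name comes from the generated config; the value shown is an assumption about the default):

```yaml
runner:
  # How often the runner asks Forgejo for new jobs
  fetch_interval: 2s
```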