I’m going to show you how I automated the build and deployment process of my static webpage (yes, the one you are currently reading).

It all started when my VPS hosting company ran a really good deal that got me two .de domains for cheap. Since I didn’t really know what to do with them, I used both domains as an excuse to get a bit deeper into hosting a static website behind Cloudflare.

I’ve had my Cloudflare account for quite a while, but never really tried to put my other self-hosted services behind their CDN, because I didn’t want to deal with configuring an OpenID provider to make it work with my SSO (Authelia).

But with my newly acquired domains, I had two main goals:

  • Host the domains on the same web server I use for all my other services, but make the site accessible only through Cloudflare, never directly via the web server.
  • Automate the process of building and deploying the static files

Applications used:

  • Caddy as my web server
  • Hugo as the static site generator that renders the pages
  • Gitea for my self-hosted Git
  • Gitea Actions as my deployment runner, which is almost fully compatible with GitHub Actions
  • Docker, since all my self-hosted services are containerized

Caddy configuration:

In order to have the static content hosted on my Caddy web server, I need to edit the Caddyfile and add a site block matching the new domain name (julius-roettgermann.de). As I’m doing the www. redirect directly on the CDN level, I don’t need to include it in the Caddyfile. The same goes for my second domain (jroettgermann.de), which is redirected by Cloudflare and never hits my web server.
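
For reference, if the www. redirect weren’t handled by Cloudflare, a tiny extra site block in Caddy would do the same job (not part of my actual config, just an illustration):

www.julius-roettgermann.de {
	redir https://julius-roettgermann.de{uri} permanent
}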

To make sure that only requests coming from Cloudflare are served by Caddy, I’ve added a named matcher (@hasToken) together with a handle block, so the content is only served if a pre-generated token is present in the request headers. The header is added by Cloudflare when it makes the origin request to my web server, and since that connection is TLS-encrypted, the header is never visible to the end user.

I’ve also set up Cloudflare’s Authenticated Origin Pulls together with a Cloudflare origin certificate, so Caddy doesn’t request a new Let’s Encrypt certificate but instead uses a Cloudflare-signed one, which is only valid between Cloudflare and the origin server.

Full Caddyfile site block:

julius-roettgermann.de {
	import logging
	encode gzip
	header {
		Strict-Transport-Security "max-age=31536000;"
		X-XSS-Protection "1; mode=block"
		X-Frame-Options "DENY"
		X-Robots-Tag "all"
		-Server
	}
	# Cloudflare origin certificate instead of an automatic Let's Encrypt one
	tls /etc/caddy/certs/cf_cert.pem /etc/caddy/certs/cf_key.pem
	# requests only match if they carry the pre-shared token header set by Cloudflare
	@hasToken header Cloudlfare-Request-Token {$CDN_TOKEN}
	handle @hasToken {
		root * /var/www/static/StaticPage/
		file_server
	}
	handle_errors {
		handle @hasToken {
			rewrite * /404.html
			file_server
		}
	}
	# anything without the token (i.e. not coming through Cloudflare) gets dropped
	handle {
		abort
	}
}
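
A quick way to sanity-check the matcher is to talk to the origin directly, bypassing Cloudflare. This is just a hypothetical smoke test (203.0.113.10 stands in for the origin IP, and -k is needed because the origin certificate isn’t publicly trusted):

# without the token header Caddy aborts the connection
curl -sk --resolve julius-roettgermann.de:443:203.0.113.10 https://julius-roettgermann.de/ -o /dev/null -w '%{http_code}\n'

# with the token header the static page is served (expect 200)
curl -sk --resolve julius-roettgermann.de:443:203.0.113.10 https://julius-roettgermann.de/ -H "Cloudlfare-Request-Token: $CDN_TOKEN" -o /dev/null -w '%{http_code}\n'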

Gitea configuration:

Since release 1.19, Gitea has offered a built-in CI/CD solution that is similar to and largely compatible with GitHub Actions, so Drone is no longer needed to build a CI/CD pipeline with Gitea. The Gitea Actions runner (act_runner) is based on a fork of a project called act and needs to be installed separately from Gitea.

Ideally, this so-called “runner” is installed on a completely separate host from the main Gitea instance, because it spawns Docker containers which, depending on the workload, can easily interfere with the running Gitea instance.

I really didn’t want to rent a second VPS to keep my runners separate, and since my server is rarely close to capacity, I figured it should be fine to run them on the same host.

To get the runner container running alongside my existing Gitea/MySQL setup, I’ve extended my docker-compose file with the Gitea runner service:

  runner:
    image: gitea/act_runner:latest
    container_name: gitea_runner
    restart: always
    environment:
      - USER_UID=${USER_UID}
      - USER_GID=${USER_GID}
      - CONFIG_FILE=/config.yaml
      - GITEA_INSTANCE_URL=http://gitea:3000
      - GITEA_RUNNER_REGISTRATION_TOKEN=${REGISTRATION_TOKEN}
      - GITEA_RUNNER_NAME=gitea_runner_local
      #- GITEA_RUNNER_LABELS=${RUNNER_LABELS}
    volumes:
      - /opt/DockerContainer/gitea/act_runner/config.yaml:/config.yaml
      - /opt/DockerContainer/gitea/act_runner/data:/data
      # the runner talks to the host's Docker daemon to spawn job containers
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - gitea
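
The mounted config.yaml is optional; act_runner can generate a default one with act_runner generate-config. A minimal sketch of the parts I actually care about (field names taken from that generated default, values are just examples):

runner:
  # how many jobs may run in parallel on this host
  capacity: 1
  # labels map the "runs-on" value of a workflow to a container image
  labels:
    - "ubuntu-latest:docker://ghcr.io/catthehacker/ubuntu:act-latest"

container:
  # attach the spawned job containers to the existing gitea network
  network: gitea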

It’s also necessary to enable the Actions feature in Gitea, either by adding it to Gitea’s app.ini, or by setting it as an environment variable directly in the docker-compose.yml of Gitea itself:

      - GITEA__actions__ENABLED=true
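
The equivalent setting in app.ini looks like this:

[actions]
ENABLED = true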

After creating the REGISTRATION_TOKEN in the Gitea dashboard, the runner should automatically register with the Gitea host and be ready to be assigned to a repository.
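
The token can be created under Site Administration → Actions → Runners, or via Gitea’s CLI. A rough sketch of the CLI route, assuming the container name gitea from my compose setup (the exact invocation may differ depending on image and config path):

# generate a runner registration token inside the Gitea container
docker compose exec -u git gitea gitea actions generate-runner-token

# put it into the .env file read by docker-compose and start the runner
echo "REGISTRATION_TOKEN=<token from above>" >> .env
docker compose up -d runner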

Creating the build and deploy pipeline was actually quite easy, as Gitea Actions is almost fully compatible with GitHub Actions and can use all the handy native and community-created actions.

Gitea Actions pipeline:

To build a static website with Hugo, not much more is needed than the Markdown and template/theme files plus the hugo binary, which builds the /public directory containing all the static files. All these files are committed to a private repository on my Gitea instance, along with the Gitea Actions pipeline file under .gitea/workflows/build_and_deploy.yaml. The new pipeline automatically shows up in the Actions tab of the repository view and starts depending on the configured trigger.
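
Roughly, the repository looks something like this (the exact layout depends on the theme; bin/ holds the test script and is also where the pipeline later unpacks the hugo binary):

.
├── .gitea/
│   └── workflows/
│       └── build_and_deploy.yaml
├── bin/
│   └── test_static_page.sh
├── content/        # the markdown files
├── themes/         # the theme, pulled in as a git submodule
├── static/
└── hugo.toml       # or config.toml, depending on the Hugo version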

It’s basically just checking out the repository, finding and downloading the latest Hugo release, building the static files, testing the status code of the built page and deploying everything to my server using rsync:

name: Build, test and deploy julius-roettgermann.de
run-name: ${{ gitea.actor }} is building, testing and deploying the static page
on:
  push:
    branches:
      - prod

jobs:
  deploy-prod:
    runs-on: ubuntu-latest
    steps:
      - run: echo "The job was automatically triggered by a ${{ gitea.event_name }} event."
      - run: echo "The name of your branch is ${{ gitea.ref }} and your repository is ${{ gitea.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v3
        with:
          submodules: 'true'
          github-server-url: 'https://git.server.de/'
      - name: Install apt packages
        run: apt update && apt install -y jq rsync
      - name: Get latest Hugo version
        run: |
          url=$(curl --silent "https://api.github.com/repos/gohugoio/hugo/releases/latest" | jq -r '.assets[] | select(.name | contains("linux-amd64.tar.gz")) | .browser_download_url' | grep -E 'hugo_[0-9]+\.[0-9]+\.[0-9]+_linux-amd64.tar.gz')
          wget -P /tmp/hugo/ "$url"
          version=$(echo "$url" | grep -oP 'hugo_\K[0-9]+\.[0-9]+\.[0-9]+')
          echo "Downloaded Hugo version: $version"
      - name: Unpack Hugo
        run: tar -xf /tmp/hugo/* -C ${{ gitea.workspace }}/bin
      - name: Build the static webpage
        run: ${{ gitea.workspace }}/bin/hugo --minify
      - name: Test static page
        run: bash ${{ gitea.workspace }}/bin/test_static_page.sh
      - name: Create private key
        run: |
          echo "${{ secrets.ACT_RUNNER_KEY }}" > /tmp/act_runner_key
          chmod 600 /tmp/act_runner_key
      - name: rsync public directory
        run: |
          rsync -avz --delete -e "ssh -i /tmp/act_runner_key -o StrictHostKeyChecking=no" ${{ gitea.workspace }}/public/* [email protected]:/opt/DockerContainer/caddy/static/StaticPage/
      - name: purge Cloudflare cache
        run: |
          curl --request POST \
            --url https://api.cloudflare.com/client/v4/zones/${{ secrets.CF_ZONE_ID }}/purge_cache \
            --header 'Content-Type: application/json' \
            --header 'Authorization: Bearer ${{ secrets.CF_CACHE_PURGE }}' \
            --data '{"purge_everything": true}'
      - run: echo "This job's status is ${{ job.status }}."

Testing the build output:

To test the built page, I wrote a short bash script which spins up Python’s http.server and checks the status code of the landing page. If the web server doesn’t respond with a 200, the pipeline fails before the broken static pages are deployed:

#!/bin/bash
PORT=8080
python3 -m http.server $PORT --directory public/ &

SERVER_PID=$!

# Give it a moment to start
sleep 2

# Check if the server is running
if ! ps -p $SERVER_PID > /dev/null; then
    echo "HTTP server failed to start."
    exit 1
fi

# Check HTTP status
status_code=$(curl -o /dev/null -s -w "%{http_code}" http://localhost:$PORT)

# Kill the http server
kill $SERVER_PID

# Check if status code is 200
if [ "$status_code" -ne 200 ]; then
    echo "Website returned a non-200 status code: $status_code"
    exit 1
fi

After testing and deployment are done (which is really fast, about 20 seconds in total), the pipeline sends a cache purge request to Cloudflare so that the newly deployed content is immediately available. The private key and the other tokens are all stored as secrets in Gitea and can be referenced just like secrets in GitHub Actions.