The Codeberg infrastructure wiki

This is the central bird's-eye overview wiki for Codeberg's currently active infrastructure.

Everything in this wiki is licensed under Creative Commons Attribution 4.0 International.

Please make a backup on your local machine and keep it up to date, in case Codeberg goes down and you need to troubleshoot:

git clone git@codeberg.org:Codeberg-Infrastructure/meta.git Codeberg-Infrastructure-Wiki
echo "cd '$PWD/Codeberg-Infrastructure-Wiki' && sudo -u $USER git pull --rebase" | sudo tee /etc/cron.daily/codeberg-infrastructure-wiki-sync
sudo chmod +x /etc/cron.daily/codeberg-infrastructure-wiki-sync
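
To check that cron will pick the script up, you can dry-run the cron.daily directory (run-parts is what Debian uses to execute it; --test only prints what would run, without executing anything):

run-parts --test /etc/cron.daily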

Servers

Codeberg is mostly based on physical servers and supported by some virtual servers running at German cloud providers.

Details on hardware can be found in the hardware folder.

Netcup VPS

Hostname: smtp.codeberg.org

The Netcup VPS is the oldest server currently being used by Codeberg. As the name implies, netcup is the provider of this VPS and it has the following specifications:

  • Six virtual cores.
  • 32GB RAM.
  • 800GB SSD.
  • ext4 as file system.
  • Debian Stable as operating system.

This server is running the following services:

  • Email, the main mail server for Codeberg. It is accompanied by a shell script that throttles automatic registrations.
  • HAProxy, this is the reverse proxy for this server.
  • LXC containers:
    • support (ticketing system, not in use yet)
    • ci-staging (staging environment for Woodpecker CI)

This server is manually managed, and most of the scripts and configuration files live over on deploy-server.

Backups: Offsite backups are sent to Otto's personal backup machine via borgbackup (encrypted). The backups are performed by an irregular backup routine (every few days depending on sunshine / power availability).

Netcup tips & tricks

The /root directory is quite messy, but contains some useful stuff that has not yet been properly set up elsewhere.

Kampenwand

Kampenwand is a bare-metal server currently being used by Codeberg. The hardware is owned by Codeberg, and the server is placed in a rack managed by in-berlin at the AK (Tempelhof) location. The specifications of the server can be found in the hardware sheet.

This server is running the following services: (TODO: keep updated)

  • HAProxy, this is the reverse proxy for this server. All requests from codeberg.page and ci.codeberg.org go to this service. Configuration file.
  • Ceph cluster via systemd and podman, admin panel is running on port :8443 (see external Ceph guide)
  • LXC containers (a lot, please use lxc-ls for an up-to-date overview; see the example after this list)
    • gitea-staging, https://codeberg-test.org
    • forgejo-migration, usually offline, mounts a Ceph snapshot and a duplication of production's database
    • build is used to deploy Forgejo and manage our fork
    • pages, the Pages server; it runs in an LXC container and serves codeberg.page and its subdomains.
    • ci (frontend of https://ci.codeberg.org) and ci-agent (the actual builds)
    • backup has a readonly mount of btrfs and ceph
    • admin runs Garbage collection on production Git repos and is planned for future moderation tasks
    • static serves static pages via lighttpd and updates them via cron job (blog, docs, design, ...)
    • translate, a manually managed LXC container that runs a modified Weblate, built via Docker and deployed with docker-compose. The docker-compose repo contains the documentation for the Codeberg-specific configuration and adjustments. It is responsible for serving https://translate.codeberg.org/.
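
For a quick look around, list the containers and attach to one (the container name below is just an example; lxc-ls shows the real ones):

lxc-ls --fancy          # list containers with state and IP addresses
lxc-attach -n pages     # open a shell inside the pages container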

This server was initially managed via Ansible; configurations and services live over on configuration-as-code. It is still in use for the ci and pages containers. Newer configuration changes and new containers are being managed by scripted-configuration.

SSH access

To the host machine: use port 19198. To guests: use port 19198 and connect via the jump user.

This can be added to your SSH config:

Host *.lxc.local
  User <your actual username>
  ProxyJump codeberg.in-berlin.de


Host codeberg.in-berlin.de
  User jump
  Port 19198
  ForwardAgent yes

Optionally, you can configure your SSH key:

  IdentityFile ~/.ssh/<your_id>
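
With this config in place, reaching a guest is a single command (the container name is just an example; see lxc-ls on the host for the real ones):

ssh pages.lxc.local    # hops via jump@codeberg.in-berlin.de:19198 automatically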

(TODO: still relevant?) On the host machine, IP forwarding for manually managed services is done via iptables. The script that configures this is: /root/container_portforwarding.sh.
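
The script itself is not reproduced here, but a typical rule of that kind looks roughly like this (port and container address are placeholders, not the actual values):

# forward host port 2222 to SSH inside a container (placeholder values)
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.4:22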

status.codeberg.eu

status.codeberg.eu is a Hetzner CX11 shared-vCPU cloud instance currently being used by Codeberg. It hosts Uptime Kuma in a Docker container.

  • 1 shared vCPU
  • 2 GB RAM
  • 20 GB storage

Status Monitor

The status monitor at status.codeberg.org is powered by Uptime Kuma.

Management of monitors, announcements and so on is available through the dashboard (auth required).

Deployment:

  • Uptime Kuma is deployed as a Docker container
  • it serves plain HTTP on port 3001 and needs a reverse proxy for HTTPS & TLS termination
  • persistent data is stored in /var/lib/docker/volumes/uptime-kuma (mostly a SQLite database)
  • scripted-configuration.git/inside.sh includes updating and starting Uptime Kuma
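
In essence, the deployment boils down to the upstream-documented docker run invocation; a sketch (the local-only port binding is an assumption here, inside.sh has the authoritative command):

docker run -d --restart=always \
  -p 127.0.0.1:3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma louislam/uptime-kuma:1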

Ceph

Ceph is a distributed storage system: it provides storage that is spread across multiple servers connected over the internet. Codeberg uses Ceph to avoid a single server running out of storage, by configuring services that are expected to keep large amounts of data on the file system to use Ceph as their storage backend.

The infrastructure for Ceph currently looks like:

  • A Ceph node in the Ceph cluster runs a Ceph OSD service.
  • Ceph Manager, deployed on kampenwand: serves the interfaces for Ceph monitoring and management and provides additional monitoring.
  • Ceph Monitor, deployed on kampenwand: monitors the Ceph nodes.
  • Ceph Metadata Server, deployed on kampenwand: allows Ceph to be mounted via CephFS on servers (see the mount sketch after this list).
  • Ceph OSD, deployed on Ceph nodes: provides the actual storage to the Ceph cluster.
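
For illustration, mounting a CephFS filesystem on a client typically looks like this (monitor address, client name and key file are placeholders, not our actual values):

mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs -o name=myclient,secretfile=/etc/ceph/myclient.secret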

kampenwand runs multiple Ceph daemons, managed by Ceph-internal tools in a podman cluster (also exposed as systemd services).
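
They can be inspected from the host with standard tooling, for example (assuming the ceph CLI and admin keyring are available on the host):

systemctl list-units 'ceph*'   # the daemons as systemd units
sudo podman ps                 # the underlying containers
sudo ceph -s                   # overall cluster health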

Ceph is really complex, so it has its own page at Ceph Deep Dive.

Firewall

We have UFW in place between important hosts; to keep them from blocking each other, @fnetx ran the following allow rules:

Netcup VPS:

ufw allow from 10.30.35.1
ufw allow from 2001:67c:1401:20f0::1
ufw allow from 217.197.91.145

Kampenwand:

ufw allow from 193.26.156.135
ufw allow from 10.30.35.3
ufw allow from 2a03:4000:4c:e24:85e:10ff:fef8:a405
ufw allow from 10.0.3.4
ufw allow from fe80::216:3eff:fe37:fce1

This is a temporary solution, and it does not prevent Fail2ban from completely blocking the IPs :-(
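
If that needs fixing at some point, Fail2ban's own whitelist would be the natural place; a sketch using the addresses from above (we have not verified this is configured anywhere, and the path is the Debian default):

# /etc/fail2ban/jail.local
[DEFAULT]
ignoreip = 127.0.0.1/8 10.30.35.1 217.197.91.145 193.26.156.135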

Certificates

On Netcup VPS, these are managed by Certbot. There's a script in /root that allows rebuilding this.

On Kampenwand, Lego is used with a custom wrapper in /usr/local/bin/lego-wrapper which respects /etc/default/lego for the domains.

Lego automatically renews the default domains via the namesilo API. This is triggered via a crontab:

0 0 * * * /usr/local/bin/lego-wrapper --default-domains renew

When adding subdomains to codeberg.org, you don't have to worry about this.
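
To check that a renewal actually went through, you can inspect the served certificate from outside (plain openssl, nothing Codeberg-specific):

echo | openssl s_client -connect codeberg.org:443 -servername codeberg.org 2>/dev/null | openssl x509 -noout -dates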