+++
title = "Homelab Adventure - Part 4: Application Hosting and Monitoring"
description = "Homelab application hosting and monitoring"
date = 2025-07-13
slug = "homelab-adventure-part-4"
[taxonomies]
tags = ["Linux", "Homelab"]
+++

Welcome to my journey in building my Homelab. This is part of a multipart series; in the last part I showed how to set up an internal network across multiple hosts. This "final" post covers application hosting and monitoring.

<!-- more -->

[**Part 1: The Adventure Begins**](@/posts/homelab-adventure-part-1.md)
[**Part 2: Configuration Management**](@/posts/homelab-adventure-part-2.md)
[**Part 3: Internal Network**](@/posts/homelab-adventure-part-3.md)
[**Sidequest: Switching from Salt to Ansible**](@/posts/homelab-switching-salt-to-ansible.md)
**Part 4: Application Hosting and Monitoring (You are here!)**

## Application Hosting

All applications are hosted in [Docker](https://docs.docker.com/), configured by [Ansible](https://docs.ansible.com/ansible/latest/index.html), and kept updated via [Watchtower](https://github.com/containrrr/watchtower). I went this route instead of [Kubernetes](https://kubernetes.io/) because:

- I don't have shared storage across all servers
- I don't need multiple instances of the applications
- Most applications are tied to specific servers

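Watchtower itself is just another container. A minimal sketch of how it could be deployed (this is my assumption, not the post's actual config; `WATCHTOWER_LABEL_ENABLE` makes Watchtower only update containers that opt in via the enable label used below):

```yaml
# Hypothetical roles/container-watchtower/tasks/main.yml -- not from the post
- name: watchtower-container
  community.docker.docker_container:
    name: watchtower
    image: containrrr/watchtower:latest
    restart_policy: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    env:
      # only update containers carrying the
      # 'com.centurylinklabs.watchtower.enable: true' label
      WATCHTOWER_LABEL_ENABLE: "true"
```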
Each container gets a separate role in Ansible that creates a storage folder, a user, and the container. Then in the main playbook I can specify which roles apply to which hosts or to the whole group.

```yaml
# roles/container-readeck/tasks/main.yml
- name: readeck-user-present
  ansible.builtin.user:
    name: readeck
    shell: /usr/sbin/nologin
    uid: 2002

- name: readeck-config-dir
  ansible.builtin.file:
    path: /data/readeck
    state: directory
    owner: readeck
    group: readeck
    mode: '0755'

- name: readeck-container
  community.docker.docker_container:
    name: readeck
    image: codeberg.org/readeck/readeck:latest
    restart_policy: unless-stopped
    user: '2002:2002' # must be the ID, not the name
    volumes:
      - "/data/readeck:/readeck"
    labels:
      com.centurylinklabs.watchtower.enable: 'true'
      traefik.enable: 'true'
```

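The mapping in the main playbook then looks roughly like this (group and host names here are placeholders, not my actual inventory):

```yaml
# Hypothetical excerpt of the main playbook
- hosts: docker_hosts        # roles for the whole group
  roles:
    - container-traefik

- hosts: apphost1            # roles pinned to a specific server
  roles:
    - container-readeck
```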
### Exposing applications

I use [Traefik](https://doc.traefik.io/traefik/) to expose each application based on labels. I use a wildcard certificate to enable HTTPS, but Let's Encrypt could also be used. If you are worried about mounting `docker.sock` directly into Traefik, you can set up [socket-proxy](https://github.com/wollomatic/socket-proxy). For managing DNS, I use [DNSControl](https://dnscontrol.org/); Ansible could also do this, but it is more verbose.
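For a rough idea of the DNSControl side, a `dnsconfig.js` could contain something like the following (domain, provider, and addresses are placeholders, not my actual setup; the provider name refers to an entry in `creds.json`):

```javascript
// Hypothetical dnsconfig.js -- all names and addresses are placeholders
var REG_NONE = NewRegistrar("none");
var DNS_PROVIDER = NewDnsProvider("cloudflare");

D("example.com", REG_NONE, DnsProvider(DNS_PROVIDER),
    // the host running Traefik
    A("apphost1", "192.168.1.10"),
    // each exposed application points at the Traefik host
    CNAME("readeck", "apphost1.example.com.")
);
```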

```yaml
# roles/container-traefik/tasks/main.yml
- name: traefik-user-present
  ansible.builtin.user:
    name: traefik
    shell: /usr/sbin/nologin
    uid: 2001

- name: traefik-config-dir
  ansible.builtin.file:
    path: /data/traefik
    state: directory
    owner: traefik
    group: traefik
    mode: '0755'

- name: traefik-cert-dir
  ansible.builtin.file:
    path: /data/traefik/config/certs
    owner: traefik
    group: traefik
    state: directory
    mode: '0755'

- name: traefik-dynamic-config
  ansible.builtin.template:
    src: dynamic.yaml.j2
    dest: /data/traefik/config/dynamic.yaml
    owner: traefik
    group: traefik
    mode: '0644'
  notify: traefik-container-restart

- name: traefik-config
  ansible.builtin.template:
    src: traefik.yaml.j2
    dest: /data/traefik/config/traefik.yaml
    owner: traefik
    group: traefik
    mode: '0644'
  notify: traefik-container-restart

- name: traefik-cert
  ansible.builtin.copy:
    src: "{{ item.key }}.cert"
    dest: "/data/traefik/config/certs/{{ item.key }}.cert"
    owner: traefik
    group: traefik
    mode: '0644'
  loop: "{{ traefik_certs | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  diff: false
  notify: traefik-container-restart
  when: traefik_certs is defined

- name: traefik-certkey
  ansible.builtin.copy:
    dest: /data/traefik/config/certs/{{ item.key }}.key
    content: "{{ item.value.key }}"
    owner: traefik
    group: traefik
    mode: '0644'
  loop: "{{ traefik_certs | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  diff: false
  notify: traefik-container-restart
  when: traefik_certs is defined

- name: traefik-container
  community.docker.docker_container:
    name: traefik
    image: traefik:3.2
    restart_policy: unless-stopped
    command: "--configFile=/config/traefik.yaml"
    user: "2001:2001" # must be the ID, not the name
    published_ports:
      - 80:8080   # entrypoints listen on 8080/8443 inside the container
      - 443:8443
      # Use these instead if you want Traefik to only listen on the internal network
      # (define 'traefik_internal_interface' in host_vars or group_vars):
      # - "{{ vars['ansible_'~traefik_internal_interface].ipv4.address }}:80:8080"
      # - "{{ vars['ansible_'~traefik_internal_interface].ipv4.address }}:443:8443"
    volumes:
      - /data/traefik/config:/config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      com.centurylinklabs.watchtower.enable: 'true'
```

```yaml
# roles/container-traefik/handlers/main.yml
- name: traefik-container-restart
  community.docker.docker_container:
    name: traefik
    restart: true
```

This dynamic config sets up the certificates and an `internal-network` middleware chain combining HTTPS redirection with an `ipAllowList`. The chain can be applied to a container via the `traefik.http.routers.<app_name>.middlewares: 'internal-network@file'` label (replace `<app_name>` with the application name).

```yaml
# roles/container-traefik/templates/dynamic.yaml.j2
{% if traefik_certs %}
tls:
  certificates:
{% for name in traefik_certs.keys() %}
    - certFile: /config/certs/{{ name }}.cert
      keyFile: /config/certs/{{ name }}.key
{% endfor %}
{% endif %}

http:
  middlewares:
    internal-network:
      chain:
        middlewares:
          - internal-allowlist
          - https-only

    https-only:
      redirectScheme:
        scheme: https

    internal-allowlist:
      ipAllowList:
        sourceRange:
          - "{{ traefik_internal_iprange }}" # internal network
          - "172.17.0.0/16" # docker range
          - "192.168.1.0/24" # home network range
```

This static config sets up the entrypoints, HTTPS redirection, and the Docker and file providers.

```yaml
# roles/container-traefik/templates/traefik.yaml.j2
log:
  level: INFO

entryPoints:
  web:
    address: ":8080" # mapped to host port 80 by docker
    http:
      redirections:
        entryPoint:
          to: ":443"
          scheme: https
  websecure:
    address: ":8443" # mapped to host port 443 by docker
    http:
      middlewares:
        - https-only@file # default everything to https
      tls: {}

providers:
  docker:
    exposedByDefault: false # require `traefik.enable: 'true'` label
    defaultRule: "Host(`{{ '{{' }} normalize .Name {{ '}}' }}.{{ traefik_domain }}`)"
    network: bridge # default network
  file:
    filename: "/config/dynamic.yaml"
    watch: true

serversTransport:
  insecureSkipVerify: true # allow self-signed certs on backend containers
```

Now any container with the `traefik.enable: 'true'` label will automatically be exposed on `<container_name>.<traefik_domain>`. If the `traefik.http.routers.<app_name>.middlewares: 'internal-network@file'` label is also set, the application will only be accessible from IPs on the `ipAllowList`.

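Putting it together, restricting an application to the internal network is just a matter of labels on its container task. A sketch, where `myapp` and its image are placeholder names:

```yaml
# Illustrative task -- 'myapp' is a placeholder, not a real application
- name: myapp-container
  community.docker.docker_container:
    name: myapp
    image: example/myapp:latest
    restart_policy: unless-stopped
    labels:
      com.centurylinklabs.watchtower.enable: 'true'
      traefik.enable: 'true'
      # only reachable from the ipAllowList ranges
      traefik.http.routers.myapp.middlewares: 'internal-network@file'
```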
## Monitoring

For general server metrics and monitoring, I use [Netdata](https://www.netdata.cloud/). I have set up some additional monitoring modules:

- [SMART monitoring](https://learn.netdata.cloud/docs/collecting-metrics/hardware-devices-and-sensors/s.m.a.r.t.)
- [UPS monitoring](https://learn.netdata.cloud/docs/collecting-metrics/ups/apc-ups)
- [Container metrics](https://learn.netdata.cloud/docs/collecting-metrics/containers-and-vms/docker)

This gives alerts for things like high disk usage, high memory usage, the UPS failing over to battery, and SMART failures.

For custom monitoring, I use [Uptime Kuma](https://github.com/louislam/uptime-kuma). It works well for application health checks as well as custom webhook monitors. I have set up custom webhook monitors for:

- BTRFS Health
- BTRFS Scrubbing
- Backups

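A monitor like the BTRFS health one is essentially a periodic script reporting to an Uptime Kuma push endpoint. A sketch under assumptions: the URL, token, and mountpoint are placeholders, and the exact check is my guess at an implementation, not necessarily what I run.

```shell
#!/bin/sh
# Hypothetical BTRFS health push monitor (run from cron or a systemd timer)
KUMA_URL="${KUMA_URL:-https://uptime.example.com/api/push/XXXXXXXX}"
MOUNT="${MOUNT:-/data}"

# 'btrfs device stats --check' exits non-zero if any error counter is non-zero
if btrfs device stats --check "$MOUNT" >/dev/null 2>&1; then
    STATUS=up
    MSG=OK
else
    STATUS=down
    MSG=btrfs+errors
fi

# Report the result to the Uptime Kuma push monitor
curl -fsS "${KUMA_URL}?status=${STATUS}&msg=${MSG}" >/dev/null 2>&1 || true
```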
I have alerts from both Netdata and Uptime Kuma going to [Pushover](https://pushover.net/) and email.

[**Part 1: The Adventure Begins**](@/posts/homelab-adventure-part-1.md)
[**Part 2: Configuration Management**](@/posts/homelab-adventure-part-2.md)
[**Part 3: Internal Network**](@/posts/homelab-adventure-part-3.md)
[**Sidequest: Switching from Salt to Ansible**](@/posts/homelab-switching-salt-to-ansible.md)
**Part 4: Application Hosting and Monitoring (You are here!)**