Add documentation for rootless systemd service and podman quadlets (#927)

This commit is contained in:
Kevin Thomer
2026-02-19 11:29:55 -08:00
committed by GitHub
parent 67b7a08e4a
commit 18f10a9295
3 changed files with 487 additions and 11 deletions
@@ -128,6 +128,10 @@ docker run --restart unless-stopped \
ghcr.io/analogj/scrutiny:latest-collector
```
### Hub rootless installation using Podman Quadlets
See [docs/INSTALL_ROOTLESS_PODMAN.md](docs/INSTALL_ROOTLESS_PODMAN.md) for instructions.
## Manual Installation (without-Docker)
While the easiest way to get started with [Scrutiny is using Docker](https://github.com/AnalogJ/scrutiny#docker),
@@ -122,6 +122,11 @@ So you'll need to install the v7+ version using one of the following commands:
- `dnf install smartmontools`
- **FreeBSD:** `pkg install smartmontools`
The following additional dependencies are needed if you want to run the collector as an unprivileged user:
- systemd version > 235
- a restricted user account
### Directory Structure
Now let's create a directory structure to contain the Scrutiny collector binary.
@@ -133,40 +138,337 @@ mkdir -p /opt/scrutiny/bin
### Download Files
Next, we'll download the Scrutiny collector binary from the [latest Github release](https://github.com/analogj/scrutiny/releases). You are looking for the one titled **scrutiny-collector-metrics-linux-amd64** unless you know you are on ARM.
```sh
wget -O /tmp/scrutiny-collector-metrics https://github.com/AnalogJ/scrutiny/releases/latest/download/scrutiny-collector-metrics-linux-amd64
```
Optional, but recommended: before continuing, compare the SHA-256 checksum from the release page against the downloaded file, to make sure it hasn't been corrupted or tampered with (note that `sha256sum -c` expects two spaces between the hash and the filename):
`echo "SHA_GOES_HERE  /tmp/scrutiny-collector-metrics" | sha256sum -c`
For example, for the v0.8.6 release:
`echo "4c163645ce24e5487f4684a25ec73485d77a82a57f084808ff5aad0c11499ad2  /tmp/scrutiny-collector-metrics" | sha256sum -c`
followed by:
`sudo mv /tmp/scrutiny-collector-metrics /opt/scrutiny/bin/`
to move the binary to its final resting place.
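The download-and-verify steps above can be wrapped in a small helper; `verify_sha` is a hypothetical name used here for illustration, and GNU `sha256sum -c` expects two spaces between the hash and the filename:

```shell
#!/bin/bash
# hypothetical helper sketch: verify a downloaded file against an expected SHA-256
verify_sha() {
  local expected="$1" file="$2"
  # sha256sum -c expects lines of the form "<hash>  <file>" (two spaces)
  if echo "${expected}  ${file}" | sha256sum -c - >/dev/null 2>&1; then
    echo "OK: ${file}"
  else
    echo "MISMATCH: ${file}"
    return 1
  fi
}

# usage (hash shown is the v0.8.6 release checksum from above):
# verify_sha "4c163645ce24e5487f4684a25ec73485d77a82a57f084808ff5aad0c11499ad2" /tmp/scrutiny-collector-metrics
```

A mismatch makes the helper return non-zero, so it can gate a scripted install.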
### Prepare Scrutiny
Now that we have downloaded the required files, let's prepare the filesystem.
```sh
# Make sure the Scrutiny collector binary is executable.
chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics
```
If you are using SELinux, you may also need to do the following:
```sh
# tell SELinux to allow these binaries
sudo semanage fcontext -a -t bin_t "/opt/scrutiny/bin(/.*)?"
# update labels
sudo restorecon -Rv /opt/scrutiny/bin
```
### Start Scrutiny Collector, Populate Webapp
Next, we will manually trigger the collector to populate the Scrutiny dashboard:
> NOTE: if you need to pass a config file to the scrutiny collector, you can provide it using the `--config` flag.
```sh
/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
```
### Schedule Collector with (root) Cron
Finally, you need to schedule the collector to run periodically.
This may be different depending on your OS/environment, but it may look something like this:
```sh
# open root's crontab
sudo crontab -e
# add a line for Scrutiny
*/15 * * * * . /etc/profile; /opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
```
### Schedule Collector with Systemd (rootless)
Alternatively, you can run `scrutiny-collector-metrics` as a non-root user, as long as the relevant capabilities and permissions are granted.
#### Creating a Restricted Service Account
This is the account that will run `scrutiny-collector-metrics`. Note this isn't strictly needed for all setups, but it is useful from a logging/auditing perspective.
- Debian-based distros:
- `sudo adduser --system scrutiny-svc --group --home /opt/scrutiny-svc`
- RHEL-based distros:
- `sudo useradd --system --home-dir /opt/scrutiny-svc --shell /sbin/nologin scrutiny-svc`
Next, add the user to the `disk` group:
```sh
sudo usermod -aG disk scrutiny-svc
```
#### Creating a Restricted Systemd Service using AmbientCapabilities (easier)
This is the simpler setup, which lets you run scrutiny rootless, but depending on your needs it may require granting scrutiny more permissions than you would like.
1. go to `/etc/systemd/system`
2. create `scrutiny-collector.service` with the following contents:
```ini
[Unit]
Description=Daily Restricted Scrutiny Collector
After=network.target
[Service]
Type=oneshot
User=scrutiny-svc
Group=disk
ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
# --- PRIVILEGE LOCKDOWN ---
## CAP_SYS_RAWIO is needed for SATA drives
AmbientCapabilities=CAP_SYS_RAWIO
CapabilityBoundingSet=CAP_SYS_RAWIO
## unfortunately nvme drives require CAP_SYS_ADMIN
## if you want nvme drives you must use the following instead:
#AmbientCapabilities=CAP_SYS_RAWIO CAP_SYS_ADMIN
#CapabilityBoundingSet=CAP_SYS_RAWIO CAP_SYS_ADMIN
NoNewPrivileges=yes
# Security/sandboxing settings
KeyringMode=private
LockPersonality=yes
MemoryDenyWriteExecute=yes
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=no
## you can restrict devices using:
#DevicePolicy=closed
#DeviceAllow=/dev/sda r
#DeviceAllow=/dev/nvme0 r
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectClock=yes
ProtectHostname=yes
ProtectKernelLogs=yes
RemoveIPC=yes
RestrictSUIDSGID=true
# --- NETWORK LOCKDOWN ---
## use these to restrict what scrutiny can talk to over the network
## if using a hub on a different host you will need to change the values accordingly
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressDeny=any
IPAddressAllow=localhost
[Install]
WantedBy=multi-user.target
```
Additionally, for nvme drives you may need to create a udev rule, as on many systems `/dev/nvme*` is owned only by root:
##### Add udev rule `/etc/udev/rules.d/99-nvme.rules` with contents:
```
KERNEL=="nvme[0-9]*", GROUP="disk", MODE="0640"
```
then run the following commands to load the udev rule:
```sh
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=nvme --action=add
```
##### Pros:
- easy to maintain
- much better than running as root (especially if you don't need nvme drives)
- there are no privilege escalations needed
##### Cons:
NOTE: These cons basically only apply if a major supply-chain attack happens against scrutiny, and reflect a worst-case scenario that is unlikely to ever occur:
- CAP_SYS_RAWIO allows for data exfiltration/modification from SATA drives (ssh keys, /etc/shadow, etc)
- CAP_SYS_ADMIN would theoretically allow for significant system compromise
- nvme drives require a udev rule for reliable access
If you are happy with that, you can jump to [Create a Systemd Timer to run scrutiny-collector.service](#create-a-systemd-timer-to-run-scrutiny-collectorservice)
#### Creating a Restricted Systemd Service using sudo and Shim Script
If granting scrutiny `CAP_SYS_RAWIO` and/or `CAP_SYS_ADMIN` exceeds your risk appetite, you have another option, though it is more complicated and comes with its own set of pros/cons.
1. run `sudo mkdir -p /opt/smartctl-shim/bin`
2. create `/opt/smartctl-shim/bin/smartctl` with the following content, then make it executable with `sudo chmod +x /opt/smartctl-shim/bin/smartctl`:
```sh
#!/bin/bash
# Shim for accounts to use smartctl without being root
# for automation requires the account be in sudoers
exec /usr/bin/sudo /usr/sbin/smartctl "$@"
```
3. create a new `scrutiny-collector` file in `/etc/sudoers.d/`
4. inside `/etc/sudoers.d/scrutiny-collector` add the following:
```sh
scrutiny-svc ALL=(root) NOPASSWD: /usr/sbin/smartctl *
```
5. go to `/etc/systemd/system`
6. create scrutiny-collector.service with the following contents:
```ini
[Unit]
Description=Daily Restricted Scrutiny Collector
After=network.target
[Service]
Type=oneshot
User=scrutiny-svc
Environment="PATH=/opt/smartctl-shim/bin:/usr/bin:/bin"
ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
# --- PRIVILEGE LOCKDOWN ---
## we use sudo to elevate privileges for smartctl only, so no Ambient Capabilities are needed
AmbientCapabilities=
## CAP_SYS_RAWIO is needed for SATA drives
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_AUDIT_WRITE CAP_SYS_RAWIO CAP_SYS_RESOURCE
## unfortunately nvme drives require CAP_SYS_ADMIN
## if you want nvme drives you must do the following:
# CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_AUDIT_WRITE CAP_SYS_RAWIO CAP_SYS_ADMIN CAP_SYS_RESOURCE
## since sudo needs to be used to elevate permissions in this setup, we need to allow new privileges
NoNewPrivileges=no
# Security/sandboxing settings
KeyringMode=private
LockPersonality=yes
MemoryDenyWriteExecute=yes
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=no
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectClock=yes
ProtectHostname=yes
ProtectKernelLogs=yes
RemoveIPC=yes
RestrictSUIDSGID=true
# --- NETWORK LOCKDOWN ---
## use these to restrict what scrutiny can talk to over the network
## if using a hub on a different host you will need to change the values accordingly
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressDeny=any
IPAddressAllow=localhost
[Install]
WantedBy=multi-user.target
```
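As an aside, the PATH-shim mechanism used above (a wrapper script that shadows the real binary) can be sketched with a harmless command in place of `smartctl`/`sudo`; the `shimdir` variable and the `date` stand-in below are illustrative only:

```shell
#!/bin/bash
# sketch: a wrapper placed earlier in PATH intercepts calls to a command;
# the real shim instead does: exec /usr/bin/sudo /usr/sbin/smartctl "$@"
shimdir=$(mktemp -d)
cat > "$shimdir/date" <<'EOF'
#!/bin/bash
echo "shimmed: $(/bin/date "$@")"
EOF
chmod +x "$shimdir/date"

# callers that put the shim dir first in PATH pick up the wrapper transparently
PATH="$shimdir:$PATH" date +%Y
```

This is exactly what the `Environment="PATH=/opt/smartctl-shim/bin:..."` line in the service file relies on.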
##### Pros:
- the scrutiny binary itself will not have permissions like CAP_SYS_ADMIN
- much better than running as root (especially if you don't need nvme drives)
- `sudo` restricts privilege escalation to just `smartctl`
- no udev rule needed
##### Cons:
NOTE: These cons basically only apply if a major supply-chain attack happens against scrutiny, and reflect a worst-case scenario that is unlikely to ever occur:
- Any sort of privilege escalation attack in sudo could theoretically allow a compromised scrutiny to gain additional privileges, since the process has permission to escalate privileges in general
- Even though sudo only allows `smartctl`, it still has `CAP_SYS_RAWIO` and `CAP_SYS_ADMIN` so in theory the same attacks from the first method are possible, though now only with an exploit using smartctl instead of scrutiny directly
- Even though you don't need a udev rule, this approach adds a lot of additional administrative overhead
- while the scrutiny binary itself isn't elevated, it spawns a sub-process that runs as root (`smartctl`, via `sudo`)
#### Create a Systemd Timer to run scrutiny-collector.service
First, let's test our service. It doesn't matter which method you used above; either way you need to load and run it.
```sh
# reload changes for systemd services
sudo systemctl daemon-reload
# enable the service
sudo systemctl enable scrutiny-collector.service
# now run the service
sudo systemctl start scrutiny-collector.service
```
You should now see the data in your hub instance of scrutiny. If you run into issues, turn on debug logging for scrutiny and check your system logs using `journalctl`; a permission may be missing or wrong.
Now that things have been validated, let's create the systemd timer to run the service on a schedule:
1. if you are not still there, go to `/etc/systemd/system`
2. create `scrutiny-collector.timer` with the following contents:
```ini
[Unit]
Description=Run Scrutiny Collector daily at 2am
[Timer]
# Standard calendar trigger
OnCalendar=*-*-* 02:00:00
# Ensures the job runs if the computer was off at 2am
Persistent=true
# Minimizes I/O spikes by staggering start time
RandomizedDelaySec=30
[Install]
WantedBy=timers.target
```
Update the schedule as you see fit for your needs.
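For example (an illustrative sketch, not part of the original guide), running every six hours instead of daily would change only the `OnCalendar` line:

```ini
[Timer]
# fire at 00:00, 06:00, 12:00 and 18:00 every day
OnCalendar=*-*-* 00/6:00:00
Persistent=true
RandomizedDelaySec=30
```

You can sanity-check any `OnCalendar` expression with `systemd-analyze calendar '*-*-* 00/6:00:00'`, which prints the next times it would elapse.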
Once you are satisfied with your timer, you'll need to load and enable it:
```sh
# reload changes for systemd services
sudo systemctl daemon-reload
# now enable the timer
sudo systemctl enable --now scrutiny-collector.timer
```
That's it, you're done! You can check the status of the timer using `sudo systemctl status scrutiny-collector.timer`.
@@ -0,0 +1,170 @@
# Rootless Podman Quadlet Install
Note: These instructions are written with Podman 4.9 in mind, as that's what's available on Ubuntu 24.04. Podman 5+ can simplify the process by using a `.pod` file to run both the hub and the influxdb instance in the same pod, sharing localhost; adding documentation for that would be a fairly trivial change should anyone want to. This document isn't Ubuntu-specific: targeting Podman 4.9 is a deliberate choice so the guide applies to the vast majority of Podman users, regardless of which Linux distro they use.
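For the curious, a hedged sketch of the Podman 5+ `.pod` variant mentioned above (untested here; the `scrutiny.pod` filename and keys are assumptions based on the quadlet `[Pod]` unit type, and each `.container` file would then use `Pod=scrutiny.pod` instead of `Network=scrutiny-net`):

```ini
# scrutiny.pod -- Podman 5.0+ only, illustrative sketch
[Pod]
PodName=scrutiny
# publish the hub's port at the pod level; containers inside share localhost
PublishPort=8080:8080/tcp
```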
### Dependencies
- Podman ≥ 4.9
- systemd ≥ 250 (for quadlet support)
- a restricted service account
### Creating a Service Account
See [Creating a Restricted Service Account](INSTALL_MANUAL.md#creating-a-restricted-service-account) for instructions.
While you do not need to use the same account as the collector, this guide assumes you are using it for all of its examples.
In addition to those steps, you will need to create sub ids and enable lingering for the user:
```sh
# add sub-uids and sub-gids, you may need to adjust numbers if you have other rootless quadlets running for other users already
# it is not recommended to go below 100000
# we choose to start at 500000 in the event you have some other podman accounts
sudo usermod --add-subuids 500000-565535 scrutiny-svc
sudo usermod --add-subgids 500000-565535 scrutiny-svc
# We want the quadlets to stay running even if the user isn't logged in
sudo loginctl enable-linger scrutiny-svc
```
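As a sanity check, the range chosen above spans exactly 65536 subordinate IDs, the conventional allocation size for one rootless user:

```shell
#!/bin/bash
# the sub-ID range granted to scrutiny-svc above
start=500000
end=565535
echo $(( end - start + 1 ))   # prints 65536
```

If you move the starting offset, keep the range the same size so the container's full UID/GID mapping still fits.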
### Directory Structure
Once the account is created, you will need to grab its ID and create a few directories for the data files and rootless quadlet files:
```sh
# create folders for config and influxdb
sudo mkdir -p /opt/scrutiny-svc/scrutiny/{config,influxdb}
# get the config file for scrutiny hub
sudo wget -O /opt/scrutiny-svc/scrutiny/config/scrutiny.yaml https://raw.githubusercontent.com/AnalogJ/scrutiny/refs/heads/master/example.scrutiny.yaml
# set permissions on everything
sudo chown -R scrutiny-svc:scrutiny-svc /opt/scrutiny-svc
# Get the ID of scrutiny-svc so you know it for your own record-keeping
id -u scrutiny-svc
# create the per-user quadlet directory
sudo mkdir -p /etc/containers/systemd/users/$(id -u scrutiny-svc)
## go into the directory you just created for the rest of the guide
cd /etc/containers/systemd/users/$(id -u scrutiny-svc)
```
### Quadlet Files
Now that everything is set up and configured for the account to run quadlets, we just need to create a few quadlet files.
All remaining system actions will take place in `/etc/containers/systemd/users/$(id -u scrutiny-svc)` which is why we had you cd into it.
#### Networking
We need the hub and influxdb instances to be able to talk to each other. With Podman 4.9 they run as separate containers that do not share a localhost, so we need to configure a network for them to share. The file is pretty simple:
##### scrutiny-net.network
```ini
[Network]
NetworkName=scrutiny-net
```
#### Containers
Now we're ready to create the containers.
##### influxdb.container
```ini
[Unit]
Description=influxdb
[Container]
ContainerName=influxdb
Image=docker.io/library/influxdb:2.2
AutoUpdate=registry
Timezone=local
## not strictly necessary, but keeps file permissions sane for influxdb
PodmanArgs=--group-add keep-groups
## versions of podman after 5.1 should do the below instead
#GroupAdd=keep-groups
Volume=/opt/scrutiny-svc/scrutiny/influxdb:/var/lib/influxdb2:Z
Network=scrutiny-net
[Service]
Restart=on-failure
[Install]
# Start by default on boot
WantedBy=default.target
```
##### scrutiny-web.container
```ini
[Unit]
Description=scrutiny-web
After=influxdb.service
Requires=influxdb.service
[Container]
ContainerName=scrutiny-web
Image=ghcr.io/analogj/scrutiny:latest-web
AutoUpdate=registry
Timezone=local
Volume=/opt/scrutiny-svc/scrutiny/config:/opt/scrutiny/config:Z
Network=scrutiny-net
PublishPort=8080:8080/tcp
[Service]
Restart=on-failure
[Install]
# Start by default on boot
WantedBy=default.target
```
#### Update scrutiny config
Since our containers are running separately, we need to update `/opt/scrutiny-svc/scrutiny/config/scrutiny.yaml` to the new influxdb host:
1. edit `/opt/scrutiny-svc/scrutiny/config/scrutiny.yaml`
2. under the `influxdb` section, change `host: 0.0.0.0` to `host: influxdb` -- remember that YAML is whitespace-sensitive, so be mindful of the indents!
```yaml
influxdb:
  # scheme: 'http'
  host: influxdb
  port: 8086
```
### Running the Hub
With that done, we're now ready to start up the services:
```sh
# reload all the systemd user files for scrutiny-svc
sudo systemctl --user -M scrutiny-svc@ daemon-reload
# start the scrutiny-net network:
sudo systemctl --user -M scrutiny-svc@ start scrutiny-net-network.service
# start influxdb first and wait for it to come up
sudo systemctl --user -M scrutiny-svc@ start influxdb.service
# check if it's fully up
sudo systemctl --user -M scrutiny-svc@ status influxdb.service
# now start scrutiny
sudo systemctl --user -M scrutiny-svc@ start scrutiny-web.service
```
You are now ready to run the collector. If you would like to run that rootless as well, see [Schedule Collector with Systemd (rootless)](INSTALL_MANUAL.md#schedule-collector-with-systemd-rootless).