docker-configs/linkwarden/data/archives/7/89_readability.json

{"title":"","byline":"AnalogJ","dir":null,"lang":null,"content":"<div id=\"readability-page-1\" class=\"page\"><div data-hpc=\"true\"><article><p dir=\"auto\">\n <a href=\"https://github.com/AnalogJ/scrutiny\">\n <img src=\"https://github.com/AnalogJ/scrutiny/raw/master/webapp/frontend/src/assets/images/logo/scrutiny-logo-dark.png\" alt=\"scrutiny_view\" width=\"300\">\n </a>\n</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">scrutiny</h2><a href=\"#scrutiny\" aria-label=\"Permalink: scrutiny\" id=\"user-content-scrutiny\"></a></p>\n<p dir=\"auto\"><a href=\"https://github.com/AnalogJ/scrutiny/actions?query=workflow%3ACI\"><img alt=\"CI\" src=\"https://github.com/AnalogJ/scrutiny/workflows/CI/badge.svg?branch=master\"></a>\n<a rel=\"nofollow\" href=\"https://codecov.io/gh/AnalogJ/scrutiny\"><img data-canonical-src=\"https://codecov.io/gh/AnalogJ/scrutiny/branch/master/graph/badge.svg\" alt=\"codecov\" src=\"https://camo.githubusercontent.com/6759dddcb7240adebc3a794a9a8590dec91e268ad493095cb96c36ca929a4237/68747470733a2f2f636f6465636f762e696f2f67682f416e616c6f674a2f7363727574696e792f6272616e63682f6d61737465722f67726170682f62616467652e737667\"></a>\n<a href=\"https://github.com/AnalogJ/scrutiny/blob/master/LICENSE\"><img data-canonical-src=\"https://img.shields.io/github/license/AnalogJ/scrutiny.svg?style=flat-square\" alt=\"GitHub license\" src=\"https://camo.githubusercontent.com/88faa1452927b480cf1403d96177bd526eb6897919f4c537451adfcc30b3a41f/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6963656e73652f416e616c6f674a2f7363727574696e792e7376673f7374796c653d666c61742d737175617265\"></a>\n<a rel=\"nofollow\" href=\"https://godoc.org/github.com/analogj/scrutiny\"><img data-canonical-src=\"https://img.shields.io/badge/godoc-reference-blue.svg?style=flat-square\" alt=\"Godoc\" 
src=\"https://camo.githubusercontent.com/fe1188b9f0668a1e0a543e1cbcc6fb28d50a52f74d04e99407f8e6405a7132cd/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f676f646f632d7265666572656e63652d626c75652e7376673f7374796c653d666c61742d737175617265\"></a>\n<a rel=\"nofollow\" href=\"https://goreportcard.com/report/github.com/AnalogJ/scrutiny\"><img data-canonical-src=\"https://goreportcard.com/badge/github.com/AnalogJ/scrutiny?style=flat-square\" alt=\"Go Report Card\" src=\"https://camo.githubusercontent.com/2fb09f12f73438746cac28143ab047ab9273b73dcd4850e5eb797c9e9b2a7df4/68747470733a2f2f676f7265706f7274636172642e636f6d2f62616467652f6769746875622e636f6d2f416e616c6f674a2f7363727574696e793f7374796c653d666c61742d737175617265\"></a>\n<a href=\"https://github.com/AnalogJ/scrutiny/releases\"><img data-canonical-src=\"http://img.shields.io/github/release/AnalogJ/scrutiny.svg?style=flat-square\" alt=\"GitHub release\" src=\"https://camo.githubusercontent.com/0c0fd6fa4c0090cd46fa2405de94ebd348a3824d55d35bf25dc092d5d27660ad/687474703a2f2f696d672e736869656c64732e696f2f6769746875622f72656c656173652f416e616c6f674a2f7363727574696e792e7376673f7374796c653d666c61742d737175617265\"></a></p>\n<p dir=\"auto\">WebUI for smartd S.M.A.R.T monitoring</p>\n<blockquote>\n<p dir=\"auto\">NOTE: Scrutiny is a Work-in-Progress and still has some rough edges.</p>\n</blockquote>\n<p dir=\"auto\"><a rel=\"nofollow\" href=\"https://imgur.com/a/5k8qMzS\"><img alt=\"\" src=\"https://github.com/AnalogJ/scrutiny/raw/master/docs/dashboard.png\"></a></p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Introduction</h2><a href=\"#introduction\" aria-label=\"Permalink: Introduction\" id=\"user-content-introduction\"></a></p>\n<p dir=\"auto\">If you run a server with more than a couple of hard drives, you're probably already familiar with S.M.A.R.T and the <code>smartd</code> daemon. 
If not, it's an incredible open source project described as follows:</p>\n<blockquote>\n<p dir=\"auto\">smartd is a daemon that monitors the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into many ATA, IDE and SCSI-3 hard drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests.</p>\n</blockquote>\n<p dir=\"auto\">These S.M.A.R.T hard drive self-tests can help you detect and replace failing hard drives before they cause permanent data loss. However, there are a couple of issues with <code>smartd</code>:</p>\n<ul dir=\"auto\">\n<li>There are more than a hundred S.M.A.R.T attributes; however, <code>smartd</code> does not differentiate between critical and informational metrics.</li>\n<li><code>smartd</code> does not record S.M.A.R.T attribute history, so it can be hard to determine if an attribute is degrading slowly over time.</li>\n<li>S.M.A.R.T attribute thresholds are set by the manufacturer. In some cases these thresholds are unset, or are so high that they can only be used to confirm a failed drive, rather than detecting a drive about to fail.</li>\n<li><code>smartd</code> is a command-line-only tool. 
For headless servers, a web UI would be more valuable.</li>\n</ul>\n<p dir=\"auto\"><strong>Scrutiny is a Hard Drive Health Dashboard &amp; Monitoring solution, merging manufacturer-provided S.M.A.R.T metrics with real-world failure rates.</strong></p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Features</h2><a href=\"#features\" aria-label=\"Permalink: Features\" id=\"user-content-features\"></a></p>\n<p dir=\"auto\">Scrutiny is a simple but focused application, with a couple of core features:</p>\n<ul dir=\"auto\">\n<li>Web UI Dashboard - focused on Critical metrics</li>\n<li><code>smartd</code> integration (no re-inventing the wheel)</li>\n<li>Auto-detection of all connected hard drives</li>\n<li>S.M.A.R.T metric tracking for historical trends</li>\n<li>Customized thresholds using real-world failure rates</li>\n<li>Temperature tracking</li>\n<li>Provided as an all-in-one Docker image (but can be installed manually)</li>\n<li>Configurable Alerting/Notifications via Webhooks</li>\n<li>(Future) Hard Drive performance testing &amp; tracking</li>\n</ul>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Getting Started</h2><a href=\"#getting-started\" aria-label=\"Permalink: Getting Started\" id=\"user-content-getting-started\"></a></p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">RAID/Virtual Drives</h2><a href=\"#raidvirtual-drives\" aria-label=\"Permalink: RAID/Virtual Drives\" id=\"user-content-raidvirtual-drives\"></a></p>\n<p dir=\"auto\">Scrutiny uses <code>smartctl --scan</code> to detect devices/drives.</p>\n<ul dir=\"auto\">\n<li>All RAID controllers supported by <code>smartctl</code> are automatically supported by Scrutiny.\n<ul dir=\"auto\">\n<li>While some RAID controllers support passing through the underlying SMART data to <code>smartctl</code>, others do not.</li>\n<li>In some cases <code>--scan</code> does not correctly detect the device type, returning <a data-hovercard-url=\"/AnalogJ/scrutiny/issues/45/hovercard\" 
data-hovercard-type=\"issue\" href=\"https://github.com/AnalogJ/scrutiny/issues/45\">incomplete SMART data</a>.\nScrutiny supports overriding the detected device type via the config file: see <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml\">example.collector.yaml</a>.</li>\n</ul>\n</li>\n<li>If you use docker, you <strong>must</strong> pass through the RAID virtual disk to the container using <code>--device</code> (see below)\n<ul dir=\"auto\">\n<li>This device may be in <code>/dev/*</code> or <code>/dev/bus/*</code>.</li>\n<li>If you're unsure, run <code>smartctl --scan</code> on your host, and pass all listed devices to the container.</li>\n</ul>\n</li>\n</ul>\n<p dir=\"auto\">See <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md\">docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md</a> for help.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Docker</h2><a href=\"#docker\" aria-label=\"Permalink: Docker\" id=\"user-content-docker\"></a></p>\n<p dir=\"auto\">If you're using Docker, getting started is as simple as running the following command:</p>\n<blockquote>\n<p dir=\"auto\">See <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docker/example.omnibus.docker-compose.yml\">docker/example.omnibus.docker-compose.yml</a> for a docker-compose file.</p>\n</blockquote>\n<div dir=\"auto\"><pre>docker run -it --rm -p 8080:8080 -p 8086:8086 \\\n -v <span><span>`</span>pwd<span>`</span></span>/scrutiny:/opt/scrutiny/config \\\n -v <span><span>`</span>pwd<span>`</span></span>/influxdb2:/opt/scrutiny/influxdb \\\n -v /run/udev:/run/udev:ro \\\n --cap-add SYS_RAWIO \\\n --device=/dev/sda \\\n --device=/dev/sdb \\\n --name scrutiny \\\n ghcr.io/analogj/scrutiny:master-omnibus</pre></div>\n<ul dir=\"auto\">\n<li><code>/run/udev</code> is necessary to provide the Scrutiny collector with access to your device metadata</li>\n<li><code>--cap-add SYS_RAWIO</code> is necessary to allow 
<code>smartctl</code> to query your device SMART data\n<ul dir=\"auto\">\n<li>NOTE: If you have <strong>NVMe</strong> drives, you must add <code>--cap-add SYS_ADMIN</code> as well. See issue <a data-hovercard-url=\"/AnalogJ/scrutiny/issues/26/hovercard\" data-hovercard-type=\"issue\" href=\"https://github.com/AnalogJ/scrutiny/issues/26#issuecomment-696817130\">#26</a></li>\n</ul>\n</li>\n<li><code>--device</code> entries are required to ensure that your hard disk devices are accessible within the container.</li>\n<li><code>ghcr.io/analogj/scrutiny:master-omnibus</code> is an omnibus image, containing both the webapp server (frontend &amp; API) as well as the S.M.A.R.T metric collector. (see below)</li>\n</ul>\n<p dir=\"auto\"><h3 dir=\"auto\" tabindex=\"-1\">Hub/Spoke Deployment</h3><a href=\"#hubspoke-deployment\" aria-label=\"Permalink: Hub/Spoke Deployment\" id=\"user-content-hubspoke-deployment\"></a></p>\n<p dir=\"auto\">In addition to the Omnibus image (available under the <code>latest</code> tag), you can deploy in Hub/Spoke mode, which requires three\nother Docker images:</p>\n<ul dir=\"auto\">\n<li><code>ghcr.io/analogj/scrutiny:master-collector</code> - Contains the Scrutiny data collector, <code>smartctl</code> binary and cron-like\nscheduler. You can run one collector on each server.</li>\n<li><code>ghcr.io/analogj/scrutiny:master-web</code> - Contains the Web UI and API. Only one container is necessary.</li>\n<li><code>influxdb:2.2</code> - InfluxDB image, used by the Web container to persist SMART data. 
Only one container is necessary.\nSee <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_INFLUXDB.md\">docs/TROUBLESHOOTING_INFLUXDB.md</a></li>\n</ul>\n<blockquote>\n<p dir=\"auto\">See <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml\">docker/example.hubspoke.docker-compose.yml</a> for a docker-compose file.</p>\n</blockquote>\n<div dir=\"auto\"><pre>docker run --rm -p 8086:8086 \\\n -v <span><span>`</span>pwd<span>`</span></span>/influxdb2:/var/lib/influxdb2 \\\n --name scrutiny-influxdb \\\n influxdb:2.2\n\ndocker run --rm -p 8080:8080 \\\n -v <span><span>`</span>pwd<span>`</span></span>/scrutiny:/opt/scrutiny/config \\\n --name scrutiny-web \\\n ghcr.io/analogj/scrutiny:master-web\n\ndocker run --rm \\\n -v /run/udev:/run/udev:ro \\\n --cap-add SYS_RAWIO \\\n --device=/dev/sda \\\n --device=/dev/sdb \\\n -e COLLECTOR_API_ENDPOINT=http://SCRUTINY_WEB_IPADDRESS:8080 \\\n --name scrutiny-collector \\\n ghcr.io/analogj/scrutiny:master-collector</pre></div>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Manual Installation (without-Docker)</h2><a href=\"#manual-installation-without-docker\" aria-label=\"Permalink: Manual Installation (without-Docker)\" id=\"user-content-manual-installation-without-docker\"></a></p>\n<p dir=\"auto\">While the easiest way to get started with <a href=\"https://github.com/AnalogJ/scrutiny#docker\">Scrutiny is using Docker</a>,\nit is possible to run it manually without much work. 
You can even mix and match, using Docker for one component and\na manual installation for the other.</p>\n<p dir=\"auto\">See <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md\">docs/INSTALL_MANUAL.md</a> for instructions.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Usage</h2><a href=\"#usage\" aria-label=\"Permalink: Usage\" id=\"user-content-usage\"></a></p>\n<p dir=\"auto\">Once Scrutiny is running, you can open your browser to <code>http://localhost:8080</code> and take a look at the dashboard.</p>\n<p dir=\"auto\">If you're using the omnibus image, the collector should already have run, and your dashboard should be populated with every\ndrive that Scrutiny detected. The collector is configured to run once a day, but you can trigger it manually by running the command below.</p>\n<p dir=\"auto\">For users of the docker Hub/Spoke deployment or manual install: initially the dashboard will be empty.\nAfter the first collector run, you'll be greeted with a list of all your hard drives and their current SMART status.</p>\n<div dir=\"auto\"><pre>docker <span>exec</span> scrutiny /opt/scrutiny/bin/scrutiny-collector-metrics run</pre></div>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Configuration</h2><a href=\"#configuration\" aria-label=\"Permalink: Configuration\" id=\"user-content-configuration\"></a></p>\n<p dir=\"auto\">By default, Scrutiny looks for its YAML configuration files in <code>/opt/scrutiny/config</code>.</p>\n<p dir=\"auto\">There are two configuration files available:</p>\n<ul dir=\"auto\">\n<li>Webapp/API config via <code>scrutiny.yaml</code> - <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/example.scrutiny.yaml\">example.scrutiny.yaml</a>.</li>\n<li>Collector config via <code>collector.yaml</code> - <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml\">example.collector.yaml</a>.</li>\n</ul>\n<p dir=\"auto\">Neither file is required; however, if provided, each allows 
you to configure how Scrutiny functions.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Cron Schedule</h2><a href=\"#cron-schedule\" aria-label=\"Permalink: Cron Schedule\" id=\"user-content-cron-schedule\"></a></p>\n<p dir=\"auto\">Unfortunately, the Cron schedule cannot be configured via the <code>collector.yaml</code> (as the collector binary needs to be triggered by a scheduler/cron).\nHowever, if you are using the official <code>ghcr.io/analogj/scrutiny:master-collector</code> or <code>ghcr.io/analogj/scrutiny:master-omnibus</code> docker images,\nyou can use the <code>COLLECTOR_CRON_SCHEDULE</code> environmental variable to override the default cron schedule (daily @ midnight - <code>0 0 * * *</code>).</p>\n<p dir=\"auto\"><code>docker run -e COLLECTOR_CRON_SCHEDULE=\"0 0 * * *\" ...</code></p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Notifications</h2><a href=\"#notifications\" aria-label=\"Permalink: Notifications\" id=\"user-content-notifications\"></a></p>\n<p dir=\"auto\">Scrutiny supports sending SMART device failure notifications via the following services:</p>\n<ul dir=\"auto\">\n<li>Custom Script (data provided via environmental variables)</li>\n<li>Email</li>\n<li>Webhooks</li>\n<li>Discord</li>\n<li>Gotify</li>\n<li>Hangouts</li>\n<li>IFTTT</li>\n<li>Join</li>\n<li>Mattermost</li>\n<li>ntfy</li>\n<li>Pushbullet</li>\n<li>Pushover</li>\n<li>Slack</li>\n<li>Teams</li>\n<li>Telegram</li>\n<li>Tulip</li>\n</ul>\n<p dir=\"auto\">Check the <code>notify.urls</code> section of <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/example.scrutiny.yaml\">example.scrutiny.yaml</a> for examples.</p>\n<p dir=\"auto\">For more information and troubleshooting, see the <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_NOTIFICATIONS.md\">TROUBLESHOOTING_NOTIFICATIONS.md</a> file.</p>\n<p dir=\"auto\"><h3 dir=\"auto\" tabindex=\"-1\">Testing Notifications</h3><a href=\"#testing-notifications\" aria-label=\"Permalink: 
Testing Notifications\" id=\"user-content-testing-notifications\"></a></p>\n<p dir=\"auto\">You can test that your notifications are configured correctly by posting an empty payload to the notifications health check API.</p>\n<div dir=\"auto\"><pre>curl -X POST http://localhost:8080/api/health/notify</pre></div>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Debug mode &amp; Log Files</h2><a href=\"#debug-mode--log-files\" aria-label=\"Permalink: Debug mode &amp; Log Files\" id=\"user-content-debug-mode--log-files\"></a></p>\n<p dir=\"auto\">Scrutiny provides various methods to change the log level to debug and generate log files.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Web Server/API</h2><a href=\"#web-serverapi\" aria-label=\"Permalink: Web Server/API\" id=\"user-content-web-serverapi\"></a></p>\n<p dir=\"auto\">You can use environmental variables to enable debug logging and/or log files for the web server:</p>\n<div dir=\"auto\"><pre>DEBUG=true\nSCRUTINY_LOG_FILE=/tmp/web.log</pre></div>\n<p dir=\"auto\">You can configure the log level and log file in the config file:</p>\n<div dir=\"auto\"><pre><span>log</span>:\n <span>file</span>: <span><span>'</span>/tmp/web.log<span>'</span></span>\n <span>level</span>: <span>DEBUG</span></pre></div>\n<p dir=\"auto\">Or if you're not using docker, you can pass CLI arguments to the web server during startup:</p>\n<div dir=\"auto\"><pre>scrutiny start --debug --log-file /tmp/web.log</pre></div>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Collector</h2><a href=\"#collector\" aria-label=\"Permalink: Collector\" id=\"user-content-collector\"></a></p>\n<p dir=\"auto\">You can use environmental variables to enable debug logging and/or log files for the collector:</p>\n<div dir=\"auto\"><pre>DEBUG=true\nCOLLECTOR_LOG_FILE=/tmp/collector.log</pre></div>\n<p dir=\"auto\">Or if you're not using docker, you can pass CLI arguments to the collector during startup:</p>\n<div 
dir=\"auto\"><pre>scrutiny-collector-metrics run --debug --log-file /tmp/collector.log</pre></div>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Supported Architectures</h2><a href=\"#supported-architectures\" aria-label=\"Permalink: Supported Architectures\" id=\"user-content-supported-architectures\"></a></p>\n<table>\n<thead>\n<tr>\n<th>Architecture Name</th>\n<th>Binaries</th>\n<th>Docker</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>linux-amd64</td>\n<td>✅</td>\n<td>✅</td>\n</tr>\n<tr>\n<td>linux-arm-5</td>\n<td>✅</td>\n<td></td>\n</tr>\n<tr>\n<td>linux-arm-6</td>\n<td>✅</td>\n<td></td>\n</tr>\n<tr>\n<td>linux-arm-7</td>\n<td>✅</td>\n<td>web/collector only. see <a data-hovercard-url=\"/AnalogJ/scrutiny/issues/236/hovercard\" data-hovercard-type=\"issue\" href=\"https://github.com/AnalogJ/scrutiny/issues/236\">#236</a></td>\n</tr>\n<tr>\n<td>linux-arm64</td>\n<td>✅</td>\n<td>✅</td>\n</tr>\n<tr>\n<td>freebsd-amd64</td>\n<td>✅</td>\n<td></td>\n</tr>\n<tr>\n<td>macos-amd64</td>\n<td>✅</td>\n<td>✅</td>\n</tr>\n<tr>\n<td>macos-arm64</td>\n<td>✅</td>\n<td>✅</td>\n</tr>\n<tr>\n<td>windows-amd64</td>\n<td>✅</td>\n<td>WIP, see <a data-hovercard-url=\"/AnalogJ/scrutiny/issues/15/hovercard\" data-hovercard-type=\"issue\" href=\"https://github.com/AnalogJ/scrutiny/issues/15\">#15</a></td>\n</tr>\n<tr>\n<td>windows-arm64</td>\n<td>✅</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Contributing</h2><a href=\"#contributing\" aria-label=\"Permalink: Contributing\" id=\"user-content-contributing\"></a></p>\n<p dir=\"auto\">Please see the <a href=\"https://github.com/AnalogJ/scrutiny/blob/master/CONTRIBUTING.md\">CONTRIBUTING.md</a> for instructions on how to develop and contribute to the scrutiny codebase.</p>\n<p dir=\"auto\">Work your magic and then submit a pull request. We love pull requests!</p>\n<p dir=\"auto\">If you find the documentation lacking, help us out and update this README.md. 
If you don't have the time to work on Scrutiny, but found something we should know about, please submit an issue.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Versioning</h2><a href=\"#versioning\" aria-label=\"Permalink: Versioning\" id=\"user-content-versioning\"></a></p>\n<p dir=\"auto\">We use SemVer for versioning. For the versions available, see the tags on this repository.</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Authors</h2><a href=\"#authors\" aria-label=\"Permalink: Authors\" id=\"user-content-authors\"></a></p>\n<p dir=\"auto\">Jason Kulatunga - Initial Development - @AnalogJ</p>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Licenses</h2><a href=\"#licenses\" aria-label=\"Permalink: Licenses\" id=\"user-content-licenses\"></a></p>\n<ul dir=\"auto\">\n<li>MIT</li>\n<li>Logo: <a rel=\"nofollow\" href=\"https://thenounproject.com/term/glasses/775232\">Glasses by matias porta lezcano</a></li>\n</ul>\n<p dir=\"auto\"><h2 dir=\"auto\" tabindex=\"-1\">Sponsors</h2><a href=\"#sponsors\" aria-label=\"Permalink: Sponsors\" id=\"user-content-sponsors\"></a></p>\n<p dir=\"auto\">Scrutiny is only possible with the help of my <a href=\"https://github.com/sponsors/AnalogJ/\">GitHub Sponsors</a>.</p>\n<p dir=\"auto\"><a href=\"https://github.com/sponsors/AnalogJ/\"><img alt=\"\" src=\"https://github.com/AnalogJ/scrutiny/raw/master/docs/sponsors.png\"></a></p>\n<p dir=\"auto\">They read a simple <a href=\"https://github.com/sponsors/AnalogJ/\">Reddit announcement post</a> and decided to trust &amp; finance\na developer they've never met. 
It's an exciting and incredibly humbling experience.</p>\n<p dir=\"auto\">If you found Scrutiny valuable, please consider <a href=\"https://github.com/sponsors/AnalogJ/\">supporting my work</a></p>\n</article></div></div>","length":9494,"excerpt":"","siteName":null}