Compare commits

...

649 Commits

Author SHA1 Message Date
dependabot[bot] fe6fcf1ede Bump immutable from 4.3.0 to 4.3.8 in /webapp/frontend
Bumps [immutable](https://github.com/immutable-js/immutable-js) from 4.3.0 to 4.3.8.
- [Release notes](https://github.com/immutable-js/immutable-js/releases)
- [Changelog](https://github.com/immutable-js/immutable-js/blob/main/CHANGELOG.md)
- [Commits](https://github.com/immutable-js/immutable-js/compare/v4.3.0...v4.3.8)

---
updated-dependencies:
- dependency-name: immutable
  dependency-version: 4.3.8
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-13 21:42:45 +00:00
mcarbonne 0aea6b96ca Add devcontainer config (#861)
Closes #853

---------

Co-authored-by: Aram Akhavan <github@aram.nubmail.ca>
Co-authored-by: Aram Akhavan <1147328+kaysond@users.noreply.github.com>
2026-03-13 14:40:51 -07:00
slydetector afbf1450c2 Build and distribute latest smartmontools 7.5 as part of image (#924)
Co-authored-by: slydetector <slydetector>
Co-authored-by: Aram Akhavan <1147328+kaysond@users.noreply.github.com>
2026-03-08 13:24:23 -07:00
Merlin 6a278bc2cf Add support for topic in Zulip notifications and truncate long topics
Subjects over 60 characters long, such as the test notification, are rejected by shoutrrr. This truncates the subject to the max length.

Users may want all Scrutiny notifications to be sent to a particular topic rather than whatever Scrutiny happens to decide.
2026-02-28 10:27:16 -08:00
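The truncation described in this commit can be sketched as follows; the 60-character limit comes from the commit message, while the standalone function name and form are illustrative (the actual change lives in Scrutiny's Go notification code):

```python
# Hypothetical standalone sketch; Scrutiny's real change is in Go.
# The 60-character limit is taken from the commit message
# ("Subjects over 60 characters long ... are rejected by shoutrrr").
MAX_SUBJECT_LEN = 60

def truncate_subject(subject: str, limit: int = MAX_SUBJECT_LEN) -> str:
    """Clamp a notification subject so shoutrrr does not reject it."""
    return subject if len(subject) <= limit else subject[:limit]
```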
enoch85 9d1ce790d0 Update docker compose example (#685) 2026-02-22 08:06:55 -08:00
Alliot fb5d4818b0 fix: page smart attribute queries with limit and sort (#869) 2026-02-21 21:04:33 -08:00
Aram Akhavan 3a06920354 Make default temperature history length 1 week (#939)
Closes #356
2026-02-21 20:48:44 -08:00
Aram Akhavan dd8a6757d1 Add telegram message thread format to example.scrutiny.yaml (#938)
Closes #765
2026-02-21 20:43:20 -08:00
Aram Akhavan d433a6a54e Bump base image to debian trixie (#935)
Closes #929
2026-02-21 19:55:21 -08:00
Aram Akhavan c365988a52 Update Makefile docker image tags to use ghcr.io (#936)
Also remove outdated note on building frontend (it's built in the Dockerfiles)
2026-02-21 16:26:35 -08:00
Aram Akhavan 6a1a985306 Switch to maintained fork of shoutrrr (#934)
Closes #817
2026-02-21 16:13:37 -08:00
Aram Akhavan 02996d6288 Bump influxdb to 2.8 (#933)
Closes #863
2026-02-21 16:02:10 -08:00
Aram Akhavan 3d2671650e Change LBA metrics to uint64 (#932)
Fixes #800
2026-02-21 15:54:45 -08:00
Aram Akhavan 28658790c8 Fix notify urls env var in docs (#931)
Closes #862
2026-02-21 15:50:31 -08:00
Kevin Thomer 18f10a9295 Add documentation for rootless systemd service and podman quadlets (#927) 2026-02-19 11:29:55 -08:00
Liu Xiaoyi 67b7a08e4a feat: add "day" as resolution for temperature graph (#823) 2026-02-13 09:58:15 -08:00
packagrio-bot a014337167 (v0.8.6) Automated packaging of release by Packagr 2026-02-09 21:17:48 +00:00
Aram Akhavan 3a5ee0a762 Remove armv7 from omnibus builds (#916) 2026-02-09 13:14:39 -08:00
packagrio-bot 625a0244e2 (v0.8.5) Automated packaging of release by Packagr 2026-02-09 20:26:21 +00:00
Aram Akhavan a269ba57df Fix omnibus release builds (#915) 2026-02-09 12:24:03 -08:00
packagrio-bot 6a76b5aa26 (v0.8.4) Automated packaging of release by Packagr 2026-02-09 06:20:42 +00:00
Aram Akhavan 939d40eb20 Fix Dockerfiles in release workflow (#907) 2026-02-08 22:18:24 -08:00
Aram Akhavan ad738508e5 Update README.md CI badge (#906) 2026-02-08 22:10:13 -08:00
dependabot[bot] 971249ba3f Bump js-yaml from 3.14.1 to 3.14.2 in /webapp/frontend (#905)
Bumps [js-yaml](https://github.com/nodeca/js-yaml) from 3.14.1 to 3.14.2.
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/3.14.1...3.14.2)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 3.14.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-08 21:52:36 -08:00
dependabot[bot] 73417ca653 Bump follow-redirects from 1.15.2 to 1.15.6 in /webapp/frontend (#604)
Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.15.2 to 1.15.6.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.15.2...v1.15.6)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aram Akhavan <github@aram.nubmail.ca>
2026-02-08 21:52:14 -08:00
dependabot[bot] a6d092983d Bump express from 4.18.2 to 4.19.2 in /webapp/frontend (#613)
Bumps [express](https://github.com/expressjs/express) from 4.18.2 to 4.19.2.
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/master/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.18.2...4.19.2)

---
updated-dependencies:
- dependency-name: express
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aram Akhavan <github@aram.nubmail.ca>
2026-02-08 21:52:07 -08:00
dependabot[bot] 1988b101e1 Bump lodash from 4.17.21 to 4.17.23 in /webapp/frontend (#856)
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aram Akhavan <github@aram.nubmail.ca>
2026-02-08 21:37:51 -08:00
dependabot[bot] 746ae76cfc Bump node-forge from 1.3.1 to 1.3.3 in /webapp/frontend (#857)
Bumps [node-forge](https://github.com/digitalbazaar/forge) from 1.3.1 to 1.3.3.
- [Changelog](https://github.com/digitalbazaar/forge/blob/main/CHANGELOG.md)
- [Commits](https://github.com/digitalbazaar/forge/compare/v1.3.1...v1.3.3)

---
updated-dependencies:
- dependency-name: node-forge
  dependency-version: 1.3.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aram Akhavan <github@aram.nubmail.ca>
2026-02-08 21:37:44 -08:00
Aram Akhavan 3380023ad0 Run CI on master (#904)
* Run CI on master

* Consolidate go linting into main CI
2026-02-08 21:21:48 -08:00
mcarbonne 6362512406 Update go to 1.25 (#875)
Closes #872

* update go to 1.25

* update deprecated gomock

* remove deprecated ioutil

* update (and fix) ci

* add golang lint (as warning)

* enable formatters + freeze golang version
2026-02-08 20:46:36 -08:00
packagrio-bot c6323fb7ce (v0.8.3) Automated packaging of release by Packagr 2026-02-09 04:41:13 +00:00
Aram Akhavan 349c7d4def Fix temperature plot to use local time (#903)
Closes #893 #889
2026-02-08 20:35:28 -08:00
Aram Akhavan 19ac712b78 Update docker builds (#901)
Closes #895 #896

* dont build every master commit
* push latest tags with releases
* update docs accordingly
* bump action versions
2026-02-08 20:11:04 -08:00
Aram Akhavan c95b272485 Fix nightlies tagging (#899) 2026-02-08 15:34:33 -08:00
Aram Akhavan 43231d7ec3 Build nightlies for all images (#897)
Closes #894
2026-02-08 13:57:45 -08:00
Hybras 3f6537e94c Fix temperature conversion in temperature.pipe.ts (#815) 2026-02-07 23:23:48 -08:00
Aram Akhavan b021797919 Fix issue triage template (#879) 2026-02-07 20:18:57 -08:00
Aram Akhavan 7ad997cc0e Update issue triage, feature request, and contributions policies (#877)
Adopt ghostty's policy for using issues to track actionable bug reports or feature requests, and discussions for triaging issues or discussing potential feature requests.
2026-02-07 20:09:11 -08:00
packagrio-bot a3000fd6b0 (v0.8.2) Automated packaging of release by Packagr 2026-02-07 05:05:07 +00:00
Jason Kulatunga af59f2639c Enable inclusion of hidden files in artifact upload 2026-02-06 23:00:44 -05:00
Jason Kulatunga b0ff0b3a48 Rename download artifact name to 'workspace' 2026-02-06 22:58:35 -05:00
Jason Kulatunga 56056b2d6a Change workspace download path in release workflow 2026-02-06 22:33:18 -05:00
Aram Akhavan 51f0ba6ee2 Update release workflows (#874)
* Bump action versions
* Merge frontend release into main release workflow
* Fix bugs with asset naming
2026-02-06 16:20:57 -08:00
Aram Akhavan 34b0347acd Fix release workflow (#873)
Newer version of upload-artifact requires unique artifact names, but the matrix was using the same name for all of them.
2026-02-06 12:29:15 -08:00
Jason Kulatunga 0565962a14 Check result of attribute casting to avoid panics (#528) 2026-02-05 22:14:03 -08:00
Liu Xiaoyi 184bc4bec5 Improve temperature logging (#825)
* Always log current temperature
* Forcefully align each ata_sct_temperature_history data point to an integer multiple of the logging interval to prevent repeated data points

Fixes #824
2026-02-05 21:35:35 -08:00
mcarbonne bdbe13e320 Add option to discard SCT Data Table Temperature History (#557)
Fixes #494
2026-02-05 20:59:24 -08:00
Aram Akhavan 761014a93f Fix codecov upload (#850)
Update Codecov action to version 5 and add token
2026-02-04 16:11:13 -08:00
Aram Akhavan 27be0b8327 Add AI Policy (#851) 2026-02-01 21:48:33 -08:00
Aram Akhavan 69abe43a1d Update authors (#849)
Add Aram Akhavan as a maintainer
2026-01-31 13:29:45 -08:00
Jason Kulatunga 7c35d59552 Merge pull request #784 from Peppercorn27/master
Fix web ui latency
2025-10-19 08:50:10 -04:00
Jason Kulatunga 742153e5dc Merge pull request #773 from Impact123/restart-policy
Unify docker restart policy among docs and example files
2025-10-19 08:43:40 -04:00
Jason Kulatunga 5f7e4a3808 Merge pull request #793 from pabsi/patch-1
feat: Update dashboard.component.ts
2025-08-14 11:38:02 -04:00
Pablo bb98b8c45b feat: Update dashboard.component.ts
Addresses https://github.com/AnalogJ/scrutiny/issues/755
2025-08-08 21:02:05 +02:00
Peppercorn27 b71897fa5f fix web ui latency
fix web ui latency in situations where the cron schedule has been reduced, resulting in more data being present in InfluxDB than expected
2025-07-06 17:11:42 +01:00
Impact a182c691fb Unify docker restart policy among docs and example files. 2025-05-06 05:07:56 +00:00
Jason Kulatunga 4066c84c8e Merge pull request #771 from RoboMagus/docker_semver_tags
Add docker semver tags
2025-04-30 10:32:48 -04:00
Jason Kulatunga 4a72c9ef55 Merge pull request #754 from Berry-95/491-FEAT-Allow-disks-to-be-archived
Fixes 491 [FEAT] Allow disks to be hidden/archived
2025-04-30 10:31:21 -04:00
Sam 3e11583283 491 [FEAT] Allow disks to be hidden/archived
- Fix mock device type definition mismatch in the frontend.
- Make DeviceModel archived field optional.
2025-04-28 15:01:31 +02:00
RoboMagus ea9799d963 Add docker semver tags 2025-04-24 22:38:31 +02:00
Jason Kulatunga e46ab7373e Merge pull request #739 from RickZaki/GHissue-643
fix: issue 643 - Fahrenheit values in graph were converted twice
2025-04-23 08:06:20 -04:00
Jason Kulatunga 87f923e1f2 Merge branch 'master' into GHissue-643 2025-04-11 07:16:29 -04:00
Sam Beresford 2244504023 Merge branch 'master' into 491-FEAT-Allow-disks-to-be-archived 2025-04-10 21:01:31 +01:00
Jason Kulatunga 192ae40f74 Merge pull request #744 from mcarbonne/fix_ci
Fix CI (conflicting artifact names)
2025-04-10 04:27:19 -04:00
Sam 600cd153e0 491 [FEAT] Allow disks to be hidden/archived
- Add archived to device type & db
- Add archive/unarchive handlers to webapp backend
- Add archive toggle & styling to webapp frontend
2025-02-21 09:25:27 +01:00
Maximilien Carbonne d11bf0a2fc fix CI (conflicting artifact names) 2025-01-26 21:51:51 +01:00
Rick Zaki 50561f34ea fix: https://github.com/AnalogJ/scrutiny/issues/643
needed to separate formatting temps from converting them
The dashboard was using the format method to convert and send Fahrenheit values to the chart, then passing the same method into the chart formatter, causing the Fahrenheit value to be treated as Celsius and converted again.
2025-01-09 14:27:26 -05:00
Jason Kulatunga a58f9445c1 Merge pull request #619 from datenzar/override-config-with-env-variables
feat: Ability to override commands args
2025-01-08 10:46:10 -07:00
Jason Kulatunga 1ec478302f Merge pull request #737 from AnalogJ/AnalogJ-patch-1
Update TROUBLESHOOTING_DEVICE_COLLECTOR.md
2025-01-05 11:54:06 -07:00
Jason Kulatunga 412f956782 Update TROUBLESHOOTING_DEVICE_COLLECTOR.md 2025-01-05 11:53:43 -07:00
Jason Kulatunga 9b28ac5069 Update TESTERS.md 2025-01-04 17:46:17 -07:00
Jason Kulatunga db2869ffc6 Merge pull request #736 from AnalogJ/AnalogJ-patch-1
Update example.hubspoke.docker-compose.yml
2025-01-04 17:42:03 -07:00
Jason Kulatunga 6e349244d1 Update example.hubspoke.docker-compose.yml 2025-01-04 17:41:51 -07:00
Jason Kulatunga e6cd3ee3c6 Update TESTERS.md 2025-01-04 17:35:41 -07:00
Jason Kulatunga df6a4cef59 Update TROUBLESHOOTING_INFLUXDB.md 2025-01-04 17:01:29 -07:00
Jason Kulatunga 8cf7d64da7 Merge pull request #684 from enoch85/patch-1
Add info about rootless Docker
2025-01-04 15:29:20 -07:00
Jason Kulatunga 3de12cd739 Merge branch 'master' into override-config-with-env-variables 2025-01-04 15:26:10 -07:00
Jason Kulatunga affe05e145 Merge pull request #725 from pabsi/706-add-wait-time-between-checks-fix-unit
Issue 706: Fix time unit
2024-11-26 09:48:53 -05:00
Pablo Garcia 9ad96e6d37 Change to time.Seconds 2024-11-26 15:13:44 +01:00
Pablo Garcia 85d98316f3 Issue 706: Fix time unit 2024-11-26 10:46:27 +01:00
Jason Kulatunga 0641b5e79d Merge pull request #710 from pabsi/706-add-wait-time-between-checks
Add a wait between disks checks
2024-11-22 07:57:13 -05:00
Pablo Garcia Alvarez c168e1e9fc Add check for the wait 2024-11-11 22:07:57 +01:00
Pablo Garcia 56a9454730 Add a wait between disks checks 2024-11-07 11:54:46 +01:00
Martin Kleine a783604c4e Feature: Use automatic binding of env variables 2024-10-15 00:18:54 +02:00
Martin Kleine 604dcf355c feat: Ability to override commands args
In order to override the arguments used e.g. to call smartctl, they need to be bound to the respective environment variable.
2024-10-15 00:18:54 +02:00
Jason Kulatunga 57dc547265 fixing github actions. 2024-09-20 11:24:56 -04:00
Jason Kulatunga e0fe17afbf Merge pull request #686 from nicjohnson145/feat--device-allowlist
feat: create allow-list for filtering down devices to only a subset
2024-09-20 11:22:43 -04:00
Nic Johnson c9429c61b2 feat: create allow-list for filtering down devices to only a subset 2024-09-11 23:12:00 -05:00
Daniel Hansson 394ac0af2c Add info about rootless Docker
This avoids the session being killed when running rootless.
2024-09-09 21:33:59 +02:00
Jason Kulatunga 48feee51d0 Merge pull request #672 from Hudater/master
Updated containrrr/shoutrrr from v0.7.1 to v0.8.0
2024-09-08 10:27:22 +09:00
Harshit Mani Tripathi d4fb7786d2 reverted accidental bump of spf13/viper from v0.14.0 to v0.15.0 2024-08-04 18:59:37 +05:30
Harshit Mani Tripathi c316f996c6 updated containrrr/shoutrrr from v0.7.1 to v0.8.0 2024-08-04 18:48:23 +05:30
Jason Kulatunga 49108bd1ef Merge pull request #634 from bauzer714/addDeviceHoursSetting
Create a setting for user to indicate humanized or hours on dashboard/device detail
2024-07-25 13:33:57 -07:00
Jason Kulatunga 0dafb65c5f Merge branch 'master' into addDeviceHoursSetting 2024-07-25 13:29:29 -07:00
Brice Bauer c5943a1ca4 Adjust null input response, and tests 2024-07-25 15:40:28 -04:00
Brice Bauer a5893f0bf9 Add tests for DeviceHoursPipe 2024-07-22 14:02:27 -04:00
Brice Bauer 142fe06df1 Move powered_on_hours_unit to a new migration id 2024-07-22 08:37:35 -04:00
Jason Kulatunga 8b7ddd3042 Merge pull request #644 from luomie/patch-1
fix example Shoutrrr discord notification url structure
2024-07-18 20:14:31 -07:00
Jason Kulatunga db57281557 Merge pull request #666 from phcreery/patch-1
Update INSTALL_HUB_SPOKE.md
2024-07-17 18:48:37 -07:00
Peyton Creery 5a5877b729 Update INSTALL_HUB_SPOKE.md 2024-07-17 20:14:26 -05:00
luomie 0a89c2bab3 fix Shoutrrr discord notification url structure 2024-05-19 23:13:05 +02:00
Brice Bauer a18e2842ac Update db migration description to match setting possible values 2024-05-08 08:43:25 -04:00
Brice Bauer 806f7c64a0 Add pipe and implement to dashboard/device component 2024-05-08 08:30:49 -04:00
Brice Bauer 8fa32c6dd7 Add DB Migration and config/settings 2024-05-07 16:45:09 -04:00
packagrio-bot 5e6ab2290b (v0.8.1) Automated packaging of release by Packagr 2024-04-08 04:48:11 +00:00
Jason Kulatunga 67c0af9f59 fix amd64 s6_arch. 2024-04-05 14:11:11 -07:00
Jason Kulatunga 55565e509d Merge pull request #625 from AnalogJ/cron_fixes
fixing cron in #602
2024-04-05 14:04:39 -07:00
Jason Kulatunga f74d9c108a fixing cron in #602
Updated s6overlay to v3
Note: xz-utils was added as a requirement for s6-overlay (using safe 5.4.1 instead of the compromised 5.6.x versions)
2024-04-05 10:01:04 -07:00
packagrio-bot 5977f7c7d4 (v0.8.0) Automated packaging of release by Packagr 2024-03-13 23:47:26 +00:00
Jason Kulatunga 3490a2ffc2 Merge pull request #597 from joserebelo/sigterm
Use exec on scrutiny-collector cron to ensure signal handling
2024-03-12 21:08:16 -07:00
Jason Kulatunga a0f9e6e3f3 Merge pull request #596 from dropsignal/master
rebase docker image to debian 12 (bookworm)
2024-03-12 20:53:35 -07:00
Drop Signal 6a9b89b38a fixed missing && 2024-03-12 21:39:36 -05:00
Drop Signal 543f376015 performing requested changes 2024-03-09 21:37:11 -06:00
José Rebelo ca7bd2c151 Use exec on scrutiny-collector cron to ensure signal handling
This way SIGTERM gets propagated and the container terminates
gracefully.
2024-03-09 22:46:07 +00:00
Drop Signal 1e74e96658 rebase to debian 12 (bookworm) 2024-03-08 22:30:43 -06:00
packagrio-bot 5e33c33e75 (v0.7.3) Automated packaging of release by Packagr 2024-02-24 23:12:54 +00:00
Jason Kulatunga 3ea223fa8e Merge pull request #547 from kaysond/master
Add support for disabling repeat notifications if the values haven't changed
2024-02-24 15:10:29 -08:00
Jason Kulatunga 44275c66ca Merge pull request #569 from uhthomas/466
fix(collector): show correct nvme capacity
2024-02-24 09:32:59 -08:00
Jason Kulatunga 19bd59dc27 Merge pull request #577 from DrFrankensteinUK/patch-1
Update SUPPORTED_NAS_OS.md
2024-02-24 09:20:23 -08:00
DrFrankensteinUK b7fab3c94e Update SUPPORTED_NAS_OS.md
Added another link for the new Container Manager version of my guide, for those on the newer DSM versions. The older guide, while archived, still functions correctly.
2024-02-17 16:47:05 +00:00
Aram Akhavan 09f4b34bf0 fix server test 2024-02-04 11:52:30 -08:00
Aram Akhavan f24abf254b Add tests for not repeating notifications 2024-02-04 11:38:52 -08:00
Aram Akhavan cc889f2a2d fix notify tests 2024-02-04 11:38:52 -08:00
Jason Kulatunga 2aa242e364 update mockgen instructions 2024-02-04 11:38:52 -08:00
Jason Kulatunga 1c193aa043 add database interface mock 2024-02-04 11:38:52 -08:00
Aram Akhavan 01c5a7fdfe Address review comments 2024-02-04 11:38:51 -08:00
Aram Akhavan 98d958888c refactor common code 2024-02-04 11:38:51 -08:00
Aram Akhavan 4e5c76b259 Add support for disabling repeat notifications
* Add a new database function for getting the tail

* Update ShouldNotify() to handle ignoring repeat notifications if set
2024-02-04 11:38:51 -08:00
Aram Akhavan 6417d71311 Add a setting for repeating notifications or not 2024-02-04 11:38:50 -08:00
Aram Akhavan 3285eb659f Fix some development issues 2024-02-04 11:38:50 -08:00
Thomas Way db86bac9ef fix(collector): show correct nvme capacity
Some nvme devices do not report their capacity through the usual
'user_capacity' field, instead the total capacity is reported with the
'nvme_total_capacity' field.

Fixes: #466
2024-01-23 22:02:02 +00:00
Jason Kulatunga a3dfce3561 Update INSTALL_HUB_SPOKE.md
fixes https://github.com/AnalogJ/scrutiny/issues/495
2024-01-23 12:57:22 -08:00
Jason Kulatunga 240178d742 Merge pull request #529 from KaeTuuN/patch-1
Update README.md Links to compose files
2024-01-23 12:19:39 -08:00
Jason Kulatunga 2dcb6cd6b6 Merge pull request #566 from PrplHaz4/patch-3
Add COLLECTOR_HOST_ID env var to hubspoke example
2024-01-23 12:16:27 -08:00
PrplHaz4 56df7b5797 Add COLLECTOR_HOST_ID env var to hubspoke example
Added COLLECTOR_HOST_ID environment variable to hubspoke example
2024-01-15 16:51:10 -05:00
Jason Kulatunga d54a0abc8c Merge pull request #559 from ibizaman/patch-1
Fix typo in readme
2023-12-26 14:50:02 -07:00
Pierre Penninckx 061f55f5b1 Fix typo in readme 2023-12-19 15:51:38 -08:00
Jason Kulatunga 5bbd4c3b64 update delete device message to clarify that no disk data will actually be affected, only scrutiny data.
fixes #544
2023-11-22 14:25:26 -08:00
Jason Kulatunga fb6c3d6a24 document Shoutrrr special characters in username & password -
fixes #532
2023-11-18 08:50:47 -08:00
KaeTuuN 87dc51a9c0 Update README.md Links to compose files
Links were not working, so I replaced them with working ones.
2023-10-18 10:33:37 +02:00
packagrio-bot c3a0fb7fb5 (v0.7.2) Automated packaging of release by Packagr 2023-10-17 13:19:35 +00:00
Jason Kulatunga 5e87608587 Merge pull request #520 from kaysond/patch-1 2023-10-17 06:09:52 -07:00
Jason Kulatunga ab7fd107e7 Merge pull request #527 from kaysond/master 2023-10-17 06:08:58 -07:00
Aram Akhavan 550cd59093 Fix parsing of attribute 188 on seagate drives 2023-10-14 21:39:12 -07:00
Aram Akhavan a8621d2bb0 Update permissions setting in Dockerfile.web
This fixes issues with assets loading when you run as non-root users
2023-09-29 11:51:47 -07:00
Jason Kulatunga 4b1d9dc2d3 Merge pull request #519 from SlavikCA/patch-1 2023-09-27 10:17:26 -07:00
Slavik 22769b962e Docs: few more details about Traefik proxy 2023-09-26 21:11:59 -07:00
Slavik feb7909961 Docs: add Traefik REVERSE_PROXY config example 2023-09-26 21:04:26 -07:00
Jason Kulatunga 2f01e8c8e0 Merge pull request #512 from kaysond/master 2023-09-06 12:51:50 -07:00
Aram Akhavan 31c2daedf7 fix smart 188 thresholds 2023-09-03 10:37:43 -07:00
packagrio-bot ee893cc360 (v0.7.1) Automated packaging of release by Packagr 2023-04-08 23:01:22 +00:00
Jason Kulatunga d73907d357 Merge pull request #474 from AnalogJ/beta 2023-04-08 15:58:51 -07:00
Jason Kulatunga 0b50305f38 fix invalid COPY instruction. 2023-04-07 00:00:11 -07:00
Jason Kulatunga ee3d719c3a simplify docker image build
changes contributed by @modem7

fixes #461
2023-04-06 23:57:15 -07:00
Jason Kulatunga d76cdca4a5 Merge pull request #472 from adamantike/misc/add-support-for-yml-config-files 2023-04-06 17:46:03 -07:00
Jason Kulatunga b34ed607b7 Merge pull request #471 from adamantike/feat/add-support-for-ntfy-notifications 2023-04-06 17:42:50 -07:00
Michael Manganiello 932e191510 Allow configuration files with yml extension
If a `collector.yml` or `scrutiny.yml` configuration file is present,
use it as long as a `.yaml` version is not available too.

Fixes #79
2023-04-06 20:55:22 -03:00
Michael Manganiello 3a6c407fe7 Add support for ntfy notifications
Updates [`shoutrrr`](containrrr.dev/shoutrrr/) to `v0.7.1` to enable
support for [ntfy](https://ntfy.sh/) notifications.

Fixes #433.
2023-04-06 17:04:48 -03:00
Jason Kulatunga 8c3afc31f4 Merge pull request #449 from adamantike/fix/delete-influxdb-deb-from-docker-image 2023-04-05 23:25:52 -07:00
packagrio-bot 2e4ba44952 (v0.7.0) Automated packaging of release by Packagr 2023-04-06 05:22:42 +00:00
Jason Kulatunga 4192ac719e Merge pull request #470 from AnalogJ/beta 2023-04-05 22:07:13 -07:00
Jason Kulatunga 539c94595f Merge pull request #469 from AnalogJ/angular-v13-upgrade 2023-04-05 21:09:35 -07:00
Jason Kulatunga de21e611a3 fixing migration for line_stroke setting. 2023-04-05 21:06:09 -07:00
Jason Kulatunga dea362361e Merge pull request #468 from AnalogJ/angular-v13-upgrade 2023-04-05 20:58:56 -07:00
Jason Kulatunga 7b77519f49 trying to fix tests. 2023-04-05 20:48:07 -07:00
Jason Kulatunga 94df7e1ec3 trying to fix tests. 2023-04-05 20:44:07 -07:00
Jason Kulatunga babd8d3089 trying to fix tests. 2023-04-05 20:35:18 -07:00
Jason Kulatunga 52ef28f091 removing NODE_OPTIONS. 2023-04-05 20:34:29 -07:00
Jason Kulatunga 80d72f8a1b regenerate package-lock with angular 13-lts packages. 2023-04-05 20:10:53 -07:00
Jason Kulatunga 2e8f4a0581 update with instructions for adding host id to UI.
- fixes #464
- related #151
2023-04-05 19:50:09 -07:00
Michael Manganiello 7fd2e2b050 Upgrade to Go 1.20
With the release of Go 1.20, version 1.18 is not supported anymore. This
change upgrades the project to Go 1.20.
2023-04-05 19:50:09 -07:00
Jason Kulatunga 8c65166a90 Merge pull request #446 from adamantike/misc/upgrade-go-1.20 2023-04-05 19:48:31 -07:00
Jason Kulatunga 733b49c2c4 Merge branch 'master' into misc/upgrade-go-1.20 2023-04-05 07:53:01 -07:00
Jason Kulatunga 711b5c40b1 update with instructions for adding host id to UI.
- fixes #464
- related #151
2023-04-04 21:53:39 -07:00
Jason Kulatunga f94e616d8d Merge branch 'master' into angular-refactoring 2023-04-04 21:38:57 -07:00
Michael Manganiello fb7848f341 Delete temporary deb file from omnibus Docker image
This considerably reduces the Docker image size for the `omnibus` variant:

```
scrutiny-omnibus-master         latest            cd7a7dde100b   9 minutes ago   538MB
scrutiny-omnibus-fix-applied    latest            5f7ac124ef50   6 minutes ago   431MB
```
2023-02-22 16:25:09 -03:00
Michael Manganiello 007857afd5 Upgrade to Go 1.20
With the release of Go 1.20, version 1.18 is not supported anymore. This
change upgrades the project to Go 1.20.
2023-02-19 21:18:51 -03:00
Jason Kulatunga e4bbe8c035 Merge pull request #441 from padhi-forks/master
Fixes https://github.com/AnalogJ/scrutiny/issues/440
2023-02-08 21:43:45 -08:00
Saswat Padhi cb5226f6e4 Update backend tests failing on insecure_skip_verify 2023-02-07 08:24:39 +00:00
Saswat Padhi c69770d1dd Add insecure_skip_verify in example.scrutiny.yaml 2023-02-06 22:34:06 +00:00
Saswat Padhi e07a53046f [FEAT] Allow insecure certificates on InfluxDB
This change allows users to skip TLS certificate verification on their
InfluxDB server, if they wish to do so, for instance when using
self-signed certificates.
Without this change, scrutiny failed to start and panicked with a
`x509: certificate signed by unknown authority` error.
2023-02-06 22:26:40 +00:00
Jason Kulatunga 19a0b8c2ac Update TROUBLESHOOTING_INFLUXDB.md
change volume mounts when upgrading from LSIO image
2023-02-04 08:41:47 -08:00
Jason Kulatunga 97f73703b1 update docs with instructions for customizing Influxdb creds. 2023-01-19 22:21:48 -08:00
Jason Kulatunga 4fcd11f497 update coverage makefile with workaround. 2023-01-11 21:31:25 -08:00
Jason Kulatunga 7f1023fa9b temporary fix for #426
using the legacy OpenSSL provider to fix the `"error:0308010C:digital envelope routines::unsupported"` error.

See https://stackoverflow.com/questions/69692842/error-message-error0308010cdigital-envelope-routinesunsupported

We need to upgrade Angular version.
2023-01-11 21:30:16 -08:00
Jason Kulatunga d49497da80 update hub/spoke guide - thanks @TinJoy59
fixes #417
2023-01-11 18:03:49 -08:00
Jason Kulatunga d8c359bd8a instructions documenting added ability to trigger collector at startup.
Disabled by default, (enable by setting `-e COLLECTOR_RUN_STARTUP=true`)

Added COLLECTOR_RUN_STARTUP_SLEEP env variable to specify seconds before calling scrutiny collector on first run, default sleep value = 1s.
2023-01-11 17:49:58 -08:00
Jason Kulatunga ad4b117f6e fixes #412
Added ability to trigger collector at startup.
Disabled by default, (enable by setting `-e COLLECTOR_RUN_STARTUP=true`)

Added COLLECTOR_RUN_STARTUP_SLEEP env variable to specify seconds before calling scrutiny collector on first run, default sleep value = 1s.
2023-01-11 17:41:09 -08:00
Jason Kulatunga 22d2f9847c fixes #418
when using device type `sat,auto`, the scrutiny config parser will treat `type: 'sat,auto'` as a list, which causes the device to be treated like a raid disk. Force single command execution using `type: ['sat,auto']`.
2023-01-11 17:24:01 -08:00
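The workaround in this commit can be written in the collector config roughly as follows (a sketch; the device path is illustrative):

```yaml
# collector.yaml fragment (device path illustrative)
devices:
  - device: /dev/sda
    # the quoted string 'sat,auto' is split on the comma and treated like a raid disk;
    # an explicit single-element list forces one smartctl invocation
    type: ['sat,auto']
```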
packagrio-bot 1c7f299b98 (v0.6.0) Automated packaging of release by Packagr 2023-01-12 00:42:30 +00:00
Jason Kulatunga 602fdce0ee Merge pull request #425 from AnalogJ/update_shoutrrr 2023-01-11 16:37:01 -08:00
Jason Kulatunga 61fde6a2ca update shoutrrr version.
fixes #355
2023-01-11 16:17:25 -08:00
Jason Kulatunga d843bcc258 fix nightly build. 2022-12-02 07:13:52 -08:00
Jason Kulatunga f2856e0f26 adding a healthcheck endpoint that confirms communication with influxdb and sqlite.
ref: #381
2022-11-30 08:25:27 -08:00
Jason Kulatunga 58ef1aa311 update readme. 2022-11-30 07:56:01 -08:00
Jason Kulatunga 6a6570b8e3 trying to fix nodejs build of frontend. 2022-11-29 22:45:40 -08:00
Jason Kulatunga 29a0860caa trying to fix nodejs build of frontend. 2022-11-29 22:24:16 -08:00
Jason Kulatunga fb760a9f6d make sure we print an error if the config file is invalid.
fixes #408
2022-11-29 22:06:42 -08:00
Jason Kulatunga c9f13f4398 update README to correctly call-out the influxdb requirement in hub-spoke deployments.
references #409
2022-11-29 21:07:51 -08:00
Jason Kulatunga dd03a8cf63 Revert "trying to fix build"
This reverts commit 0a6ade4da9.
2022-11-28 08:16:45 -08:00
Jason Kulatunga 0a6ade4da9 trying to fix build 2022-11-28 08:13:19 -08:00
Jason Kulatunga 5c8c11d78b Merge pull request #407 from StratusFearMe21/patch-1 2022-11-28 07:47:01 -08:00
Jason Kulatunga 00502cc565 Merge pull request #406 from boomam/patch-1 2022-11-28 07:46:35 -08:00
StratusFearMe21 0febe3fda5 Looks like this is already implemented 2022-11-28 05:46:21 +00:00
boomam fcd4bb4561 Added new formatting
...to match the existing doc formatting standard.
2022-11-27 19:28:09 -05:00
boomam 89f763e65d Addition of notification testing command to troubleshooting
Addition of notification testing command to troubleshooting.
2022-11-27 19:27:10 -05:00
Jason Kulatunga 075eb94fa2 Merge pull request #394 from AnalogJ/skip_null_temp 2022-11-15 09:07:13 -08:00
adripo e9cf8a9180 fix: generic types 2022-11-12 22:27:07 +01:00
adripo 64ad353628 fix: remove fullcalendar 2022-11-12 22:26:03 +01:00
adripo 5518865bc6 fix: remove outdated option 2022-11-12 22:24:38 +01:00
adripo 50321d897a fix: prod build command 2022-11-12 22:24:02 +01:00
adripo e18a7e9ce0 refactor: update dependencies version 2022-11-12 22:23:37 +01:00
adripo 536b590080 feat: dynamic line stroke settings 2022-11-11 00:19:51 +01:00
Jason Kulatunga 098ce0673a Update TROUBLESHOOTING_DEVICE_COLLECTOR.md 2022-11-08 20:21:21 -08:00
Jason Kulatunga 2677796322 fixing bug. Null value for temperatures should be ignored. 2022-11-06 07:54:32 -08:00
Jason Kulatunga 5cc7fb30ed Merge pull request #391 from adripo/patch-1 2022-11-06 07:46:39 -08:00
adripo 222b8103d6 fix: increase timeout 2022-11-05 04:14:44 +01:00
Jason Kulatunga 727d5b0ace Update SUPPORTED_NAS_OS.md 2022-10-12 21:26:04 -07:00
Jason Kulatunga d7b45e5f01 update docs. 2022-10-12 20:58:41 -07:00
Jason Kulatunga 578a262d90 Merge pull request #372 from Robert-Zacchigna/master 2022-09-22 22:13:31 -07:00
Robert-Zacchigna c6e11f88b4 Minor Formatting Fix 2022-09-21 14:58:16 -05:00
Robert-Zacchigna b795331efb Doc for Manual Install on Windows 2022-09-21 14:51:14 -05:00
packagrio-bot f1e5bd3ed4 (v0.5.0) Automated packaging of release by Packagr 2022-08-04 15:11:04 +00:00
Jason Kulatunga d8d56f77f9 Merge pull request #352 from AnalogJ/beta 2022-08-04 08:07:56 -07:00
Jason Kulatunga 26b221532e fix tests. 2022-08-04 07:56:43 -07:00
Jason Kulatunga 15d3206f6f remove settings dialog from Details page. 2022-08-04 07:30:14 -07:00
Jason Kulatunga 59e2e928a8 remove the notify.level and notify.filter_attributes values from the example.scrutiny.yaml, since they are no longer allowed. 2022-08-03 23:12:09 -07:00
Jason Kulatunga 51f59e4fcd docs, added an explanation for why influxdb is required. 2022-08-03 22:59:19 -07:00
Jason Kulatunga f823127825 simplify logger creation (move logic into a function in main packages)
Ensure logger creation is consistent between Web and Collector
Create logger in main, pass down to downstream functions (like gin)
In debug mode, print a copy of AppConfig
Better debugging for logger.
2022-08-03 22:51:44 -07:00
Jason Kulatunga d41d535ab7 make sure that the device host id is provided in notifications (if available).
fixes #337
2022-08-03 20:55:34 -07:00
Jason Kulatunga 9a4a8de341 make sure the settings dialog width is 600px for readability. 2022-08-03 18:38:59 -07:00
Jason Kulatunga 2d6f60abaa attrHistory needs to be reversed, so the newest data is on the right
fixes #339
2022-08-03 18:23:58 -07:00
Jason Kulatunga d201f798fb Merge pull request #351 from AnalogJ/app_db_settings 2022-08-02 22:15:52 -07:00
Jason Kulatunga a1b0108503 Added PRAGMA settings support when connecting to the SQLite db.
When a transaction cannot lock the database because it is already locked by another one,
SQLite by default throws an error: database is locked. This behavior is usually not appropriate when
concurrent access is needed, typically when multiple processes write to the same database.
PRAGMA busy_timeout lets you set a timeout or a handler for these events. When setting a timeout,
SQLite will retry the transaction multiple times within this timeout.
https://rsqlite.r-dbi.org/reference/sqlitesetbusyhandler

Retrying for 30000 milliseconds (30 seconds) would be unreasonable for a distributed multi-tenant application,
but should be fine for local usage.

added mechanism for global settings (PRAGMA and DB level instructions).

fixes #341
2022-08-02 22:14:23 -07:00
Jason Kulatunga f0275d2349 Merge pull request #346 from KF5JWC/patch-3 2022-08-02 20:42:53 -07:00
Jason Kulatunga 9dafde8a43 Merge pull request #350 from MattKobayashi/docs_udev 2022-08-02 20:29:25 -07:00
Matthew Kobayashi fa8f86ab7b Add missing setup command 2022-08-03 11:02:51 +10:00
KF5JWC 41c9daa939 Make run_collect.sh executable
Synology task will fail when not executable:

```
/bin/bash: /volume1/@Entware/scrutiny/bin/run_collect.sh: Permission denied
```
2022-08-01 15:07:28 -05:00
Jason Kulatunga 83186ba36e Merge pull request #345 from KF5JWC/patch-2 2022-07-31 11:00:15 -07:00
KF5JWC 3205e3d022 Update INSTALL_SYNOLOGY_COLLECTOR.md
Typo: Created and loaded config into `conf/`, but specifies `config/` in argument
2022-07-31 00:07:04 -05:00
Jason Kulatunga 3f272b36d4 adding setting to allow users to customize between binary vs SI/Metric units in UI.
fixes #330
2022-07-30 08:50:23 -07:00
Jason Kulatunga b238579fe6 Merge pull request #343 from AnalogJ/app_db_settings
adding tests. Make sure that device status depends on the configured threshold
2022-07-30 08:05:19 -07:00
Jason Kulatunga ce2f990eb1 consolidate device status to string logic in DeviceStatusPipe.
Ensure device status takes into account new settings.
2022-07-29 07:11:57 -07:00
Jason Kulatunga b11b8732aa Merge pull request #342 from MattKobayashi/docs_udev 2022-07-29 06:43:49 -07:00
Matthew Kobayashi 5cd441da7b Add udev troubleshooting doc 2022-07-29 09:33:55 +10:00
Jason Kulatunga 2e768fb491 adding tests. Make sure that device status depends on the configured threshold. 2022-07-25 07:46:44 -07:00
Jason Kulatunga e8755ff617 Merge pull request #338 from AnalogJ/app_db_settings 2022-07-23 16:37:16 -07:00
Jason Kulatunga e41ee47371 filter attributes after notify 2022-07-23 16:21:53 -07:00
Jason Kulatunga 7a68a68e76 frontend, determine the device status by checking against the configured thresholds. 2022-07-23 16:11:49 -07:00
Jason Kulatunga 94594db20a on settings save, return the new settings.
update the frontend to persist settings to the database.
Using ScrutinyConfigService instead of TreoConfigService.
Using snake case settings in frontend.
Make sure we're using AppConfig type where possible.
2022-07-23 14:36:32 -07:00
Jason Kulatunga 7e672e8b8e adding tests for config.MergeConfigMap functionality. (Set vs SetDefault).
Converted all settings keys to snakecase.
2022-07-23 10:19:15 -07:00
Jason Kulatunga 54e2cacb00 move frontend settings into the DB (for consistent settings handling).
Flattened settings object.
2022-07-23 09:32:56 -07:00
Jason Kulatunga c0f1dfdb0b fixing config mock. 2022-07-20 22:38:30 -07:00
Jason Kulatunga 29bc79996b working settings update.
Settings are loaded from the DB and added to the AppConfig during startup.
When updating settings, they are stored in AppConfig, and written to the database.
2022-07-19 23:12:23 -07:00
Jason Kulatunga 99af2b8b16 WIP settings system.
- updated dbdiagrams schema
- [BREAKING] force failure if `notify.filter_attributes` or `notify.level` is set
- added Settings table (and default values during migration)
- Added Save Settings and Get Settings functions.
- Added web API endpoints for getting and saving settings.
- Deprecated old Notify* constants. Created new MetricsStatus* and MetricsNotifyLevel constants.
2022-07-17 10:32:28 -07:00
Jason Kulatunga dd0c3e6fba rename the migration model package name. 2022-07-16 22:07:50 -07:00
Jason Kulatunga 5b2746f389 initial settings table. 2022-07-16 21:50:48 -07:00
Jason Kulatunga e9c1de9664 update support table in README.
- freebsd binaries for collector and web working
- macos binaries for arm and amd.
2022-07-16 10:12:30 -07:00
Jason Kulatunga 6ca4bd4912 fix the WORKDIR for collector image.
fixes #335
2022-07-13 21:56:58 -07:00
packagrio-bot c34ee85e48 (v0.4.16) Automated packaging of release by Packagr 2022-07-12 16:02:04 +00:00
Jason Kulatunga 91e8eb1def Merge pull request #333 from AnalogJ/beta 2022-07-12 08:58:39 -07:00
Jason Kulatunga a01b8fe083 manually bump version. 2022-07-12 08:58:18 -07:00
Jason Kulatunga 550fb542d4 Merge pull request #328 from AnalogJ/beta
pre v0.4.16
2022-07-12 08:57:42 -07:00
Jason Kulatunga 7841063783 remove solaris. 2022-07-11 20:54:07 -07:00
Jason Kulatunga 8e05b2e2f8 Revert "add a solaris collector detect engine."
This reverts commit 64e1c93d16.
https://gitlab.com/cznic/sqlite does not support Solaris.
> build constraints exclude all Go files in /home/runner/work/scrutiny/scrutiny/vendor/modernc.org/libc/errno

related #120
2022-07-11 20:52:15 -07:00
Jason Kulatunga 64e1c93d16 add a solaris collector detect engine. 2022-07-11 20:48:30 -07:00
Jason Kulatunga b227054b52 error if any step fails. 2022-07-11 20:47:32 -07:00
Jason Kulatunga 66bd6f99c5 compiling solaris binaries
related #120
2022-07-11 20:38:54 -07:00
Jason Kulatunga c6579864b8 added instructions for how to create a scope-restricted InfluxDB API token for use with Scrutiny.
- fixes #249
2022-07-10 11:31:33 -07:00
Jason Kulatunga 2361c329e2 added USB instructions to troubleshooting guide.
fixes #266

added solaris to supported os list.
2022-07-10 09:01:35 -07:00
Jason Kulatunga 5ea149d878 upgrading to go 1.18 for generics (and lodash-like library).
devices with an empty wwn should be filtered out (not uploaded during device registration, skipped when attempting to upload metrics).
added a migration to delete existing device entries with an empty `wwn`

fixes #314
2022-07-09 18:28:49 -07:00
Jason Kulatunga 30bd18f816 updating docs. 2022-07-09 17:00:51 -07:00
Jason Kulatunga 0f0efac866 fix update, using raw flux script. 2022-07-09 10:42:30 -07:00
Jason Kulatunga 04563c0d0d ensure we have the ability to keep influxdb tasks up-to-date. 2022-07-09 10:05:48 -07:00
Jason Kulatunga 9316eccabe adding tests for tasks and aggregation queries (temp). 2022-07-09 08:48:36 -07:00
Jason Kulatunga b71d6660a6 adding typescript interfaces for type hinting and testing
some code reformatting
adding tests for services and components.
cleanup of unused dependencies in components.
refactor dashboard service so that wrapper is removed before data is passed to component. (no more this.data.data...).
refactored components so that variable names are consistent (dashboardService vs smartService).
ensure argument and return types are specified everywhere.
adding tests for pipes.

adding ng test to ci steps.

change dir before running npm install.

trying to install nodejs in container.

test frontend separately.

upload coverage for frontend and backend.

testing coverage file locations.

retry file upload.
2022-07-08 22:21:06 -07:00
Jason Kulatunga 0e2fec4e93 adding tests to frontend. 2022-07-08 22:19:43 -07:00
Jason Kulatunga ff171282cc Merge pull request #325 from AnalogJ/beta
make sure that make is installed when building binary frontend.
2022-07-07 08:56:28 -07:00
Jason Kulatunga ea8fe208d0 make sure that make is installed when building binary frontend. 2022-07-06 22:50:20 -07:00
Jason Kulatunga 9ae9c387cc Merge pull request #315 from AnalogJ/beta 2022-07-06 22:20:07 -07:00
Jason Kulatunga 772b4f6528 fix influxdb install. 2022-07-06 21:39:33 -07:00
Jason Kulatunga 4a16ca0d5a wip, migrate all scripts to new build pattern (Makefile + multiple GH agents). 2022-07-06 21:39:33 -07:00
Jason Kulatunga 316ce856f7 cleanup, remove -race flag when testing (requires CGO) 2022-07-06 21:39:33 -07:00
Jason Kulatunga 6e0321f488 add go.sum 2022-07-06 21:39:33 -07:00
Jason Kulatunga 338d2ae04e remove invalid freebsd arch.
2022-07-06 21:39:28 -07:00
Jason Kulatunga 4419f7f429 remove zig. remove cgo dependency for sqlite (using pkg.go.dev/modernc.org/sqlite) 2022-07-06 21:39:28 -07:00
Jason Kulatunga 797a6b0429 make sure we don't depend on tests for building binaries.
empty commit.

fix checkout.

fix zig.
2022-07-06 21:39:22 -07:00
Jason Kulatunga d0b545dfb7 fixing make frontend in docker builds. 2022-06-26 15:34:53 -07:00
Jason Kulatunga b0bff53bbd start refactoring the Makefile to build artifacts in parallel (eventually using Zig for cross compilation). 2022-06-26 15:26:20 -07:00
Jason Kulatunga b4adf3d88d cleanup before go generate (and multi-arch builds using zig). 2022-06-25 19:15:36 -07:00
packagrio-bot eefdc548b2 (v0.4.14) Automated packaging of release by Packagr 2022-06-25 22:13:15 +00:00
Jason Kulatunga fb918e2d6e Merge pull request #308 from AnalogJ/beta
pre v0.4.14 release
2022-06-25 15:03:35 -07:00
Jason Kulatunga 3d9001a5e4 when deviceType is not specified in the collector config, Scrutiny will ignore the device. We need to make sure we correctly override the device type.
fixes #255
2022-06-25 11:19:44 -07:00
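The override described above corresponds to the collector's per-device config. A minimal, hypothetical sketch (the key names follow the shape of example.collector.yaml and should be checked against the shipped example):

```yaml
# collector.yaml (hypothetical minimal excerpt)
devices:
  - device: /dev/sda
    type: 'sat'   # force the smartctl device type instead of auto-detection
```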
Jason Kulatunga fbe7d63a24 trying to fix tests. 2022-06-20 18:01:43 -07:00
Jason Kulatunga d718b0898b trying to fix tests. 2022-06-20 17:21:27 -07:00
Jason Kulatunga 44c7211b5f temp artifacts for #304 2022-06-20 13:32:53 -07:00
Jason Kulatunga 157c93b967 provide a mechanism to specify the absolute path to the smartctl binary used by metrics collector.
- fixes #304
2022-06-20 12:09:56 -07:00
Jason Kulatunga 7babc280a0 ensure that users can filter their notifications by:
- failing attribute type (Critical vs All)
- failure reason (Smart, Scrutiny, Both)

fixes #300
2022-06-20 08:15:06 -07:00
Jason Kulatunga e364e480e8 update Synology Guide. 2022-06-15 07:10:36 -07:00
Jason Kulatunga bfefe7e98a Merge pull request #303 from SiM22/patch-1 2022-06-15 07:05:29 -07:00
Simon Garcia 831cca7853 Create INSTALL_COLLECTOR_SYNOLOGY_AARCH64.md
A little tutorial to get the collector running on Synology
2022-06-15 09:56:32 +01:00
Jason Kulatunga 46f3b1c02c fix using linter. 2022-06-14 22:21:00 -07:00
Jason Kulatunga 8a1ae2ffa0 Update TROUBLESHOOTING_DEVICE_COLLECTOR.md 2022-06-14 21:41:22 -07:00
packagrio-bot 145c819fc1 (v0.4.13) Automated packaging of release by Packagr 2022-06-14 14:42:54 +00:00
Jason Kulatunga a9ea231de0 Merge pull request #301 from AnalogJ/disable_seek_read_error_rates 2022-06-14 07:33:45 -07:00
Jason Kulatunga c2488af1c3 Disable Seek & Read error rate attribute analysis. Causes issues with Seagate Ironwolf drives.
Added documentation.
2022-06-14 07:32:33 -07:00
Jason Kulatunga ecf7a447a7 Disable Seek & Read error rate attribute analysis. Causes issues with Seagate Ironwolf drives.
Added documentation.
2022-06-14 07:29:23 -07:00
Jason Kulatunga f8e61af2f9 adding docs. 2022-06-13 08:55:27 -07:00
Jason Kulatunga ee61d986d8 Update docker-nightly.yaml 2022-06-12 10:13:10 -07:00
Jason Kulatunga 8fe8cec09a Update TROUBLESHOOTING_DOCKER.md 2022-06-12 10:09:25 -07:00
packagrio-bot b953456d6b (v0.4.12) Automated packaging of release by Packagr 2022-06-11 23:32:42 +00:00
Jason Kulatunga 4057699cad Merge pull request #296 from AnalogJ/beta 2022-06-11 16:22:55 -07:00
Jason Kulatunga d3e7fc6067 make sure we don't create incorrect temp data. 2022-06-11 15:57:12 -07:00
Jason Kulatunga 09a8574d83 fixing tooltip theme. 2022-06-11 15:21:20 -07:00
Jason Kulatunga 7695cc185f color the barchart data in the sparklines, so that we know when a failure/warning was detected (historically) 2022-06-11 14:43:34 -07:00
Jason Kulatunga fc7208020e remove status reason click for more details text. 2022-06-11 12:18:27 -07:00
Jason Kulatunga 75d5930835 correctly using the latest data for table. 2022-06-11 11:00:00 -07:00
Jason Kulatunga 3c9e16169e correctly using the latest data for table. 2022-06-11 09:28:37 -07:00
Jason Kulatunga 9e1076f302 using constants for Attribute status values. 2022-06-11 09:17:35 -07:00
Jason Kulatunga 75ab87e109 Update TROUBLESHOOTING_INFLUXDB.md 2022-06-11 08:13:29 -07:00
Jason Kulatunga 0b8251fce2 Merge pull request #295 from AnalogJ/expanding_row 2022-06-11 08:07:03 -07:00
Jason Kulatunga f57b71ae96 updated tooltips in details page (click for more details). 2022-06-11 08:05:52 -07:00
Jason Kulatunga ce324c3de1 moved nightly build into its own ci job.
fixes #289
2022-06-11 07:47:25 -07:00
packagrio-bot 281b56d287 (v0.4.11) Automated packaging of release by Packagr 2022-06-11 03:29:04 +00:00
Jason Kulatunga cbd23e334b Merge pull request #293 from AnalogJ/beta 2022-06-10 19:48:22 -07:00
Jason Kulatunga 7a0b9c9e0d trying to fix docker image build. 2022-06-10 08:20:13 -07:00
Jason Kulatunga 44b3d982dd trying to fix docker image build. 2022-06-10 08:19:25 -07:00
Jason Kulatunga 769f253e7d fixing the table header for failures. 2022-06-09 22:42:21 -07:00
Jason Kulatunga fbd5bb57ac update descriptions for SCSI attributes. 2022-06-09 22:31:35 -07:00
Jason Kulatunga b9eb5687cd working on expanding row content. 2022-06-09 22:13:44 -07:00
Jason Kulatunga cbd230a7e0 wip expanding row for more details for attributes.
see https://stackblitz.com/angular/eaajjobynjkl?file=src%2Fapp%2Ftable-expandable-rows-example.html

see https://material.angular.io/components/table/examples#table-expandable-rows
2022-06-09 19:39:46 -07:00
Jason Kulatunga 892e9685f3 attempting to fix https://github.com/AnalogJ/scrutiny/issues/281 by removing static flag in artifact build 2022-06-09 19:34:16 -07:00
Jason Kulatunga 7ba7b6efda Update TROUBLESHOOTING_DOCKER.md 2022-06-09 07:35:07 -07:00
Jason Kulatunga 453069deec Create TROUBLESHOOTING_DOCKER.md 2022-06-09 07:34:14 -07:00
packagrio-bot de5f2c3324 (v0.4.10) Automated packaging of release by Packagr 2022-06-09 06:10:18 +00:00
Jason Kulatunga d486f14433 Merge pull request #291 from AnalogJ/beta 2022-06-08 22:52:41 -07:00
Jason Kulatunga cb47dd7185 revert s6-overlay changes. 2022-06-07 22:03:02 -07:00
Jason Kulatunga 6ae4d233cd update bug report form to require docker info output. 2022-06-07 21:42:39 -07:00
Jason Kulatunga f8bb185854 trying to fix seg fault issues. Attempting to consolidate on debian-bullseye for runtime docker images. 2022-06-07 21:29:15 -07:00
Jason Kulatunga 1da07caaa6 fix background color for details page history tooltip.
fixes #283
2022-06-07 20:17:25 -07:00
Jason Kulatunga fe96c27732 trying to fix webUI. 2022-06-07 19:51:05 -07:00
Jason Kulatunga 7287775cca trying to fix webUI. 2022-06-07 18:59:25 -07:00
Jason Kulatunga 28ac3ac7ec fix settings persistence. 2022-06-04 22:53:27 -07:00
packagrio-bot a6208c0d49 (v0.4.9) Automated packaging of release by Packagr 2022-06-05 05:24:54 +00:00
Jason Kulatunga 7840fe66da Merge pull request #280 from AnalogJ/beta 2022-06-04 22:15:53 -07:00
Jason Kulatunga 2ca44c967e simplify darkmode ui toggle. 2022-06-04 20:32:12 -07:00
Jason Kulatunga 4b767421f3 change highlight color for dark mode. 2022-06-04 19:01:18 -07:00
Jason Kulatunga 6005b8609a trying to fix docker image builds (takes 1h+ right now).
2022-06-04 12:22:07 -07:00
Jason Kulatunga df23ecdf33 fix typing for attribute status enum stored in database. 2022-06-04 09:42:45 -07:00
Jason Kulatunga f4988cbac5 try to speed up multi-arch docker builds by limiting qemu vm's to amd and arm only. 2022-06-04 09:29:17 -07:00
Jason Kulatunga f4f5d16b4a rename variable to themeUseSystem from darkModeUseSystem. 2022-06-04 08:18:40 -07:00
Jason Kulatunga 1c4dd33381 Merge pull request #276 from shamoon/dark-mode 2022-06-04 08:13:43 -07:00
Jason Kulatunga 9e0ba4d269 Merge branch 'beta' into dark-mode 2022-06-04 08:12:37 -07:00
Jason Kulatunga d9ecf6c0d3 make sure defaults are available if missing from localStorage
fixes #277
2022-06-04 08:08:45 -07:00
Michael Shamoon 8051ad4dde Tweak / fix some dark mode colors
Update styles.scss
2022-06-03 00:50:17 -07:00
Michael Shamoon 165f98dc09 Add settings UI for dark mode 2022-06-03 00:50:17 -07:00
Jason Kulatunga ca7772250c fix s6-overlay overwriting bin symlinks:
https://github.com/just-containers/s6-overlay/tree/v2.1.0.1#bin-and-sbin-are-symlinks

adding a makefile to build docker images locally.
2022-06-02 21:06:43 -07:00
Jason Kulatunga 6e02e4da02 fixing func def. 2022-06-02 12:21:54 -07:00
Jason Kulatunga 9c8498cea7 disable and re-enable bitwise operations 2022-06-02 12:20:50 -07:00
Jason Kulatunga 965fbb08da trying to fix installation. 2022-06-02 11:35:30 -07:00
Jason Kulatunga e16933eeac trying to fix installation. 2022-06-02 11:06:15 -07:00
Jason Kulatunga 4d0fc0eae8 trying to fix installation. 2022-06-02 10:49:22 -07:00
Jason Kulatunga 8296a973b8 trying to fix installation. 2022-06-02 10:48:44 -07:00
Jason Kulatunga 19a9957755 using ARG DEBIAN_FRONTEND=noninteractive 2022-06-02 10:40:28 -07:00
Jason Kulatunga 02e3947906 disable github action docker build caching - may be causing "cannot reuse body, request must be retried" errors 2022-06-02 10:22:55 -07:00
Jason Kulatunga 766a73455e update the base image for docker images to ubuntu:latest, which follows the LTS.
fixes #274
2022-06-02 10:04:36 -07:00
Jason Kulatunga 6e64ae09aa Update SUPPORTED_NAS_OS.md 2022-06-01 16:39:46 -07:00
Jason Kulatunga 411eca20e0 Update SUPPORTED_NAS_OS.md 2022-05-31 18:11:25 -07:00
Jason Kulatunga 0243d9e2fa Merge pull request #272 from BadCo-NZ/patch-1
Create INSTALL_PFSENSE.md
2022-05-31 18:10:33 -07:00
Jason Kulatunga 9aa0e97be0 display the device UUID and device Label in the details page.
fixes #265
2022-05-31 13:36:58 -07:00
Jason Kulatunga 488fcfc820 added AttributeStatus bit flag
ensure DeviceStatus is a valid bit flag.
[docs] added running tests section to contribution guide.
make sure UI correctly treats scrutiny failures as failed.
2022-05-31 13:31:34 -07:00
Jason Kulatunga b5dad487e5 updating bug report. 2022-05-31 11:32:58 -07:00
Jason Kulatunga 8b01187892 workaround for volume mount w/ privileged 2022-05-31 09:13:47 -07:00
Jason Kulatunga d9d6ce0f30 added documentation about exit codes. 2022-05-31 08:50:38 -07:00
BadCo-NZ 8d203b3547 Create INSTALL_PFSENSE.md
As requested by @AnalogJ
2022-05-30 10:17:51 +00:00
Jason Kulatunga fe5dbcff1e documentation changes. 2022-05-28 16:15:26 -07:00
Jason Kulatunga 99df104cdd documentation changes. 2022-05-28 15:50:05 -07:00
Jason Kulatunga a53397210c adding mechanism to override the smartctl commands used by scrutiny for device scanning, device identification and smart data retrieval.
adding tests for command overrides.

rename GetScanOverrides() to GetDeviceOverrides()

fixes #255
2022-05-28 15:32:44 -07:00
Jason Kulatunga 2533d8d34f using Constants for git release/debug modes. 2022-05-28 09:53:45 -07:00
Jason Kulatunga af2523cfee setting GinMode to release by default. Users get confused otherwise. 2022-05-28 09:50:06 -07:00
Jason Kulatunga c6e1663f8a Update README.md 2022-05-28 09:10:52 -07:00
Jason Kulatunga ab83c389f7 Update INSTALL_HUB_SPOKE.md 2022-05-28 07:39:42 -07:00
Jason Kulatunga 6d22702864 Update INSTALL_MANUAL.md 2022-05-27 22:34:32 -07:00
packagrio-bot d78957353d (v0.4.8) Automated packaging of release by Packagr 2022-05-28 02:05:59 +00:00
Jason Kulatunga b208493af9 Merge pull request #263 from AnalogJ/beta 2022-05-27 18:56:56 -07:00
Jason Kulatunga 4aa1485246 using device title pipe to consistently set the device name based on configuration setting.
adding device status pipe to set the device status in a more readable way.
2022-05-27 16:12:27 -07:00
Jason Kulatunga e1e1d321dd fix git.version.sh script. 2022-05-27 13:05:58 -07:00
Jason Kulatunga 3971b37abc attempting to fix docker image build by generating frontend version information before docker build. 2022-05-27 12:59:32 -07:00
Jason Kulatunga cf1bd3ea6b trying to fix docker build, so it includes git sha info. 2022-05-27 00:13:34 -07:00
Jason Kulatunga 9b901766e3 trying to fix docker build, so it includes git sha info. 2022-05-27 00:02:11 -07:00
Jason Kulatunga e19ee78e70 trying to fix docker build, so it includes git sha info. 2022-05-26 23:54:17 -07:00
Jason Kulatunga c7c55ab95c trying to fix docker build, so it includes git sha info. 2022-05-26 23:44:13 -07:00
Jason Kulatunga d7ddf01ea0 trying to fix docker build, so it includes git sha info. 2022-05-26 23:04:57 -07:00
Jason Kulatunga c539af1a67 trying to fix docker build, so it includes git sha info. 2022-05-26 22:53:00 -07:00
Jason Kulatunga d93d24b52d using npm run commands for building angular application (supports pre steps).
Automatically embed the application version in the UI.
2022-05-26 22:37:45 -07:00
Jason Kulatunga 5dbfad68ad fix titles. 2022-05-26 21:51:02 -07:00
Jason Kulatunga 92c4506cfa adding a Caddy example to the TROUBLESHOOTING_REVERSE_PROXY.md guide
fixes #257
2022-05-26 21:49:57 -07:00
Jason Kulatunga fe80bed6bd adding a Caddy example to the TROUBLESHOOTING_REVERSE_PROXY.md guide
fixes #257
2022-05-26 21:48:27 -07:00
Jason Kulatunga b6e69021b2 ensure that the base href is set, as it's required when reloading subpages.
fixes #264
2022-05-26 21:25:36 -07:00
Jason Kulatunga 12e624a496 updating CONTRIBUTING.md guide. 2022-05-26 19:06:48 -07:00
Jason Kulatunga e95b44c690 make sure we use a reasonable number of decimal points for converted temps. 2022-05-26 14:21:46 -07:00
Jason Kulatunga 4ee947d55c trying to fix compilation/typing issues. 2022-05-26 13:49:59 -07:00
Jason Kulatunga 21212c0a1d add setting to change temperature between C and F.
fixes #175
2022-05-26 13:04:15 -07:00
Jason Kulatunga d1376a2200 serve all fonts locally
fixes #125
2022-05-26 10:12:59 -07:00
Jason Kulatunga 7d2daf4f6a add ability to sort devices by age (powered-on-hours)
fixes #100
2022-05-26 08:33:30 -07:00
Jason Kulatunga da4562d308 fixed UI issues related to deleting (component is now correctly removed from the dashboard device list).
fixes #69
2022-05-26 00:16:13 -07:00
Jason Kulatunga f51de52ff7 hide device dashboard component if deletion finishes successfully. 2022-05-25 19:19:11 -07:00
Jason Kulatunga 987632df39 working deletion code. 2022-05-25 19:02:30 -07:00
Jason Kulatunga 28a3c3e53f [WIP] Delete button for devices. 2022-05-25 17:55:23 -07:00
Jason Kulatunga 1bd86f5abd [WIP] Delete button for devices. 2022-05-25 14:59:55 -07:00
Jason Kulatunga 989fbc25f8 latest tag should consistently point to omnibus versions. 2022-05-25 09:52:17 -07:00
packagrio-bot 0f935ceb48 (v0.4.7) Automated packaging of release by Packagr 2022-05-25 15:04:55 +00:00
Jason Kulatunga f844a435fd fix error message. 2022-05-25 07:55:15 -07:00
Jason Kulatunga 3a970e7a27 Merge pull request #262 from AnalogJ/beta
pre-v0.4.7 release
2022-05-25 07:50:21 -07:00
Jason Kulatunga 307c2bcdef fix error message.
Simpler GormMigrateOptions.
2022-05-25 07:39:56 -07:00
Jason Kulatunga d62928aaae adding documentation for script based notifications. 2022-05-24 15:15:37 -07:00
Jason Kulatunga d08a1e3ef6 ignore retention policy errors during migration.
- fixes #256
2022-05-24 14:26:40 -07:00
Jason Kulatunga 2292041f9f never drop tables. 2022-05-24 10:00:41 -07:00
Jason Kulatunga 75e4bf1d6e added a helpful comment that the database migration might take a looong time. 2022-05-24 09:47:09 -07:00
Jason Kulatunga 97add04276 make sure the migration step runs with transactions, so that we can debug easier.
- related #256
2022-05-24 09:07:38 -07:00
Jason Kulatunga 1423f55d78 remove Power Cycle Count failure attribute for ATA drives. Unrealistic for consumer users (BackBlaze data is datacenter focused).
- fixed #31
2022-05-23 10:19:12 -07:00
Jason Kulatunga 46d0b70399 disable NVMe Scrutiny failures for "Numb Error Log Entries" attribute. More analysis needed for NVMe drives & their critical attributes.
- fixes #187
- fixes #247
2022-05-23 09:50:15 -07:00
Jason Kulatunga 168ca802d1 add support for specifying scheme for influxdb endpoint url (http vs https).
fixes #258
2022-05-23 09:34:16 -07:00
Jason Kulatunga 8c07e91f39 grey out and mark thresholds as not yet implemented. 2022-05-23 09:23:22 -07:00
Jason Kulatunga 7979950c3b fixing device sort and display title.
fixes #194
2022-05-23 08:49:51 -07:00
Jason Kulatunga 9846ba13e0 adding support for device sort in UI. 2022-05-23 08:19:58 -07:00
Jason Kulatunga 83839f7faf adding group by hostId support in dashboard.
fixes #151
2022-05-20 22:33:59 -07:00
Jason Kulatunga 85fa3b1f8f moved device summary info panel into isolated component. 2022-05-20 21:55:40 -07:00
Jason Kulatunga 4190f9a633 remove filter not implemented message. 2022-05-20 20:59:29 -07:00
Jason Kulatunga 743ce27d2e adding comment. 2022-05-20 20:59:29 -07:00
Jason Kulatunga 399a2450ff make sure we can change the temperature duration key for the chart. 2022-05-20 20:59:29 -07:00
Jason Kulatunga 934f16f0a5 persist settings across sessions (in local storage). 2022-05-20 20:59:29 -07:00
Jason Kulatunga 0aeb13c181 support custom display of devices by UUID/ID/Label & Scrutiny Name. (Does not persist).
Related #225
2022-05-20 20:59:29 -07:00
Jason Kulatunga 5899bf2026 started working on Dashboard UI sorting and naming 2022-05-20 20:59:29 -07:00
Jason Kulatunga 3b137964fc make sure we include the host id in the temp history label. 2022-05-20 20:59:29 -07:00
Jason Kulatunga 1bfdd0043f added a way to retrieve raw udev data. Can be used to retrieve disk label, UUID and "disk/by-id/*" device info.
Storing it in the database during device registration.
2022-05-20 20:59:29 -07:00
Jason Kulatunga 999c12748c added a way to retrieve raw udev data. Can be used to retrieve disk label, UUID and "disk/by-id/*" device info.
Storing it in the database during device registration.
2022-05-20 20:59:29 -07:00
Jason Kulatunga 6f283fd736 Update README.md 2022-05-20 10:25:02 -07:00
packagrio-bot 65d31046a0 (v0.4.6) Automated packaging of release by Packagr 2022-05-20 17:02:35 +00:00
Jason Kulatunga 601d632ae4 update xgo version. 2022-05-20 09:52:24 -07:00
Jason Kulatunga 8466c5e750 upgrade to v2.9.0 for influxdb sdk -- this includes the SetupWithToken method. 2022-05-20 09:18:01 -07:00
Jason Kulatunga aa786c0db8 upgrade to go 1.17 2022-05-18 09:40:52 -07:00
Jason Kulatunga f3faee389b trying to fix the docker builds. 2022-05-18 09:30:37 -07:00
Jason Kulatunga 5ac0aa8f74 Forked InfluxDB SDK and added support for using pre-generated admin token during setup. This ensures we no longer need to persist the token during startup.
fixes #248
2022-05-18 09:14:05 -07:00
Jason Kulatunga a589d11d01 update influxdb host default to localhost. 2022-05-17 09:39:03 -07:00
packagrio-bot 1a05868381 (v0.4.5) Automated packaging of release by Packagr 2022-05-15 05:29:17 +00:00
Jason Kulatunga a35c3bae08 make 50-cron-config and entrypoint-collector.sh mirrors of each other. This is for ease of maintainability.
related #121
2022-05-14 22:04:46 -07:00
Jason Kulatunga 0c908786e0 Update TROUBLESHOOTING_INFLUXDB.md 2022-05-14 18:13:25 -07:00
Jason Kulatunga 2ba196d6a8 added instructions about upgrading from 0.3.x to 0.4.x 2022-05-13 16:30:07 -07:00
Jason Kulatunga 0e00999d79 added instructions about upgrading from 0.3.x to 0.4.x 2022-05-13 16:29:07 -07:00
Jason Kulatunga 5adceeb9a5 update with SMART vs Scrutiny details. 2022-05-12 21:58:36 -07:00
packagrio-bot b5920e35e3 (v0.4.4) Automated packaging of release by Packagr 2022-05-13 02:00:33 +00:00
Jason Kulatunga fb8f248366 add image for instructions. 2022-05-12 17:29:13 -07:00
Jason Kulatunga 7a931bd018 instructions for how to get your influxdb admin token + other troubleshooting tricks. 2022-05-12 16:00:04 -07:00
Jason Kulatunga a004f85145 fixing cron.
related #121
2022-05-12 15:24:51 -07:00
Jason Kulatunga eeb086c77f using export -p rather than printenv to export environment variables (export -p correctly wraps envs in quotes) 2022-05-12 15:06:29 -07:00
Jason Kulatunga 54b195f851 updating ignore file for testing. 2022-05-12 14:09:20 -07:00
Jason Kulatunga 5dc79134b2 cron file consistent logging (still broken) 2022-05-12 13:20:54 -07:00
Jason Kulatunga f3fad47d9e cleanup example config files. added instructions for influxdb. 2022-05-12 10:33:54 -07:00
Jason Kulatunga 489534cb73 update manual instructions to include InfluxDB installation. 2022-05-12 10:30:36 -07:00
Jason Kulatunga e7801619cd added additional tests from #187.
Detected that the frontend was incorrectly classifying Scrutiny Failures as Warnings.

Fixed.
2022-05-12 10:04:06 -07:00
Jason Kulatunga 7b75b5f9bb update docker instructions. 2022-05-12 09:26:00 -07:00
Jason Kulatunga 0022d848d6 added example hub/spoke docker-compose file. 2022-05-12 09:26:00 -07:00
Jason Kulatunga d47c4ea99a added example hub/spoke docker-compose file. 2022-05-12 09:26:00 -07:00
Jason Kulatunga 62354f2ab8 created example omnibus docker-compose file. 2022-05-12 09:26:00 -07:00
Jason Kulatunga 3a0adb406f Merge pull request #243 from Zorlin/master
Add Ansible instructions
2022-05-12 09:21:36 -07:00
Jason Kulatunga 2a39421524 Merge pull request #245 from henfri/patch-1 2022-05-11 11:34:52 -07:00
henfri a7dc68822f Update docker-compose.yml
to fix https://github.com/AnalogJ/scrutiny/issues/241
2022-05-11 20:09:40 +02:00
Benjamin Arntzen 3ad87aecc6 Add Ansible instructions 2022-05-11 10:06:09 +08:00
Jason Kulatunga 2ab714f575 Update README.md 2022-05-10 15:33:53 -07:00
Jason Kulatunga 9e60fb8d73 schedule the collector upload before the webapp upload (webapp seems to fail with OOM error during compilation) 2022-05-10 08:49:52 -07:00
Jason Kulatunga a846522830 disable sponsor label. broken. 2022-05-10 08:42:35 -07:00
Jason Kulatunga 3d7d276236 enable caching on all docker builds. 2022-05-09 21:55:48 -07:00
Jason Kulatunga 2660af7ce3 build arm32v7 images for web and collector (cannot for omnibus because InfluxDB does not support it yet. See #236 ) 2022-05-09 21:09:53 -07:00
Jason Kulatunga f5af86fd46 using docker build cache for speedier builds 2022-05-09 20:37:15 -07:00
Jason Kulatunga 4bad2d7b03 upgrading github actions for docker builds. (hopefully faster). 2022-05-09 20:21:52 -07:00
Jason Kulatunga a79930916e only run CI on PR. 2022-05-09 19:58:52 -07:00
packagrio-bot 9ea283e8d2 (v0.4.3) Automated packaging of release by Packagr 2022-05-10 02:57:43 +00:00
Jason Kulatunga bd6d192006 fix packagr config. 2022-05-09 19:49:18 -07:00
Jason Kulatunga 39848eda0b provide info about freebsd binaries. 2022-05-09 19:28:42 -07:00
packagrio-bot 2f67d6f9ae (v0.4.2) Automated packaging of release by Packagr 2022-05-10 02:17:50 +00:00
Jason Kulatunga 30153f9656 adding mechanism to rebuild freebsd artifacts manually. 2022-05-09 18:42:51 -07:00
Jason Kulatunga 3150201348 adding mechanism to rebuild freebsd artifacts manually. 2022-05-09 18:38:46 -07:00
Jason Kulatunga bce6225e9a added list of supported architectures. 2022-05-09 18:28:08 -07:00
Jason Kulatunga 893774c557 trying to fix freebsd builds. 2022-05-09 18:20:47 -07:00
Jason Kulatunga 145996055a use locked versions of database models when doing migrations. 2022-05-09 17:06:02 -07:00
Jason Kulatunga 1cae5ea864 create a "beta" branch for testing docker images. 2022-05-09 16:44:06 -07:00
Jason Kulatunga 0fe6e74eb4 clarification to REVERSE_PROXY docs. 2022-05-09 16:41:25 -07:00
Jason Kulatunga eb4a738746 documentation & updates for configuring scrutiny behind a reverse proxy with path rewriting. 2022-05-09 16:37:05 -07:00
packagrio-bot 90e5d219a2 (v0.4.1) Automated packaging of release by Packagr 2022-05-09 23:24:04 +00:00
Jason Kulatunga c397a3237f Merge pull request #237 from AnalogJ/pr_93_changes 2022-05-09 15:07:34 -07:00
Jason Kulatunga 381a6799cc updates for v0.4.0 release. Slight refactor/organization. 2022-05-09 14:50:35 -07:00
Jason Kulatunga 54178eaaf0 Merge branch 'master' into BASEPATH 2022-05-09 14:29:48 -07:00
Jason Kulatunga a6e34f7a44 fix for labeling issue. 2022-05-09 14:09:58 -07:00
Jason Kulatunga 3e444b199a make sure releases (tag events) have unique docker image builds. 2022-05-09 13:48:34 -07:00
packagrio-bot a2a80f3102 (v0.4.0) Automated packaging of release by Packagr 2022-05-09 20:05:53 +00:00
Jason Kulatunga 833fb96c15 fixing publish release step. 2022-05-09 12:55:40 -07:00
Jason Kulatunga 5acc98d221 trying to fix release asset step. 2022-05-09 10:01:29 -07:00
Jason Kulatunga d9632ae34b Merge pull request #235 from AnalogJ/change_scrutiny_default_path 2022-05-09 09:50:22 -07:00
Jason Kulatunga de702414b9 moving all filesystem references from /scrutiny to /opt/scrutiny
fixes #230
2022-05-09 09:29:39 -07:00
Jason Kulatunga db73175f07 fixing release GH Action. 2022-05-09 09:05:44 -07:00
Jason Kulatunga 6a9db6a92b Merge pull request #222 from AnalogJ/influxdb 2022-05-09 08:45:39 -07:00
Jason Kulatunga 2967b6ca01 make sure that we set the config path when ReadConfig is called. 2022-05-07 20:11:17 -07:00
Jason Kulatunga 5ed69d7fc4 adding tests for Smart and parser. 2022-05-06 22:15:19 -07:00
Jason Kulatunga 2b5c864a74 update the codecov action version. 2022-05-06 20:38:20 -07:00
Jason Kulatunga 21d07a0712 adding tests for Detect struct in collector. Adding ability to mock out exec.Command calls. 2022-05-06 20:07:23 -07:00
Jason Kulatunga 2214febbd1 simple rename. 2022-05-06 18:38:09 -07:00
Jason Kulatunga 786e7d04f2 make sure we print the overall device status in the details page. 2022-05-06 17:25:40 -07:00
Jason Kulatunga 0cee744c29 highlight last updated dates when more than 2 weeks or 1 month. 2022-05-06 17:02:56 -07:00
Jason Kulatunga f39628efc3 by default show all temp data. 2022-05-06 15:08:35 -07:00
Jason Kulatunga 87ba8ff632 better message for what services we're currently waiting for. 2022-05-06 10:25:08 -07:00
Jason Kulatunga 8ea194b6ba fixing download command (using curl). 2022-05-06 09:54:13 -07:00
Jason Kulatunga d26e452a4a fixing download command (using curl). 2022-05-06 09:52:01 -07:00
Jason Kulatunga 77da0f5a57 fixing download command (using curl). 2022-05-06 09:48:25 -07:00
Jason Kulatunga 2350c1335c fix multi-arch builds. 2022-05-06 09:36:22 -07:00
Jason Kulatunga a893d2db47 fix multi-arch builds. 2022-05-06 09:30:37 -07:00
Jason Kulatunga 6fbe710d09 Merge pull request #228 from AnalogJ/multiarch_builds 2022-05-06 09:00:49 -07:00
Jason Kulatunga d48d0b9d93 build multi arch images 2022-05-06 09:00:14 -07:00
Jason Kulatunga 57c0f899c6 build multi arch controller image. 2022-05-06 08:51:42 -07:00
Jason Kulatunga 5bab9ac04a make sure we can correctly save the config file if onboarding influx. 2022-05-05 23:25:00 -07:00
Jason Kulatunga fabc629e40 handle case where WWN not detected for a device (print error messages, but skip device collection & uploading). 2022-05-05 23:03:06 -07:00
Jason Kulatunga 3dbe59781c Merge pull request #224 from AnalogJ/influx_migrations 2022-05-05 08:08:55 -07:00
Jason Kulatunga 1ced2198c7 cleanup log messages. 2022-05-04 21:04:58 -07:00
Jason Kulatunga 5f12fbb510 enable final migration cleanup. 2022-05-04 20:50:17 -07:00
Jason Kulatunga 702518579b fixed summary query. 2022-05-04 20:40:48 -07:00
Jason Kulatunga fc5a9ba15e fixed device processing in details page. Summary query is still broken. 2022-05-04 20:13:11 -07:00
Jason Kulatunga 8fe0dbed6b partially working. Some datapoints are failing with panic and are silently ignored.
TODO must fix.
2022-05-03 22:40:31 -07:00
Jason Kulatunga 7d963c96a6 writing pseudocode algorithm for data migration. 2022-05-03 12:03:08 -07:00
Jason Kulatunga 2750ccef4a call out deprecated structs so they are not accidentally used via autocomplete. 2022-05-03 11:52:47 -07:00
Jason Kulatunga 9d85920f49 started working on migration code. 2022-05-03 11:50:22 -07:00
Jason Kulatunga 97f6564c1e adding documentation for smartctl exit codes and additional test. 2022-05-02 10:26:25 -07:00
Jason Kulatunga 646d0eff15 change the environmental variable to COLLECTOR_CRON_SCHEDULE 2022-05-01 22:13:11 -07:00
Jason Kulatunga ce3d45e543 instructions in readme for how to override cron schedule. 2022-05-01 22:09:03 -07:00
Jason Kulatunga 75de6ebfe0 adding ability to customize the scrutiny collector cron schedule using SCRUTINY_COLLECTOR_CRON_SCHEDULE env variable.
similar to https://github.com/linuxserver/docker-scrutiny/pull/19

fixes https://github.com/linuxserver/docker-scrutiny/issues/17
fixes https://github.com/AnalogJ/scrutiny/issues/161
fixes https://github.com/AnalogJ/scrutiny/issues/66
fixes https://github.com/AnalogJ/scrutiny/issues/202
fixes https://github.com/AnalogJ/scrutiny/issues/121
2022-05-01 22:01:34 -07:00
Jason Kulatunga 49c1ef6a37 fixing local persistent dir for influxdb in omnibus. 2022-05-01 21:15:04 -07:00
Jason Kulatunga ae99ffd57e using github container registry images.
update documentation to enable persistence by default
update docs to include influxdb
2022-05-01 21:08:18 -07:00
Jason Kulatunga 08f247109a using environmental variable in files path. 2022-04-30 22:30:13 -07:00
Jason Kulatunga 36617c8f1f using environmental variable in files path. 2022-04-30 22:25:45 -07:00
Jason Kulatunga 035c94681f referencing Github container registry for all images. 2022-04-30 22:24:05 -07:00
Jason Kulatunga 20411afb82 adding name for coverage step. 2022-04-30 22:20:06 -07:00
Jason Kulatunga 84f3327790 adding codecov coverage support. 2022-04-30 22:19:16 -07:00
Jason Kulatunga ccbb9225c4 remove darwin builds. 2022-04-30 22:03:17 -07:00
Jason Kulatunga 8d052f0265 update the bug report template 2022-04-30 21:33:40 -07:00
Jason Kulatunga cfe77c9a36 install tzdata package everywhere.
fixes #78
fixes
2022-04-30 21:29:45 -07:00
Jason Kulatunga 62e5a71eb0 build darwin/amd64 and darwin/arm64 binaries. 2022-04-30 21:27:45 -07:00
Jason Kulatunga 0dba9f8011 Merge branch 'master' into influxdb 2022-04-30 21:21:53 -07:00
Jason Kulatunga d42faf30b0 fix WriteConfig interface. 2022-04-30 21:17:57 -07:00
Jason Kulatunga 5a4bcda1ec adding more docs. 2022-04-30 21:13:18 -07:00
Jason Kulatunga e243d55153 Document NVMe block device vs device controller binding.
Fixes #209.
2022-04-30 16:09:16 -07:00
Jason Kulatunga 9ee2674804 started writing a TROUBLESHOOTING guide for the device collector. 2022-04-30 15:59:45 -07:00
Jason Kulatunga 5a1e390acd started writing a TROUBLESHOOTING guide for the device collector. 2022-04-30 15:57:09 -07:00
Jason Kulatunga 5fb5b9afbe if we're completing the InfluxDB setup via automation, attempt to store the token in the config file automatically. 2022-04-30 15:56:48 -07:00
Jason Kulatunga 9ebf252d4f information about downsampling. 2022-04-30 11:47:17 -07:00
Jason Kulatunga 88a99a1c90 information about downsampling. 2022-04-29 23:08:39 -07:00
Jason Kulatunga 9a5c667437 tweak the wait script. 2022-04-29 22:48:10 -07:00
Jason Kulatunga 8462d21e14 waiting for influxdb before starting scrutiny app. 2022-04-29 22:16:56 -07:00
Jason Kulatunga 702c7cdf7a if running tests in github actions, use influxdb service for testing. 2022-04-29 16:39:14 -07:00
Jason Kulatunga 00bc6ecd92 make sure we can pull config from env variables. 2022-04-29 16:28:50 -07:00
Jason Kulatunga 7cd828ef0d update the influxdb version in the standalone container. 2022-04-29 16:20:35 -07:00
Jason Kulatunga bd39b2cd4d fixes for aggregation. 2022-04-29 16:11:12 -07:00
Jason Kulatunga 0a9d364aea adding duration key to smart attributes api endpoint 2022-04-29 15:26:15 -07:00
Jason Kulatunga f60636a6aa broke scrutiny_repository.go into multiple files for easier exploration & maintenance. 2022-04-28 22:33:09 -07:00
Jason Kulatunga 7a7771981a broke scrutiny_repository.go into multiple files for easier exploration & maintenance. 2022-04-28 22:29:09 -07:00
Jason Kulatunga 8e34ef8d79 Merge pull request #212 from PeterDaveHello/patch-1
Improve README.md a little bit
2022-04-28 17:49:05 -07:00
Jason Kulatunga f569ab6474 [BROKEN COMMIT]
This code leverages the new `types.isType` functionality introduced in the flux language (https://github.com/influxdata/flux/issues/2159)

This code will fix https://github.com/AnalogJ/scrutiny/issues/22 and all related issues.

Unfortunately this code is broken because the influxdb go client library does not correctly handle import statements in the task definition.

blocked by
https://github.com/influxdata/influxdb-client-go/issues/322
2022-04-27 22:41:56 -07:00
Peter Dave Hello a8952eff0c Improve README.md a little bit
Set indentation of docker commands, and set the language of code block to enable syntax highlight in README.md
2022-01-05 21:03:37 +08:00
Jason Kulatunga 903d5713fc fixes for tests. 2021-11-21 14:39:39 -08:00
Jason Kulatunga 0872da57d7 fixes for tests. 2021-11-17 21:08:56 -08:00
Jason Kulatunga 47e8595c9d using constant vars for duration key magic strings. Fixing Errorf calls to correctly have template data. 2021-11-17 20:50:18 -08:00
Jason Kulatunga bff83de3a0 query temp data across multiple buckets 2021-11-17 18:35:50 -08:00
Jason Kulatunga 03bfdd3890 changing the duration dropdown for temp history data. adding an /api/summary/temp endpoint 2021-11-16 20:39:09 -08:00
Jason Kulatunga 772063a843 find the temp history for the last week (by default). Smooth out data using aggregate window for hourly numbers. Better temp casting during influx data inflating. 2021-11-16 19:07:37 -08:00
Jason Kulatunga b776fb8886 tweaking retention policy code so we can test downsampling scripts. 2021-11-16 18:32:29 -08:00
Jason Kulatunga 8fb58591a6 fixes for Scrutiny end-to-end testing. 2021-10-28 07:41:41 -07:00
Jason Kulatunga ce032c5609 fixes for Scrutiny end-to-end testing. 2021-10-28 07:35:25 -07:00
Jason Kulatunga 060ac7b83a fixes https://github.com/AnalogJ/scrutiny/issues/179 2021-10-25 21:03:06 -07:00
Jason Kulatunga 3ee342763c publish release using old mechanism. 2021-10-24 23:06:53 -07:00
Jason Kulatunga e572c2051f specify metadata path 2021-10-24 22:22:58 -07:00
Jason Kulatunga a4389b176d we're already building windows binaries. Lets make sure we attach them as well. 2021-10-24 22:09:57 -07:00
Jason Kulatunga c7603c8f62 karalabe/xgo-1.13.x is unmaintained, moving to techknowlogick/xgo:go-1.13.x 2021-10-24 22:05:28 -07:00
Jason Kulatunga 7b7b4fe4e3 fixing test. 2021-10-24 19:30:29 -07:00
Jason Kulatunga 5789c836db make sure the status is always exposed in the json data. make sure display_name for metadata is included. Update mocked test data for frontend. 2021-10-24 17:09:44 -07:00
Jason Kulatunga deba21fe19 update timestamps for testing. 2021-10-24 16:26:41 -07:00
Jason Kulatunga 31b5dfa038 ensure that all buckets are created during init. Remove all references to "name" field for attributes (should come from metadata instead). Status is now an int64 (0 is passing). 2021-10-24 16:01:53 -07:00
Jason Kulatunga 9878985fa3 adding aggregation code 2021-10-24 13:07:12 -07:00
Jason Kulatunga fe3f38ae54 fixing packagr actions for publishing artifacts. 2021-10-23 17:03:31 -07:00
Jason Kulatunga 315b43ea05 fixing packagr actions for publishing artifacts. 2021-10-23 16:49:26 -07:00
Jason Kulatunga d346a394e6 fixing packagr actions for publishing artifacts. 2021-10-23 12:57:59 -07:00
Jason Kulatunga 5880e0af86 using packagr actions for publishing artifacts. 2021-10-23 12:46:45 -07:00
Jason Kulatunga 0460985bc5 fix windows asset builder 2021-10-23 12:28:30 -07:00
Jason Kulatunga 0d580932ad fix bumpr 2021-10-23 12:13:22 -07:00
Jason Kulatunga f26dbdf8b6 trigger windows binary creation on every release. 2021-10-23 11:02:43 -07:00
Jason Kulatunga 33e370e90f fix paths. 2021-10-23 11:02:14 -07:00
Jason Kulatunga 5c492986a5 fixes for windows collector build. 2021-10-23 10:58:56 -07:00
Jason Kulatunga fb564afe30 fixes for windows collector build. 2021-10-23 10:57:02 -07:00
Jason Kulatunga a4d151e3f7 fixes for windows collector build. 2021-10-23 10:54:25 -07:00
Jason Kulatunga 124973d9fa Merge pull request #197 from AnalogJ/windows_builds 2021-10-23 10:49:14 -07:00
Jason Kulatunga abfaa5259e adding windows collector. 2021-10-23 10:47:56 -07:00
Jason Kulatunga 975c034925 WIP downsample scripts. 2021-10-23 10:35:32 -07:00
Jason Kulatunga abe7a16507 creating influxdb config file during startup. 2021-08-06 23:26:40 -07:00
Jason Kulatunga 6d4196085e Update README.md 2021-08-02 21:53:44 -07:00
Jason Kulatunga fa5d0e44c4 Update README.md 2021-08-02 21:52:45 -07:00
Jason Kulatunga a60edfff26 fixing mocked data 2021-08-01 10:42:38 -07:00
Jason Kulatunga b197d349f4 setting the image type as suffix. 2021-07-31 21:24:45 -07:00
Jason Kulatunga c97388ff60 tagging. 2021-07-31 21:24:45 -07:00
Jason Kulatunga c8768387f6 adding docker build -> ghcr.io 2021-07-31 21:24:45 -07:00
Jason Kulatunga bd19230cbf make sure data is persisted to DB. 2021-07-25 22:32:13 -07:00
Jason Kulatunga 80f4660130 validate thresholds whenever SMART data is received. 2021-07-25 22:11:07 -07:00
Jason Kulatunga 1fc910f41b retest 2021-07-25 15:58:16 -07:00
Jason Kulatunga 967a927e2d setting the image type as suffix. 2021-07-25 15:52:07 -07:00
Jason Kulatunga ef014150a5 tagging. 2021-07-25 15:39:30 -07:00
Jason Kulatunga 5c614c5512 adding docker build -> ghcr.io 2021-07-25 14:51:28 -07:00
Jason Kulatunga 694fc74ca0 fixing history. 2021-06-27 13:44:01 -07:00
Jason Kulatunga 8a46931399 !!!!WIP!!!!
adding InfluxDB

- influxdb added to dockerfile
- influxdb s6 service
- influxdb config
- adding defaults to config
- creating a DeviceRepo interface (multiple db backends)
- implemented DeviceRepo interface as ScruitnyRepository
2021-06-27 10:55:18 -07:00
Andrea Spacca a7c8c75a49 fix new test 2021-05-30 10:17:09 +02:00
Andrea Spacca 2db6465639 BASEPATH 2021-05-30 10:17:07 +02:00
Andrea Spacca 8ac3ab79a4 cr fixes 2021-05-30 10:14:48 +02:00
Andrea Spacca 234a8f9b01 cr fixes 2021-05-30 10:14:48 +02:00
Andrea Spacca 54baeb4c4e trigger check 2021-05-30 10:14:48 +02:00
Andrea Spacca 48bc7cedf4 test cases 2021-05-30 10:14:48 +02:00
Andrea Spacca 9fc11b7140 BASEPATH 2021-05-30 10:14:48 +02:00
Andrea Spacca ea3fbc09f1 BASEPATH 2021-05-30 10:14:48 +02:00
Andrea Spacca 86145be2b1 BASEPATH 2021-05-30 10:14:48 +02:00
Jason Kulatunga fd4f0429e4 (0.3.12) Automated packaging of release by Packagr
Signed-off-by: Jason Kulatunga <jason@thesparktree.com>
2021-04-26 17:13:21 +00:00
Jason Kulatunga 33d4b99a4b fix collector. 2021-04-26 10:03:40 -07:00
Jason Kulatunga 9be57f2271 (0.3.11) Automated packaging of release by Packagr
Signed-off-by: Jason Kulatunga <jason@thesparktree.com>
2021-04-26 06:33:28 +00:00
Jason Kulatunga 8196447526 Merge pull request #170 from AnalogJ/fixes_webui
fixing ui when visible on small screen.
2021-04-25 23:24:22 -07:00
Jason Kulatunga 712119cb5e fixing ui when visible on small screen.
tweak local contrib instructions.
Fixing javascript mediaquery breakpoint for small screen.
2021-04-25 23:22:29 -07:00
Jason Kulatunga 644a9418dd (0.3.10) Automated packaging of release by Packagr
Signed-off-by: Jason Kulatunga <jason@thesparktree.com>
2021-04-26 05:22:22 +00:00
Jason Kulatunga 8431eef515 github.event.release.upload_url 2021-04-25 22:10:28 -07:00
Jason Kulatunga df07261c57 (0.3.9) Automated packaging of release by Packagr
Signed-off-by: Jason Kulatunga <jason@thesparktree.com>
2021-04-26 04:59:43 +00:00
Jason Kulatunga 08634f2a88 Merge pull request #169 from AnalogJ/freebsd_action 2021-04-25 21:50:22 -07:00
Jason Kulatunga 273be111b4 fixing path. 2021-04-25 21:45:43 -07:00
Jason Kulatunga a4e193fb25 remove freebsd config from makefile. Adding freebsd specific releases as a post release step 2021-04-25 21:44:47 -07:00
Jason Kulatunga d252333ba9 ignore makefile. 2021-04-25 21:25:07 -07:00
Jason Kulatunga 0864b8000c pass env vars. 2021-04-25 21:08:10 -07:00
Jason Kulatunga ecd6b7e128 cannot lock go version (for now). 2021-04-25 20:59:34 -07:00
Jason Kulatunga 527214f38c (0.3.8) Automated packaging of release by Packagr
Signed-off-by: Jason Kulatunga <jason@thesparktree.com>
2021-04-26 03:58:09 +00:00
Jason Kulatunga 0f788cc9ce lock go-version. installing make. fix copy command? add GOPATH 2021-04-25 20:49:24 -07:00
Jason Kulatunga 9ece82f3f5 change version 2021-04-25 20:35:42 -07:00
Jason Kulatunga 3780f8e864 fix error log. 2021-04-25 20:33:32 -07:00
Jason Kulatunga fc3d6a33e3 Merge pull request #168 from AnalogJ/cowboy_coding_fixes
fix error log.
2021-04-25 20:18:42 -07:00
Jason Kulatunga 2fc24d0e76 fix error log. 2021-04-25 20:14:58 -07:00
Jason Kulatunga d92a21fbca Merge pull request #132 from telyn/patch-1
Change temperature graph to 24-hour formatting
2021-04-25 11:44:47 -07:00
Jason Kulatunga 1d3d16eeaa Merge pull request #165 from AnalogJ/collector-config-enhancements 2021-04-25 11:38:57 -07:00
Jason Kulatunga 4331f86ed4 fixing #164 telegram notification issue while I'm here.
TODO: do a full check of all notification params in shoutrrr and ensure they match what we use.
2021-04-25 11:38:17 -07:00
Jason Kulatunga da890d95b6 Fixing forced logging of smartctl output irrespective of log level (now available at DEBUG level only)
TODO: add a table summary at INFO level.

fixes #123
2021-04-25 11:34:26 -07:00
Jason Kulatunga e5713e3a81 added ability to configure collector variables using config file (api endpoint, log level, log file).
fixes #124
2021-04-25 11:24:03 -07:00
Telyn 9778809cba Change temperature graph to 24-hour formatting 2020-12-27 14:57:27 +00:00
370 changed files with 50737 additions and 32247 deletions
@@ -0,0 +1,25 @@
services:
app:
image: mcr.microsoft.com/devcontainers/base:ubuntu-22.04
volumes:
- ..:/workspaces/scrutiny:cached
command: sleep infinity
network_mode: service:influxdb
influxdb:
image: influxdb:2.8
restart: unless-stopped
ports:
- "8086:8086"
environment:
- DOCKER_INFLUXDB_INIT_MODE=setup
- DOCKER_INFLUXDB_INIT_USERNAME=admin
- DOCKER_INFLUXDB_INIT_PASSWORD=password12345
- DOCKER_INFLUXDB_INIT_ORG=scrutiny
- DOCKER_INFLUXDB_INIT_BUCKET=metrics
- DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-secret-auth-token
volumes:
- scrutiny-influxdb-data:/var/lib/influxdb2
volumes:
scrutiny-influxdb-data:
@@ -0,0 +1,30 @@
{
"name": "Scrutiny Dev (rootless docker)",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspaces/scrutiny",
"features": {
"ghcr.io/devcontainers/features/go:1": "1.25",
"ghcr.io/devcontainers/features/node:1": "lts"
},
"onCreateCommand": "sudo apt-get update && sudo apt-get install -y smartmontools iputils-ping chromium-browser",
"customizations": {
"vscode": {
"extensions": [
"golang.go",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode"
]
}
},
"forwardPorts": [8080, 8086],
"postCreateCommand": "bash .devcontainer/setup.sh",
"remoteUser": "root",
"containerUser": "root",
"updateRemoteUserUID": false
}
@@ -0,0 +1,28 @@
{
"name": "Scrutiny Dev (docker)",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspaces/scrutiny",
"features": {
"ghcr.io/devcontainers/features/go:1": "1.25",
"ghcr.io/devcontainers/features/node:1": "lts"
},
"onCreateCommand": "sudo apt-get update && sudo apt-get install -y smartmontools iputils-ping chromium-browser",
"customizations": {
"vscode": {
"extensions": [
"golang.go",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode"
]
}
},
"forwardPorts": [8080, 8086],
"postCreateCommand": "bash .devcontainer/setup.sh",
"remoteUser": "vscode"
}
@@ -0,0 +1,32 @@
{
"name": "Scrutiny Dev (podman)",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspaces/scrutiny",
"features": {
"ghcr.io/devcontainers/features/go:1": "1.25",
"ghcr.io/devcontainers/features/node:1": "lts"
},
"onCreateCommand": "sudo apt-get update && sudo apt-get install -y smartmontools iputils-ping chromium-browser",
"customizations": {
"vscode": {
"extensions": [
"golang.go",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode"
]
}
},
"forwardPorts": [8080, 8086],
"postCreateCommand": "bash .devcontainer/setup.sh",
"remoteEnv": {
"PODMAN_USERNS": "keep-id"
},
"containerUser": "vscode",
"updateRemoteUserUID": true
}
@@ -0,0 +1,40 @@
#!/bin/bash
echo "Starting Scrutiny Setup..."
if [ ! -f "scrutiny.yaml" ]; then
echo "Creating scrutiny.yaml from template..."
cat <<EOF > scrutiny.yaml
version: 1
web:
listen:
port: 8080
host: 0.0.0.0
database:
location: ./scrutiny.db
src:
frontend:
path: ./dist
influxdb:
retention_policy: false
token: "my-super-secret-auth-token"
org: "scrutiny"
bucket: "metrics"
host: "localhost"
port: 8086
log:
file: 'web.log'
level: DEBUG
EOF
else
echo "scrutiny.yaml already exists."
fi
echo "Vendoring Go modules..."
go mod vendor
echo "Installing Node modules..."
cd webapp/frontend
npm install
echo "Setup Complete! Ready to code."
@@ -1,4 +1,3 @@
/dist
/vendor
/.idea
/.github
@@ -0,0 +1,154 @@
labels: ["needs-confirmation"]
body:
- type: markdown
attributes:
value: |
> [!IMPORTANT]
> Please read through [the Discussion rules](https://github.com/AnalogJ/scrutiny/discussions/876), review [the docs](https://github.com/AnalogJ/scrutiny/tree/master/docs), and check for both existing [Discussions](https://github.com/AnalogJ/scrutiny/discussions?discussions_q=) and [Issues](https://github.com/AnalogJ/scrutiny/issues?q=sort%3Areactions-desc) prior to opening a new Discussion.
- type: markdown
attributes:
value: "# Issue Details"
- type: textarea
attributes:
label: Issue Description
description: |
Provide a detailed description of the issue. Include relevant information, such as:
- The feature or configuration option you encounter the issue with.
- Screenshots, screen recordings, or other supporting media (as needed).
- If this is a regression of an existing issue that was closed or resolved, please include the previous item reference (Discussion, Issue, PR, commit) in your description.
placeholder: |
Temperature data is missing from the plots.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: |
Describe how you expect scrutiny to behave in this situation. Include any relevant documentation links.
placeholder: |
All temperature data uploaded by collectors should make it into the plots.
validations:
required: true
- type: textarea
attributes:
label: Actual Behavior
description: |
Describe how scrutiny actually behaves in this situation. If it is not immediately obvious how the actual behavior differs from the expected behavior described above, please be sure to mention the deviation specifically.
placeholder: |
Only half the points appear.
validations:
required: true
- type: textarea
attributes:
label: Reproduction Steps
description: |
Provide a detailed set of step-by-step instructions for reproducing this issue. If you can't, describe what you were doing when the issue occurred.
placeholder: |
1. Set up the omnibus docker image
2. Launch the web dashboard
validations:
required: true
- type: textarea
attributes:
label: scrutiny logs
description: |
Provide any captured scrutiny logs or panic dumps during your issue reproduction in this field.
render: text
- type: input
attributes:
label: Scrutiny Version
description: The version of scrutiny you are using
placeholder: v0.8.2
validations:
required: true
- type: input
attributes:
label: Smartmontools Version
description: The version of smartmontools you are using (or "docker", if you're using the docker image)
placeholder: "7.2"
validations:
required: true
- type: input
attributes:
label: OS Version Information
description: |
Please tell us what operating system (name and version) you are using.
placeholder: Ubuntu 24.04.1 (Noble Numbat)
validations:
required: true
- type: dropdown
attributes:
label: Component
description: Which component of scrutiny has a problem?
options:
- web
- collector
- omnibus (docker only)
validations:
required: true
- type: checkboxes
attributes:
label: Deployment Method
description: How are you running scrutiny?
options:
- label: docker
- label: binaries
- label: systemd
validations:
required: true
- type: input
attributes:
label: Hard Drive Information
description: |
If the problem is related to a specific hard drive, what are the make and model?
placeholder: Seagate ST8000DM004-2CX188
validations:
required: false
- type: textarea
attributes:
label: smartctl output
description: |
What is the output of smartctl --xall --json <drive>?
render: json
validations:
required: false
- type: textarea
attributes:
label: scrutiny.yaml
description: |
Please provide your full scrutiny.yaml file.
render: yaml
validations:
required: false
- type: textarea
attributes:
label: collector.yaml
description: |
Please provide your full collector.yaml file.
render: yaml
validations:
required: false
- type: textarea
attributes:
label: Additional relevant configuration
description: |
Please provide any additional relevant configuration (e.g. systemd service definitions, OS configuration)
render: text
validations:
required: false
- type: markdown
attributes:
value: |
# User Acknowledgements
> [!TIP]
> Use these links to review the existing scrutiny [Discussions](https://github.com/AnalogJ/scrutiny/discussions?discussions_q=) and [Issues](https://github.com/AnalogJ/scrutiny/issues?q=sort%3Areactions-desc).
- type: checkboxes
attributes:
label: "I acknowledge that:"
options:
- label: I have reviewed the FAQ and confirm that my issue is NOT among them.
required: true
- label: I have searched the scrutiny repository (both open and closed Discussions and Issues) and confirm this is not a duplicate of an existing issue or discussion.
required: true
- label: I have checked the "Preview" tab on all text fields to ensure that everything looks right, and have wrapped all configuration and code in code blocks with a group of three backticks (` ``` `) on separate lines.
required: true
@@ -1,41 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Log Files**
If related to missing devices or SMART data, please run the `collector` in DEBUG mode, and attach the log file.
```
docker run -it --rm -p 8080:8080 \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
-e DEBUG=true \
-e COLLECTOR_LOG_FILE=/tmp/collector.log \
-e SCRUTINY_LOG_FILE=/tmp/web.log \
--name scrutiny \
analogj/scrutiny
# in another terminal trigger the collector
docker exec scrutiny scrutiny-collector-metrics run
# then use docker cp to copy the log files out of the container.
docker cp scrutiny:/tmp/collector.log collector.log
docker cp scrutiny:/tmp/web.log web.log
```
@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Features, Bug Reports, Questions
url: https://github.com/AnalogJ/scrutiny/discussions/new/choose
about: Our preferred starting point if you have any questions or suggestions about configuration, features or behavior.
@@ -1,17 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[FEAT]"
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.
@@ -0,0 +1,9 @@
---
name: Pre-Discussed and Approved Topics
about: |-
Only for topics already discussed and approved in the GitHub Discussions section.
---
**DO NOT OPEN A NEW ISSUE. PLEASE USE THE DISCUSSIONS SECTION.**
**I DIDN'T READ THE ABOVE LINE. PLEASE CLOSE THIS ISSUE.**
@@ -1,58 +0,0 @@
name: CI
# This workflow is triggered on pushes & pull requests
on: [push, pull_request]
jobs:
build:
name: Build
runs-on: ubuntu-latest
container: karalabe/xgo-1.13.x
env:
PROJECT_PATH: /go/src/github.com/analogj/scrutiny
CGO_ENABLED: 1
steps:
- name: Git
run: |
apt-get update && apt-get install -y software-properties-common
add-apt-repository ppa:git-core/ppa && apt-get update && apt-get install -y git
git --version
- name: Checkout
uses: actions/checkout@v2
- name: Test
run: |
mkdir -p $(dirname "$PROJECT_PATH")
cp -a $GITHUB_WORKSPACE $PROJECT_PATH
cd $PROJECT_PATH
go mod vendor
go test -race -coverprofile=coverage.txt -covermode=atomic -v -tags "static" $(go list ./... | grep -v /vendor/)
- name: Build Binaries
run: |
cd $PROJECT_PATH
make all
- name: Archive
uses: actions/upload-artifact@v2
with:
name: binaries.zip
path: |
/build/scrutiny-web-linux-amd64
/build/scrutiny-collector-metrics-linux-amd64
/build/scrutiny-web-linux-arm64
/build/scrutiny-collector-metrics-linux-arm64
/build/scrutiny-web-linux-arm-5
/build/scrutiny-collector-metrics-linux-arm-5
/build/scrutiny-web-linux-arm-6
/build/scrutiny-collector-metrics-linux-arm-6
/build/scrutiny-web-linux-arm-7
/build/scrutiny-collector-metrics-linux-arm-7
/build/scrutiny-web-windows-4.0-amd64.exe
/build/scrutiny-collector-metrics-windows-4.0-amd64.exe
# /build/scrutiny-web-freebsd-amd64
# /build/scrutiny-collector-metrics-freebsd-amd64
- uses: codecov/codecov-action@v1
with:
file: ${{ env.PROJECT_PATH }}/coverage.txt
flags: unittests
fail_ci_if_error: false
@@ -0,0 +1,173 @@
name: CI
# This workflow is triggered on pushes & pull requests
on:
push:
branches:
- master
pull_request:
permissions:
contents: read
jobs:
test-frontend:
name: Test Frontend
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Test Frontend
run: |
make binary-frontend-test-coverage
- name: Upload coverage
uses: actions/upload-artifact@v4
with:
name: coverage
path: ${{ github.workspace }}/webapp/frontend/coverage/lcov.info
retention-days: 1
test-backend:
name: Test Backend
runs-on: ubuntu-latest
# Service containers to run with `build` (Required for end-to-end testing)
services:
influxdb:
image: influxdb:2.8
env:
DOCKER_INFLUXDB_INIT_MODE: setup
DOCKER_INFLUXDB_INIT_USERNAME: admin
DOCKER_INFLUXDB_INIT_PASSWORD: password12345
DOCKER_INFLUXDB_INIT_ORG: scrutiny
DOCKER_INFLUXDB_INIT_BUCKET: metrics
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: my-super-secret-auth-token
ports:
- 8086:8086
env:
STATIC: true
steps:
- name: Add influxdb to hosts
run: echo "127.0.0.1 influxdb" | sudo tee -a /etc/hosts
- name: Checkout
uses: actions/checkout@v6
- name: Test Backend
run: |
make binary-clean binary-test-coverage
- name: Upload coverage
uses: actions/upload-artifact@v4
with:
name: coverage
path: ${{ github.workspace }}/coverage.txt
retention-days: 1
test-coverage:
name: Test Coverage Upload
needs:
- test-backend
- test-frontend
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Download coverage reports
uses: actions/download-artifact@v4
with:
name: coverage
- name: Upload coverage reports
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${{ github.workspace }}/coverage.txt,${{ github.workspace }}/lcov.info
flags: unittests
fail_ci_if_error: true
verbose: true
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/setup-go@v6
with:
go-version: 1.25
- name: golangci-lint
uses: golangci/golangci-lint-action@v9
with:
args: --issues-exit-code=0
build:
name: Build ${{ matrix.cfg.goos }}/${{ matrix.cfg.goarch }}
runs-on: ${{ matrix.cfg.on }}
env:
GOOS: ${{ matrix.cfg.goos }}
GOARCH: ${{ matrix.cfg.goarch }}
GOARM: ${{ matrix.cfg.goarm }}
STATIC: true
strategy:
matrix:
cfg:
- { on: ubuntu-latest, goos: linux, goarch: amd64 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 5 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 6 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 7 }
- { on: ubuntu-latest, goos: linux, goarch: arm64 }
- { on: macos-latest, goos: darwin, goarch: amd64 }
- { on: macos-latest, goos: darwin, goarch: arm64 }
- { on: macos-latest, goos: freebsd, goarch: amd64 }
- { on: windows-latest, goos: windows, goarch: amd64 }
- { on: windows-latest, goos: windows, goarch: arm64 }
steps:
- name: Checkout
uses: actions/checkout@v2
- uses: actions/setup-go@v3
with:
go-version: '^1.25'
- name: Build Binaries
run: |
make binary-clean binary-all
- name: Archive
uses: actions/upload-artifact@v4
with:
name: binaries-${{ matrix.cfg.on }}-${{ matrix.cfg.goos }}-${{ matrix.cfg.goarch }}-${{ matrix.cfg.goarm || 'na' }}.zip
path: |
scrutiny-web-*
scrutiny-collector-metrics-*
build-docker:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64,arm'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build omnibus
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
context: .
file: docker/Dockerfile
push: false
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build collector
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64,linux/arm/v7
context: .
file: docker/Dockerfile.collector
push: false
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Build web
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64,linux/arm/v7
context: .
file: docker/Dockerfile.web
push: false
cache-from: type=gha
cache-to: type=gha,mode=max
+172
@@ -0,0 +1,172 @@
name: Docker
on:
push:
# Publish semver tags as releases.
tags: [ 'v*.*.*' ]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
collector:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64,arm'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
flavor: |
latest=true
suffix=-collector,onlatest=true
tags: |
type=semver,pattern=v{{major}}.{{minor}}.{{patch}}
type=semver,pattern=v{{major}}.{{minor}}
type=semver,pattern=v{{major}}
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx
# https://github.com/docker/build-push-action
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64,linux/arm/v7
context: .
file: docker/Dockerfile.collector
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
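The `flavor` and `tags` inputs above control which image tags the collector gets: three semver tags plus `latest`, each with the `-collector` suffix (`onlatest=true` applies the suffix to `latest` as well). A minimal Python sketch of that expansion — the function and its shape are illustrative, not metadata-action's actual implementation:

```python
def expand_tags(version, suffix, onlatest):
    """Model how the semver tag patterns plus a flavor suffix expand a
    git tag like v1.2.3 into image tags."""
    major, minor, patch = version.lstrip("v").split(".")
    tags = [f"v{major}.{minor}.{patch}", f"v{major}.{minor}", f"v{major}", "latest"]
    # the suffix is appended to every tag; to `latest` only when onlatest is true
    return [t + suffix if (t != "latest" or onlatest) else t for t in tags]

print(expand_tags("v0.8.1", "-collector", True))
# -> ['v0.8.1-collector', 'v0.8-collector', 'v0-collector', 'latest-collector']
```

The omnibus job below uses `onlatest=false` with a bare `type=raw,value=latest`, which is why plain `latest` points at the omnibus image.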
web:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: "Populate frontend version information"
run: "cd webapp/frontend && ./git.version.sh"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64,arm'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
flavor: |
latest=true
suffix=-web,onlatest=true
tags: |
type=semver,pattern=v{{major}}.{{minor}}.{{patch}}
type=semver,pattern=v{{major}}.{{minor}}
type=semver,pattern=v{{major}}
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx
# https://github.com/docker/build-push-action
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64,linux/arm/v7
context: .
file: docker/Dockerfile.web
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
omnibus:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: "Populate frontend version information"
run: "cd webapp/frontend && ./git.version.sh"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64,arm'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
# tag latest and latest-omnibus
with:
flavor: |
latest=true
suffix=-omnibus,onlatest=false
tags: |
type=raw,value=latest
type=semver,pattern=v{{major}}.{{minor}}.{{patch}}
type=semver,pattern=v{{major}}.{{minor}}
type=semver,pattern=v{{major}}
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx
# https://github.com/docker/build-push-action
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
context: .
file: docker/Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
+103
@@ -0,0 +1,103 @@
name: Docker - Nightly
on:
workflow_dispatch:
# Note: this only runs on the default branch
schedule:
- cron: '36 12 * * *'
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build_nightlies:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: "Populate frontend version information"
run: "cd webapp/frontend && ./git.version.sh"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
platforms: 'arm64'
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata for omnibus
id: meta_omnibus
uses: docker/metadata-action@v5
with:
tags: |
type=raw,enable=true,value=nightly,suffix=-omnibus
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push omnibus Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
context: .
file: docker/Dockerfile
push: true
tags: ${{ steps.meta_omnibus.outputs.tags }}
labels: ${{ steps.meta_omnibus.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata for collector
id: meta_collector
uses: docker/metadata-action@v5
with:
tags: |
type=raw,enable=true,value=nightly,suffix=-collector
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push collector Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
context: .
file: docker/Dockerfile.collector
push: true
tags: ${{ steps.meta_collector.outputs.tags }}
labels: ${{ steps.meta_collector.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata for web
id: meta_web
uses: docker/metadata-action@v5
with:
tags: |
type=raw,enable=true,value=nightly,suffix=-web
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push web Docker image
uses: docker/build-push-action@v6
with:
platforms: linux/amd64,linux/arm64
context: .
file: docker/Dockerfile.web
push: true
tags: ${{ steps.meta_web.outputs.tags }}
labels: ${{ steps.meta_web.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
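The nightly trigger `36 12 * * *` fires once a day at 12:36 UTC (standard five-field cron order: minute, hour, day-of-month, month, day-of-week). A small sketch of reading those fields — a field splitter for illustration, not a full cron parser:

```python
def parse_cron(expr):
    """Split a five-field cron expression into named parts."""
    minute, hour, day_of_month, month, day_of_week = expr.split()
    return {"minute": minute, "hour": hour, "dom": day_of_month,
            "month": month, "dow": day_of_week}

schedule = parse_cron("36 12 * * *")
print(f"runs daily at {schedule['hour']}:{schedule['minute']} UTC")
```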
-35
@@ -1,35 +0,0 @@
# compiles angular frontend and attaches it to the latest release.
name: Release Frontend
on:
release:
# Only use the types keyword to narrow down the activity types that will trigger your workflow.
types: [published]
jobs:
release-frontend:
name: Release Frontend
runs-on: ubuntu-latest
container: node:lts-slim
steps:
- name: Checkout
uses: actions/checkout@v2
with:
ref: ${{github.event.release.tag_name}}
- name: Build Frontend
run: |
cd webapp/frontend
npm install -g @angular/cli@9.1.4
npm install
mkdir -p dist
ng build --output-path=dist --deploy-url="/web/" --base-href="/web/" --prod
tar -czf scrutiny-web-frontend.tar.gz dist
- name: Upload Frontend Asset
id: upload-release-asset3
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ github.event.release.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: './webapp/frontend/scrutiny-web-frontend.tar.gz'
asset_name: scrutiny-web-frontend.tar.gz
asset_content_type: application/gzip
+148 -163
@@ -13,13 +13,25 @@ on:
default: 'webapp/backend/pkg/version/version.go'
jobs:
build:
name: Build
release:
name: Create Release Commit
runs-on: ubuntu-latest
container: karalabe/xgo-1.13.x
container: ghcr.io/packagrio/packagr:latest-golang
# Service containers to run with `build` (Required for end-to-end testing)
services:
influxdb:
image: influxdb:2.8
env:
DOCKER_INFLUXDB_INIT_MODE: setup
DOCKER_INFLUXDB_INIT_USERNAME: admin
DOCKER_INFLUXDB_INIT_PASSWORD: password12345
DOCKER_INFLUXDB_INIT_ORG: scrutiny
DOCKER_INFLUXDB_INIT_BUCKET: metrics
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: my-super-secret-auth-token
ports:
- 8086:8086
env:
PROJECT_PATH: /go/src/github.com/analogj/scrutiny
CGO_ENABLED: 1
STATIC: true
steps:
- name: Git
run: |
@@ -27,177 +39,150 @@ jobs:
add-apt-repository ppa:git-core/ppa && apt-get update && apt-get install -y git
git --version
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Bump version
id: bump_version
uses: packagrio/action-bumpr-go@master
with:
version_bump_type: ${{ github.event.inputs.version_bump_type }}
version_metadata_path: ${{ github.event.inputs.version_metadata_path }}
github_token: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
- name: Test
run: |
mkdir -p $(dirname "$PROJECT_PATH")
cp -a $GITHUB_WORKSPACE $PROJECT_PATH
cd $PROJECT_PATH
go mod vendor
go test -v -tags "static" $(go list ./... | grep -v /vendor/)
- name: Build Binaries
run: |
cd $PROJECT_PATH
make all
- name: Commit
uses: EndBug/add-and-commit@v4 # You can change this to use a specific version
with:
author_name: Jason Kulatunga
author_email: jason@thesparktree.com
cwd: ${{ env.PROJECT_PATH }}
force: false
signoff: true
message: '(${{steps.bump_version.outputs.release_version}}) Automated packaging of release by Packagr'
tag: ${{steps.bump_version.outputs.release_version}}
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }} # Leave this line unchanged
- name: Test
run: |
make binary-clean binary-test-coverage
- name: Commit Changes Locally
id: commit
uses: packagrio/action-releasr-go@master
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }} # Leave this line unchanged
with:
version_metadata_path: ${{ github.event.inputs.version_metadata_path }}
- name: Upload workspace
uses: actions/upload-artifact@v4
with:
name: workspace
include-hidden-files: true
path: ${{ github.workspace }}/**/*
retention-days: 1
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
build:
name: Build ${{ matrix.cfg.goos }}/${{ matrix.cfg.goarch }}${{ matrix.cfg.goarm }}
needs: release
runs-on: ${{ matrix.cfg.on }}
env:
GOOS: ${{ matrix.cfg.goos }}
GOARCH: ${{ matrix.cfg.goarch }}
GOARM: ${{ matrix.cfg.goarm }}
STATIC: true
strategy:
matrix:
cfg:
- { on: ubuntu-latest, goos: linux, goarch: amd64 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 5 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 6 }
- { on: ubuntu-latest, goos: linux, goarch: arm, goarm: 7 }
- { on: ubuntu-latest, goos: linux, goarch: arm64 }
- { on: macos-latest, goos: darwin, goarch: amd64 }
- { on: macos-latest, goos: darwin, goarch: arm64 }
- { on: macos-latest, goos: freebsd, goarch: amd64 }
- { on: windows-latest, goos: windows, goarch: amd64 }
- { on: windows-latest, goos: windows, goarch: arm64 }
steps:
- name: Download workspace
uses: actions/download-artifact@v7
with:
tag_name: ${{ steps.bump_version.outputs.release_version }}
release_name: Release ${{ steps.bump_version.outputs.release_version }}
draft: false
prerelease: false
name: workspace
- uses: actions/setup-go@v6
with:
go-version: '1.25' # The Go version to download (if necessary) and use.
- name: Build Binaries
run: |
make binary-clean binary-all
- name: Upload artifacts
uses: actions/upload-artifact@v6
with:
name: scrutiny-${{ matrix.cfg.goos }}-${{ matrix.cfg.goarch }}${{ matrix.cfg.goarm != '' && format('-{0}', matrix.cfg.goarm) || '' }}.zip
path: |
scrutiny-web-${{ matrix.cfg.goos }}-${{ matrix.cfg.goarch }}${{ matrix.cfg.goarm != '' && format('-{0}', matrix.cfg.goarm) || '' }}${{ matrix.cfg.goos == 'windows' && '.exe' || '' }}
scrutiny-collector-metrics-${{ matrix.cfg.goos }}-${{ matrix.cfg.goarch }}${{ matrix.cfg.goarm != '' && format('-{0}', matrix.cfg.goarm) || '' }}${{ matrix.cfg.goos == 'windows' && '.exe' || '' }}
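The conditional suffixes in the artifact names (append `-GOARM` only when set, `.exe` only on Windows) can be sketched in Python; the function name and shape are illustrative, as the workflow computes this inline with expressions:

```python
def binary_name(component, goos, goarch, goarm=""):
    """Build a binary name: append -GOARM only when set, and .exe on windows."""
    arm = f"-{goarm}" if goarm else ""
    ext = ".exe" if goos == "windows" else ""
    return f"scrutiny-{component}-{goos}-{goarch}{arm}{ext}"

print(binary_name("web", "linux", "arm", "7"))  # scrutiny-web-linux-arm-7
print(binary_name("collector-metrics", "windows", "amd64"))
```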
build_frontend:
name: Build Frontend
needs: release
runs-on: ubuntu-latest
container: node:lts-slim
steps:
- name: Download workspace
uses: actions/download-artifact@v7
with:
name: workspace
- name: "Generate frontend version information"
run: "cd webapp/frontend && chmod +x git.version.sh && ./git.version.sh"
- name: Build Frontend
run: |
apt-get update && apt-get install -y make
make binary-frontend
tar -czf scrutiny-web-frontend.tar.gz dist
- name: Upload artifacts
uses: actions/upload-artifact@v6
with:
name: scrutiny-web-frontend.zip
path: scrutiny-web-frontend.tar.gz
- name: Release Asset - Web - linux-amd64
id: upload-release-asset1
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
release-publish:
name: Publish Release
needs:
- build
- build_frontend
runs-on: ubuntu-latest
steps:
- name: Download workspace
uses: actions/download-artifact@v7
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-web-linux-amd64
asset_name: scrutiny-web-linux-amd64
asset_content_type: application/octet-stream
- name: Release Asset - Collector - linux-amd64
id: upload-release-asset2
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
name: workspace
- name: Download binaries
uses: actions/download-artifact@v7
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-collector-metrics-linux-amd64
asset_name: scrutiny-collector-metrics-linux-amd64
asset_content_type: application/octet-stream
- name: Release Asset - Web - linux-arm64
id: upload-release-asset3
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
merge-multiple: true
pattern: scrutiny-*.zip
- name: Download frontend
uses: actions/download-artifact@v7
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-web-linux-arm64
asset_name: scrutiny-web-linux-arm64
asset_content_type: application/octet-stream
- name: Release Asset - Collector - linux-arm64
id: upload-release-asset4
uses: actions/upload-release-asset@v1
name: scrutiny-web-frontend.zip
- name: List
shell: bash
run: |
ls -alt
- name: Publish Release & Assets
id: publish
uses: packagrio/action-publishr-go@master
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
# This is necessary in order to push a commit to the repo
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }} # Leave this line unchanged
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-collector-metrics-linux-arm64
asset_name: scrutiny-collector-metrics-linux-arm64
asset_content_type: application/octet-stream
- name: Release Asset - Web - linux-arm-5
id: upload-release-asset5
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-web-linux-arm-5
asset_name: scrutiny-web-linux-arm-5
asset_content_type: application/octet-stream
- name: Release Asset - Collector - linux-arm-5
id: upload-release-asset6
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-collector-metrics-linux-arm-5
asset_name: scrutiny-collector-metrics-linux-arm-5
asset_content_type: application/octet-stream
- name: Release Asset - Web - linux-arm-6
id: upload-release-asset7
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-web-linux-arm-6
asset_name: scrutiny-web-linux-arm-6
asset_content_type: application/octet-stream
- name: Release Asset - Collector - linux-arm-6
id: upload-release-asset8
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-collector-metrics-linux-arm-6
asset_name: scrutiny-collector-metrics-linux-arm-6
asset_content_type: application/octet-stream
- name: Release Asset - Web - linux-arm-7
id: upload-release-asset9
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-web-linux-arm-7
asset_name: scrutiny-web-linux-arm-7
asset_content_type: application/octet-stream
- name: Release Asset - Collector - linux-arm-7
id: upload-release-asset10
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
asset_path: /build/scrutiny-collector-metrics-linux-arm-7
asset_name: scrutiny-collector-metrics-linux-arm-7
asset_content_type: application/octet-stream
# - name: Release Asset - Web - freebsd-amd64
# id: upload-release-asset7
# uses: actions/upload-release-asset@v1
# env:
# GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
# with:
# upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
# asset_path: ${{ env.PROJECT_PATH }}/scrutiny-web-freebsd-amd64
# asset_name: scrutiny-web-freebsd-amd64
# asset_content_type: application/octet-stream
# - name: Release Asset - Collector - freebsd-amd64
# id: upload-release-asset8
# uses: actions/upload-release-asset@v1
# env:
# GITHUB_TOKEN: ${{ secrets.SCRUTINY_GITHUB_TOKEN }}
# with:
# upload_url: ${{ steps.create_release.outputs.upload_url }} # This pulls from the CREATE RELEASE step above, referencing it's ID to get its outputs object, which include a `upload_url`. See this blog post for more info: https://jasonet.co/posts/new-features-of-github-actions/#passing-data-to-future-steps
# asset_path: ${{ env.PROJECT_PATH }}/scrutiny-collector-metrics-freebsd-amd64
# asset_name: scrutiny-collector-metrics-freebsd-amd64
# asset_content_type: application/octet-stream
version_metadata_path: ${{ github.event.inputs.version_metadata_path }}
upload_assets:
scrutiny-collector-metrics-darwin-amd64
scrutiny-collector-metrics-darwin-arm64
scrutiny-collector-metrics-freebsd-amd64
scrutiny-collector-metrics-linux-amd64
scrutiny-collector-metrics-linux-arm-5
scrutiny-collector-metrics-linux-arm-6
scrutiny-collector-metrics-linux-arm-7
scrutiny-collector-metrics-linux-arm64
scrutiny-collector-metrics-windows-amd64.exe
scrutiny-collector-metrics-windows-arm64.exe
scrutiny-web-frontend.tar.gz
scrutiny-web-darwin-amd64
scrutiny-web-darwin-arm64
scrutiny-web-freebsd-amd64
scrutiny-web-linux-amd64
scrutiny-web-linux-arm-5
scrutiny-web-linux-arm-6
scrutiny-web-linux-arm-7
scrutiny-web-linux-arm64
scrutiny-web-windows-amd64.exe
scrutiny-web-windows-arm64.exe
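The asset list above is the cross product of the build matrix (two components times ten OS/arch targets) plus the frontend tarball. A sketch that regenerates it, with the matrix values copied from the workflow:

```python
# (goos, goarch, goarm) triples from the build matrix
targets = [("linux", "amd64", ""), ("linux", "arm", "5"), ("linux", "arm", "6"),
           ("linux", "arm", "7"), ("linux", "arm64", ""), ("darwin", "amd64", ""),
           ("darwin", "arm64", ""), ("freebsd", "amd64", ""),
           ("windows", "amd64", ""), ("windows", "arm64", "")]

assets = ["scrutiny-web-frontend.tar.gz"]
for component in ("collector-metrics", "web"):
    for goos, goarch, goarm in targets:
        name = f"scrutiny-{component}-{goos}-{goarch}"
        name += f"-{goarm}" if goarm else ""
        name += ".exe" if goos == "windows" else ""
        assets.append(name)

print(len(assets))  # 20 binaries + the frontend tarball = 21
```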
@@ -1,19 +0,0 @@
name: Cleanup Artifacts
on:
schedule:
# Every day at 1am
- cron: '0 1 * * *'
jobs:
remove-old-artifacts:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Remove old artifacts
uses: c-hive/gha-remove-artifacts@v1
with:
age: '1 day'
skip-tags: true
skip-recent: 5
+2 -1
@@ -8,7 +8,8 @@ jobs:
build:
name: is-sponsor-label
runs-on: ubuntu-latest
if: ${{ false }}
steps:
- uses: JasonEtco/is-sponsor-label-action@v1
- uses: JasonEtco/is-sponsor-label-action@v1.2.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+4
@@ -64,3 +64,7 @@ scrutiny-*.db
scrutiny_test.db
scrutiny.yaml
coverage.txt
/config
/influxdb
.angular
web.log
+11
@@ -0,0 +1,11 @@
version: "2"
formatters:
enable:
- gofmt
- goimports
linters:
enable:
- bodyclose
settings:
errcheck:
check-blank: true
+37
@@ -0,0 +1,37 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Run Scrutiny",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/webapp/backend/cmd/scrutiny/scrutiny.go",
"args": ["start", "--config", "./scrutiny.yaml"],
"cwd": "${workspaceFolder}",
"env": {
"DEBUG": "true"
},
"console": "integratedTerminal",
"preLaunchTask": "Build Frontend",
"serverReadyAction": {
"action": "openExternally",
"pattern": "Listening and serving HTTP on",
"uriFormat": "http://localhost:8080/web/"
}
},
{
"name": "Run Collector",
"type": "go",
"request": "launch",
"mode": "auto",
"program": "${workspaceFolder}/collector/cmd/collector-metrics/collector-metrics.go",
"args": ["run", "--debug"],
"cwd": "${workspaceFolder}",
"env": {
"COLLECTOR_DEBUG": "true"
},
"console": "integratedTerminal"
}
]
}
+10
@@ -0,0 +1,10 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "Build Frontend",
"type": "shell",
"command": "cd webapp/frontend && npm run build:prod -- --output-path=../../dist"
}
]
}
+73
@@ -0,0 +1,73 @@
# AI Usage Policy
scrutiny has strict rules for AI usage:
- **All AI usage in any form must be disclosed.** You must state
the tool you used (e.g. Claude Code, Cursor, Amp) along with
the extent that the work was AI-assisted.
- **Pull requests created in any way by AI can only be for accepted issues.**
Drive-by pull requests that do not reference an accepted issue will be
closed. If AI isn't disclosed but a maintainer suspects its use, the
PR will be closed. If you want to share code for a non-accepted issue,
open a discussion or attach it to an existing discussion.
- **Pull requests created by AI must have been fully verified with
human use.** AI must not create hypothetically correct code that
hasn't been tested. Importantly, you must not allow AI to write
code for platforms or environments you don't have access to manually
test on.
- **Issues and discussions can use AI assistance but must have a full
human-in-the-loop.** This means that any content generated with AI
must have been reviewed _and edited_ by a human before submission.
AI is very good at being overly verbose and including noise that
distracts from the main point. Humans must do their research and
trim this down.
- **No AI-generated media is allowed (art, images, videos, audio, etc.).**
Text and code are the only acceptable AI-generated content, per the
other rules in this policy.
- **Bad AI drivers will be banned and ridiculed in public.** You've
been warned. We love to help junior developers learn and grow, but
if you're interested in that then don't use AI, and we'll help you.
I'm sorry that bad AI drivers have ruined this for you.
These rules apply only to outside contributions to scrutiny. Maintainers
and repeat contributors (with explicit permission) are exempt from these
rules and may use AI tools at their discretion; they've proven themselves
trustworthy to apply good judgment.
## There are Humans Here
Please remember that scrutiny is maintained by humans.
Every discussion, issue, and pull request is read and reviewed by
humans (and sometimes machines, too). It is a boundary point at which
people interact with each other and the work done. It is rude and
disrespectful to approach this boundary with low-effort, unqualified
work, since it puts the burden of validation on the maintainer.
In a perfect world, AI would produce high-quality, accurate work
every time. But today, that reality depends on the driver of the AI.
And today, most drivers of AI are just not good enough. So, until either
the people get better, the AI gets better, or both, we have to have
strict rules to protect maintainers.
## AI is Welcome Here
Many maintainers embrace AI tools as a productive tool in their workflow.
As a project, scrutiny welcomes AI as a tool!
**Our reason for the strict AI policy is not due to an anti-AI stance**, but
instead due to the number of highly unqualified people using AI. It's the
people, not the tools, that are the problem.
This section is included to be transparent about the project's use of
AI for people who may disagree with it, and to address the misconception
that this policy is anti-AI in nature.
# Credit
Adopted from [ghostty's AI policy](https://github.com/ghostty-org/ghostty/blob/1b7a15899ad40fba4ce020f537055d30eaf99ee8/AI_POLICY.md)
+297 -61
@@ -1,82 +1,289 @@
# Contributing
# Contributing to scrutiny
There are multiple ways to develop on the scrutiny codebase locally. The two most popular are:
- Docker Development Container - only requires docker
- Run Components Locally - requires smartmontools, golang & nodejs installed locally
This document describes the process of contributing to scrutiny. It is intended
for anyone considering opening an **issue**, **discussion** or **pull request**.
## Docker Development
```
docker build -f docker/Dockerfile . -t analogj/scrutiny
docker run -it --rm -p 8080:8080 \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
analogj/scrutiny
/scrutiny/bin/scrutiny-collector-metrics run
```
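The same container can also be run via docker compose; a minimal sketch mirroring the flags above (the device paths are examples, adjust them to your drives):

```yaml
services:
  scrutiny:
    image: analogj/scrutiny
    ports:
      - "8080:8080"
    volumes:
      - /run/udev:/run/udev:ro
    cap_add:
      - SYS_RAWIO
    devices:
      - /dev/sda
      - /dev/sdb
```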
> [!NOTE]
>
> The intention of these policies is not to be difficult, and
> contributions are greatly appreciated. The goal is to streamline
> and simplify the efforts of both contributors and maintainers.
## AI Usage
scrutiny has strict rules for AI usage. Please see
the [AI Usage Policy](AI_POLICY.md). **This is very important.**
## Quick Guide
### I'd like to contribute
[All issues are actionable](#issues-are-actionable). Pick one and start
working on it. Thank you. If you need help or guidance, comment on the issue.
Issues that are extra friendly to new contributors are tagged with
["contributor friendly"].
["contributor friendly"]: https://github.com/AnalogJ/scrutiny/issues?q=is%3Aissue%20is%3Aopen%20label%3A%22contributor%20friendly%22
### I have a bug! / Something isn't working
First, search the issue tracker and discussions for similar issues. Tip: also
search for [closed issues] and [discussions] — your issue might have already
been fixed!
> [!NOTE]
>
> If there is an _open_ issue or discussion that matches your problem,
> **please do not comment on it unless you have valuable insight to add**.
>
> GitHub has a very _noisy_ set of default notification settings which
> sends an email to _every participant_ in an issue/discussion every time
> someone adds a comment. Instead, use the handy upvote button for discussions,
> and/or emoji reactions on both discussions and issues, which are a visible
> yet non-disruptive way to show your support.
If your issue hasn't been reported already, open an ["Issue Triage"] discussion
and make sure to fill in the template **completely**. They are vital for
maintainers to figure out important details about your setup.
> [!WARNING]
>
> A _very_ common mistake is to file a bug report either as a Q&A or a Feature
> Request. **Please don't do this.** Otherwise, maintainers would have to ask
> for your system information again manually, and sometimes they will even ask
> you to create a new discussion because of how little detailed information is
> required for other discussion types compared to Issue Triage.
>
> Because of this, please make sure that you _only_ use the "Issue Triage"
> category for reporting bugs — thank you!
[closed issues]: https://github.com/AnalogJ/scrutiny/issues?q=is%3Aissue%20state%3Aclosed
[discussions]: https://github.com/AnalogJ/scrutiny/discussions?discussions_q=is%3Aclosed
["Issue Triage"]: https://github.com/AnalogJ/scrutiny/discussions/new?category=issue-triage
### I have an idea for a feature
Like bug reports, first search through both issues and discussions and try to
find if your feature has already been requested. Otherwise, open a discussion
in the ["Feature Requests, Ideas"] category.
["Feature Requests, Ideas"]: https://github.com/AnalogJ/scrutiny/discussions/new?category=feature-requests-ideas
### I've implemented a feature
1. If there is an issue for the feature, open a pull request straight away.
2. If there is no issue, open a discussion and link to your branch.
3. If you want to live dangerously, open a pull request and
[hope for the best](#pull-requests-implement-an-issue).
### I have a question which is neither a bug report nor a feature request
Open a [Q&A discussion].
> [!NOTE]
> If your question is about a missing feature, please open a discussion under
> the ["Feature Requests, Ideas"] category. If scrutiny is behaving
> unexpectedly, use the ["Issue Triage"] category.
>
> The "Q&A" category is strictly for other kinds of discussions and do not
> require detailed information unlike the two other categories, meaning that
> maintainers would have to spend the extra effort to ask for basic information
> if you submit a bug report under this category.
>
> Therefore, please **pay attention to the category** before opening
> discussions to save us all some time and energy. Thank you!
[Q&A discussion]: https://github.com/AnalogJ/scrutiny/discussions/new?category=q-a
## General Patterns
### Issues are Actionable
The scrutiny [issue tracker](https://github.com/AnalogJ/scrutiny/issues)
is for _actionable items_.
Unlike some other projects, scrutiny **does not use the issue tracker for
discussion or feature requests**. Instead, we use GitHub
[discussions](https://github.com/AnalogJ/scrutiny/discussions) for that.
Once a discussion reaches a point where a well-understood, actionable
item is identified, it is moved to the issue tracker. **This pattern
makes it easier for maintainers or contributors to find issues to work on
since _every issue_ is ready to be worked on.**
If you are experiencing a bug and have clear steps to reproduce it, please
open an issue. If you are experiencing a bug but you are not sure how to
reproduce it or aren't sure if it's a bug, please open a discussion.
If you have an idea for a feature, please open a discussion.
### Pull Requests Implement an Issue
Pull requests should be associated with a previously accepted issue.
**If you open a pull request for something that wasn't previously discussed,**
it may be closed or remain stale for an indefinite period of time. I'm not
saying it will never be accepted, but the odds are stacked against you.
Issues tagged with "feature" represent accepted, well-scoped feature requests.
If you implement an issue tagged with feature as described in the issue, your
pull request will be accepted with a high degree of certainty.
> [!NOTE]
>
> **Pull requests are NOT a place to discuss feature design.** Please do
> not open a WIP pull request to discuss a feature. Instead, use a discussion
> and link to your branch.
# Developer Guide
> [!NOTE]
>
> **The remainder of this file is dedicated to developers actively
> working on scrutiny.** If you're a user reporting an issue, you can
> ignore the rest of this document.
The Scrutiny repository is a [monorepo](https://en.wikipedia.org/wiki/Monorepo) containing source code for:
- Scrutiny Backend Server (API)
- Scrutiny Frontend Angular SPA
- S.M.A.R.T Collector
Depending on the functionality you are adding, you may need to set up a development environment for one or more projects.
# Devcontainer
Devcontainer configurations are available to build and run Scrutiny (WebUI and Collector) in a fully isolated environment.
When opening the project in VS Code, choose "Reopen in Container". Three configurations are available depending on your
container runtime and setup: docker, docker-rootless, and podman.
# Modifying the Scrutiny Backend Server (API)
1. install the [Go runtime](https://go.dev/doc/install) (v1.25)
2. download the `scrutiny-web-frontend.tar.gz` for
the [latest release](https://github.com/AnalogJ/scrutiny/releases/latest). Extract to a folder named `dist`
3. create a `scrutiny.yaml` config file
```yaml
# config file for local development. store as scrutiny.yaml
version: 1
web:
  listen:
    port: 8080
    host: 0.0.0.0
  database:
    # can also set absolute path here
    location: ./scrutiny.db
  src:
    frontend:
      path: ./dist
  influxdb:
    retention_policy: false
log:
  file: 'web.log' #absolute or relative paths allowed, eg. web.log
  level: DEBUG
```
4. start an InfluxDB docker container.
```bash
docker run -p 8086:8086 --rm influxdb:2.8
```
5. start the scrutiny web server
```bash
go mod vendor
go run webapp/backend/cmd/scrutiny/scrutiny.go start --config ./scrutiny.yaml
```
6. open your browser to [http://localhost:8080/web](http://localhost:8080/web)
# Modifying the Scrutiny Frontend Angular SPA
The frontend is written in Angular. If you're working on the frontend and can use mocked data rather than a real backend, you can follow the instructions below:
1. install [NodeJS](https://nodejs.org/en/download/)
2. start the Angular Frontend Application
```bash
cd webapp/frontend
npm install
npm run start -- --serve-path="/web/" --port 4200
```
3. open your browser and visit [http://localhost:4200/web](http://localhost:4200/web)
# Modifying both Scrutiny Backend and Frontend Applications
If you're developing a feature that requires changes to the backend and the frontend, or a frontend feature that requires real data,
you'll need to follow the steps below:
1. install the [Go runtime](https://go.dev/doc/install) (v1.20+)
2. install [NodeJS](https://nodejs.org/en/download/)
3. create a `scrutiny.yaml` config file
```yaml
# config file for local development. store as scrutiny.yaml
version: 1
web:
  listen:
    port: 8080
    host: 0.0.0.0
  database:
    # can also set absolute path here
    location: ./scrutiny.db
  src:
    frontend:
      path: ./dist
  influxdb:
    retention_policy: false
log:
  file: 'web.log' #absolute or relative paths allowed, eg. web.log
  level: DEBUG
```
4. start an InfluxDB docker container.
```bash
docker run -p 8086:8086 --rm influxdb:2.8
```
5. build the Angular Frontend Application
```bash
cd webapp/frontend
npm install
npm run build:prod -- --watch --output-path=../../dist
# Note: if you do not add the `--prod` flag, the app will display mocked data for API calls.
```
6. start the scrutiny web server
```bash
go mod vendor
go run webapp/backend/cmd/scrutiny/scrutiny.go start --config ./scrutiny.yaml
```
7. open your browser to [http://localhost:8080/web](http://localhost:8080/web)
## Local Development
If you'd like to populate the database with some test data, you can run the following commands:
### Frontend
The frontend is written in Angular.
If you're working on the frontend and can use mocked data rather than a real backend, you can use
```
cd webapp/frontend && ng serve
```
However, if you need to also run the backend, and use real data, you'll need to run the following command:
```
cd webapp/frontend && ng build --watch --output-path=../../dist --deploy-url="/web/" --base-href="/web/" --prod
```
> Note: if you do not add the `--prod` flag, the app will display mocked data for API calls.
### Backend
If you're using the `ng build` command above to generate your frontend, you'll need to create a custom config file and
override the `web.src.frontend.path` value.
> NOTE: you may need to update the `local_time` key within the JSON file; any timestamps older than ~3 weeks will be automatically ignored
> (since the downsampling & retention policy takes effect at 2 weeks).
> This is done automatically by the `webapp/backend/pkg/models/testdata/helper.go` script.
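The cutoff above can also be sanity-checked by hand. A rough sketch follows — the payload path and its contents are fabricated for the demo, and the number extraction is deliberately crude; this is not a project script:

```shell
# Hypothetical stale-data check: flag a test payload whose
# local_time.time_t is older than the ~2 week retention window.
now=$(date +%s)
cutoff=$((now - 14 * 24 * 3600))

# fabricate a sample payload for the demo (mid-2017, long past the cutoff)
printf '{"local_time": {"time_t": 1500000000}}\n' > /tmp/smart-sample.json

# crude extraction of the first number in the file (enough for this sketch)
time_t=$(grep -o '[0-9][0-9]*' /tmp/smart-sample.json | head -n 1)

if [ "$time_t" -lt "$cutoff" ]; then
  echo "stale: refresh local_time.time_t before POSTing this payload"
fi
```

In practice `helper.go` rewrites these timestamps for you; the sketch only illustrates why payloads with old `local_time` values appear to vanish.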
```yaml
# config file for local development. store as scrutiny.yaml
version: 1
web:
  listen:
    port: 8080
    host: 0.0.0.0
  database:
    # can also set absolute path here
    location: ./scrutiny.db
  src:
    frontend:
      path: ./dist
log:
  file: 'web.log' #absolute or relative paths allowed, eg. web.log
  level: DEBUG
```

```bash
docker run -p 8086:8086 --rm influxdb:2.8

# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/web/testdata/register-devices-req.json localhost:8080/api/devices/register
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-ata.json localhost:8080/api/device/0x5000cca264eb01d7/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-ata-date.json localhost:8080/api/device/0x5000cca264eb01d7/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-ata-date2.json localhost:8080/api/device/0x5000cca264eb01d7/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-fail2.json localhost:8080/api/device/0x5000cca264ec3183/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-nvme.json localhost:8080/api/device/0x5002538e40a22954/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-scsi.json localhost:8080/api/device/0x5000cca252c859cc/smart
# curl -X POST -H "Content-Type: application/json" -d @webapp/backend/pkg/models/testdata/smart-scsi2.json localhost:8080/api/device/0x5000cca264ebc248/smart
go run webapp/backend/pkg/models/testdata/helper.go

curl localhost:8080/api/summary
```
Once you've created a config file, you can pass it to the scrutiny binary during startup.
```
go run webapp/backend/cmd/scrutiny/scrutiny.go start --config ./scrutiny.yaml
```
Now visit http://localhost:8080
### Collector
# Modifying the Collector
```
brew install smartmontools
go run collector/cmd/collector-metrics/collector-metrics.go run --debug
```
## Debugging
# Debugging
If you need more verbose logs for debugging, you can use the following environmental variables:
@@ -95,3 +302,32 @@ Finally, you can copy the files from the scrutiny container to your host using t
docker cp scrutiny:/tmp/collector.log collector.log
docker cp scrutiny:/tmp/web.log web.log
```
# Docker Development
```
docker build -f docker/Dockerfile . -t ghcr.io/analogj/scrutiny:master-omnibus
docker run -it --rm -p 8080:8080 \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
ghcr.io/analogj/scrutiny:master-omnibus
/opt/scrutiny/bin/scrutiny-collector-metrics run
```
# Running Tests
```bash
docker run -p 8086:8086 -d --rm \
-e DOCKER_INFLUXDB_INIT_MODE=setup \
-e DOCKER_INFLUXDB_INIT_USERNAME=admin \
-e DOCKER_INFLUXDB_INIT_PASSWORD=password12345 \
-e DOCKER_INFLUXDB_INIT_ORG=scrutiny \
-e DOCKER_INFLUXDB_INIT_BUCKET=metrics \
-e DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-secret-auth-token \
influxdb:2.8
go test ./...
```
@@ -1,84 +1,143 @@
export CGO_ENABLED = 1
.ONESHELL: # Applies to every target in the file! .ONESHELL instructs make to invoke a single instance of the shell and provide it with the entire recipe, regardless of how many lines it contains.
.SHELLFLAGS = -ec
export GOTOOLCHAIN=go1.25.5
########################################################################################################################
# Global Env Settings
########################################################################################################################
GO_WORKSPACE ?= /go/src/github.com/analogj/scrutiny
BINARY=\
linux/amd64 \
linux/arm-5 \
linux/arm-6 \
linux/arm-7 \
linux/arm64 \
COLLECTOR_BINARY_NAME = scrutiny-collector-metrics
WEB_BINARY_NAME = scrutiny-web
LD_FLAGS =
STATIC_TAGS =
# enable multiarch docker image builds
DOCKER_TARGETARCH_BUILD_ARG =
ifdef TARGETARCH
DOCKER_TARGETARCH_BUILD_ARG := $(DOCKER_TARGETARCH_BUILD_ARG) --build-arg TARGETARCH=$(TARGETARCH)
endif
# enable to build static binaries.
ifdef STATIC
export CGO_ENABLED = 0
LD_FLAGS := $(LD_FLAGS) -extldflags=-static
STATIC_TAGS := $(STATIC_TAGS) -tags "static netgo"
endif
ifdef GOOS
COLLECTOR_BINARY_NAME := $(COLLECTOR_BINARY_NAME)-$(GOOS)
WEB_BINARY_NAME := $(WEB_BINARY_NAME)-$(GOOS)
LD_FLAGS := $(LD_FLAGS) -X main.goos=$(GOOS)
endif
ifdef GOARCH
COLLECTOR_BINARY_NAME := $(COLLECTOR_BINARY_NAME)-$(GOARCH)
WEB_BINARY_NAME := $(WEB_BINARY_NAME)-$(GOARCH)
LD_FLAGS := $(LD_FLAGS) -X main.goarch=$(GOARCH)
endif
ifdef GOARM
COLLECTOR_BINARY_NAME := $(COLLECTOR_BINARY_NAME)-$(GOARM)
WEB_BINARY_NAME := $(WEB_BINARY_NAME)-$(GOARM)
endif
ifeq ($(OS),Windows_NT)
COLLECTOR_BINARY_NAME := $(COLLECTOR_BINARY_NAME).exe
WEB_BINARY_NAME := $(WEB_BINARY_NAME).exe
endif
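# Example (illustrative only, not used by the build): with GOOS=linux,
# GOARCH=arm and GOARM=7, the conditionals above expand the names to:
#   COLLECTOR_BINARY_NAME = scrutiny-collector-metrics-linux-arm-7
#   WEB_BINARY_NAME       = scrutiny-web-linux-arm-7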
########################################################################################################################
# Binary
########################################################################################################################
.PHONY: all
all: binary-all
.PHONY: binary-all
binary-all: binary-collector binary-web
@echo "built binary-collector and binary-web targets"
.PHONY: all $(BINARY)
all: $(BINARY) windows/amd64
.PHONY: binary-clean
binary-clean:
go clean
$(BINARY): OS = $(word 1,$(subst /, ,$*))
$(BINARY): ARCH = $(word 2,$(subst /, ,$*))
$(BINARY): build/scrutiny-web-%:
@echo "building web binary (OS = $(OS), ARCH = $(ARCH))"
xgo -v --targets="$(OS)/$(ARCH)" -ldflags "-extldflags=-static -X main.goos=$(OS) -X main.goarch=$(ARCH)" -out scrutiny-web -tags "static netgo sqlite_omit_load_extension" ${GO_WORKSPACE}/webapp/backend/cmd/scrutiny/
.PHONY: binary-dep
binary-dep:
go mod vendor
chmod +x "/build/scrutiny-web-$(OS)-$(ARCH)"
file "/build/scrutiny-web-$(OS)-$(ARCH)" || true
ldd "/build/scrutiny-web-$(OS)-$(ARCH)" || true
.PHONY: binary-test
binary-test: binary-dep
go test -v $(STATIC_TAGS) ./...
@echo "building collector binary (OS = $(OS), ARCH = $(ARCH))"
xgo -v --targets="$(OS)/$(ARCH)" -ldflags "-extldflags=-static -X main.goos=$(OS) -X main.goarch=$(ARCH)" -out scrutiny-collector-metrics -tags "static netgo" ${GO_WORKSPACE}/collector/cmd/collector-metrics/
.PHONY: lint
lint:
GOTOOLCHAIN=go1.25.5 go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.8.0
golangci-lint run ./...
chmod +x "/build/scrutiny-collector-metrics-$(OS)-$(ARCH)"
file "/build/scrutiny-collector-metrics-$(OS)-$(ARCH)" || true
ldd "/build/scrutiny-collector-metrics-$(OS)-$(ARCH)" || true
.PHONY: binary-test-coverage
binary-test-coverage: binary-dep
go test -coverprofile=coverage.txt -covermode=atomic -v $(STATIC_TAGS) ./...
windows/amd64: export OS = windows
windows/amd64: export ARCH = amd64
windows/amd64:
@echo "building web binary (OS = $(OS), ARCH = $(ARCH))"
xgo -v --targets="$(OS)/$(ARCH)" -ldflags "-extldflags=-static -X main.goos=$(OS) -X main.goarch=$(ARCH)" -out scrutiny-web -tags "static netgo sqlite_omit_load_extension" ${GO_WORKSPACE}/webapp/backend/cmd/scrutiny/
.PHONY: binary-collector
binary-collector: binary-dep
go build -ldflags "$(LD_FLAGS)" -o $(COLLECTOR_BINARY_NAME) $(STATIC_TAGS) ./collector/cmd/collector-metrics/
ifneq ($(OS),Windows_NT)
chmod +x $(COLLECTOR_BINARY_NAME)
file $(COLLECTOR_BINARY_NAME) || true
ldd $(COLLECTOR_BINARY_NAME) || true
./$(COLLECTOR_BINARY_NAME) || true
endif
@echo "building collector binary (OS = $(OS), ARCH = $(ARCH))"
xgo -v --targets="$(OS)/$(ARCH)" -ldflags "-extldflags=-static -X main.goos=$(OS) -X main.goarch=$(ARCH)" -out scrutiny-collector-metrics -tags "static netgo" ${GO_WORKSPACE}/collector/cmd/collector-metrics/
.PHONY: binary-web
binary-web: binary-dep
go build -ldflags "$(LD_FLAGS)" -o $(WEB_BINARY_NAME) $(STATIC_TAGS) ./webapp/backend/cmd/scrutiny/
ifneq ($(OS),Windows_NT)
chmod +x $(WEB_BINARY_NAME)
file $(WEB_BINARY_NAME) || true
ldd $(WEB_BINARY_NAME) || true
./$(WEB_BINARY_NAME) || true
endif
freebsd/amd64: export GOOS = freebsd
freebsd/amd64: export GOARCH = amd64
freebsd/amd64:
mkdir -p /build
########################################################################################################################
# Binary
########################################################################################################################
@echo "building web binary (OS = $(GOOS), ARCH = $(GOARCH))"
go build -ldflags "-extldflags=-static -X main.goos=$(GOOS) -X main.goarch=$(GOARCH)" -o /build/scrutiny-web-$(GOOS)-$(GOARCH) -tags "static netgo sqlite_omit_load_extension" webapp/backend/cmd/scrutiny/scrutiny.go
.PHONY: binary-frontend
# reduce logging, disable angular-cli analytics for ci environment
binary-frontend: export NPM_CONFIG_LOGLEVEL = warn
binary-frontend: export NG_CLI_ANALYTICS = false
binary-frontend:
cd webapp/frontend
npm install -g @angular/cli@v13-lts
mkdir -p $(CURDIR)/dist
npm ci
npm run build:prod -- --output-path=$(CURDIR)/dist
chmod +x "/build/scrutiny-web-$(GOOS)-$(GOARCH)"
file "/build/scrutiny-web-$(GOOS)-$(GOARCH)" || true
ldd "/build/scrutiny-web-$(GOOS)-$(GOARCH)" || true
.PHONY: binary-frontend-test-coverage
# reduce logging, disable angular-cli analytics for ci environment
binary-frontend-test-coverage:
cd webapp/frontend
npm ci
npx ng test --watch=false --browsers=ChromeHeadless --code-coverage
@echo "building collector binary (OS = $(GOOS), ARCH = $(GOARCH))"
go build -ldflags "-extldflags=-static -X main.goos=$(GOOS) -X main.goarch=$(GOARCH)" -o /build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH) -tags "static netgo" collector/cmd/collector-metrics/collector-metrics.go
########################################################################################################################
# Docker
# NOTE: these docker make targets are only used for local development (not used by Github Actions/CI)
########################################################################################################################
.PHONY: docker-smartmontools
docker-smartmontools:
@echo "building smartmontools docker image"
docker build $(DOCKER_TARGETARCH_BUILD_ARG) -f docker/Dockerfile.smartmontools -t smartmontools-build .
chmod +x "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)"
file "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)" || true
ldd "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)" || true
.PHONY: docker-collector
docker-collector: docker-smartmontools
@echo "building collector docker image"
docker build $(DOCKER_TARGETARCH_BUILD_ARG) -f docker/Dockerfile.collector -t ghcr.io/analogj/scrutiny-dev:collector .
freebsd/386: export GOOS = freebsd
freebsd/386: export GOARCH = 386
freebsd/386:
mkdir -p /build
.PHONY: docker-web
docker-web:
@echo "building web docker image"
docker build $(DOCKER_TARGETARCH_BUILD_ARG) -f docker/Dockerfile.web -t ghcr.io/analogj/scrutiny-dev:web .
@echo "building web binary (OS = $(GOOS), ARCH = $(GOARCH))"
go build -ldflags "-extldflags=-static -X main.goos=$(GOOS) -X main.goarch=$(GOARCH)" -o /build/scrutiny-web-$(GOOS)-$(GOARCH) -tags "static netgo sqlite_omit_load_extension" webapp/backend/cmd/scrutiny/scrutiny.go
chmod +x "/build/scrutiny-web-$(GOOS)-$(GOARCH)"
file "/build/scrutiny-web-$(GOOS)-$(GOARCH)" || true
ldd "/build/scrutiny-web-$(GOOS)-$(GOARCH)" || true
@echo "building collector binary (OS = $(GOOS), ARCH = $(GOARCH))"
go build -ldflags "-extldflags=-static -X main.goos=$(GOOS) -X main.goarch=$(GOARCH)" -o /build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH) -tags "static netgo" collector/cmd/collector-metrics/collector-metrics.go
chmod +x "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)"
file "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)" || true
ldd "/build/scrutiny-collector-metrics-$(GOOS)-$(GOARCH)" || true
# clean:
# rm scrutiny-collector-metrics-* scrutiny-web-*
.PHONY: docker-omnibus
docker-omnibus: docker-smartmontools
@echo "building omnibus docker image"
docker build $(DOCKER_TARGETARCH_BUILD_ARG) -f docker/Dockerfile -t ghcr.io/analogj/scrutiny-dev:omnibus .
@@ -7,14 +7,13 @@
# scrutiny
[![CI](https://github.com/AnalogJ/scrutiny/workflows/CI/badge.svg?branch=master)](https://github.com/AnalogJ/scrutiny/actions?query=workflow%3ACI)
[![CI](https://github.com/AnalogJ/scrutiny/actions/workflows/ci.yaml/badge.svg)](https://github.com/AnalogJ/scrutiny/actions/workflows/ci.yaml)
[![codecov](https://codecov.io/gh/AnalogJ/scrutiny/branch/master/graph/badge.svg)](https://codecov.io/gh/AnalogJ/scrutiny)
[![GitHub license](https://img.shields.io/github/license/AnalogJ/scrutiny.svg?style=flat-square)](https://github.com/AnalogJ/scrutiny/blob/master/LICENSE)
[![Godoc](https://img.shields.io/badge/godoc-reference-blue.svg?style=flat-square)](https://godoc.org/github.com/analogj/scrutiny)
[![Go Report Card](https://goreportcard.com/badge/github.com/AnalogJ/scrutiny?style=flat-square)](https://goreportcard.com/report/github.com/AnalogJ/scrutiny)
[![GitHub release](http://img.shields.io/github/release/AnalogJ/scrutiny.svg?style=flat-square)](https://github.com/AnalogJ/scrutiny/releases)
WebUI for smartd S.M.A.R.T monitoring
> NOTE: Scrutiny is a Work-in-Progress and still has some rough edges.
@@ -27,7 +26,7 @@ If you run a server with more than a couple of hard drives, you're probably alre
> smartd is a daemon that monitors the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into many ATA, IDE and SCSI-3 hard drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests.
Theses S.M.A.R.T hard drive self-tests can help you detect and replace failing hard drives before they cause permanent data loss. However, there's a couple issues with `smartd`:
These S.M.A.R.T hard drive self-tests can help you detect and replace failing hard drives before they cause permanent data loss. However, there's a couple issues with `smartd`:
- There are more than a hundred S.M.A.R.T attributes; however, `smartd` does not differentiate between critical and informational metrics
- `smartd` does not record S.M.A.R.T attribute history, so it can be hard to determine if an attribute is degrading slowly over time.
@@ -47,7 +46,7 @@ Scrutiny is a simple but focused application, with a couple of core features:
- Customized thresholds using real world failure rates
- Temperature tracking
- Provided as an all-in-one Docker image (but can be installed manually)
- Future Configurable Alerting/Notifications via Webhooks
- Configurable Alerting/Notifications via Webhooks
- (Future) Hard Drive performance testing & tracking
# Getting Started
@@ -59,54 +58,80 @@ Scrutiny uses `smartctl --scan` to detect devices/drives.
- All RAID controllers supported by `smartctl` are automatically supported by Scrutiny.
- While some RAID controllers support passing through the underlying SMART data to `smartctl` others do not.
- In some cases `--scan` does not correctly detect the device type, returning [incomplete SMART data](https://github.com/AnalogJ/scrutiny/issues/45).
Scrutiny will eventually support overriding detected device type via the config file.
Scrutiny supports overriding detected device type via the config file: see [example.collector.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml)
- If you use docker, you **must** pass though the RAID virtual disk to the container using `--device` (see below)
- This device may be in `/dev/*` or `/dev/bus/*`.
- If you're unsure, run `smartctl --scan` on your host, and pass all listed devices to the container.
See [docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md](./docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md) for help
## Docker
> [!IMPORTANT]
> Using `latest-` tags is dangerous as it can update your image without warning. It is a best practice to pin a specific version. scrutiny pushes releases with semver tags,
> so you can use tags like `v0.8.2-omnibus`, `v0.8-web`, `v0-collector`, etc. For a list of all image tags see
> [scrutiny package versions](https://github.com/AnalogJ/scrutiny/pkgs/container/scrutiny/versions?filters%5Bversion_type%5D=tagged)
If you're using Docker, getting started is as simple as running the following command:
> See [docker/example.omnibus.docker-compose.yml](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.omnibus.docker-compose.yml) for a docker-compose file.
```bash
docker run -it --rm -p 8080:8080 \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
--name scrutiny \
analogj/scrutiny
docker run -p 8080:8080 -p 8086:8086 --restart unless-stopped \
-v `pwd`/scrutiny:/opt/scrutiny/config \
-v `pwd`/influxdb2:/opt/scrutiny/influxdb \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
--name scrutiny \
ghcr.io/analogj/scrutiny:latest-omnibus
```
- `/run/udev` is necessary to provide the Scrutiny collector with access to your device metadata
- `--cap-add SYS_RAWIO` is necessary to allow `smartctl` permission to query your device SMART data
- NOTE: If you have **NVMe** drives, you must add `--cap-add SYS_ADMIN` as well. See issue [#26](https://github.com/AnalogJ/scrutiny/issues/26#issuecomment-696817130)
- `--device` entries are required to ensure that your hard disk devices are accessible within the container.
- `analogj/scrutiny` is a omnibus image, containing both the webapp server (frontend & api) as well as the S.M.A.R.T metric collector. (see below)
- `ghcr.io/analogj/scrutiny:latest-omnibus` is an omnibus image, containing both the webapp server (frontend & api) as well as the S.M.A.R.T metric collector. (see below)
### Hub/Spoke Deployment
In addition to the Omnibus image (available under the `latest` tag) there are 2 other Docker images available:
In addition to the Omnibus image (available under the `latest` tag) you can deploy in Hub/Spoke mode, which requires 3
other Docker images:
- `analogj/scrutiny:collector` - Contains the Scrutiny data collector, `smartctl` binary and cron-like scheduler. You can run one collector on each server.
- `analogj/scrutiny:web` - Contains the Web UI, API and Database. Only one container necessary
- `ghcr.io/analogj/scrutiny:latest-collector` - Contains the Scrutiny data collector, `smartctl` binary and cron-like
scheduler. You can run one collector on each server.
- `ghcr.io/analogj/scrutiny:latest-web` - Contains the Web UI and API. Only one container necessary
- `influxdb:2.8` - InfluxDB image, used by the Web container to persist SMART data. Only one container necessary
See [docs/TROUBLESHOOTING_INFLUXDB.md](./docs/TROUBLESHOOTING_INFLUXDB.md)
> See [docker/example.hubspoke.docker-compose.yml](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml) for a docker-compose file.
```bash
docker run -it --rm -p 8080:8080 \
--name scrutiny-web \
analogj/scrutiny:web
docker run -p 8086:8086 --restart unless-stopped \
-v `pwd`/influxdb2:/var/lib/influxdb2 \
--name scrutiny-influxdb \
influxdb:2.8
docker run -it --rm \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
-e SCRUTINY_API_ENDPOINT=http://SCRUTINY_WEB_IPADDRESS:8080 \
--name scrutiny-collector \
analogj/scrutiny:collector
docker run -p 8080:8080 --restart unless-stopped \
-v `pwd`/scrutiny:/opt/scrutiny/config \
--name scrutiny-web \
ghcr.io/analogj/scrutiny:latest-web
docker run --restart unless-stopped \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
-e COLLECTOR_API_ENDPOINT=http://SCRUTINY_WEB_IPADDRESS:8080 \
--name scrutiny-collector \
ghcr.io/analogj/scrutiny:latest-collector
```
### Hub rootless installation using Podman Quadlets
See [docs/INSTALL_ROOTLESS_PODMAN.md](docs/INSTALL_ROOTLESS_PODMAN.md) for instructions.
## Manual Installation (without-Docker)
While the easiest way to get started with [Scrutiny is using Docker](https://github.com/AnalogJ/scrutiny#docker),
@@ -125,12 +150,12 @@ drive that Scrutiny detected. The collector is configured to run once a day, but
For users of the docker Hub/Spoke deployment or manual install: initially the dashboard will be empty.
After the first collector run, you'll be greeted with a list of all your hard drives and their current smart status.
```
docker exec scrutiny /scrutiny/bin/scrutiny-collector-metrics run
```bash
docker exec scrutiny /opt/scrutiny/bin/scrutiny-collector-metrics run
```
# Configuration
By default Scrutiny looks for its YAML configuration files in `/scrutiny/config`
By default Scrutiny looks for its YAML configuration files in `/opt/scrutiny/config`
There are two configuration files available:
@@ -139,6 +164,13 @@ There are two configuration files available:
Neither file is required, however if provided, it allows you to configure how Scrutiny functions.
## Cron Schedule
Unfortunately the Cron schedule cannot be configured via the `collector.yaml` (as the collector binary needs to be triggered by a scheduler/cron).
However, if you are using the official `ghcr.io/analogj/scrutiny:latest-collector` or `ghcr.io/analogj/scrutiny:latest-omnibus` docker images,
you can use the `COLLECTOR_CRON_SCHEDULE` environmental variable to override the default cron schedule (daily @ midnight - `0 0 * * *`).
`docker run -e COLLECTOR_CRON_SCHEDULE="0 0 * * *" ...`
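The override amounts to substituting your schedule into the collector's cron entry. A minimal sketch of that substitution — the entrypoint line shown is illustrative, not the image's actual startup script:

```shell
# Illustrative sketch of the COLLECTOR_CRON_SCHEDULE override; the official
# image's entrypoint differs, but the substitution works like this.
COLLECTOR_CRON_SCHEDULE="${COLLECTOR_CRON_SCHEDULE:-0 0 * * *}"  # default: daily @ midnight
echo "$COLLECTOR_CRON_SCHEDULE /opt/scrutiny/bin/scrutiny-collector-metrics run"
```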
## Notifications
Scrutiny supports sending SMART device failure notifications via the following services:
@@ -151,6 +183,7 @@ Scrutiny supports sending SMART device failure notifications via the following s
- IFTTT
- Join
- Mattermost
- ntfy
- Pushbullet
- Pushover
- Slack
@@ -158,13 +191,15 @@ Scrutiny supports sending SMART device failure notifications via the following s
- Telegram
- Zulip
Check the `notify.urls` section of [example.scrutiny.yml](example.scrutiny.yaml) for more information and documentation for service specific setup.
Check the `notify.urls` section of [example.scrutiny.yml](example.scrutiny.yaml) for examples.
For more information and troubleshooting, see the [TROUBLESHOOTING_NOTIFICATIONS.md](./docs/TROUBLESHOOTING_NOTIFICATIONS.md) file
### Testing Notifications
You can test that your notifications are configured correctly by posting an empty payload to the notifications health check API.
```
```bash
curl -X POST http://localhost:8080/api/health/notify
```
@@ -175,14 +210,14 @@ Scrutiny provides various methods to change the log level to debug and generate
You can use environmental variables to enable debug logging and/or log files for the web server:
```
```bash
DEBUG=true
SCRUTINY_LOG_FILE=/tmp/web.log
```
You can configure the log level and log file in the config file:
```
```yml
log:
file: '/tmp/web.log'
level: DEBUG
@@ -190,7 +225,7 @@ log:
Or if you're not using docker, you can pass CLI arguments to the web server during startup:
```
```bash
scrutiny start --debug --log-file /tmp/web.log
```
@@ -198,17 +233,33 @@ scrutiny start --debug --log-file /tmp/web.log
You can use environment variables to enable debug logging and/or log files for the collector:
```
```bash
DEBUG=true
COLLECTOR_LOG_FILE=/tmp/collector.log
```
Or if you're not using docker, you can pass CLI arguments to the collector during startup:
```
```bash
scrutiny-collector-metrics run --debug --log-file /tmp/collector.log
```
# Supported Architectures
| Architecture Name | Binaries | Docker |
| --- | --- | --- |
| linux-amd64 | :white_check_mark: | :white_check_mark: |
| linux-arm-5 | :white_check_mark: | |
| linux-arm-6 | :white_check_mark: | |
| linux-arm-7 | :white_check_mark: | web/collector only. see [#236](https://github.com/AnalogJ/scrutiny/issues/236) |
| linux-arm64 | :white_check_mark: | :white_check_mark: |
| freebsd-amd64 | :white_check_mark: | |
| macos-amd64 | :white_check_mark: | :white_check_mark: |
| macos-arm64 | :white_check_mark: | :white_check_mark: |
| windows-amd64 | :white_check_mark: | WIP, see [#15](https://github.com/AnalogJ/scrutiny/issues/15) |
| windows-arm64 | :white_check_mark: | |
# Contributing
Please see the [CONTRIBUTING.md](CONTRIBUTING.md) for instructions on how to develop and contribute to the scrutiny codebase.
@@ -223,7 +274,8 @@ We use SemVer for versioning. For the versions available, see the tags on this r
# Authors
Jason Kulatunga - Initial Development - @AnalogJ
* Jason Kulatunga - Initial Development - [@AnalogJ](https://github.com/AnalogJ/)
* Aram Akhavan - Maintenance - [@kaysond](https://github.com/kaysond/)
# Licenses
@@ -1,16 +1,19 @@
package main
import (
"encoding/json"
"fmt"
"io"
"log"
"os"
"strings"
"time"
"github.com/analogj/scrutiny/collector/pkg/collector"
"github.com/analogj/scrutiny/collector/pkg/config"
"github.com/analogj/scrutiny/collector/pkg/errors"
"github.com/analogj/scrutiny/webapp/backend/pkg/version"
"github.com/sirupsen/logrus"
"io"
"log"
"os"
"time"
utils "github.com/analogj/go-util/utils"
"github.com/fatih/color"
@@ -28,9 +31,15 @@ func main() {
os.Exit(1)
}
configFilePath := "/opt/scrutiny/config/collector.yaml"
configFilePathAlternative := "/opt/scrutiny/config/collector.yml"
if !utils.FileExists(configFilePath) && utils.FileExists(configFilePathAlternative) {
configFilePath = configFilePathAlternative
}
//we're going to load the config file manually, since we need to validate it.
err = config.ReadConfig("/scrutiny/config/collector.yaml") // Find and read the config file
if _, ok := err.(errors.ConfigFileMissingError); ok { // Handle errors reading the config file
err = config.ReadConfig(configFilePath) // Find and read the config file
if _, ok := err.(errors.ConfigFileMissingError); ok { // Handle errors reading the config file
//ignore "could not find config file"
} else if err != nil {
os.Exit(1)
@@ -73,7 +82,7 @@ OPTIONS:
subtitle := collectorMetrics + utils.LeftPad2Len(versionInfo, " ", 65-len(collectorMetrics))
color.New(color.FgGreen).Fprintf(c.App.Writer, fmt.Sprintf(utils.StripIndent(
color.New(color.FgGreen).Fprintf(c.App.Writer, utils.StripIndent(
`
___ ___ ____ __ __ ____ ____ _ _ _ _
/ __) / __)( _ \( )( )(_ _)(_ _)( \( )( \/ )
@@ -81,7 +90,7 @@ OPTIONS:
(___/ \___)(_)\_)(______) (__) (____)(_)\_) (__)
%s
`), subtitle))
`), subtitle)
return nil
},
@@ -99,34 +108,40 @@ OPTIONS:
return err
}
}
//override config with flags if set
if c.IsSet("host-id") {
config.Set("host.id", c.String("host-id")) // set/override the host-id using CLI.
}
collectorLogger := logrus.WithFields(logrus.Fields{
"type": "metrics",
})
if c.Bool("debug") {
logrus.SetLevel(logrus.DebugLevel)
} else {
logrus.SetLevel(logrus.InfoLevel)
config.Set("log.level", "DEBUG")
}
if c.IsSet("log-file") {
logFile, err := os.OpenFile(c.String("log-file"), os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
logrus.Errorf("Failed to open log file %s for output: %s", c.String("log-file"), err)
return err
}
defer logFile.Close()
logrus.SetOutput(io.MultiWriter(os.Stderr, logFile))
config.Set("log.file", c.String("log-file"))
}
if c.IsSet("api-endpoint") {
//if the user is providing an api-endpoint with a basepath (eg. http://localhost:8080/scrutiny),
//we need to ensure the basepath has a trailing slash, otherwise the url.Parse() path concatenation doesn't work.
apiEndpoint := strings.TrimSuffix(c.String("api-endpoint"), "/") + "/"
config.Set("api.endpoint", apiEndpoint)
}
collectorLogger, logFile, err := CreateLogger(config)
if logFile != nil {
defer logFile.Close()
}
if err != nil {
return err
}
settingsData, err := json.MarshalIndent(config.AllSettings(), "", "\t")
collectorLogger.Debug(string(settingsData), err)
metricCollector, err := collector.CreateMetricsCollector(
config,
collectorLogger,
c.String("api-endpoint"),
config.GetString("api.endpoint"),
)
if err != nil {
@@ -144,14 +159,13 @@ OPTIONS:
&cli.StringFlag{
Name: "api-endpoint",
Usage: "The api server endpoint",
Value: "http://localhost:8080",
EnvVars: []string{"SCRUTINY_API_ENDPOINT"},
EnvVars: []string{"COLLECTOR_API_ENDPOINT", "SCRUTINY_API_ENDPOINT"},
//SCRUTINY_API_ENDPOINT is deprecated, but kept for backwards compatibility
},
&cli.StringFlag{
Name: "log-file",
Usage: "Path to file for logging. Leave empty to use STDOUT",
Value: "",
EnvVars: []string{"COLLECTOR_LOG_FILE"},
},
@@ -176,5 +190,28 @@ OPTIONS:
if err != nil {
log.Fatal(color.HiRedString("ERROR: %v", err))
}
}
func CreateLogger(appConfig config.Interface) (*logrus.Entry, *os.File, error) {
logger := logrus.WithFields(logrus.Fields{
"type": "metrics",
})
if level, err := logrus.ParseLevel(appConfig.GetString("log.level")); err == nil {
logger.Logger.SetLevel(level)
} else {
logger.Logger.SetLevel(logrus.InfoLevel)
}
var logFile *os.File
var err error
if appConfig.IsSet("log.file") && len(appConfig.GetString("log.file")) > 0 {
logFile, err = os.OpenFile(appConfig.GetString("log.file"), os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
logger.Logger.Errorf("Failed to open log file %s for output: %s", appConfig.GetString("log.file"), err)
return nil, logFile, err
}
logger.Logger.SetOutput(io.MultiWriter(os.Stderr, logFile))
}
return logger, logFile, nil
}
@@ -2,14 +2,15 @@ package main
import (
"fmt"
"github.com/analogj/scrutiny/collector/pkg/collector"
"github.com/analogj/scrutiny/webapp/backend/pkg/version"
"github.com/sirupsen/logrus"
"io"
"log"
"os"
"time"
"github.com/analogj/scrutiny/collector/pkg/collector"
"github.com/analogj/scrutiny/webapp/backend/pkg/version"
"github.com/sirupsen/logrus"
utils "github.com/analogj/go-util/utils"
"github.com/fatih/color"
"github.com/urfave/cli/v2"
@@ -57,7 +58,7 @@ OPTIONS:
subtitle := collectorSelfTest + utils.LeftPad2Len(versionInfo, " ", 65-len(collectorSelfTest))
color.New(color.FgGreen).Fprintf(c.App.Writer, fmt.Sprintf(utils.StripIndent(
color.New(color.FgGreen).Fprintf(c.App.Writer, utils.StripIndent(
`
___ ___ ____ __ __ ____ ____ _ _ _ _
/ __) / __)( _ \( )( )(_ _)(_ _)( \( )( \/ )
@@ -65,7 +66,7 @@ OPTIONS:
(___/ \___)(_)\_)(______) (__) (____)(_)\_) (__)
%s
`), subtitle))
`), subtitle)
return nil
},
@@ -96,6 +97,7 @@ OPTIONS:
logrus.SetOutput(io.MultiWriter(os.Stderr, logFile))
}
//TODO: pass in the collector, use configuration from collector-metrics
stCollector, err := collector.CreateSelfTestCollector(
collectorLogger,
c.String("api-endpoint"),
@@ -113,7 +115,7 @@ OPTIONS:
Name: "api-endpoint",
Usage: "The api server endpoint",
Value: "http://localhost:8080",
EnvVars: []string{"SCRUTINY_API_ENDPOINT"},
EnvVars: []string{"COLLECTOR_API_ENDPOINT"},
},
&cli.StringFlag{
@@ -3,28 +3,18 @@ package collector
import (
"bytes"
"encoding/json"
"github.com/sirupsen/logrus"
"net/http"
"time"
"github.com/sirupsen/logrus"
)
var httpClient = &http.Client{Timeout: 10 * time.Second}
var httpClient = &http.Client{Timeout: 60 * time.Second}
type BaseCollector struct {
logger *logrus.Entry
}
func (c *BaseCollector) getJson(url string, target interface{}) error {
r, err := httpClient.Get(url)
if err != nil {
return err
}
defer r.Body.Close()
return json.NewDecoder(r.Body).Decode(target)
}
func (c *BaseCollector) postJson(url string, body interface{}, target interface{}) error {
requestBody, err := json.Marshal(body)
if err != nil {
@@ -40,6 +30,7 @@ func (c *BaseCollector) postJson(url string, body interface{}, target interface{
return json.NewDecoder(r.Body).Decode(target)
}
// http://www.linuxguide.it/command_line/linux-manpage/do.php?file=smartctl#sect7
func (c *BaseCollector) LogSmartctlExitCode(exitCode int) {
if exitCode&0x01 != 0 {
c.logger.Errorln("smartctl could not parse commandline")
@@ -4,22 +4,26 @@ import (
"bytes"
"encoding/json"
"fmt"
"github.com/analogj/scrutiny/collector/pkg/common"
"github.com/analogj/scrutiny/collector/pkg/config"
"github.com/analogj/scrutiny/collector/pkg/detect"
"github.com/analogj/scrutiny/collector/pkg/errors"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/sirupsen/logrus"
"net/url"
"os"
"os/exec"
"strings"
"time"
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/config"
"github.com/analogj/scrutiny/collector/pkg/detect"
"github.com/analogj/scrutiny/collector/pkg/errors"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/samber/lo"
"github.com/sirupsen/logrus"
)
type MetricsCollector struct {
config config.Interface
BaseCollector
apiEndpoint *url.URL
shell shell.Interface
}
func CreateMetricsCollector(appConfig config.Interface, logger *logrus.Entry, apiEndpoint string) (MetricsCollector, error) {
@@ -34,6 +38,7 @@ func CreateMetricsCollector(appConfig config.Interface, logger *logrus.Entry, ap
BaseCollector: BaseCollector{
logger: logger,
},
shell: shell.Create(),
}
return sc, nil
@@ -46,7 +51,7 @@ func (mc *MetricsCollector) Run() error {
}
apiEndpoint, _ := url.Parse(mc.apiEndpoint.String())
apiEndpoint.Path = "/api/devices/register"
apiEndpoint, _ = apiEndpoint.Parse("api/devices/register") //this acts like filepath.Join()
deviceRespWrapper := new(models.DeviceWrapper)
@@ -54,11 +59,16 @@ func (mc *MetricsCollector) Run() error {
Logger: mc.logger,
Config: mc.config,
}
detectedStorageDevices, err := deviceDetector.Start()
rawDetectedStorageDevices, err := deviceDetector.Start()
if err != nil {
return err
}
//filter any device with empty wwn (they are invalid)
detectedStorageDevices := lo.Filter[models.Device](rawDetectedStorageDevices, func(dev models.Device, _ int) bool {
return len(dev.WWN) > 0
})
mc.logger.Infoln("Sending detected devices to API, for filtering & validation")
jsonObj, _ := json.Marshal(detectedStorageDevices)
mc.logger.Debugf("Detected devices: %v", string(jsonObj))
@@ -71,6 +81,7 @@ func (mc *MetricsCollector) Run() error {
if !deviceRespWrapper.Success {
mc.logger.Errorln("An error occurred while retrieving filtered devices")
mc.logger.Debugln(deviceRespWrapper)
return errors.ApiServerCommunicationError("An error occurred while retrieving filtered devices")
} else {
mc.logger.Debugln(deviceRespWrapper)
@@ -81,8 +92,9 @@ func (mc *MetricsCollector) Run() error {
//go mc.Collect(&wg, device.WWN, device.DeviceName, device.DeviceType)
mc.Collect(device.WWN, device.DeviceName, device.DeviceType)
// TODO: we may need to sleep for between each call to smartctl -a
//time.Sleep(30 * time.Millisecond)
if mc.config.GetInt("commands.metrics_smartctl_wait") > 0 {
time.Sleep(time.Duration(mc.config.GetInt("commands.metrics_smartctl_wait")) * time.Second)
}
}
//mc.logger.Infoln("Main: Waiting for workers to finish")
@@ -95,28 +107,33 @@ func (mc *MetricsCollector) Run() error {
func (mc *MetricsCollector) Validate() error {
mc.logger.Infoln("Verifying required tools")
_, lookErr := exec.LookPath("smartctl")
_, lookErr := exec.LookPath(mc.config.GetString("commands.metrics_smartctl_bin"))
if lookErr != nil {
return errors.DependencyMissingError("smartctl is missing")
return errors.DependencyMissingError(fmt.Sprintf("%s binary is missing", mc.config.GetString("commands.metrics_smartctl_bin")))
}
return nil
}
//func (mc *MetricsCollector) Collect(wg *sync.WaitGroup, deviceWWN string, deviceName string, deviceType string) {
// func (mc *MetricsCollector) Collect(wg *sync.WaitGroup, deviceWWN string, deviceName string, deviceType string) {
func (mc *MetricsCollector) Collect(deviceWWN string, deviceName string, deviceType string) {
//defer wg.Done()
if len(deviceWWN) == 0 {
mc.logger.Errorf("no device WWN detected for %s. Skipping collection for this device (no data association possible).\n", deviceName)
return
}
mc.logger.Infof("Collecting smartctl results for %s\n", deviceName)
args := []string{"-x", "-j"}
fullDeviceName := fmt.Sprintf("%s%s", detect.DevicePrefix(), deviceName)
args := strings.Split(mc.config.GetCommandMetricsSmartArgs(fullDeviceName), " ")
//only include the device type if it's a non-standard one. In some cases ata drives are detected as scsi in docker, and metadata is lost.
if len(deviceType) > 0 && deviceType != "scsi" && deviceType != "ata" {
args = append(args, "-d", deviceType)
args = append(args, "--device", deviceType)
}
args = append(args, fmt.Sprintf("%s%s", detect.DevicePrefix(), deviceName))
args = append(args, fullDeviceName)
result, err := common.ExecCmd(mc.logger, "smartctl", args, "", os.Environ())
result, err := mc.shell.Command(mc.logger, mc.config.GetString("commands.metrics_smartctl_bin"), args, "", os.Environ())
resultBytes := []byte(result)
if err != nil {
if exitError, ok := err.(*exec.ExitError); ok {
@@ -140,7 +157,7 @@ func (mc *MetricsCollector) Publish(deviceWWN string, payload []byte) error {
mc.logger.Infof("Publishing smartctl results for %s\n", deviceWWN)
apiEndpoint, _ := url.Parse(mc.apiEndpoint.String())
apiEndpoint.Path = fmt.Sprintf("/api/device/%s/smart", strings.ToLower(deviceWWN))
apiEndpoint, _ = apiEndpoint.Parse(fmt.Sprintf("api/device/%s/smart", strings.ToLower(deviceWWN)))
resp, err := httpClient.Post(apiEndpoint.String(), "application/json", bytes.NewBuffer(payload))
if err != nil {
@@ -0,0 +1,38 @@
package collector
import (
"github.com/stretchr/testify/require"
"net/url"
"testing"
)
func TestApiEndpointParse(t *testing.T) {
baseURL, _ := url.Parse("http://localhost:8080/")
url1, _ := baseURL.Parse("d/e")
require.Equal(t, "http://localhost:8080/d/e", url1.String())
url2, _ := baseURL.Parse("/d/e")
require.Equal(t, "http://localhost:8080/d/e", url2.String())
}
func TestApiEndpointParse_WithBasepathWithoutTrailingSlash(t *testing.T) {
baseURL, _ := url.Parse("http://localhost:8080/scrutiny")
//This testcase is unexpected and can cause issues. We need to ensure the apiEndpoint always has a trailing slash.
url1, _ := baseURL.Parse("d/e")
require.Equal(t, "http://localhost:8080/d/e", url1.String())
url2, _ := baseURL.Parse("/d/e")
require.Equal(t, "http://localhost:8080/d/e", url2.String())
}
func TestApiEndpointParse_WithBasepathWithTrailingSlash(t *testing.T) {
baseURL, _ := url.Parse("http://localhost:8080/scrutiny/")
url1, _ := baseURL.Parse("d/e")
require.Equal(t, "http://localhost:8080/scrutiny/d/e", url1.String())
url2, _ := baseURL.Parse("/d/e")
require.Equal(t, "http://localhost:8080/d/e", url2.String())
}
@@ -0,0 +1,5 @@
package shell
func Create() Interface {
return new(localShell)
}
@@ -0,0 +1,11 @@
package shell
import (
"github.com/sirupsen/logrus"
)
// Create mock using:
// mockgen -source=collector/pkg/common/shell/interface.go -destination=collector/pkg/common/shell/mock/mock_shell.go
type Interface interface {
Command(logger *logrus.Entry, cmdName string, cmdArgs []string, workingDir string, environ []string) (string, error)
}
@@ -1,21 +1,32 @@
package common
package shell
import (
"bytes"
"errors"
"github.com/sirupsen/logrus"
"io"
"os/exec"
"path"
"strings"
"github.com/sirupsen/logrus"
)
func ExecCmd(logger *logrus.Entry, cmdName string, cmdArgs []string, workingDir string, environ []string) (string, error) {
type localShell struct{}
func (s *localShell) Command(logger *logrus.Entry, cmdName string, cmdArgs []string, workingDir string, environ []string) (string, error) {
logger.Infof("Executing command: %s %s", cmdName, strings.Join(cmdArgs, " "))
cmd := exec.Command(cmdName, cmdArgs...)
var stdBuffer bytes.Buffer
mw := io.MultiWriter(logger.Logger.Out, &stdBuffer)
logWriters := []io.Writer{
&stdBuffer,
}
if logger.Logger.Level == logrus.DebugLevel {
logWriters = append(logWriters, logger.Logger.Out)
}
mw := io.MultiWriter(logWriters...)
cmd.Stdout = mw
cmd.Stderr = mw
@@ -26,7 +37,7 @@ func ExecCmd(logger *logrus.Entry, cmdName string, cmdArgs []string, workingDir
if workingDir != "" && path.IsAbs(workingDir) {
cmd.Dir = workingDir
} else if workingDir != "" {
return "", errors.New("Working Directory must be an absolute path")
return "", errors.New("working directory must be an absolute path")
}
err := cmd.Run()
@@ -1,33 +1,33 @@
package common_test
package shell
import (
"github.com/analogj/scrutiny/collector/pkg/common"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"os/exec"
"testing"
)
func TestExecCmd(t *testing.T) {
func TestLocalShellCommand(t *testing.T) {
t.Parallel()
//setup
testShell := localShell{}
//test
result, err := common.ExecCmd(logrus.WithField("exec", "test"), "echo", []string{"hello world"}, "", nil)
result, err := testShell.Command(logrus.WithField("exec", "test"), "echo", []string{"hello world"}, "", nil)
//assert
require.NoError(t, err)
require.Equal(t, "hello world\n", result)
}
func TestExecCmd_Date(t *testing.T) {
func TestLocalShellCommand_Date(t *testing.T) {
t.Parallel()
//setup
testShell := localShell{}
//test
_, err := common.ExecCmd(logrus.WithField("exec", "test"), "date", []string{}, "", nil)
_, err := testShell.Command(logrus.WithField("exec", "test"), "date", []string{}, "", nil)
//assert
require.NoError(t, err)
@@ -51,13 +51,14 @@ func TestExecCmd_Date(t *testing.T) {
//}
//
func TestExecCmd_InvalidCommand(t *testing.T) {
func TestLocalShellCommand_InvalidCommand(t *testing.T) {
t.Parallel()
//setup
testShell := localShell{}
//test
_, err := common.ExecCmd(logrus.WithField("exec", "test"), "invalid_binary", []string{}, "", nil)
_, err := testShell.Command(logrus.WithField("exec", "test"), "invalid_binary", []string{}, "", nil)
//assert
_, castOk := err.(*exec.ExitError)
@@ -0,0 +1,50 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: collector/pkg/common/shell/interface.go
// Package mock_shell is a generated GoMock package.
package mock_shell
import (
reflect "reflect"
logrus "github.com/sirupsen/logrus"
gomock "go.uber.org/mock/gomock"
)
// MockInterface is a mock of Interface interface.
type MockInterface struct {
ctrl *gomock.Controller
recorder *MockInterfaceMockRecorder
}
// MockInterfaceMockRecorder is the mock recorder for MockInterface.
type MockInterfaceMockRecorder struct {
mock *MockInterface
}
// NewMockInterface creates a new mock instance.
func NewMockInterface(ctrl *gomock.Controller) *MockInterface {
mock := &MockInterface{ctrl: ctrl}
mock.recorder = &MockInterfaceMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockInterface) EXPECT() *MockInterfaceMockRecorder {
return m.recorder
}
// Command mocks base method.
func (m *MockInterface) Command(logger *logrus.Entry, cmdName string, cmdArgs []string, workingDir string, environ []string) (string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Command", logger, cmdName, cmdArgs, workingDir, environ)
ret0, _ := ret[0].(string)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// Command indicates an expected call of Command.
func (mr *MockInterfaceMockRecorder) Command(logger, cmdName, cmdArgs, workingDir, environ interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Command", reflect.TypeOf((*MockInterface)(nil).Command), logger, cmdName, cmdArgs, workingDir, environ)
}
@@ -1,13 +1,17 @@
package config
import (
"fmt"
"log"
"os"
"sort"
"strings"
"github.com/analogj/go-util/utils"
"github.com/analogj/scrutiny/collector/pkg/errors"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/mitchellh/mapstructure"
"github.com/go-viper/mapstructure/v2"
"github.com/spf13/viper"
"log"
"os"
)
// When initializing this class the following methods must be called:
@@ -16,6 +20,8 @@ import (
// This is done automatically when created via the Factory.
type configuration struct {
*viper.Viper
deviceOverrides []models.ScanOverride
}
//Viper uses the following precedence order. Each item takes precedence over the item below it:
@@ -33,8 +39,26 @@ func (c *configuration) Init() error {
c.SetDefault("devices", []string{})
c.SetDefault("log.level", "INFO")
c.SetDefault("log.file", "")
c.SetDefault("api.endpoint", "http://localhost:8080")
c.SetDefault("commands.metrics_smartctl_bin", "smartctl")
c.SetDefault("commands.metrics_scan_args", "--scan --json")
c.SetDefault("commands.metrics_info_args", "--info --json")
c.SetDefault("commands.metrics_smart_args", "--xall --json")
c.SetDefault("commands.metrics_smartctl_wait", 0)
//configure env variable parsing.
c.SetEnvPrefix("COLLECTOR")
c.SetEnvKeyReplacer(strings.NewReplacer("-", "_", ".", "_"))
c.AutomaticEnv()
//c.SetDefault("collect.short.command", "-a -o on -S on")
c.SetDefault("allow_listed_devices", []string{})
//if you want to load a non-standard location system config file (~/drawbridge.yml), use ReadConfig
c.SetConfigType("yaml")
//c.SetConfigName("drawbridge")
@@ -85,16 +109,104 @@ func (c *configuration) ValidateConfig() error {
// check that device prefix matches OS
// check that schema of config file is valid
return nil
// check that the collector commands are valid
commandArgStrings := map[string]string{
"commands.metrics_scan_args": c.GetString("commands.metrics_scan_args"),
"commands.metrics_info_args": c.GetString("commands.metrics_info_args"),
"commands.metrics_smart_args": c.GetString("commands.metrics_smart_args"),
}
errorStrings := []string{}
for configKey, commandArgString := range commandArgStrings {
args := strings.Split(commandArgString, " ")
//ensure that the args string contains `--json` or `-j` flag
containsJsonFlag := false
containsDeviceFlag := false
for _, flag := range args {
if strings.HasPrefix(flag, "--json") || strings.HasPrefix(flag, "-j") {
containsJsonFlag = true
}
if strings.HasPrefix(flag, "--device") || strings.HasPrefix(flag, "-d") {
containsDeviceFlag = true
}
}
if !containsJsonFlag {
errorStrings = append(errorStrings, fmt.Sprintf("configuration key '%s' is missing '--json' flag", configKey))
}
if containsDeviceFlag {
errorStrings = append(errorStrings, fmt.Sprintf("configuration key '%s' must not contain '--device' or '-d' flag", configKey))
}
}
//sort(errorStrings)
sort.Strings(errorStrings)
if len(errorStrings) == 0 {
return nil
} else {
return errors.ConfigValidationError(strings.Join(errorStrings, ", "))
}
}
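The validation above requires every command-args override to keep a `--json`/`-j` flag and forbids hard-coding a `--device`/`-d` flag (the collector appends the device itself at runtime). A sketch of a valid `collector.yaml` override (the `-T permissive` value mirrors the test fixture used in this changeset):

```yml
commands:
  # must retain --json; must not include --device/-d
  metrics_smart_args: "--xall --json -T permissive"
```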
func (c *configuration) GetScanOverrides() []models.ScanOverride {
func (c *configuration) GetDeviceOverrides() []models.ScanOverride {
// we have to support 2 types of device types.
// - simple device type (device_type: 'sat')
// and list of device types (type: \n- 3ware,0 \n- 3ware,1 \n- 3ware,2)
// GetString will return "" if this is a list of device types.
overrides := []models.ScanOverride{}
c.UnmarshalKey("devices", &overrides, func(c *mapstructure.DecoderConfig) { c.WeaklyTypedInput = true })
return overrides
if c.deviceOverrides == nil {
overrides := []models.ScanOverride{}
c.UnmarshalKey("devices", &overrides, func(c *mapstructure.DecoderConfig) { c.WeaklyTypedInput = true })
c.deviceOverrides = overrides
}
return c.deviceOverrides
}
func (c *configuration) GetCommandMetricsInfoArgs(deviceName string) string {
overrides := c.GetDeviceOverrides()
for _, deviceOverrides := range overrides {
if strings.EqualFold(deviceName, deviceOverrides.Device) {
//found matching device
if len(deviceOverrides.Commands.MetricsInfoArgs) > 0 {
return deviceOverrides.Commands.MetricsInfoArgs
} else {
return c.GetString("commands.metrics_info_args")
}
}
}
return c.GetString("commands.metrics_info_args")
}
func (c *configuration) GetCommandMetricsSmartArgs(deviceName string) string {
overrides := c.GetDeviceOverrides()
for _, deviceOverrides := range overrides {
if strings.EqualFold(deviceName, deviceOverrides.Device) {
//found matching device
if len(deviceOverrides.Commands.MetricsSmartArgs) > 0 {
return deviceOverrides.Commands.MetricsSmartArgs
} else {
return c.GetString("commands.metrics_smart_args")
}
}
}
return c.GetString("commands.metrics_smart_args")
}
func (c *configuration) IsAllowlistedDevice(deviceName string) bool {
allowList := c.GetStringSlice("allow_listed_devices")
if len(allowList) == 0 {
return true
}
for _, item := range allowList {
if item == deviceName {
return true
}
}
return false
}
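`IsAllowlistedDevice` treats an empty or missing `allow_listed_devices` key as "allow everything". A sketch of the config it consumes (device paths are illustrative):

```yml
# only these devices will be collected; omit the key entirely to collect all devices
allow_listed_devices:
  - /dev/sda
  - /dev/sdb
```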
@@ -8,6 +8,19 @@ import (
"testing"
)
func TestConfiguration_InvalidConfigPath(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig("does_not_exist.yaml")
//assert
require.Error(t, err, "should return an error")
}
func TestConfiguration_GetScanOverrides_Simple(t *testing.T) {
t.Parallel()
@@ -17,12 +30,31 @@ func TestConfiguration_GetScanOverrides_Simple(t *testing.T) {
//test
err := testConfig.ReadConfig(path.Join("testdata", "simple_device.yaml"))
require.NoError(t, err, "should correctly load simple device config")
scanOverrides := testConfig.GetScanOverrides()
scanOverrides := testConfig.GetDeviceOverrides()
//assert
require.Equal(t, []models.ScanOverride{{Device: "/dev/sda", DeviceType: []string{"sat"}, Ignore: false}}, scanOverrides)
}
// fixes #418
func TestConfiguration_GetScanOverrides_DeviceTypeComma(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig(path.Join("testdata", "device_type_comma.yaml"))
require.NoError(t, err, "should correctly load simple device config")
scanOverrides := testConfig.GetDeviceOverrides()
//assert
require.Equal(t, []models.ScanOverride{
{Device: "/dev/sda", DeviceType: []string{"sat", "auto"}, Ignore: false},
{Device: "/dev/sdb", DeviceType: []string{"sat,auto"}, Ignore: false},
}, scanOverrides)
}
func TestConfiguration_GetScanOverrides_Ignore(t *testing.T) {
t.Parallel()
@@ -32,7 +64,7 @@ func TestConfiguration_GetScanOverrides_Ignore(t *testing.T) {
//test
err := testConfig.ReadConfig(path.Join("testdata", "ignore_device.yaml"))
require.NoError(t, err, "should correctly load ignore device config")
scanOverrides := testConfig.GetScanOverrides()
scanOverrides := testConfig.GetDeviceOverrides()
//assert
require.Equal(t, []models.ScanOverride{{Device: "/dev/sda", DeviceType: nil, Ignore: true}}, scanOverrides)
@@ -47,7 +79,7 @@ func TestConfiguration_GetScanOverrides_Raid(t *testing.T) {
//test
err := testConfig.ReadConfig(path.Join("testdata", "raid_device.yaml"))
require.NoError(t, err, "should correctly load ignore device config")
scanOverrides := testConfig.GetScanOverrides()
scanOverrides := testConfig.GetDeviceOverrides()
//assert
require.Equal(t, []models.ScanOverride{
@@ -62,3 +94,79 @@ func TestConfiguration_GetScanOverrides_Raid(t *testing.T) {
Ignore: false,
}}, scanOverrides)
}
func TestConfiguration_InvalidCommands_MissingJson(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig(path.Join("testdata", "invalid_commands_missing_json.yaml"))
require.EqualError(t, err, `ConfigValidationError: "configuration key 'commands.metrics_scan_args' is missing '--json' flag"`, "should throw an error because json flag is missing")
}
func TestConfiguration_InvalidCommands_IncludesDevice(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig(path.Join("testdata", "invalid_commands_includes_device.yaml"))
require.EqualError(t, err, `ConfigValidationError: "configuration key 'commands.metrics_info_args' must not contain '--device' or '-d' flag, configuration key 'commands.metrics_smart_args' must not contain '--device' or '-d' flag"`, "should throw an error because device flags detected")
}
func TestConfiguration_OverrideCommands(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig(path.Join("testdata", "override_commands.yaml"))
require.NoError(t, err, "should not throw an error")
require.Equal(t, "--xall --json -T permissive", testConfig.GetString("commands.metrics_smart_args"))
}
func TestConfiguration_OverrideDeviceCommands_MetricsInfoArgs(t *testing.T) {
t.Parallel()
//setup
testConfig, _ := config.Create()
//test
err := testConfig.ReadConfig(path.Join("testdata", "override_device_commands.yaml"))
require.NoError(t, err, "should correctly override device command")
//assert
require.Equal(t, "--info --json -T permissive", testConfig.GetCommandMetricsInfoArgs("/dev/sda"))
require.Equal(t, "--info --json", testConfig.GetCommandMetricsInfoArgs("/dev/sdb"))
//require.Equal(t, []models.ScanOverride{{Device: "/dev/sda", DeviceType: nil, Commands: {MetricsInfoArgs: "--info --json -T "}}}, scanOverrides)
}
func TestConfiguration_DeviceAllowList(t *testing.T) {
t.Parallel()
t.Run("present", func(t *testing.T) {
testConfig, err := config.Create()
require.NoError(t, err)
require.NoError(t, testConfig.ReadConfig(path.Join("testdata", "allow_listed_devices_present.yaml")))
require.True(t, testConfig.IsAllowlistedDevice("/dev/sda"), "/dev/sda should be allow listed")
require.False(t, testConfig.IsAllowlistedDevice("/dev/sdc"), "/dev/sdc should not be allow listed")
})
t.Run("missing", func(t *testing.T) {
testConfig, err := config.Create()
require.NoError(t, err)
// Really just any other config where the key is fully missing
require.NoError(t, testConfig.ReadConfig(path.Join("testdata", "override_device_commands.yaml")))
// Anything should be allow listed if the key isn't there
require.True(t, testConfig.IsAllowlistedDevice("/dev/sda"), "/dev/sda should be allow listed")
require.True(t, testConfig.IsAllowlistedDevice("/dev/sdc"), "/dev/sdc should be allow listed")
})
}
@@ -22,5 +22,9 @@ type Interface interface {
GetStringSlice(key string) []string
UnmarshalKey(key string, rawVal interface{}, decoderOpts ...viper.DecoderConfigOption) error
GetScanOverrides() []models.ScanOverride
GetDeviceOverrides() []models.ScanOverride
GetCommandMetricsInfoArgs(deviceName string) string
GetCommandMetricsSmartArgs(deviceName string) string
IsAllowlistedDevice(deviceName string) bool
}
@@ -5,88 +5,37 @@
package mock_config
import (
models "github.com/analogj/scrutiny/collector/pkg/models"
gomock "github.com/golang/mock/gomock"
viper "github.com/spf13/viper"
reflect "reflect"
models "github.com/analogj/scrutiny/collector/pkg/models"
viper "github.com/spf13/viper"
gomock "go.uber.org/mock/gomock"
)
// MockInterface is a mock of Interface interface
// MockInterface is a mock of Interface interface.
type MockInterface struct {
ctrl *gomock.Controller
recorder *MockInterfaceMockRecorder
}
// MockInterfaceMockRecorder is the mock recorder for MockInterface
// MockInterfaceMockRecorder is the mock recorder for MockInterface.
type MockInterfaceMockRecorder struct {
mock *MockInterface
}
// NewMockInterface creates a new mock instance
// NewMockInterface creates a new mock instance.
func NewMockInterface(ctrl *gomock.Controller) *MockInterface {
mock := &MockInterface{ctrl: ctrl}
mock.recorder = &MockInterfaceMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockInterface) EXPECT() *MockInterfaceMockRecorder {
return m.recorder
}
// Init mocks base method
func (m *MockInterface) Init() error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Init")
ret0, _ := ret[0].(error)
return ret0
}
// Init indicates an expected call of Init
func (mr *MockInterfaceMockRecorder) Init() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Init", reflect.TypeOf((*MockInterface)(nil).Init))
}
// ReadConfig mocks base method
func (m *MockInterface) ReadConfig(configFilePath string) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ReadConfig", configFilePath)
ret0, _ := ret[0].(error)
return ret0
}
// ReadConfig indicates an expected call of ReadConfig
func (mr *MockInterfaceMockRecorder) ReadConfig(configFilePath interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ReadConfig", reflect.TypeOf((*MockInterface)(nil).ReadConfig), configFilePath)
}
// Set mocks base method
func (m *MockInterface) Set(key string, value interface{}) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "Set", key, value)
}
// Set indicates an expected call of Set
func (mr *MockInterfaceMockRecorder) Set(key, value interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Set", reflect.TypeOf((*MockInterface)(nil).Set), key, value)
}
// SetDefault mocks base method
func (m *MockInterface) SetDefault(key string, value interface{}) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetDefault", key, value)
}
// SetDefault indicates an expected call of SetDefault
func (mr *MockInterfaceMockRecorder) SetDefault(key, value interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetDefault", reflect.TypeOf((*MockInterface)(nil).SetDefault), key, value)
}
// AllSettings mocks base method
// AllSettings mocks base method.
func (m *MockInterface) AllSettings() map[string]interface{} {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "AllSettings")
@@ -94,27 +43,13 @@ func (m *MockInterface) AllSettings() map[string]interface{} {
return ret0
}
// AllSettings indicates an expected call of AllSettings
// AllSettings indicates an expected call of AllSettings.
func (mr *MockInterfaceMockRecorder) AllSettings() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "AllSettings", reflect.TypeOf((*MockInterface)(nil).AllSettings))
}
// IsSet mocks base method
func (m *MockInterface) IsSet(key string) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsSet", key)
ret0, _ := ret[0].(bool)
return ret0
}
// IsSet indicates an expected call of IsSet
func (mr *MockInterfaceMockRecorder) IsSet(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsSet", reflect.TypeOf((*MockInterface)(nil).IsSet), key)
}
// Get mocks base method
// Get mocks base method.
func (m *MockInterface) Get(key string) interface{} {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Get", key)
@@ -122,13 +57,13 @@ func (m *MockInterface) Get(key string) interface{} {
return ret0
}
// Get indicates an expected call of Get
// Get indicates an expected call of Get.
func (mr *MockInterfaceMockRecorder) Get(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockInterface)(nil).Get), key)
}
// GetBool mocks base method
// GetBool mocks base method.
func (m *MockInterface) GetBool(key string) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetBool", key)
@@ -136,13 +71,55 @@ func (m *MockInterface) GetBool(key string) bool {
return ret0
}
// GetBool indicates an expected call of GetBool
// GetBool indicates an expected call of GetBool.
func (mr *MockInterfaceMockRecorder) GetBool(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetBool", reflect.TypeOf((*MockInterface)(nil).GetBool), key)
}
// GetInt mocks base method
// GetCommandMetricsInfoArgs mocks base method.
func (m *MockInterface) GetCommandMetricsInfoArgs(deviceName string) string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCommandMetricsInfoArgs", deviceName)
ret0, _ := ret[0].(string)
return ret0
}
// GetCommandMetricsInfoArgs indicates an expected call of GetCommandMetricsInfoArgs.
func (mr *MockInterfaceMockRecorder) GetCommandMetricsInfoArgs(deviceName interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCommandMetricsInfoArgs", reflect.TypeOf((*MockInterface)(nil).GetCommandMetricsInfoArgs), deviceName)
}
// GetCommandMetricsSmartArgs mocks base method.
func (m *MockInterface) GetCommandMetricsSmartArgs(deviceName string) string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCommandMetricsSmartArgs", deviceName)
ret0, _ := ret[0].(string)
return ret0
}
// GetCommandMetricsSmartArgs indicates an expected call of GetCommandMetricsSmartArgs.
func (mr *MockInterfaceMockRecorder) GetCommandMetricsSmartArgs(deviceName interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCommandMetricsSmartArgs", reflect.TypeOf((*MockInterface)(nil).GetCommandMetricsSmartArgs), deviceName)
}
// GetDeviceOverrides mocks base method.
func (m *MockInterface) GetDeviceOverrides() []models.ScanOverride {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeviceOverrides")
ret0, _ := ret[0].([]models.ScanOverride)
return ret0
}
// GetDeviceOverrides indicates an expected call of GetDeviceOverrides.
func (mr *MockInterfaceMockRecorder) GetDeviceOverrides() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeviceOverrides", reflect.TypeOf((*MockInterface)(nil).GetDeviceOverrides))
}
// GetInt mocks base method.
func (m *MockInterface) GetInt(key string) int {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetInt", key)
@@ -150,13 +127,13 @@ func (m *MockInterface) GetInt(key string) int {
return ret0
}
// GetInt indicates an expected call of GetInt
// GetInt indicates an expected call of GetInt.
func (mr *MockInterfaceMockRecorder) GetInt(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetInt", reflect.TypeOf((*MockInterface)(nil).GetInt), key)
}
// GetString mocks base method
// GetString mocks base method.
func (m *MockInterface) GetString(key string) string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetString", key)
@@ -164,13 +141,13 @@ func (m *MockInterface) GetString(key string) string {
return ret0
}
// GetString indicates an expected call of GetString
// GetString indicates an expected call of GetString.
func (mr *MockInterfaceMockRecorder) GetString(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetString", reflect.TypeOf((*MockInterface)(nil).GetString), key)
}
// GetStringSlice mocks base method
// GetStringSlice mocks base method.
func (m *MockInterface) GetStringSlice(key string) []string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetStringSlice", key)
@@ -178,13 +155,93 @@ func (m *MockInterface) GetStringSlice(key string) []string {
return ret0
}
// GetStringSlice indicates an expected call of GetStringSlice
// GetStringSlice indicates an expected call of GetStringSlice.
func (mr *MockInterfaceMockRecorder) GetStringSlice(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetStringSlice", reflect.TypeOf((*MockInterface)(nil).GetStringSlice), key)
}
// UnmarshalKey mocks base method
// Init mocks base method.
func (m *MockInterface) Init() error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Init")
ret0, _ := ret[0].(error)
return ret0
}
// Init indicates an expected call of Init.
func (mr *MockInterfaceMockRecorder) Init() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Init", reflect.TypeOf((*MockInterface)(nil).Init))
}
// IsAllowlistedDevice mocks base method.
func (m *MockInterface) IsAllowlistedDevice(deviceName string) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsAllowlistedDevice", deviceName)
ret0, _ := ret[0].(bool)
return ret0
}
// IsAllowlistedDevice indicates an expected call of IsAllowlistedDevice.
func (mr *MockInterfaceMockRecorder) IsAllowlistedDevice(deviceName interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsAllowlistedDevice", reflect.TypeOf((*MockInterface)(nil).IsAllowlistedDevice), deviceName)
}
// IsSet mocks base method.
func (m *MockInterface) IsSet(key string) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsSet", key)
ret0, _ := ret[0].(bool)
return ret0
}
// IsSet indicates an expected call of IsSet.
func (mr *MockInterfaceMockRecorder) IsSet(key interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsSet", reflect.TypeOf((*MockInterface)(nil).IsSet), key)
}
// ReadConfig mocks base method.
func (m *MockInterface) ReadConfig(configFilePath string) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ReadConfig", configFilePath)
ret0, _ := ret[0].(error)
return ret0
}
// ReadConfig indicates an expected call of ReadConfig.
func (mr *MockInterfaceMockRecorder) ReadConfig(configFilePath interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ReadConfig", reflect.TypeOf((*MockInterface)(nil).ReadConfig), configFilePath)
}
// Set mocks base method.
func (m *MockInterface) Set(key string, value interface{}) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "Set", key, value)
}
// Set indicates an expected call of Set.
func (mr *MockInterfaceMockRecorder) Set(key, value interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Set", reflect.TypeOf((*MockInterface)(nil).Set), key, value)
}
// SetDefault mocks base method.
func (m *MockInterface) SetDefault(key string, value interface{}) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetDefault", key, value)
}
// SetDefault indicates an expected call of SetDefault.
func (mr *MockInterfaceMockRecorder) SetDefault(key, value interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetDefault", reflect.TypeOf((*MockInterface)(nil).SetDefault), key, value)
}
// UnmarshalKey mocks base method.
func (m *MockInterface) UnmarshalKey(key string, rawVal interface{}, decoderOpts ...viper.DecoderConfigOption) error {
m.ctrl.T.Helper()
varargs := []interface{}{key, rawVal}
@@ -196,23 +253,9 @@ func (m *MockInterface) UnmarshalKey(key string, rawVal interface{}, decoderOpts
return ret0
}
// UnmarshalKey indicates an expected call of UnmarshalKey
// UnmarshalKey indicates an expected call of UnmarshalKey.
func (mr *MockInterfaceMockRecorder) UnmarshalKey(key, rawVal interface{}, decoderOpts ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
varargs := append([]interface{}{key, rawVal}, decoderOpts...)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnmarshalKey", reflect.TypeOf((*MockInterface)(nil).UnmarshalKey), varargs...)
}
// GetScanOverrides mocks base method
func (m *MockInterface) GetScanOverrides() []models.ScanOverride {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetScanOverrides")
ret0, _ := ret[0].([]models.ScanOverride)
return ret0
}
// GetScanOverrides indicates an expected call of GetScanOverrides
func (mr *MockInterfaceMockRecorder) GetScanOverrides() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetScanOverrides", reflect.TypeOf((*MockInterface)(nil).GetScanOverrides))
}
@@ -0,0 +1,3 @@
allow_listed_devices:
- /dev/sda
- /dev/sdb
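The tests above exercise the allow-list semantics: with no `allow_listed_devices` key configured, every device passes; with the key present, only listed device files are kept. A minimal sketch of that check (an illustration of the assumed semantics, not Scrutiny's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// isAllowlisted sketches the behavior the tests assert: an empty or
// missing allow list admits every device; otherwise the device file
// must match an entry (case-insensitive here, as an assumption).
func isAllowlisted(allowList []string, deviceFile string) bool {
	if len(allowList) == 0 {
		return true
	}
	for _, allowed := range allowList {
		if strings.EqualFold(allowed, deviceFile) {
			return true
		}
	}
	return false
}

func main() {
	allowList := []string{"/dev/sda", "/dev/sdb"}
	fmt.Println(isAllowlisted(allowList, "/dev/sda")) // true
	fmt.Println(isAllowlisted(allowList, "/dev/sdc")) // false
	fmt.Println(isAllowlisted(nil, "/dev/sdc"))       // true: no list configured
}
```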
@@ -0,0 +1,9 @@
version: 1
devices:
# the scrutiny config parser will detect `sat,auto` as two separate items in a list. If you want to use `-d sat,auto` you must
# set 'sat,auto' in a list (see e.g. /dev/sdb)
- device: /dev/sda
type: 'sat,auto'
- device: /dev/sdb
type:
- sat,auto
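The comment in the fixture above warns that a bare `sat,auto` string is split into two list items, while wrapping it in a YAML list preserves it verbatim. A sketch of that distinction (assumed semantics for illustration, not Scrutiny's actual parser code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTypes illustrates the behavior the fixture comment warns
// about: a bare string value is treated as a comma-separated list,
// while an explicit YAML list entry is kept verbatim.
func normalizeTypes(value interface{}) []string {
	switch v := value.(type) {
	case string:
		return strings.Split(v, ",") // 'sat,auto' -> ["sat", "auto"]
	case []string:
		return v // ['sat,auto'] -> ["sat,auto"]
	}
	return nil
}

func main() {
	fmt.Println(normalizeTypes("sat,auto"))           // two items: sat, auto
	fmt.Println(normalizeTypes([]string{"sat,auto"})) // one item: sat,auto
}
```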
@@ -0,0 +1,4 @@
commands:
metrics_scan_args: '--scan --json' # used to detect devices
metrics_info_args: '--info --json --device=sat' # used to determine device unique ID & register device with Scrutiny
metrics_smart_args: '--xall --json -d sat' # used to retrieve smart data for each device.
@@ -0,0 +1,4 @@
commands:
metrics_scan_args: '--scan' # used to detect devices
metrics_info_args: '--info -j' # used to determine device unique ID & register device with Scrutiny
metrics_smart_args: '--xall --json' # used to retrieve smart data for each device.
@@ -0,0 +1,4 @@
commands:
metrics_scan_args: '--scan --json' # used to detect devices
metrics_info_args: '--info -j' # used to determine device unique ID & register device with Scrutiny
metrics_smart_args: '--xall --json -T permissive' # used to retrieve smart data for each device.
@@ -0,0 +1,5 @@
version: 1
devices:
- device: /dev/sda
commands:
metrics_info_args: "--info --json -T permissive"
@@ -3,18 +3,20 @@ package detect
import (
"encoding/json"
"fmt"
"github.com/analogj/scrutiny/collector/pkg/common"
"os"
"strings"
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/config"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/analogj/scrutiny/webapp/backend/pkg/models/collector"
"github.com/sirupsen/logrus"
"os"
"strings"
)
type Detect struct {
Logger *logrus.Entry
Config config.Interface
Shell shell.Interface
}
//private/common functions
@@ -27,7 +29,8 @@ type Detect struct {
// models.Device returned from this function only contain the minimum data for smartctl to execute: device type and device name (device file).
func (d *Detect) SmartctlScan() ([]models.Device, error) {
//we use smartctl to detect all the drives available.
detectedDeviceConnJson, err := common.ExecCmd(d.Logger, "smartctl", []string{"--scan", "-j"}, "", os.Environ())
args := strings.Split(d.Config.GetString("commands.metrics_scan_args"), " ")
detectedDeviceConnJson, err := d.Shell.Command(d.Logger, d.Config.GetString("commands.metrics_smartctl_bin"), args, "", os.Environ())
if err != nil {
d.Logger.Errorf("Error scanning for devices: %v", err)
return nil, err
@@ -45,20 +48,20 @@ func (d *Detect) SmartctlScan() ([]models.Device, error) {
return detectedDevices, nil
}
//updates a device model with information from smartctl --scan
// updates a device model with information from smartctl --scan
// It has a couple of issues however:
// - WWN is provided as component data, rather than a "string". We'll have to generate the WWN value ourselves
// - WWN from smartctl only provided for ATA protocol drives, NVMe and SCSI drives do not include WWN.
func (d *Detect) SmartCtlInfo(device *models.Device) error {
args := []string{"--info", "-j"}
fullDeviceName := fmt.Sprintf("%s%s", DevicePrefix(), device.DeviceName)
args := strings.Split(d.Config.GetCommandMetricsInfoArgs(fullDeviceName), " ")
//only include the device type if it's a non-standard one. In some cases ata drives are detected as scsi in docker, and metadata is lost.
if len(device.DeviceType) > 0 && device.DeviceType != "scsi" && device.DeviceType != "ata" {
args = append(args, "-d", device.DeviceType)
args = append(args, "--device", device.DeviceType)
}
args = append(args, fmt.Sprintf("%s%s", DevicePrefix(), device.DeviceName))
args = append(args, fullDeviceName)
availableDeviceInfoJson, err := common.ExecCmd(d.Logger, "smartctl", args, "", os.Environ())
availableDeviceInfoJson, err := d.Shell.Command(d.Logger, d.Config.GetString("commands.metrics_smartctl_bin"), args, "", os.Environ())
if err != nil {
d.Logger.Errorf("Could not retrieve device information for %s: %v", device.DeviceName, err)
return err
@@ -79,8 +82,9 @@ func (d *Detect) SmartCtlInfo(device *models.Device) error {
device.SerialNumber = availableDeviceInfo.SerialNumber
device.Firmware = availableDeviceInfo.FirmwareVersion
device.RotationSpeed = availableDeviceInfo.RotationRate
device.Capacity = availableDeviceInfo.UserCapacity.Bytes
device.Capacity = availableDeviceInfo.Capacity()
device.FormFactor = availableDeviceInfo.FormFactor.Name
device.DeviceType = availableDeviceInfo.Device.Type
device.DeviceProtocol = availableDeviceInfo.Device.Protocol
if len(availableDeviceInfo.Vendor) > 0 {
device.Manufacturer = availableDeviceInfo.Vendor
@@ -100,6 +104,12 @@ func (d *Detect) SmartCtlInfo(device *models.Device) error {
d.Logger.Info("Using WWN Fallback")
d.wwnFallback(device)
}
if len(device.WWN) == 0 {
// no WWN populated after WWN lookup and fallback. we need to throw an error
errMsg := fmt.Sprintf("no WWN (or fallback) populated for device: %s. Device will be registered, but no data will be published for this device. ", device.DeviceName)
d.Logger.Errorf("%v", errMsg)
return fmt.Errorf("%v", errMsg)
}
return nil
}
@@ -114,6 +124,11 @@ func (d *Detect) TransformDetectedDevices(detectedDeviceConns models.Scan) []mod
deviceFile := strings.ToLower(scannedDevice.Name)
// If the user has defined a device allow list, and this device isn't there, then ignore it
if !d.Config.IsAllowlistedDevice(deviceFile) {
continue
}
detectedDevice := models.Device{
HostId: d.Config.GetString("host.id"),
DeviceType: scannedDevice.Type,
@@ -131,7 +146,7 @@ func (d *Detect) TransformDetectedDevices(detectedDeviceConns models.Scan) []mod
//now that we've "grouped" all the devices, let's override any groups specified in the config file.
for _, overrideDevice := range d.Config.GetScanOverrides() {
for _, overrideDevice := range d.Config.GetDeviceOverrides() {
overrideDeviceFile := strings.ToLower(overrideDevice.Device)
if overrideDevice.Ignore {
@@ -141,10 +156,35 @@ func (d *Detect) TransformDetectedDevices(detectedDeviceConns models.Scan) []mod
//create a new device group, and replace the one generated by smartctl --scan
overrideDeviceGroup := []models.Device{}
for _, overrideDeviceType := range overrideDevice.DeviceType {
if overrideDevice.DeviceType != nil {
for _, overrideDeviceType := range overrideDevice.DeviceType {
overrideDeviceGroup = append(overrideDeviceGroup, models.Device{
HostId: d.Config.GetString("host.id"),
DeviceType: overrideDeviceType,
DeviceName: strings.TrimPrefix(overrideDeviceFile, DevicePrefix()),
})
}
} else {
//user may have specified device in config file without device type (default to scanned device type)
//check if the device file was detected by the scanner
var deviceType string
if scannedDevice, foundScannedDevice := groupedDevices[overrideDeviceFile]; foundScannedDevice {
if len(scannedDevice) > 0 {
//take the device type from the first grouped device
deviceType = scannedDevice[0].DeviceType
} else {
deviceType = "ata"
}
} else {
//fallback to ata if no scanned device detected
deviceType = "ata"
}
overrideDeviceGroup = append(overrideDeviceGroup, models.Device{
HostId: d.Config.GetString("host.id"),
DeviceType: overrideDeviceType,
DeviceType: deviceType,
DeviceName: strings.TrimPrefix(overrideDeviceFile, DevicePrefix()),
})
}
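The override branch above handles a config entry that names a device but omits its type: the collector reuses the type smartctl reported for that device file, and falls back to `ata` when the device never appeared in the scan. A condensed sketch of that fallback:

```go
package main

import "fmt"

// resolveOverrideType condenses the fallback logic in
// TransformDetectedDevices: prefer the type the scanner reported for
// the device file; default to "ata" when the device was not scanned.
func resolveOverrideType(scanned map[string][]string, deviceFile string) string {
	if types, ok := scanned[deviceFile]; ok && len(types) > 0 {
		return types[0] // take the device type from the first grouped device
	}
	return "ata" // fallback when no scanned device was detected
}

func main() {
	scanned := map[string][]string{"/dev/sda": {"scsi"}}
	fmt.Println(resolveOverrideType(scanned, "/dev/sda")) // scsi
	fmt.Println(resolveOverrideType(scanned, "/dev/sdb")) // ata
}
```

This mirrors the two test cases added below (`WithoutDeviceTypeOverride` asserts `scsi`; `WhenDeviceNotDetected` asserts `ata`).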
@@ -1,21 +1,122 @@
package detect_test
import (
"os"
"strings"
"testing"
mock_shell "github.com/analogj/scrutiny/collector/pkg/common/shell/mock"
mock_config "github.com/analogj/scrutiny/collector/pkg/config/mock"
"github.com/analogj/scrutiny/collector/pkg/detect"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/golang/mock/gomock"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"testing"
"go.uber.org/mock/gomock"
)
func TestDetect_TransformDetectedDevices_Empty(t *testing.T) {
//setup
func TestDetect_SmartctlScan(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetScanOverrides().AnyTimes().Return([]models.ScanOverride{})
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
fakeShell := mock_shell.NewMockInterface(mockCtrl)
testScanResults, err := os.ReadFile("testdata/smartctl_scan_simple.json")
fakeShell.EXPECT().Command(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes().Return(string(testScanResults), err)
d := detect.Detect{
Logger: logrus.WithFields(logrus.Fields{}),
Shell: fakeShell,
Config: fakeConfig,
}
// test
scannedDevices, err := d.SmartctlScan()
// assert
require.NoError(t, err)
require.Equal(t, 7, len(scannedDevices))
require.Equal(t, "scsi", scannedDevices[0].DeviceType)
}
func TestDetect_SmartctlScan_Megaraid(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
fakeShell := mock_shell.NewMockInterface(mockCtrl)
testScanResults, err := os.ReadFile("testdata/smartctl_scan_megaraid.json")
fakeShell.EXPECT().Command(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes().Return(string(testScanResults), err)
d := detect.Detect{
Logger: logrus.WithFields(logrus.Fields{}),
Shell: fakeShell,
Config: fakeConfig,
}
// test
scannedDevices, err := d.SmartctlScan()
// assert
require.NoError(t, err)
require.Equal(t, 2, len(scannedDevices))
require.Equal(t, []models.Device{
{DeviceName: "bus/0", DeviceType: "megaraid,0"},
{DeviceName: "bus/0", DeviceType: "megaraid,1"},
}, scannedDevices)
}
func TestDetect_SmartctlScan_Nvme(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
fakeShell := mock_shell.NewMockInterface(mockCtrl)
testScanResults, err := os.ReadFile("testdata/smartctl_scan_nvme.json")
fakeShell.EXPECT().Command(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes().Return(string(testScanResults), err)
d := detect.Detect{
Logger: logrus.WithFields(logrus.Fields{}),
Shell: fakeShell,
Config: fakeConfig,
}
// test
scannedDevices, err := d.SmartctlScan()
// assert
require.NoError(t, err)
require.Equal(t, 1, len(scannedDevices))
require.Equal(t, []models.Device{
{DeviceName: "nvme0", DeviceType: "nvme"},
}, scannedDevices)
}
func TestDetect_TransformDetectedDevices_Empty(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
@@ -31,21 +132,24 @@ func TestDetect_TransformDetectedDevices_Empty(t *testing.T) {
Config: fakeConfig,
}
//test
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
//assert
// assert
require.Equal(t, "sda", transformedDevices[0].DeviceName)
require.Equal(t, "scsi", transformedDevices[0].DeviceType)
}
func TestDetect_TransformDetectedDevices_Ignore(t *testing.T) {
//setup
// setup
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetScanOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda", DeviceType: nil, Ignore: true}})
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda", DeviceType: nil, Ignore: true}})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
@@ -61,20 +165,22 @@ func TestDetect_TransformDetectedDevices_Ignore(t *testing.T) {
Config: fakeConfig,
}
//test
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
//assert
// assert
require.Equal(t, []models.Device{}, transformedDevices)
}
func TestDetect_TransformDetectedDevices_Raid(t *testing.T) {
//setup
// setup
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetScanOverrides().AnyTimes().Return([]models.ScanOverride{
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{
{
Device: "/dev/bus/0",
DeviceType: []string{"megaraid,14", "megaraid,15", "megaraid,18", "megaraid,19", "megaraid,20", "megaraid,21"},
@@ -84,7 +190,8 @@ func TestDetect_TransformDetectedDevices_Raid(t *testing.T) {
Device: "/dev/twa0",
DeviceType: []string{"3ware,0", "3ware,1", "3ware,2", "3ware,3", "3ware,4", "3ware,5"},
Ignore: false,
}})
},
})
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
@@ -100,20 +207,22 @@ func TestDetect_TransformDetectedDevices_Raid(t *testing.T) {
Config: fakeConfig,
}
//test
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
//assert
// assert
require.Equal(t, 12, len(transformedDevices))
}
func TestDetect_TransformDetectedDevices_Simple(t *testing.T) {
//setup
// setup
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetScanOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda", DeviceType: []string{"sat+megaraid"}}})
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda", DeviceType: []string{"sat+megaraid"}}})
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
@@ -129,10 +238,162 @@ func TestDetect_TransformDetectedDevices_Simple(t *testing.T) {
Config: fakeConfig,
}
//test
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
//assert
// assert
require.Equal(t, 1, len(transformedDevices))
require.Equal(t, "sat+megaraid", transformedDevices[0].DeviceType)
}
// test https://github.com/AnalogJ/scrutiny/issues/255#issuecomment-1164024126
func TestDetect_TransformDetectedDevices_WithoutDeviceTypeOverride(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda"}})
fakeConfig.EXPECT().IsAllowlistedDevice(gomock.Any()).AnyTimes().Return(true)
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
Name: "/dev/sda",
InfoName: "/dev/sda",
Protocol: "ata",
Type: "scsi",
},
},
}
d := detect.Detect{
Config: fakeConfig,
}
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
// assert
require.Equal(t, 1, len(transformedDevices))
require.Equal(t, "scsi", transformedDevices[0].DeviceType)
}
func TestDetect_TransformDetectedDevices_WhenDeviceNotDetected(t *testing.T) {
// setup
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda"}})
detectedDevices := models.Scan{}
d := detect.Detect{
Config: fakeConfig,
}
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
// assert
require.Equal(t, 1, len(transformedDevices))
require.Equal(t, "ata", transformedDevices[0].DeviceType)
}
func TestDetect_TransformDetectedDevices_AllowListFilters(t *testing.T) {
mockCtrl := gomock.NewController(t)
fakeConfig := mock_config.NewMockInterface(mockCtrl)
fakeConfig.EXPECT().GetString("host.id").AnyTimes().Return("")
fakeConfig.EXPECT().GetString("commands.metrics_smartctl_bin").AnyTimes().Return("smartctl")
fakeConfig.EXPECT().GetString("commands.metrics_scan_args").AnyTimes().Return("--scan --json")
fakeConfig.EXPECT().GetDeviceOverrides().AnyTimes().Return([]models.ScanOverride{{Device: "/dev/sda", DeviceType: []string{"sat+megaraid"}}})
fakeConfig.EXPECT().IsAllowlistedDevice("/dev/sda").Return(true)
fakeConfig.EXPECT().IsAllowlistedDevice("/dev/sdb").Return(false)
detectedDevices := models.Scan{
Devices: []models.ScanDevice{
{
Name: "/dev/sda",
InfoName: "/dev/sda",
Protocol: "ata",
Type: "ata",
},
{
Name: "/dev/sdb",
InfoName: "/dev/sdb",
Protocol: "ata",
Type: "ata",
},
},
}
d := detect.Detect{
Config: fakeConfig,
}
// test
transformedDevices := d.TransformDetectedDevices(detectedDevices)
// assert
require.Equal(t, 1, len(transformedDevices))
require.Equal(t, "sda", transformedDevices[0].DeviceName)
}
func TestDetect_SmartCtlInfo(t *testing.T) {
t.Run("should report nvme info", func(t *testing.T) {
ctrl := gomock.NewController(t)
const (
someArgs = "--info --json"
// device info
someDeviceName = "some-device-name"
someModelName = "KCD61LUL3T84"
someSerialNumber = "61Q0A05UT7B8"
someFirmware = "8002"
someDeviceProtocol = "NVMe"
someDeviceType = "nvme"
someCapacity int64 = 3840755982336
)
fakeConfig := mock_config.NewMockInterface(ctrl)
fakeConfig.EXPECT().
GetCommandMetricsInfoArgs("/dev/" + someDeviceName).
Return(someArgs)
fakeConfig.EXPECT().
GetString("commands.metrics_smartctl_bin").
Return("smartctl")
someLogger := logrus.WithFields(logrus.Fields{})
smartctlInfoResults, err := os.ReadFile("testdata/smartctl_info_nvme.json")
require.NoError(t, err)
fakeShell := mock_shell.NewMockInterface(ctrl)
fakeShell.EXPECT().
Command(someLogger, "smartctl", append(strings.Split(someArgs, " "), "/dev/"+someDeviceName), "", gomock.Any()).
Return(string(smartctlInfoResults), err)
d := detect.Detect{
Logger: someLogger,
Shell: fakeShell,
Config: fakeConfig,
}
someDevice := &models.Device{
WWN: "some wwn",
DeviceName: someDeviceName,
}
require.NoError(t, d.SmartCtlInfo(someDevice))
assert.Equal(t, someDeviceName, someDevice.DeviceName)
assert.Equal(t, someModelName, someDevice.ModelName)
assert.Equal(t, someSerialNumber, someDevice.SerialNumber)
assert.Equal(t, someFirmware, someDevice.Firmware)
assert.Equal(t, someDeviceProtocol, someDevice.DeviceProtocol)
assert.Equal(t, someDeviceType, someDevice.DeviceType)
assert.Equal(t, someCapacity, someDevice.Capacity)
})
}
+3 -1
@@ -1,6 +1,7 @@
package detect
import (
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/jaypipes/ghw"
"strings"
@@ -11,7 +12,8 @@ func DevicePrefix() string {
}
func (d *Detect) Start() ([]models.Device, error) {
// call the base/common functionality to get a list of devicess
d.Shell = shell.Create()
// call the base/common functionality to get a list of devices
detectedDevices, err := d.SmartctlScan()
if err != nil {
return nil, err
+2
@@ -1,6 +1,7 @@
package detect
import (
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/jaypipes/ghw"
"strings"
@@ -11,6 +12,7 @@ func DevicePrefix() string {
}
func (d *Detect) Start() ([]models.Device, error) {
d.Shell = shell.Create()
// call the base/common functionality to get a list of devices
detectedDevices, err := d.SmartctlScan()
if err != nil {
+57 -4
@@ -1,9 +1,14 @@
package detect
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/models"
"github.com/jaypipes/ghw"
"strings"
)
func DevicePrefix() string {
@@ -11,6 +16,7 @@ func DevicePrefix() string {
}
func (d *Detect) Start() ([]models.Device, error) {
d.Shell = shell.Create()
// call the base/common functionality to get a list of devices
detectedDevices, err := d.SmartctlScan()
if err != nil {
@@ -18,14 +24,15 @@ func (d *Detect) Start() ([]models.Device, error) {
}
//inflate device info for detected devices.
for ndx, _ := range detectedDevices {
d.SmartCtlInfo(&detectedDevices[ndx]) //ignore errors.
for ndx := range detectedDevices {
d.SmartCtlInfo(&detectedDevices[ndx]) //ignore errors.
populateUdevInfo(&detectedDevices[ndx]) //ignore errors.
}
return detectedDevices, nil
}
//WWN values NVMe and SCSI
// WWN values NVMe and SCSI
func (d *Detect) wwnFallback(detectedDevice *models.Device) {
block, err := ghw.Block()
if err == nil {
@@ -47,3 +54,49 @@ func (d *Detect) wwnFallback(detectedDevice *models.Device) {
//wwn must always be lowercase.
detectedDevice.WWN = strings.ToLower(detectedDevice.WWN)
}
// as discussed in
// - https://github.com/AnalogJ/scrutiny/issues/225
// - https://github.com/jaypipes/ghw/issues/59#issue-361915216
// udev exposes its data in a standardized way under /run/udev/data/....
func populateUdevInfo(detectedDevice *models.Device) error {
// Get device major:minor numbers
// `cat /sys/class/block/sda/dev`
devNo, err := os.ReadFile(filepath.Join("/sys/class/block/", detectedDevice.DeviceName, "dev"))
if err != nil {
return err
}
// Look up block device in udev runtime database
// `cat /run/udev/data/b8:0`
udevID := "b" + strings.TrimSpace(string(devNo))
udevBytes, err := os.ReadFile(filepath.Join("/run/udev/data/", udevID))
if err != nil {
return err
}
deviceMountPaths := []string{}
udevInfo := make(map[string]string)
for _, udevLine := range strings.Split(string(udevBytes), "\n") {
if strings.HasPrefix(udevLine, "E:") {
if s := strings.SplitN(udevLine[2:], "=", 2); len(s) == 2 {
udevInfo[s[0]] = s[1]
}
} else if strings.HasPrefix(udevLine, "S:") {
deviceMountPaths = append(deviceMountPaths, udevLine[2:])
}
}
//Set additional device information.
if deviceLabel, exists := udevInfo["ID_FS_LABEL"]; exists {
detectedDevice.DeviceLabel = deviceLabel
}
if deviceUUID, exists := udevInfo["ID_FS_UUID"]; exists {
detectedDevice.DeviceUUID = deviceUUID
}
if deviceSerialID, exists := udevInfo["ID_SERIAL"]; exists {
detectedDevice.DeviceSerialID = fmt.Sprintf("%s-%s", udevInfo["ID_BUS"], deviceSerialID)
}
return nil
}
+2
@@ -1,6 +1,7 @@
package detect
import (
"github.com/analogj/scrutiny/collector/pkg/common/shell"
"github.com/analogj/scrutiny/collector/pkg/models"
"strings"
)
@@ -10,6 +11,7 @@ func DevicePrefix() string {
}
func (d *Detect) Start() ([]models.Device, error) {
d.Shell = shell.Create()
// call the base/common functionality to get a list of devices
detectedDevices, err := d.SmartctlScan()
if err != nil {
+48
@@ -0,0 +1,48 @@
{
"json_format_version": [
1,
0
],
"smartctl": {
"version": [
7,
2
],
"svn_revision": "5155",
"platform_info": "x86_64-linux-6.1.69-talos",
"build_info": "(local build)",
"argv": [
"smartctl",
"--info",
"--json",
"/dev/nvme4"
],
"exit_status": 0
},
"device": {
"name": "/dev/nvme4",
"info_name": "/dev/nvme4",
"type": "nvme",
"protocol": "NVMe"
},
"model_name": "KCD61LUL3T84",
"serial_number": "61Q0A05UT7B8",
"firmware_version": "8002",
"nvme_pci_vendor": {
"id": 7695,
"subsystem_id": 7695
},
"nvme_ieee_oui_identifier": 9233294,
"nvme_total_capacity": 3840755982336,
"nvme_unallocated_capacity": 0,
"nvme_controller_id": 1,
"nvme_version": {
"string": "1.4",
"value": 66560
},
"nvme_number_of_namespaces": 16,
"local_time": {
"time_t": 1706045146,
"asctime": "Tue Jan 23 21:25:46 2024 UTC"
}
}
@@ -0,0 +1,35 @@
{
"json_format_version": [
1,
0
],
"smartctl": {
"version": [
7,
1
],
"svn_revision": "5022",
"platform_info": "x86_64-linux-5.4.0-45-generic",
"build_info": "(local build)",
"argv": [
"smartctl",
"-j",
"--scan"
],
"exit_status": 0
},
"devices": [
{
"name": "/dev/bus/0",
"info_name": "/dev/bus/0 [megaraid_disk_00]",
"type": "megaraid,0",
"protocol": "SCSI"
},
{
"name": "/dev/bus/0",
"info_name": "/dev/bus/0 [megaraid_disk_01]",
"type": "megaraid,1",
"protocol": "SCSI"
}
]
}
+29
@@ -0,0 +1,29 @@
{
"json_format_version": [
1,
0
],
"smartctl": {
"version": [
7,
0
],
"svn_revision": "4883",
"platform_info": "x86_64-linux-4.19.107-Unraid",
"build_info": "(local build)",
"argv": [
"smartctl",
"-j",
"--scan"
],
"exit_status": 0
},
"devices": [
{
"name": "/dev/nvme0",
"info_name": "/dev/nvme0",
"type": "nvme",
"protocol": "NVMe"
}
]
}
+65
@@ -0,0 +1,65 @@
{
"json_format_version": [
1,
0
],
"smartctl": {
"version": [
7,
0
],
"svn_revision": "4883",
"platform_info": "x86_64-linux-5.15.32-flatcar",
"build_info": "(local build)",
"argv": [
"smartctl",
"--scan",
"-j"
],
"exit_status": 0
},
"devices": [
{
"name": "/dev/sda",
"info_name": "/dev/sda",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sdb",
"info_name": "/dev/sdb",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sdc",
"info_name": "/dev/sdc",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sdd",
"info_name": "/dev/sdd",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sde",
"info_name": "/dev/sde",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sdf",
"info_name": "/dev/sdf",
"type": "scsi",
"protocol": "SCSI"
},
{
"name": "/dev/sdg",
"info_name": "/dev/sdg",
"type": "scsi",
"protocol": "SCSI"
}
]
}
+3 -4
@@ -1,10 +1,10 @@
package detect_test
import (
"fmt"
"testing"
"github.com/analogj/scrutiny/collector/pkg/detect"
"github.com/stretchr/testify/require"
"testing"
)
func TestWwn_FromStringTable(t *testing.T) {
@@ -25,8 +25,7 @@ func TestWwn_FromStringTable(t *testing.T) {
}
//test
for _, tt := range tests {
testname := fmt.Sprintf("%s", tt.wwnStr)
t.Run(testname, func(t *testing.T) {
t.Run(tt.wwnStr, func(t *testing.T) {
str := tt.wwn.ToString()
require.Equal(t, tt.wwnStr, str)
})
+9 -2
@@ -1,10 +1,13 @@
package models
type Device struct {
WWN string `json:"wwn"`
HostId string `json:"host_id"`
WWN string `json:"wwn"`
DeviceName string `json:"device_name"`
DeviceUUID string `json:"device_uuid"`
DeviceSerialID string `json:"device_serial_id"`
DeviceLabel string `json:"device_label"`
Manufacturer string `json:"manufacturer"`
ModelName string `json:"model_name"`
InterfaceType string `json:"interface_type"`
@@ -17,6 +20,10 @@ type Device struct {
SmartSupport bool `json:"smart_support"`
DeviceProtocol string `json:"device_protocol"` //protocol determines which smart attribute types are available (ATA, NVMe, SCSI)
DeviceType string `json:"device_type"` //device type is used for querying with -d/t flag, should only be used by collector.
// User provided metadata
Label string `json:"label"`
HostId string `json:"host_id"`
}
type DeviceWrapper struct {
+4
@@ -4,4 +4,8 @@ type ScanOverride struct {
Device string `mapstructure:"device"`
DeviceType []string `mapstructure:"type"`
Ignore bool `mapstructure:"ignore"`
Commands struct {
MetricsInfoArgs string `mapstructure:"metrics_info_args"`
MetricsSmartArgs string `mapstructure:"metrics_smart_args"`
} `mapstructure:"commands"`
}
+84 -43
@@ -1,55 +1,96 @@
########
FROM golang:1.14.4-buster as backendbuild
# syntax=docker/dockerfile:1.4
########################################################################################################################
# Omnibus Image
########################################################################################################################
######## Build the frontend
FROM --platform=${BUILDPLATFORM} node AS frontendbuild
WORKDIR /go/src/github.com/analogj/scrutiny
COPY --link . /go/src/github.com/analogj/scrutiny
RUN make binary-frontend
######## Build the backend
FROM golang:1.25-trixie as backendbuild
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . /go/src/github.com/analogj/scrutiny
RUN go mod vendor && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny webapp/backend/cmd/scrutiny/scrutiny.go && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny-collector-selftest collector/cmd/collector-selftest/collector-selftest.go && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny-collector-metrics collector/cmd/collector-metrics/collector-metrics.go
########
FROM node:lts-slim as frontendbuild
#reduce logging, disable angular-cli analytics for ci environment
ENV NPM_CONFIG_LOGLEVEL=warn NG_CLI_ANALYTICS=false
WORKDIR /scrutiny/src
COPY webapp/frontend /scrutiny/src
RUN npm install -g @angular/cli@9.1.4 && \
mkdir -p /scrutiny/dist && \
npm install && \
ng build --output-path=/scrutiny/dist --deploy-url="/web/" --base-href="/web/" --prod
COPY --link . /go/src/github.com/analogj/scrutiny
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
file \
&& rm -rf /var/lib/apt/lists/*
RUN make binary-clean binary-all WEB_BINARY_NAME=scrutiny
########
FROM ubuntu:bionic as runtime
######## Build smartmontools from source
FROM debian:trixie-slim AS smartmontoolsbuild
ARG SMARTMONTOOLS_VER=7.5
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
ca-certificates curl gcc g++ gnupg make \
&& rm -rf /var/lib/apt/lists/*
RUN curl -L "https://github.com/smartmontools/smartmontools/releases/download/RELEASE_$(echo ${SMARTMONTOOLS_VER} | tr '.' '_')/smartmontools-${SMARTMONTOOLS_VER}.tar.gz" -o /tmp/smartmontools.tar.gz \
&& tar -xzf /tmp/smartmontools.tar.gz -C /tmp \
&& cd /tmp/smartmontools-${SMARTMONTOOLS_VER} \
&& ./configure --prefix=/usr LDFLAGS='-static' --without-libcap-ng --without-libsystemd \
&& make -j"$(nproc)" \
&& make install \
&& /usr/sbin/update-smart-drivedb \
&& rm -rf /tmp/smartmontools*
######## Combine build artifacts in runtime image
FROM debian:trixie-slim AS runtime
ARG TARGETARCH
EXPOSE 8080
WORKDIR /scrutiny
ENV PATH="/scrutiny/bin:${PATH}"
WORKDIR /opt/scrutiny
ENV PATH="/opt/scrutiny/bin:${PATH}"
ENV INFLUXD_CONFIG_PATH=/opt/scrutiny/influxdb
ENV S6VER="3.1.6.2"
ENV INFLUXVER="2.2.0"
ENV S6_SERVICES_READYTIME=1000
SHELL ["/usr/bin/sh", "-c"]
RUN apt-get update && apt-get install -y cron smartmontools=7.0-0ubuntu1~ubuntu18.04.1 ca-certificates curl && update-ca-certificates
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
ca-certificates \
cron \
curl \
tzdata \
procps \
xz-utils \
&& rm -rf /var/lib/apt/lists/* \
&& update-ca-certificates \
&& case ${TARGETARCH} in \
"amd64") S6_ARCH=x86_64 ;; \
"arm64") S6_ARCH=aarch64 ;; \
esac \
&& curl https://github.com/just-containers/s6-overlay/releases/download/v${S6VER}/s6-overlay-noarch.tar.xz -L -s --output /tmp/s6-overlay-noarch.tar.xz \
&& tar -Jxpf /tmp/s6-overlay-noarch.tar.xz -C / \
&& rm -rf /tmp/s6-overlay-noarch.tar.xz \
&& curl https://github.com/just-containers/s6-overlay/releases/download/v${S6VER}/s6-overlay-${S6_ARCH}.tar.xz -L -s --output /tmp/s6-overlay-${S6_ARCH}.tar.xz \
&& tar -Jxpf /tmp/s6-overlay-${S6_ARCH}.tar.xz -C / \
&& rm -rf /tmp/s6-overlay-${S6_ARCH}.tar.xz
RUN curl -L https://dl.influxdata.com/influxdb/releases/influxdb2-${INFLUXVER}-${TARGETARCH}.deb --output /tmp/influxdb2-${INFLUXVER}-${TARGETARCH}.deb \
&& dpkg -i --force-all /tmp/influxdb2-${INFLUXVER}-${TARGETARCH}.deb \
&& rm -rf /tmp/influxdb2-${INFLUXVER}-${TARGETARCH}.deb
ADD https://github.com/just-containers/s6-overlay/releases/download/v1.21.8.0/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
COPY /rootfs /
COPY /rootfs/etc/cron.d/scrutiny /etc/cron.d/scrutiny
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny /scrutiny/bin/
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny-collector-selftest /scrutiny/bin/
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny-collector-metrics /scrutiny/bin/
COPY --from=frontendbuild /scrutiny/dist /scrutiny/web
RUN chmod +x /scrutiny/bin/scrutiny && \
chmod +x /scrutiny/bin/scrutiny-collector-selftest && \
chmod +x /scrutiny/bin/scrutiny-collector-metrics && \
chmod 0644 /etc/cron.d/scrutiny && \
rm -f /etc/cron.daily/* && \
mkdir -p /scrutiny/web && \
mkdir -p /scrutiny/config && \
chmod -R ugo+rwx /scrutiny/config
COPY --from=smartmontoolsbuild /usr/sbin/smartctl /usr/sbin/smartctl
COPY --from=smartmontoolsbuild /usr/share/smartmontools/ /usr/share/smartmontools/
COPY --link --from=backendbuild --chmod=755 /go/src/github.com/analogj/scrutiny/scrutiny /opt/scrutiny/bin/
COPY --link --from=backendbuild --chmod=755 /go/src/github.com/analogj/scrutiny/scrutiny-collector-metrics /opt/scrutiny/bin/
COPY --link --from=frontendbuild --chmod=644 /go/src/github.com/analogj/scrutiny/dist /opt/scrutiny/web
RUN chmod 0644 /etc/cron.d/scrutiny && \
rm -f /etc/cron.daily/* && \
mkdir -p /opt/scrutiny/web && \
mkdir -p /opt/scrutiny/config && \
chmod -R ugo+rwx /opt/scrutiny/config && \
chmod +x /etc/cont-init.d/* && \
chmod +x /etc/services.d/*/run && \
chmod +x /etc/services.d/*/finish
CMD ["/init"]
+34 -12
@@ -1,27 +1,49 @@
########################################################################################################################
# Collector Image
########################################################################################################################
########
FROM golang:1.14.4-buster as backendbuild
FROM golang:1.25-trixie AS backendbuild
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . /go/src/github.com/analogj/scrutiny
RUN go mod vendor && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny-collector-selftest collector/cmd/collector-selftest/collector-selftest.go && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny-collector-metrics collector/cmd/collector-metrics/collector-metrics.go
RUN apt-get update && apt-get install -y file && rm -rf /var/lib/apt/lists/*
RUN make binary-clean binary-collector
######## Build smartmontools from source
FROM debian:trixie-slim AS smartmontoolsbuild
ARG SMARTMONTOOLS_VER=7.5
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
ca-certificates curl gcc g++ gnupg make \
&& rm -rf /var/lib/apt/lists/*
RUN curl -L "https://github.com/smartmontools/smartmontools/releases/download/RELEASE_$(echo ${SMARTMONTOOLS_VER} | tr '.' '_')/smartmontools-${SMARTMONTOOLS_VER}.tar.gz" -o /tmp/smartmontools.tar.gz \
&& tar -xzf /tmp/smartmontools.tar.gz -C /tmp \
&& cd /tmp/smartmontools-${SMARTMONTOOLS_VER} \
&& ./configure --prefix=/usr LDFLAGS='-static' --without-libcap-ng --without-libsystemd \
&& make -j"$(nproc)" \
&& make install \
&& /usr/sbin/update-smart-drivedb \
&& rm -rf /tmp/smartmontools*
########
FROM ubuntu:bionic as runtime
WORKDIR /scrutiny
ENV PATH="/scrutiny/bin:${PATH}"
FROM debian:trixie-slim AS runtime
WORKDIR /opt/scrutiny
ENV PATH="/opt/scrutiny/bin:${PATH}"
RUN apt-get update && apt-get install -y cron smartmontools=7.0-0ubuntu1~ubuntu18.04.1 ca-certificates && update-ca-certificates
RUN apt-get update && apt-get install -y cron ca-certificates tzdata && rm -rf /var/lib/apt/lists/* && update-ca-certificates
COPY --from=smartmontoolsbuild /usr/sbin/smartctl /usr/sbin/smartctl
COPY --from=smartmontoolsbuild /usr/share/smartmontools/ /usr/share/smartmontools/
COPY /docker/entrypoint-collector.sh /entrypoint-collector.sh
COPY /rootfs/etc/cron.d/scrutiny /etc/cron.d/scrutiny
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny-collector-selftest /scrutiny/bin/
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny-collector-metrics /scrutiny/bin/
RUN chmod +x /scrutiny/bin/scrutiny-collector-selftest && \
chmod +x /scrutiny/bin/scrutiny-collector-metrics && \
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny-collector-metrics /opt/scrutiny/bin/
RUN chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics && \
chmod +x /entrypoint-collector.sh && \
chmod 0644 /etc/cron.d/scrutiny && \
rm -f /etc/cron.daily/apt /etc/cron.daily/dpkg /etc/cron.daily/passwd
+20
@@ -0,0 +1,20 @@
########################################################################################################################
# Smartmontools Builder
# - Builds smartctl from source as a static binary.
# - Updates the drive database to include the latest drive models since it can change between releases.
# - Used as a shared build stage by Dockerfile and Dockerfile.collector.
########################################################################################################################
FROM debian:trixie-slim
ARG SMARTMONTOOLS_VER=7.5
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
ca-certificates curl gcc g++ gnupg make \
&& rm -rf /var/lib/apt/lists/*
RUN curl -L "https://github.com/smartmontools/smartmontools/releases/download/RELEASE_$(echo ${SMARTMONTOOLS_VER} | tr '.' '_')/smartmontools-${SMARTMONTOOLS_VER}.tar.gz" -o /tmp/smartmontools.tar.gz \
&& tar -xzf /tmp/smartmontools.tar.gz -C /tmp \
&& cd /tmp/smartmontools-${SMARTMONTOOLS_VER} \
&& ./configure --prefix=/usr LDFLAGS='-static' --without-libcap-ng --without-libsystemd \
&& make -j"$(nproc)" \
&& make install \
&& /usr/sbin/update-smart-drivedb \
&& rm -rf /tmp/smartmontools*
+29 -32
@@ -1,40 +1,37 @@
########
FROM golang:1.14.4-buster as backendbuild
# syntax=docker/dockerfile:1.4
########################################################################################################################
# Web Image
########################################################################################################################
######## Build the frontend
FROM --platform=${BUILDPLATFORM} node AS frontendbuild
WORKDIR /go/src/github.com/analogj/scrutiny
COPY --link . /go/src/github.com/analogj/scrutiny
RUN make binary-frontend
######## Build the backend
FROM golang:1.25-trixie as backendbuild
WORKDIR /go/src/github.com/analogj/scrutiny
COPY --link . /go/src/github.com/analogj/scrutiny
COPY . /go/src/github.com/analogj/scrutiny
RUN go mod vendor && \
go build -ldflags '-w -extldflags "-static"' -o scrutiny webapp/backend/cmd/scrutiny/scrutiny.go
########
FROM node:lts-slim as frontendbuild
#reduce logging, disable angular-cli analytics for ci environment
ENV NPM_CONFIG_LOGLEVEL=warn NG_CLI_ANALYTICS=false
WORKDIR /scrutiny/src
COPY webapp/frontend /scrutiny/src
RUN npm install -g @angular/cli@9.1.4 && \
mkdir -p /scrutiny/dist && \
npm install && \
ng build --output-path=/scrutiny/dist --deploy-url="/web/" --base-href="/web/" --prod
RUN apt-get update && apt-get install -y file && rm -rf /var/lib/apt/lists/*
RUN make binary-clean binary-all WEB_BINARY_NAME=scrutiny
########
FROM ubuntu:bionic as runtime
######## Combine build artifacts in runtime image
FROM debian:trixie-slim as runtime
EXPOSE 8080
WORKDIR /scrutiny
ENV PATH="/scrutiny/bin:${PATH}"
WORKDIR /opt/scrutiny
ENV PATH="/opt/scrutiny/bin:${PATH}"
RUN apt-get update && apt-get install -y ca-certificates curl && update-ca-certificates
RUN apt-get update && apt-get install -y ca-certificates curl tzdata && rm -rf /var/lib/apt/lists/* && update-ca-certificates
COPY --from=backendbuild /go/src/github.com/analogj/scrutiny/scrutiny /scrutiny/bin/
COPY --from=frontendbuild /scrutiny/dist /scrutiny/web
RUN chmod +x /scrutiny/bin/scrutiny && \
mkdir -p /scrutiny/web && \
mkdir -p /scrutiny/config && \
chmod -R ugo+rwx /scrutiny/config
CMD ["/scrutiny/bin/scrutiny", "start"]
COPY --link --from=backendbuild --chmod=755 /go/src/github.com/analogj/scrutiny/scrutiny /opt/scrutiny/bin/
COPY --link --from=frontendbuild --chmod=644 /go/src/github.com/analogj/scrutiny/dist /opt/scrutiny/web
RUN mkdir -p /opt/scrutiny/web && \
mkdir -p /opt/scrutiny/config && \
chmod -R a+rX /opt/scrutiny && \
chmod -R a+w /opt/scrutiny/config
CMD ["/opt/scrutiny/bin/scrutiny", "start"]
-7
@@ -1,7 +0,0 @@
FROM karalabe/xgo-1.13.x
WORKDIR /go/src/github.com/analogj/scrutiny
COPY . /go/src/github.com/analogj/scrutiny
RUN make all
+1
@@ -0,0 +1 @@
`rootfs` is only used by Dockerfile and Dockerfile.collector
-16
@@ -1,16 +0,0 @@
Vagrant.configure("2") do |config|
config.vm.guest = :freebsd
config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true
config.vm.box = "freebsd/FreeBSD-11.0-CURRENT"
config.ssh.shell = "sh"
config.vm.base_mac = "080027D14C66"
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "1024"]
vb.customize ["modifyvm", :id, "--cpus", "1"]
vb.customize ["modifyvm", :id, "--hwvirtex", "on"]
vb.customize ["modifyvm", :id, "--audio", "none"]
vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
end
end
-16
@@ -1,16 +0,0 @@
version: '3.5'
services:
scrutiny:
container_name: scrutiny
image: analogj/scrutiny
cap_add:
- SYS_RAWIO
ports:
- "8080:8080"
volumes:
- /run/udev:/run/udev:ro
- ./config:/scrutiny/config
devices:
- "/dev/sda"
- "/dev/sdb"
+22 -3
@@ -1,9 +1,28 @@
#!/bin/bash
# Cron runs in its own isolated environment (usually using only /etc/environment )
# So when the container starts up, we will do a dump of the runtime environment into a .env file that we
# will then source into the crontab file (/etc/cron.d/scrutiny.sh)
# will then source into the crontab file (/etc/cron.d/scrutiny)
(set -o posix; export -p) > /env.sh
# adding ability to customize the cron schedule.
COLLECTOR_CRON_SCHEDULE=${COLLECTOR_CRON_SCHEDULE:-"0 0 * * *"}
COLLECTOR_RUN_STARTUP=${COLLECTOR_RUN_STARTUP:-"false"}
COLLECTOR_RUN_STARTUP_SLEEP=${COLLECTOR_RUN_STARTUP_SLEEP:-"1"}
# if the cron schedule has been overridden via env variable (eg docker-compose) we should make sure to strip quotes
[[ "${COLLECTOR_CRON_SCHEDULE}" == \"*\" || "${COLLECTOR_CRON_SCHEDULE}" == \'*\' ]] && COLLECTOR_CRON_SCHEDULE="${COLLECTOR_CRON_SCHEDULE:1:-1}"
# replace placeholder with correct value
sed -i 's|{COLLECTOR_CRON_SCHEDULE}|'"${COLLECTOR_CRON_SCHEDULE}"'|g' /etc/cron.d/scrutiny
if [[ "${COLLECTOR_RUN_STARTUP}" == "true" ]]; then
sleep ${COLLECTOR_RUN_STARTUP_SLEEP}
echo "starting scrutiny collector (run-once mode. subsequent calls will be triggered via cron service)"
/opt/scrutiny/bin/scrutiny-collector-metrics run
fi
printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh
# now that we have the env start cron in the foreground
echo "starting cron"
cron -f
exec su -c "cron -f -L 15" root
@@ -0,0 +1,55 @@
version: '2.4'
services:
influxdb:
restart: unless-stopped
image: influxdb:2.8
ports:
- '8086:8086'
volumes:
- './influxdb:/var/lib/influxdb2'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8086/health"]
interval: 5s
timeout: 10s
retries: 20
web:
restart: unless-stopped
image: 'ghcr.io/analogj/scrutiny:master-web'
ports:
- '8080:8080'
volumes:
- './config:/opt/scrutiny/config'
environment:
SCRUTINY_WEB_INFLUXDB_HOST: 'influxdb'
depends_on:
influxdb:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/api/health"]
interval: 5s
timeout: 10s
retries: 20
start_period: 10s
collector:
restart: unless-stopped
image: 'ghcr.io/analogj/scrutiny:master-collector'
cap_add:
- SYS_RAWIO
volumes:
- '/run/udev:/run/udev:ro'
environment:
COLLECTOR_API_ENDPOINT: 'http://web:8080'
COLLECTOR_HOST_ID: 'scrutiny-collector-hostname'
# If true forces the collector to run on startup (cron will be started after the collector completes)
# see: https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md#collector-trigger-on-startup
COLLECTOR_RUN_STARTUP: false
depends_on:
web:
condition: service_healthy
devices:
- "/dev/sda"
- "/dev/sdb"
+19
@@ -0,0 +1,19 @@
version: '3.5'
services:
scrutiny:
restart: unless-stopped
container_name: scrutiny
image: ghcr.io/analogj/scrutiny:master-omnibus
cap_add:
- SYS_RAWIO
ports:
- "8080:8080" # webapp
- "8086:8086" # influxDB admin
volumes:
- /run/udev:/run/udev:ro
- ./config:/opt/scrutiny/config
- ./influxdb:/opt/scrutiny/influxdb
devices:
- "/dev/sda"
- "/dev/sdb"
+43
@@ -0,0 +1,43 @@
# Downsampling
Scrutiny collects a lot of data, which can cause the database to grow unbounded:
- Smart data
- Smart test data
- Temperature data
- Disk metrics (capacity/usage)
- etc
This data must be accurate in the short term, and is useful for doing trend analysis in the long term.
However, for trend analysis we only need aggregate data; individual data points are not as useful.
Scrutiny will automatically downsample data on a schedule to ensure that the database size stays reasonable, while still
ensuring historical data is present for comparisons.
| Bucket Name | Retention Period | Downsampling Range | Downsampling Aggregation Window | Downsampling Cron | Comments |
| --- | --- | --- | --- | --- | --- |
| `metrics` | 15 days | `-2w -1w` | `1w` | weekly on Sunday at 1:00am | main bucket |
| `metrics_weekly` | 9 weeks | `-2mo -1mo` | `1mo` | monthly on the first day of the month at 1:30am | |
| `metrics_monthly` | 25 months | `-2y -1y` | `1y` | yearly on the first day of the year at 2:00am | |
| `metrics_yearly` | forever | - | - | - | |
After 5 months, here's how many data points should exist in each bucket for one disk:
| Bucket Name | Datapoints | Comments |
| --- | --- | --- |
| `metrics` | 15 | 7 daily data points, up to 7 pending data points, 1 buffer data point |
| `metrics_weekly` | 9 | 4 aggregated weekly data points, 4 pending data points, 1 buffer data point |
| `metrics_monthly` | 3 | 3 aggregated monthly data points |
| `metrics_yearly` | 0 | |
After 5 years, here's how many data points should exist in each bucket for one disk:
| Bucket Name | Datapoints | Comments |
| --- | --- | --- |
| `metrics` | - | - |
| `metrics_weekly` | - | - |
| `metrics_monthly` | - | - |
| `metrics_yearly` | - | - |
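The downsampling schedule above maps naturally onto InfluxDB tasks. As a rough sketch (not Scrutiny's actual implementation; the task name, cron expression, and org value are illustrative assumptions), here is a small Go helper that builds the Flux task which would roll the raw `metrics` bucket into `metrics_weekly`, using the `-2w -1w` range and `1w` aggregation window from the first table:

```go
package main

import "fmt"

// weeklyDownsampleTask builds an illustrative Flux task that aggregates the
// "-2w -1w" window of the raw `metrics` bucket into `metrics_weekly`.
// The task name and the cron schedule ("weekly on Sunday at 1:00am") are
// assumptions for this sketch, not Scrutiny's real task definitions.
func weeklyDownsampleTask(org string) string {
	return fmt.Sprintf(`option task = {name: "tsk-weekly-downsampling", cron: "0 1 * * 0"}

from(bucket: "metrics")
  |> range(start: -2w, stop: -1w)
  |> aggregateWindow(every: 1w, fn: mean)
  |> to(bucket: "metrics_weekly", org: "%s")`, org)
}

func main() {
	fmt.Println(weeklyDownsampleTask("my-org"))
}
```

The same shape (with the ranges and windows from the other table rows) would produce the monthly and yearly rollup tasks.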
+17
@@ -0,0 +1,17 @@
# Ansible Install
[Zorlin](https://github.com/Zorlin) has developed and now maintains [an Ansible playbook](https://github.com/Zorlin/scrutiny-playbook) which automates the steps involved in manually setting up Scrutiny.
Using it is simple:
* Grab a copy of the playbook
* Follow the directions in the playbook repository
* Run `ansible-playbook site.yml`
* Visit http://your-machine:8080 to see your new Scrutiny installation.
It will automatically pull metrics from machines once a day, at 1am.
You can see it in action below.
[![asciicast](https://asciinema.org/a/493531.svg)](https://asciinema.org/a/493531)
+182
@@ -0,0 +1,182 @@
> See [docker/example.hubspoke.docker-compose.yml](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml)
> for a docker-compose file.
> The following guide was contributed by @TinJoy59 in #417
> It describes how to deploy Scrutiny in Hub/Spoke mode, where the Hub is running in Docker, and the Spokes (
> collectors) are running as binaries.
> He's using Proxmox & Synology in his guide; however, this should be applicable to almost anyone.
# S.M.A.R.T. Monitoring with Scrutiny across machines
![drawing-3-1671744407](https://user-images.githubusercontent.com/86809766/209230023-bf1ef9f8-65c4-454e-9e1a-be1293cd737e.png)
### 🤔 The problem:
Scrutiny offers a nice Docker package called "Omnibus" that can monitor HDDs attached to a Docker host with relative
ease. Scrutiny can also be installed in a Hub-Spoke layout where the web interface, database, and collector come as three
separate packages. The official documentation assumes that the spokes in the Hub-Spoke layout run Docker, which is
not always the case. The third approach is to install Scrutiny manually, entirely outside of Docker.
### 💡 The solution:
This tutorial provides a hybrid configuration where the Hub lives in a Docker instance while the spokes have only
the Scrutiny Collector installed manually. The Collector periodically sends data to the Hub. It's not mind-bogglingly
hard to understand, but someone might struggle with the setup. This is for them.
### 🖥️ My setup:
I have a Proxmox cluster where one VM runs Docker and all monitoring services - Grafana, Prometheus, various exporters,
InfluxDB and so forth. Another VM runs the NAS - OpenMediaVault v6, where all hard drives reside. The Scrutiny Collector
is triggered every 30min to collect data on the drives. The data is sent to the Docker VM, running InfluxDB.
## Setting up the Hub
![drawing-3-1671744714](https://user-images.githubusercontent.com/86809766/209230113-c954d834-521b-4555-bcd2-eb6b80f343be.png)
The Hub consists of Scrutiny Web, a web interface for viewing the SMART data, and InfluxDB, where the smartmon data is
stored.
[🔗This is the official Hub-Spoke layout in docker-compose.](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml)
We are going to reuse parts of it. The ENV variables provide the necessary configuration for the initial setup, both for
InfluxDB and Scrutiny.
If you are working with an existing InfluxDB instance, you can forgo all the `INIT` variables as they already exist.
The official Scrutiny documentation has a
sample [scrutiny.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.scrutiny.yaml) file that normally
contains the connection and notification details, but I always find it easier to configure as much as possible in the
docker-compose.
```yaml
networks:
  monitoring: # A common network for all monitoring services to communicate on
  notifications: # To Gotify or another Notification service

services:
  influxdb:
    restart: unless-stopped
    container_name: influxdb
    image: influxdb:2.8
    ports:
      - 8086:8086
    volumes:
      - ${DIR_CONFIG}/influxdb2/db:/var/lib/influxdb2
      - ${DIR_CONFIG}/influxdb2/config:/etc/influxdb2
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=Admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=${PASSWORD}
      - DOCKER_INFLUXDB_INIT_ORG=homelab
      - DOCKER_INFLUXDB_INIT_BUCKET=scrutiny
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=SUPER-SECRET-TOKEN
      - TZ=Europe/Stockholm
    networks:
      - monitoring

  scrutiny:
    restart: unless-stopped
    container_name: scrutiny
    # best practice: pin to a specific release instead of latest
    image: ghcr.io/analogj/scrutiny:latest-web
    ports:
      - 8080:8080
    volumes:
      - ${DIR_CONFIG}/config:/opt/scrutiny/config
    environment:
      - SCRUTINY_WEB_INFLUXDB_HOST=influxdb
      - SCRUTINY_WEB_INFLUXDB_PORT=8086
      - SCRUTINY_WEB_INFLUXDB_TOKEN=SUPER-SECRET-TOKEN
      - SCRUTINY_WEB_INFLUXDB_ORG=homelab
      - SCRUTINY_WEB_INFLUXDB_BUCKET=scrutiny
      # Optional but highly recommended to notify you in case of a problem; space-separated list of shoutrrr URIs
      # https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_NOTIFICATIONS.md
      - SCRUTINY_NOTIFY_URLS=http://gotify:80/message?token=a-gotify-token ntfy://username:password@host:port/topic
      - TZ=Europe/Stockholm
    depends_on:
      influxdb:
        condition: service_healthy
    networks:
      - notifications
      - monitoring
```
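Note that `depends_on` with `condition: service_healthy` only fires if the influxdb service actually defines a healthcheck. A sketch of one, adapted from the official hub-spoke example compose, to place under the `influxdb:` service:

```yaml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8086/ping"]
      interval: 5s
      timeout: 10s
      retries: 20
```

Without it, the scrutiny container may wait forever for influxdb to be reported healthy.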
A freshly initialized Scrutiny instance can be accessed on port 8080, e.g. `192.168.0.100:8080`. The interface will be
empty because no metrics have been collected yet.
## Setting up a Spoke ***without*** Docker
![drawing-3-1671744208](https://user-images.githubusercontent.com/86809766/209230155-386a8644-b506-497f-8245-0d24e15c9063.png)
A spoke consists of the Scrutiny Collector binary that is run on a set interval via crontab and sends the data to the
Hub. The official
documentation [describes the manual setup of the Collector](https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector)
- dependencies and step-by-step commands. I have a shortened version that does the same thing but in one line of code.
```bash
# Installing dependencies
apt install smartmontools -y
# 1. Create directory for the binary
# 2. Download the binary into that directory
# 3. Make it executable
# 4. List the contents of the directory for confirmation
mkdir -p /opt/scrutiny/bin && \
curl -L https://github.com/AnalogJ/scrutiny/releases/download/v0.8.1/scrutiny-collector-metrics-linux-amd64 > /opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 && \
chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 && \
ls -lha /opt/scrutiny/bin
```
<p class="callout warning">When downloading GitHub release assets, make sure that you have the correct version. The example above uses release v0.8.1. [The release list can be found here.](https://github.com/analogj/scrutiny/releases)</p>
Once the Collector is installed, you can run it with the following command. Make sure to add the correct address and
port of your Hub as `--api-endpoint`.
```bash
/opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://192.168.0.100:8080"
```
This will run the Collector once and populate the Web interface of your Scrutiny instance. In order to collect metrics
for a time series, you need to run the command repeatedly. Here is an example for crontab, running the Collector every
15 minutes.
```bash
# open crontab
crontab -e
# add a line for Scrutiny
*/15 * * * * /opt/scrutiny/bin/scrutiny-collector-metrics-linux-amd64 run --api-endpoint "http://192.168.0.100:8080"
```
The Collector has its own independent config file that lives in `/opt/scrutiny/config/collector.yaml` but I did not find
a need to modify
it. [A default collector.yaml can be found in the official documentation.](https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml)
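If you do need to tweak it, the settings you are most likely to touch are the host label and the API endpoint. A minimal sketch following example.collector.yaml (the hostname and address below are placeholders):

```yaml
host:
  # label under which this spoke appears in the hub (defaults to the machine's hostname)
  id: 'my-spoke'
api:
  endpoint: 'http://192.168.0.100:8080'
```

Anything set here overrides the defaults baked into the collector binary.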
## Setting up a Spoke ***with*** Docker
![drawing-3-1671744277](https://user-images.githubusercontent.com/86809766/209230176-87c9e55a-4e3e-4f5f-9609-335d41529f3d.png)
Setting up a remote Spoke in Docker requires you to split
the [official Hub-Spoke layout docker-compose.yml](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml).
In the following docker-compose you need to provide the `${API_ENDPOINT}`, in my case `http://192.168.0.100:8080`.
All drives that you wish to monitor must also be presented to the container under `devices`.
The image handles the periodic scanning of the drives.
```yaml
services:
  collector:
    restart: unless-stopped
    # best practice: pin to a specific release instead of latest
    image: 'ghcr.io/analogj/scrutiny:latest-collector'
    cap_add:
      - SYS_RAWIO
    volumes:
      - '/run/udev:/run/udev:ro'
    environment:
      COLLECTOR_API_ENDPOINT: ${API_ENDPOINT}
    devices:
      - "/dev/sda"
      - "/dev/sdb"
```
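The containerized collector runs on an internal cron; the interval can reportedly be overridden with the `COLLECTOR_CRON_SCHEDULE` environment variable (an assumption on my part, so verify against the README of your pinned release):

```yaml
    environment:
      COLLECTOR_API_ENDPOINT: ${API_ENDPOINT}
      # assumed variable; overrides the image's built-in cron schedule
      COLLECTOR_CRON_SCHEDULE: "*/30 * * * *"
```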
While the easiest way to get started with [Scrutiny is using Docker](https://github.com/AnalogJ/scrutiny#docker),
it is possible to run it manually without much work. You can even mix and match, using Docker for one component and
a manual installation for the other. There's also [an installer](INSTALL_ANSIBLE.md) which automates this manual installation procedure.
Scrutiny is made up of three components: an influxdb Database, a collector and a webapp/api. Here's how each component can be deployed manually.
> Note: the `/opt/scrutiny` directory is not hardcoded, you can use any directory name/path.
## InfluxDB
Please follow the official InfluxDB installation guide. Note, you'll need to install v2.8.0+.
https://docs.influxdata.com/influxdb/v2/install/
## Webapp/API
### Dependencies
```yaml
web:
  src:
    frontend:
      # The path to the Scrutiny frontend files (js, css, images) must be specified.
      # We'll populate it with files in the next section
      path: /opt/scrutiny/web
  # if you're running influxdb on a different host (or using a cloud provider) you'll need to update the host & port below.
  # token, org, bucket are unnecessary for a new InfluxDB installation, as Scrutiny will automatically run the InfluxDB
  # setup and store the information in the config file. If you're re-using an existing influxdb installation, you'll need
  # to provide the `token`
  influxdb:
    host: localhost
    port: 8086
    # token: 'my-token'
    # org: 'my-org'
    # bucket: 'bucket'
```
> Note: for a full list of available configuration options, please check the [example.scrutiny.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.scrutiny.yaml) file.
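For instance, notification endpoints live under `notify.urls` in that file; a fragment following example.scrutiny.yaml (the token below is a placeholder):

```yaml
notify:
  urls:
    - "http://gotify:80/message?token=a-gotify-token"
```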
Now that we have downloaded the required files, let's prepare the filesystem.
```sh
chmod +x /opt/scrutiny/bin/scrutiny-web-linux-amd64
# Next, lets extract the frontend files.
# NOTE: after extraction, there **should not** be a `dist` subdirectory in `/opt/scrutiny/web` directory.
cd /opt/scrutiny/web
tar xvzf scrutiny-web-frontend.tar.gz --strip-components 1 -C .
# Cleanup
rm -rf scrutiny-web-frontend.tar.gz
```
Unlike the webapp, the collector does have some dependencies:
Unfortunately the version of `smartmontools` (which contains `smartctl`) available in some of the base OS repositories is ancient.
So you'll need to install the v7+ version using one of the following commands:
- **Ubuntu (22.04/Jammy/LTS):** `apt-get install -y smartmontools`
- **Ubuntu (18.04/Bionic):** `apt-get install -y smartmontools=7.0-0ubuntu1~ubuntu18.04.1`
- **CentOS 8:**
- `dnf install https://extras.getpagespeed.com/release-el8-latest.rpm`
- `dnf install smartmontools`
- **FreeBSD:** `pkg install smartmontools`
The following additional dependencies are needed if you want to run the collector as an unprivileged user:
- systemd version > 235
- a restricted user account
### Directory Structure
Now let's create a directory structure to contain the Scrutiny collector binary.
```sh
mkdir -p /opt/scrutiny/bin
```
### Download Files
Next, we'll download the Scrutiny collector binary from the [latest Github release](https://github.com/analogj/scrutiny/releases). You are looking for the one titled **scrutiny-collector-metrics-linux-amd64** unless you know you are on ARM.
```sh
wget -O /tmp/scrutiny-collector-metrics https://github.com/AnalogJ/scrutiny/releases/latest/download/scrutiny-collector-metrics-linux-amd64
```
Optional, but recommended: before continuing, compare the SHA from the release page with the downloaded file to ensure it hasn't been corrupted or tampered with. The command to do this is:
`echo "SHA_GOES_HERE  /tmp/scrutiny-collector-metrics" | sha256sum -c`
For example, for the v0.8.6 release:
`echo "4c163645ce24e5487f4684a25ec73485d77a82a57f084808ff5aad0c11499ad2  /tmp/scrutiny-collector-metrics" | sha256sum -c`
followed by:
`sudo mv /tmp/scrutiny-collector-metrics /opt/scrutiny/bin/`
to move the binary to its final resting place.
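The checksum line that `sha256sum -c` reads must be the digest, then two spaces, then the path. A quick self-contained way to see the pattern, using a throwaway `/tmp/demo-file` rather than the real binary:

```shell
# create a throwaway file to stand in for the downloaded binary
printf 'hello' > /tmp/demo-file

# capture its digest, then feed "DIGEST  PATH" (two spaces) back to sha256sum -c
digest=$(sha256sum /tmp/demo-file | awk '{print $1}')
echo "${digest}  /tmp/demo-file" | sha256sum -c
# prints: /tmp/demo-file: OK
```

If the digest does not match, `sha256sum -c` reports `FAILED` and exits non-zero.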
### Prepare Scrutiny
Now that we have downloaded the required files, let's prepare the filesystem.
```sh
# Let's make sure the Scrutiny collector is executable.
chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics
```
If you are using SELinux, you may need to also do the following:
```sh
# tell SELinux to allow these binaries
sudo semanage fcontext -a -t bin_t "/opt/scrutiny/bin(/.*)?"
# update labels
sudo restorecon -Rv /opt/scrutiny/bin
```
### Start Scrutiny Collector, Populate Webapp
Next, we will manually trigger the collector, to populate the Scrutiny dashboard:
> NOTE: if you need to pass a config file to the scrutiny collector, you can provide it using the `--config` flag.
```sh
/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
```
### Schedule Collector with (root) Cron
Finally you need to schedule the collector to run periodically.
This may be different depending on your OS/environment, but it may look something like this:
```sh
# open crontab
sudo crontab -e
# add a line for Scrutiny
*/15 * * * * . /etc/profile; /opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
```
### Schedule Collector with Systemd (rootless)
Alternatively you can run `scrutiny-collector-metrics` as non-root so long as the relevant capabilities and permissions are granted.
#### Creating a Restricted Service Account
This is the account that will run `scrutiny-collector-metrics`. Note this isn't strictly needed for all setups, but is useful from a logging/auditing perspective.
- Debian-based distros:
- `sudo adduser --system scrutiny-svc --group --home /opt/scrutiny-svc`
- RHEL-based distros:
- `sudo useradd --system --home-dir /opt/scrutiny-svc --shell /sbin/nologin scrutiny-svc`
Next, add the user to the `disk` group:
```sh
sudo usermod -aG disk scrutiny-svc
```
#### Creating a Restricted Systemd Service using AmbientCapabilities (easier)
This is the simpler setup, which allows you to run scrutiny rootless, but depending on what you want, it may require granting more permissions to scrutiny than you would like.
1. go to `/etc/systemd/system`
2. create scrutiny-collector.service with the following contents:
```ini
[Unit]
Description=Daily Restricted Scrutiny Collector
After=network.target

[Service]
Type=oneshot
User=scrutiny-svc
Group=disk
ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
# --- PRIVILEGE LOCKDOWN ---
## CAP_SYS_RAWIO is needed for SATA drives
AmbientCapabilities=CAP_SYS_RAWIO
CapabilityBoundingSet=CAP_SYS_RAWIO
## unfortunately nvme drives require CAP_SYS_ADMIN
## if you want nvme drives you must do the following:
#AmbientCapabilities=CAP_SYS_RAWIO CAP_SYS_ADMIN
#CapabilityBoundingSet=
NoNewPrivileges=yes
# Security/sandboxing settings
KeyringMode=private
LockPersonality=yes
MemoryDenyWriteExecute=yes
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=no
## you can restrict devices using:
#DevicePolicy=closed
#DeviceAllow=/dev/sda r
#DeviceAllow=/dev/nvme0 r
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectClock=yes
ProtectHostname=yes
ProtectKernelLogs=yes
RemoveIPC=yes
RestrictSUIDSGID=true
# --- NETWORK LOCKDOWN
## use these to restrict what scrutiny can talk to over the network
## if using a hub on a different host you will need to change the values accordingly
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressDeny=any
IPAddressAllow=localhost
[Install]
WantedBy=multi-user.target
```
Additionally, for nvme drives you may need to create a udev rule on many systems, as /dev/nvme* is often owned only by root:
##### add udev rule `/etc/udev/rules.d/99-nvme.rules` with contents:
```
KERNEL=="nvme[0-9]*", GROUP="disk", MODE="0640"
```
then run the following commands to load the udev rule:
```sh
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=nvme --action=add
```
##### Pros:
- easy to maintain
- much better than running as root (especially if you don't need nvme drives)
- there are no privilege escalations needed
##### Cons:
NOTE: These cons basically only apply if a major supply-chain attack happens against scrutiny, and reflect a worst-case scenario that is unlikely to ever occur:
- CAP_SYS_RAWIO allows for data exfiltration/modification from SATA drives (ssh keys, /etc/shadow, etc)
- CAP_SYS_ADMIN would theoretically allow for significant system compromise
- nvme drives require a udev rule for reliable access
If you are happy with that, you can jump to [Create a Systemd Timer to run scrutiny-collector.service](#create-a-systemd-timer-to-run-scrutiny-collectorservice)
#### Creating a Restricted Systemd Service using sudo and Shim Script
If granting scrutiny `CAP_SYS_RAWIO` and/or `CAP_SYS_ADMIN` exceeds your risk appetite, you have another option, though one more complicated and with its own set of pros/cons.
1. run `sudo mkdir -p /opt/smartctl-shim/bin`
2. edit `/opt/smartctl-shim/bin/smartctl` with the following content:
```sh
#!/bin/bash
# Shim for accounts to use smartctl without being root
# for automation, the account must be in sudoers
exec /usr/bin/sudo /usr/sbin/smartctl "$@"
```
3. create a new `scrutiny-collector` file in `/etc/sudoers.d/`
4. inside `/etc/sudoers.d/scrutiny-collector` add the following:
```sh
scrutiny-svc ALL=(root) NOPASSWD: /usr/sbin/smartctl *
```
5. go to `/etc/systemd/system`
6. create scrutiny-collector.service with the following contents:
```ini
[Unit]
Description=Daily Restricted Scrutiny Collector
After=network.target
[Service]
Type=oneshot
User=scrutiny-svc
Environment="PATH=/opt/smartctl-shim/bin:/usr/bin:/bin"
ExecStart=/opt/scrutiny/bin/scrutiny-collector-metrics run --api-endpoint "http://localhost:8080"
# --- PRIVILEGE LOCKDOWN ---
## we use sudo to elevate privileges for smartctl only, so no Ambient Capabilities are needed
AmbientCapabilities=
## CAP_SYS_RAWIO is needed for SATA drives
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_AUDIT_WRITE CAP_SYS_RAWIO CAP_SYS_RESOURCE
## unfortunately nvme drives require CAP_SYS_ADMIN
## if you want nvme drives you must do the following:
# CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_AUDIT_WRITE CAP_SYS_RAWIO CAP_SYS_ADMIN CAP_SYS_RESOURCE
## since sudo needs to be used to elevate permissions in this setup, we need to allow new privileges
NoNewPrivileges=no
# Security/sandboxing settings
KeyringMode=private
LockPersonality=yes
MemoryDenyWriteExecute=yes
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=no
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectClock=yes
ProtectHostname=yes
ProtectKernelLogs=yes
RemoveIPC=yes
RestrictSUIDSGID=true
# --- NETWORK LOCKDOWN
## use these to restrict what scrutiny can talk to over the network
## if using a hub on a different host you will need to change the values accordingly
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressDeny=any
IPAddressAllow=localhost
[Install]
WantedBy=multi-user.target
```
##### Pros:
- the scrutiny binary itself will not have permissions like CAP_SYS_ADMIN
- much better than running as root (especially if you don't need nvme drives)
- `sudo` restricts privilege escalation to just `smartctl`
- no udev rule needed
##### Cons:
NOTE: These cons basically only apply if a major supply-chain attack happens against scrutiny, and reflect a worst-case scenario that is unlikely to ever occur:
- Any sort of privilege escalation attack in sudo could theoretically allow a compromised scrutiny to gain additional privileges, since the process has permission to escalate privileges in general
- Even though sudo only allows `smartctl`, it still has `CAP_SYS_RAWIO` and `CAP_SYS_ADMIN` so in theory the same attacks from the first method are possible, though now only with an exploit using smartctl instead of scrutiny directly
- Even though you don't need a udev rule, this approach adds a lot of additional administrative overhead
- While the scrutiny binary itself isn't elevated, it spawns a sub-process that runs as root (`sudo smartctl`)
#### Create a Systemd Timer to run scrutiny-collector.service
First, let's test our service. It doesn't matter which method you used above, as either way you need to load and run it.
```sh
# reload changes for systemd services
sudo systemctl daemon-reload
# enable the service
sudo systemctl enable scrutiny-collector.service
# now run the service
sudo systemctl start scrutiny-collector.service
```
You should see the data in your hub instance of scrutiny now. If you run into issues, I recommend turning on debug logging for scrutiny and checking your system logs using journalctl; it may be that a permission is missing or wrong.
Now that things have been validated, let's create the systemd timer to run the service for us on a schedule:
1. if you are not still there, go to `/etc/systemd/system`
2. create scrutiny-collector.timer with the following contents:
```ini
[Unit]
Description=Run Scrutiny Collector daily at 2am
[Timer]
# Standard calendar trigger
OnCalendar=*-*-* 02:00:00
# Ensures the job runs if the computer was off at 2am
Persistent=true
# Minimizes I/O spikes by staggering start time
RandomizedDelaySec=30
[Install]
WantedBy=timers.target
```
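If you would rather mirror the 15-minute cron cadence used elsewhere in this guide, `OnCalendar` also accepts compact systemd calendar expressions (a sketch; see the systemd.time man page for the full syntax):

```ini
[Timer]
# every 15 minutes (minutes 0, 15, 30, 45 of every hour)
OnCalendar=*:0/15
Persistent=true
```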
Update the schedule as you see fit for your needs.
Once you are satisfied with your timer, you'll need to load and enable it:
```sh
# reload changes for systemd services
sudo systemctl daemon-reload
# now enable the timer
sudo systemctl enable --now scrutiny-collector.timer
```
That's it! You're done. You can check the status of the timer using `sudo systemctl status scrutiny-collector.timer`.
# Manual Windows Install
This guide is specifically for people who are on a Windows machine using [WSL](https://learn.microsoft.com/en-us/windows/wsl/about) with Docker.
Scrutiny is made up of three components: an influxdb Database, a collector and a webapp/api. Docker will be used for
the influxdb and webapp/API, the collector component will be facilitated by [Windows Task Scheduler](https://learn.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-start-page).
> **NOTE:** If you are **NOT** using WSL with docker, then the easiest way to get started with [Scrutiny is the omnibus Docker image](https://github.com/AnalogJ/scrutiny#docker).
## InfluxDB and Webapp/API (Docker)
1. Copy the [example.hubspoke.docker-compose.yml](https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml)
file and delete the collector section near the bottom of the file.
2. Run `docker-compose up -d` to verify that the DB and webapp are working correctly; once it completes, your webapp
should be up and running, but the dashboard will be empty (default location is `localhost:8080`)
## Collector (Windows Task Scheduler)
1. Download the latest `scrutiny-collector-metrics-windows-amd64.exe` from the [releases page](https://github.com/AnalogJ/scrutiny/releases) (under assets)
2. On your windows host, open [Windows Task Scheduler](https://www.wikihow.com/Open-Task-Scheduler-in-Windows-10) as **Administrator**
1. In the **Start Menu** (Windows key), type `Task Scheduler` and then right click `Run as Administrator` to open
3. On the status bar (under the `action` tab), click `Create Task...`
4. A new window should open with the `General` Tab open, enter relevant information into the `Name` and `Description` fields
1. Under **Security Options** check:
1. **Run whether user is logged on or not**
2. **Run with highest privileges**
5. Next, click the `Triggers` tab and then click `New...` (bottom left-hand side of the window)
6. Here you can set how often you want this task to run, example settings are the following:
1. **Settings:**
1. `Daily`, start at `TODAYS_DATE` `12:00:00 AM`, Recur every `1` days,
2. **Advanced Settings:**
1. Repeat Task every: `1 hour` for a duration of `Indefinitely`
2. Stop task if it runs longer than: `30 minutes`
3. Click Ok when satisfied with your schedule
> **NOTE:** The above settings will trigger the task **every day at midnight** and then **run every hour after that** (modify as needed)
7. Next, click the `Actions` tab and then click `New...` (bottom left-hand side of the window)
1. **Action Settings:**
1. In the **Program/Script** field, put: `scrutiny-collector-metrics-windows-amd64.exe`
2. In the **Add arguments (optional)** field, put: `run --api-endpoint "http://localhost:8080" --config collector.yaml`
> **NOTE:**
> * Make sure that you put the correct port number (as specified in the docker-compose file) for the webapp (default is `8080`)
> * The `--config` param is optional and is not needed if you just want to use the default collector config, see
[example.collector.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml) for more info on the collector config.
3. In the **Start in (optional)** field, put: FOLDER_PATH_TO_YOUR `scrutiny-collector-metrics-windows-amd64.exe` file
> **NOTE:** Must be exact and do not include `scrutiny-collector-metrics-windows-amd64.exe` in the path
4. Click Ok when finished
8. Next, click the `Conditions` tab and make sure that everything is unchecked (unless you want to specify otherwise)
9. Next, click the `Settings` tab and check everything except for the last checkbox
1. **Examples for the following settings:**
1. If the task fails, restart every: `5 minutes`
2. Attempt restart up to: `3` times
3. Stop the task if it runs longer than `1 hour`
10. Next, once satisfied with everything, click Ok
11. Then, find your newly created task (by its name) in the scheduler task list and then manually run it (right click it and then click `Run`)
12. Finally, refresh your dashboard after a minute or two and your drive information should have populated the webapp dashboard.
# pfsense Install
This basically follows the [Manual collector instructions](https://github.com/AnalogJ/scrutiny/blob/master/docs/INSTALL_MANUAL.md#collector) and assumes you are running a hub-and-spoke deployment and already have the web app set up.
### Dependencies
SSH into pfsense, hit `8` for the shell and install the required dependencies.
```
pkg install smartmontools
```
Ensure smartmontools is v7+. This won't be a problem in pfsense 2.6.0+
### Directory Structure
Now let's create a directory structure to contain the Scrutiny collector binary.
```
mkdir -p /opt/scrutiny/bin
```
### Download Files
Next, we'll download the Scrutiny collector binary from the [latest Github release](https://github.com/analogj/scrutiny/releases).
> NOTE: Ensure you have the latest version in the below command
```
fetch -o /opt/scrutiny/bin https://github.com/AnalogJ/scrutiny/releases/download/vX.X.X/scrutiny-collector-metrics-freebsd-amd64
```
### Prepare Scrutiny
Now that we have downloaded the required files, let's prepare the filesystem.
```
chmod +x /opt/scrutiny/bin/scrutiny-collector-metrics-freebsd-amd64
```
### Start Scrutiny Collector, Populate Webapp
Next, we will manually trigger the collector, to populate the Scrutiny dashboard:
> NOTE: if you need to pass a config file to the scrutiny collector, you can provide it using the `--config` flag.
```
/opt/scrutiny/bin/scrutiny-collector-metrics-freebsd-amd64 run --api-endpoint "http://localhost:8080"
```
> NOTE: change the IP address to that of your web app
### Schedule Collector with Cron
Finally you need to schedule the collector to run periodically.
Login to the pfsense webGUI and head to `Services/Cron` add an entry with the following details:
```
Minute: */15
Hour: *
Day of the Month: *
Month of the Year: *
Day of the Week: *
User: root
Command: /opt/scrutiny/bin/scrutiny-collector-metrics-freebsd-amd64 run --api-endpoint "http://localhost:8080" >/dev/null 2>&1
```
> NOTE: `>/dev/null 2>&1` is used to stop cron confirmation emails being sent.
# Rootless Podman Quadlet Install
Note: These instructions are written with Podman 4.9 in mind, as that's what's available on Ubuntu 24.04. Podman 5+ can simplify the process using a .pod file to run both the hub and influxdb instance in the same pod, sharing localhost. This is a fairly trivial change should anyone want to add the documentation for it. While this document isn't Ubuntu-specific, this is being purposefully done to allow it to apply to the vast majority of Podman users, regardless of what Linux distro they use.
### Dependencies
- Podman >= 4.9
- Systemd >= 250 (for quadlet support)
- a restricted service account
### Creating a Service Account
See [Creating a Restricted Service Account](INSTALL_MANUAL.md#creating-a-restricted-service-account) for instructions.
While you do not need to use the same account as the collector, this guide will assume you will be for all its examples.
In addition to those steps, you will need to create sub ids and enable lingering for the user:
```sh
# add sub-uids and sub-gids, you may need to adjust numbers if you have other rootless quadlets running for other users already
# it is not recommended to go below 100000
# we choose to start at 500000 in the event you have some other podman accounts
sudo usermod --add-subuids 500000-565535 scrutiny-svc
sudo usermod --add-subgids 500000-565535 scrutiny-svc
# We want the quadlets to stay running even if the user isn't logged in
sudo loginctl enable-linger scrutiny-svc
```
### Directory Structure
Once the account is created, you will need to grab its ID and create a few directories for the data files and rootless quadlet files:
```sh
# create folders for config and influxdb
sudo mkdir -p /opt/scrutiny-svc/scrutiny/{config,influxdb}
# get the config file for scrutiny hub
sudo wget -O /opt/scrutiny-svc/scrutiny/config/scrutiny.yaml https://raw.githubusercontent.com/AnalogJ/scrutiny/refs/heads/master/example.scrutiny.yaml
# set permissions on everything
sudo chown -R scrutiny-svc:scrutiny-svc /opt/scrutiny-svc
# Get the ID of scrutiny-svc so you know it for your own record-keeping
id -u scrutiny-svc
# create a directory
sudo mkdir -p /etc/containers/systemd/users/$(id -u scrutiny-svc)
## go into the directory you just created for the rest of the guide
cd /etc/containers/systemd/users/$(id -u scrutiny-svc)
```
### Quadlet Files
Now that everything is set up and configured for the account to run quadlets, we just need to create a few quadlet files.
All remaining system actions will take place in `/etc/containers/systemd/users/$(id -u scrutiny-svc)` which is why we had you cd into it.
#### Networking
We need the hub and influxdb instances to be able to talk to each other. With Podman 4.9 they run as separate containers that do not share a localhost, so we need to configure a network for them to share. The file is pretty simple:
##### scrutiny-net.network
```ini
[Network]
NetworkName=scrutiny-net
```
#### Containers
Now we're ready to create the containers.
##### influxdb.container
```ini
[Unit]
Description=influxdb
[Container]
ContainerName=influxdb
Image=docker.io/library/influxdb:2.8
AutoUpdate=registry
Timezone=local
## not strictly necessary, but keeps file permissions sane for influxdb
PodmanArgs=--group-add keep-groups
## versions of podman after 5.1 should do the below instead
#GroupAdd=keep-groups
Volume=/opt/scrutiny-svc/scrutiny/influxdb:/var/lib/influxdb2:Z
Network=scrutiny-net
[Service]
Restart=on-failure
[Install]
# Start by default on boot
WantedBy=default.target
```
##### scrutiny-web.container
```ini
[Unit]
Description=scrutiny-web
After=influxdb.service
Requires=influxdb.service
[Container]
ContainerName=scrutiny-web
Image=ghcr.io/analogj/scrutiny:latest-web
AutoUpdate=registry
Timezone=local
Volume=/opt/scrutiny-svc/scrutiny/config:/opt/scrutiny/config:Z
Network=scrutiny-net
PublishPort=8080:8080/tcp
[Service]
Restart=on-failure
[Install]
# Start by default on boot
WantedBy=default.target
```
#### Update scrutiny config
Since our containers are running separately, we need to update `/opt/scrutiny-svc/scrutiny/config/scrutiny.yaml` to the new influxdb host:
1. edit `/opt/scrutiny-svc/scrutiny/config/scrutiny.yaml`
2. under the `influxdb` section, change `host: 0.0.0.0` to `host: influxdb` -- remember that YAML is whitespace-sensitive, so be mindful of the indents
```yaml
influxdb:
  # scheme: 'http'
  host: influxdb
  port: 8086
```
### Running the Hub
With that done, we're now ready to start up the services:
```sh
# reload all the systemd user files for scrutiny-svc
sudo systemctl --user -M scrutiny-svc@ daemon-reload
# start the scrutiny-net network:
sudo systemctl --user -M scrutiny-svc@ start scrutiny-net-network.service
# start influxdb first and wait for it to come up
sudo systemctl --user -M scrutiny-svc@ start influxdb.service
# check if it's fully up
sudo systemctl --user -M scrutiny-svc@ status influxdb.service
# now start scrutiny
sudo systemctl --user -M scrutiny-svc@ start scrutiny-web.service
```
You are now ready to run the collector. If you would like to run that rootless as well, see the guide at [Schedule Collector with Systemd (rootless)](INSTALL_MANUAL.md#schedule-collector-with-systemd-rootless).
# Install collector on Synology
## Install Entware
This will allow you to install a newer version of smartmontools on your Synology. Follow the instructions here (tested on DSM 7): https://github.com/Entware/Entware/wiki/Install-on-Synology-NAS
**PLEASE NOTE THAT IF YOU UPDATE DSM FIRMWARE YOU MAY BORK THE EXISTING ENTWARE INSTALLATION, FOR ANYTHING THAT MAY RELATE TO ENTWARE PLEASE VISIT THEIR REPO**
## Collector Setup
**1. Run an update**
`sudo opkg update`
**2. Run an upgrade**
`sudo opkg upgrade`
**3. Install smartmontools**
`sudo opkg install smartmontools`
*It should install v7.2-2*
`Installing smartmontools (7.2-2) to root...`
**4. We will now create the directories.**
```
mkdir -p /volume1/\@Entware/scrutiny/bin
mkdir -p /volume1/\@Entware/scrutiny/conf
```
**5. change into the bin directory**
`cd /volume1/\@Entware/scrutiny/bin`
**6. Download the collector binary for your architecture and make it executable**
`wget https://github.com/AnalogJ/scrutiny/releases/download/v0.4.12/scrutiny-collector-metrics-linux-arm64`
`chmod +x /volume1/\@Entware/scrutiny/bin/scrutiny-collector-metrics-linux-arm64`
**7. Create a config file for the collector**
```
cd /volume1/\@Entware/scrutiny/conf
wget https://raw.githubusercontent.com/AnalogJ/scrutiny/master/example.collector.yaml
mv example.collector.yaml collector.yaml
```
**8. Let's make some changes in the [collector config file](../example.collector.yaml). These are what I uncommented/added; please tweak the device paths to your needs**
```
host:
  id: 'Server_Name'
devices:
  # example for forcing device type detection for a single disk
  - device: /dev/sda
    type: 'sat'
  - device: /dev/sdb
    type: 'sat'
  - device: /dev/sdc
    type: 'sat'
  - device: /dev/sdd
    type: 'sat'
api:
  endpoint: 'http://<url>:8080'
```
**9. Let's update the smartmontools drive database (drivedb.h)**
```
cd /volume1/\@Entware/scrutiny/bin/
wget https://raw.githubusercontent.com/smartmontools/smartmontools/master/smartmontools/drivedb.h
```
**10. I ran it like this, but you can tweak it to your liking. The most important part is `--drivedb`, as this loads the updated database into the application for future use**
`smartctl -d sat --all /dev/sda --drivedb=/volume1/\@Entware/scrutiny/bin/drivedb.h`
**11. Now let's create a small bash script; this will be used for the scheduled task inside Synology**
`vim /volume1/\@Entware/scrutiny/bin/run_collect.sh`
**The contents are below, copy and paste them in**
```
#!/bin/bash
/volume1/\@Entware/scrutiny/bin/scrutiny-collector-metrics-linux-arm64 run --config /volume1/\@Entware/scrutiny/conf/collector.yaml
```
**Make `run_collect.sh` executable**
`chmod +x /volume1/\@Entware/scrutiny/bin/run_collect.sh`
## Set up Synology to run a scheduled task.
Log in to DSM and do the following:
Goto: DSM > Control Panel > Task Scheduler
Create > Scheduled Task > User Defined Script
###### General
```
Task: Scrutiny_Collector
User: root
Enabled: yes
```
###### Schedule
```
Run on the following days: Daily
```
###### Time:
```
Frequency: <Your desired frequency>
```
###### Task Settings
**Run Command**
```
. /opt/etc/profile; /volume1/\@Entware/scrutiny/bin/run_collect.sh
```
## Troubleshooting
If you have any issues with your devices being detected, or incorrect data, please take a look at [TROUBLESHOOTING_DEVICE_COLLECTOR.md](./TROUBLESHOOTING_DEVICE_COLLECTOR.md)
As a Docker image can be created using various OS bases, the choice of image is entirely the user's. Recommending a specific image from a specific maintainer is beyond the scope of this guide. However, to provide some context, given the number of questions posed regarding the various versions available:
- **ghcr.io/analogj/scrutiny:master-omnibus**
- `Image maintained directly by the application author`
- `Debian based docker image`
- **linuxserver/scrutiny**
- `Image maintained by the LinuxServer.io group`
- `Alpine based docker image`
- **hotio/scrutiny**
- `Image maintained by hotio`
- `DETAILS TBD`
# Officially Supported NAS/OS's
These are the officially supported NAS OS's (with documentation and setup guides). Once a guide is created (in `docs/guides/` or elsewhere) it will be linked here.
- [x] [freenas/truenas](https://blog.stefandroid.com/2022/01/14/smart-scrutiny.html)
- [x] [unraid](./INSTALL_UNRAID.md)
- [ ] ESXI
- [ ] Proxmox
- [x] Synology
- [Hub/Spoke Deployment - Collector](./INSTALL_SYNOLOGY_COLLECTOR.md)
- [Omnibus Deployment](https://drfrankenstein.co.uk/scrutiny-in-container-manager-on-a-synology-nas/) - Container Manager Package
- [ ] OMV
- [ ] Amahi
- [ ] Running in a LXC container
- [x] [PFSense](./INSTALL_PFSENSE.md)
- [x] QNAP
- [x] [RockStor](https://rockstor.com/docs/interface/docker-based-rock-ons/scrutiny.html)
- [ ] Solaris/OmniOS CE Support
- [ ] Kubernetes
- [x] [Windows](./INSTALL_MANUAL_WINDOWS.md)
# Testers
Scrutiny supports many operating systems, CPU architectures and runtime environments. Unfortunately that makes it incredibly
difficult to test.
Thankfully the following users have been gracious enough to test/validate Scrutiny works on their system.
> NOTE: If you're interested in volunteering to test Scrutiny beta builds on your system, please [open an issue](https://github.com/AnalogJ/scrutiny/issues).
| Architecture Name | Binaries | Docker |
| --- | --- | --- |
| linux-amd64 | @TizzAmmazz | @feroxy @rshxyz |
| linux-arm-5 | -- | |
| linux-arm-6 | -- | |
| linux-arm-7 | @Zorlin | @martini1992 |
| linux-arm64 | @SiM22 @Zorlin | @ViRb3 @agneevX @benamajin |
| freebsd-amd64 | @BadCo-NZ @varunsridharan @martadinata666 @KenwoodFox @FingerlessGlov3s | |
| macos-amd64 | -- | -- |
| macos-arm64 | -- | -- |
| windows-amd64 | @gabrielv33 | -- |
| windows-arm64 | -- | -- |
# Scrutiny <-> SmartMonTools
Scrutiny uses `smartctl --scan` to detect devices/drives. If your devices are not being detected by Scrutiny, or some
data is missing, this is probably due to a `smartctl` issue.
The following page will document commonly asked questions and troubleshooting steps for the Scrutiny S.M.A.R.T. data collector.
## WWN vs Device name
As discussed in [`#117`](https://github.com/AnalogJ/scrutiny/issues/117), `/dev/sd*` device paths are ephemeral.
> Device paths in Linux aren't guaranteed to be consistent across restarts. Device names consist of major numbers (letters) and minor numbers. When the Linux storage device driver detects a new device, the driver assigns major and minor numbers from the available range to the device. When a device is removed, the device numbers are freed for reuse.
>
> The problem occurs because device scanning in Linux is scheduled by the SCSI subsystem to happen asynchronously. As a result, a device path name can vary across restarts.
>
> https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems
While the Docker Scrutiny collector does require devices to be attached to the docker container by device name (using `--device=/dev/sd..`), internally
Scrutiny stores and references the devices by their `WWN`, which is globally unique and never changes.
As such, passing devices to the Scrutiny collector container using `/dev/disk/by-id/`, `/dev/disk/by-label/`, `/dev/disk/by-path/` and `/dev/disk/by-uuid/`
paths is unnecessary, unless you'd like to ensure the docker run command never needs to change.
#### Force /dev/disk/by-id paths
Since Scrutiny uses WWN under the hood, it really doesn't care about `/dev/sd*` vs `/dev/disk/by-id/`. The problem is the interaction between docker and smartmontools when using `--device /dev/disk/by-id` paths.
Basically Scrutiny offloads all device detection to smartmontools, which doesn't seem to detect devices that have been passed into the docker container using `/dev/disk/by-id` paths.
If you must use "static" device references, you can map the host device id/uuid/wwn references to device names within the container:
```
# --device=<Host Device>:<Container Device Mapping>
docker run .... \
  --device=/dev/disk/by-id/wwn-0x5000xxxxx:/dev/sda \
  --device=/dev/disk/by-id/wwn-0x5001xxxxx:/dev/sdb \
  --device=/dev/disk/by-id/wwn-0x5003xxxxx:/dev/sdc \
  ...
```
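In docker-compose form, the same mapping can be sketched as follows. This is a hypothetical fragment, not taken from the project's example files; the WWN values are placeholders you must replace with your own.

```yaml
# Hypothetical docker-compose sketch; replace the wwn-* IDs with your own.
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    devices:
      # <host device>:<container device mapping>
      - /dev/disk/by-id/wwn-0x5000xxxxx:/dev/sda
      - /dev/disk/by-id/wwn-0x5001xxxxx:/dev/sdb
```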
## Device Detection By Smartctl
The first thing you'll want to do is run `smartctl` locally (not in Docker) and make sure the output shows all your drives as expected.
See the `Drive Types` section below for what this output should look like for `NVMe`/`ATA`/`RAID` drives.
```bash
smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/sdc -d scsi # /dev/sdc, SCSI device
/dev/sdd -d scsi # /dev/sdd, SCSI device
```
Once you've verified that `smartctl` correctly detects your drives, make sure scrutiny is correctly detecting them as well.
> NOTE: make sure you specify all the devices you'd like scrutiny to process using `--device=` flags.
```bash
docker run -it --rm \
-v /run/udev:/run/udev:ro \
--cap-add SYS_RAWIO \
--device=/dev/sda \
--device=/dev/sdb \
ghcr.io/analogj/scrutiny:master-collector smartctl --scan
```
If the output is the same, your devices will be processed by Scrutiny.
### Collector Config File
In some cases `--scan` does not correctly detect the device type, returning [incomplete SMART data](https://github.com/AnalogJ/scrutiny/issues/45).
Scrutiny supports overriding the detected device type via the config file.
[example.collector.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml)
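As a minimal sketch, an override forcing SAT detection for a single disk might look like the following (the device path is an assumption; see the example file above for the full set of options):

```yaml
# collector.yaml -- hypothetical sketch forcing the device type for one disk
devices:
  - device: /dev/sda
    type: 'sat'
```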
### RAID Controllers (Megaraid/3ware/HBA/Adaptec/HPE/etc)
Smartctl has support for a large number of [RAID controllers](https://www.smartmontools.org/wiki/Supported_RAID-Controllers), however this
support is not automatic, and may require some additional device type hinting. You can provide this information to the Scrutiny collector
using a collector config file. See [example.collector.yaml](/example.collector.yaml)
> NOTE: If you use docker, you **must** pass through the RAID virtual disk to the container using `--device` (see below)
>
> This device may be in `/dev/*` or `/dev/bus/*`.
> If you do not see a virtual device file `/dev/bus/*` you may need to use the `--privileged` flag. See [#366 for more info](https://github.com/AnalogJ/scrutiny/issues/366#issuecomment-1253196407)
>
> If you're unsure, run `smartctl --scan` on your host, and pass all listed devices to the container.
```yaml
# /opt/scrutiny/config/collector.yaml
devices:
# Dell PERC/Broadcom Megaraid example: https://github.com/AnalogJ/scrutiny/issues/30
- device: /dev/bus/0
type:
- megaraid,14
- megaraid,15
- megaraid,18
- megaraid,19
- megaraid,20
- megaraid,21
- device: /dev/twa0
type:
- 3ware,0
- 3ware,1
- 3ware,2
- 3ware,3
- 3ware,4
- 3ware,5
  # Adaptec RAID: https://github.com/AnalogJ/scrutiny/issues/189
- device: /dev/sdb
type:
- aacraid,0,0,0
- aacraid,0,0,1
# HPE Smart Array example: https://github.com/AnalogJ/scrutiny/issues/213
- device: /dev/sda
type:
- 'cciss,0'
- 'cciss,1'
```
### NVMe Drives
As mentioned in the [README.md](/README.md), NVMe devices require both `--cap-add SYS_RAWIO` and `--cap-add SYS_ADMIN`
to allow smartctl permission to query your NVMe device SMART data [#26](https://github.com/AnalogJ/scrutiny/issues/26)
When attaching NVMe devices using `--device=/dev/nvme..`, make sure to provide the device controller (`/dev/nvme0`)
instead of the block device (`/dev/nvme0n1`). See [#209](https://github.com/AnalogJ/scrutiny/issues/209).
> The character device /dev/nvme0 is the NVME device controller, and block devices like /dev/nvme0n1 are the NVME storage namespaces: the devices you use for actual storage, which will behave essentially as disks.
>
> In enterprise-grade hardware, there might be support for several namespaces, thin provisioning within namespaces and other features. For now, you could think namespaces as sort of meta-partitions with extra features for enterprise use.
### ATA
### USB Devices
The following information is extracted from [#266](https://github.com/AnalogJ/scrutiny/issues/266)
External HDDs support two modes of operation: `usb-storage` (old, slower, stable) and `uas` (new, faster, sometimes unstable).
On some external HDDs, `uas` mode does not properly pass through SMART information, or even causes hardware issues, so
it has been disabled by the kernel for those devices. No amount of `smartctl` parameters will fix this, as it is being rejected by the
kernel. This is especially true of Seagate HDDs. One solution is to force these devices into `usb-storage` mode, which
will incur some performance penalty, but may work well enough for you. More info:
- https://smartmontools.org/wiki/Supported_USB-Devices
- https://smartmontools.org/wiki/SAT-with-UAS-Linux
- https://forums.raspberrypi.com/viewtopic.php?t=245931
### Exit Codes
If you see an error message similar to `smartctl returned an error code (2) while processing /dev/sda`, this means that
`smartctl` (not Scrutiny) exited with an error code. Scrutiny will attempt to print a helpful error message to help you
debug, but you can look at the table (and associated links) below to debug `smartctl`.
> smartctl Return Values
> The return values of smartctl are defined by a bitmask. If all is well with the disk, the return value (exit status) of
> smartctl is 0 (all bits turned off). If a problem occurs, or an error, potential error, or fault is detected, then
> a non-zero status is returned. In this case, the eight different bits in the return value have the following meanings
> for ATA disks; some of these values may also be returned for SCSI disks.
>
> source: http://www.linuxguide.it/command_line/linux-manpage/do.php?file=smartctl#sect7
| Exit Code (Isolated) | Binary | Problem Message |
| --- | --- | --- |
| 1 | Bit 0 | Command line did not parse. |
| 2 | Bit 1 | Device open failed, or device did not return an IDENTIFY DEVICE structure. |
| 4 | Bit 2 | Some SMART command to the disk failed, or there was a checksum error in a SMART data structure (see `-b` option above). |
| 8 | Bit 3 | SMART status check returned "DISK FAILING". |
| 16 | Bit 4 | We found prefail Attributes <= threshold. |
| 32 | Bit 5 | SMART status check returned "DISK OK" but we found that some (usage or prefail) Attributes have been <= threshold at some time in the past. |
| 64 | Bit 6 | The device error log contains records of errors. |
| 128 | Bit 7 | The device self-test log contains records of errors. |
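Since the return value is a bitmask, several bits can be set at once (e.g. `192` means both error logs contain records). As a quick aid, this bash sketch (not part of Scrutiny) decodes an exit status into the messages from the table above:

```shell
# Decode a smartctl exit status (a bitmask) into the meanings from the table above.
decode_smartctl_exit() {
  local code=$1
  local -a msgs=(
    "Command line did not parse."
    "Device open failed, or device did not return an IDENTIFY DEVICE structure."
    "Some SMART command to the disk failed, or checksum error in a SMART data structure."
    "SMART status check returned DISK FAILING."
    "We found prefail Attributes <= threshold."
    "Some Attributes have been <= threshold at some time in the past."
    "The device error log contains records of errors."
    "The device self-test log contains records of errors."
  )
  local bit
  for bit in 0 1 2 3 4 5 6 7; do
    # test each bit of the exit status and print its meaning if set
    if (( (code >> bit) & 1 )); then
      echo "bit ${bit}: ${msgs[bit]}"
    fi
  done
}

decode_smartctl_exit 2   # prints the "Device open failed" message (bit 1)
```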
#### Standby/Sleeping Disks
Disks in Standby/Sleep can also cause `smartctl` to exit abnormally, usually with `exit code: 2`.
- https://github.com/AnalogJ/scrutiny/issues/221
- https://github.com/AnalogJ/scrutiny/issues/157
### Volume Mount All Devices (`/dev`) - Privileged
> WARNING: This is an insecure/dangerous workaround. Running Scrutiny (or any Docker image) with `--privileged` is equivalent to running it with root access.
If you have exhausted all other mechanisms to get your disks working with `smartctl` running within a container, you can try running the docker image with the following additional flags:
- `--privileged` (instead of `--cap-add`) - this gives the docker container full access to your system. Scrutiny does not require this permission, however it can be helpful for `smartctl`
- `-v /dev:/dev:ro` (instead of `--device`) - this mounts the `/dev` folder (containing all your device files) into the container, allowing `smartctl` to see your disks, exactly as if it were running on your host directly.
With this workaround your `docker run` command would look similar to the following:
```bash
docker run -it --rm -p 8080:8080 -p 8086:8086 \
-v `pwd`/scrutiny:/opt/scrutiny/config \
-v `pwd`/influxdb2:/opt/scrutiny/influxdb \
-v /run/udev:/run/udev:ro \
--privileged \
-v /dev:/dev \
--name scrutiny \
ghcr.io/analogj/scrutiny:master-omnibus
```
## Scrutiny detects Failure but SMART Passed?
There are two different mechanisms that Scrutiny uses to detect failures.
The first is simple SMART failures. If SMART thinks an attribute is in a failed state, Scrutiny will display it as failed as well.
The second is using BackBlaze failure data: [https://backblaze.com/blog-smart-stats-2014-8.html](https://backblaze.com/blog-smart-stats-2014-8.html)
If Scrutiny detects that an attribute corresponds with a high rate of failure using BackBlaze's data, it will also mark that attribute (and disk) as failed (even though SMART may think the device is still healthy).
This can cause some confusion when comparing Scrutiny's dashboard against other SMART analysis tools.
If you hover over the "failed" label beside an attribute, Scrutiny will tell you if the failure was due to SMART or Scrutiny/BackBlaze data.
### Device failed but Smart & Scrutiny passed
Device SMART results are the source of truth for Scrutiny, however we don't just take into account the current SMART results, but also historical analysis of a disk.
This means that if a device is marked as failed at any point in its history, it will continue to be stored in the database as failed until the device is removed (or status is reset -- see below).
In some cases, this historical failure may have been due to attribute analysis/thresholds that have since been relaxed:
- NVME - Numb Error Log Entries (v0.4.7)
- ATA - Power Cycle Count (v0.4.7)
- ATA - Read Error Rate (v0.4.13)
- ATA - Seek Error Rate (v0.4.13)
If you'd like to reset the status of a disk (to healthy) and allow the next run of the collector to determine the actual status, you can run the following command:
```bash
# connect to scrutiny docker container
docker exec -it scrutiny bash
# install sqlite CLI tools (inside container)
apt update && apt install -y sqlite3
# connect to the scrutiny database
sqlite3 /opt/scrutiny/config/scrutiny.db
# reset/update the devices table, unset the failure status.
UPDATE devices SET device_status = null;
# exit sqlite CLI
.exit
```
### Seagate Drives Failing
As thoroughly discussed in [#255](https://github.com/AnalogJ/scrutiny/issues/255) and [#522](https://github.com/AnalogJ/scrutiny/issues/522), Seagate (Ironwolf & others) drives are almost always marked as failed by Scrutiny.
#### Seek Error Rate & Read Error Rate (#255)
> The `Seek Error Rate` & `Read Error Rate` attribute raw values are typically very high, and the
> normalised values (Current / Worst / Threshold) are usually quite low. Despite this, the numbers in most cases are perfectly OK
>
> The anxiety arises because we intuitively expect that the normalised values should reflect a "health" score, with
> 100 being the ideal value. Similarly, we would expect that the raw values should reflect an error count, in
> which case a value of 0 would be most desirable. However, Seagate calculates and applies these attribute values
> in a counterintuitive way.
>
> http://www.users.on.net/~fzabkar/HDD/Seagate_SER_RRER_HEC.html
Some analysis has been done which shows that Seagate drives break the common SMART conventions, which also causes Scrutiny's
comparison against BackBlaze data to detect these drives as failed.
**So what's the Solution?**
After taking a look at the BackBlaze data for the relevant Attributes (`Seek Error Rate` & `Read Error Rate`), I've decided
to disable Scrutiny analysis for them. Both are non-critical, and have low-correlation with failure.
> Please note: SMART failures for these attributes will still cause the drive to be marked as failed. Only BackBlaze analysis has been disabled
If this is affecting your drives, you'll need to do the following:
1. Upgrade to v0.4.13+
2. Reset your drive status using the SQLite script
in [#device-failed-but-smart--scrutiny-passed](https://github.com/AnalogJ/scrutiny/blob/master/docs/TROUBLESHOOTING_DEVICE_COLLECTOR.md#device-failed-but-smart--scrutiny-passed)
3. Wait for (or manually start) the collector.
If you'd like to learn more about how the Seagate Ironwolf SMART attributes work under the hood, and how they differ from other drives, please read the following:
- http://www.users.on.net/~fzabkar/HDD/Seagate_SER_RRER_HEC.html
- https://www.truenas.com/community/threads/seagate-ironwolf-smart-test-raw_read_error_rate-seek_error_rate.68634/
#### Seagate Raw Values are incorrect (#522)
Basically Seagate drives are known to use a custom data format for a number of their SMART attributes. This causes issues with Scrutiny's threshold analysis.
- The workaround is to customize the smartctl command that Scrutiny uses for your drive by [creating a collector.yaml file](https://github.com/AnalogJ/scrutiny/issues/522#issuecomment-1766727988) and "fixing" each attribute
- The **real "fix"** is to make sure your Seagate drive is correctly supported by smartmontools. See this [PR](https://github.com/smartmontools/smartmontools/pull/247)
Sorry for the bad news, but this is a known issue and there's just not much we can do on the Scrutiny side.
## Hub & Spoke model, with multiple Hosts.
![multiple-host-ids image](multiple-host-ids.png)
When deploying Scrutiny in a hub & spoke model, it can be difficult to determine exactly which node a set of devices are
associated with.
Thankfully the collector has a special `--host-id` flag (or `COLLECTOR_HOST_ID` env variable) that can be used to
associate devices with a friendly host name.
The host-id is passed from the collector to the web-api when SMART device data is uploaded. There are three ways you can set
the host-id:
- using the collector config
file: [master/example.collector.yaml#L19-L22](https://github.com/AnalogJ/scrutiny/blob/master/example.collector.yaml?rgh-link-date=2022-05-25T15%3A08%3A56Z#L19-L22)
- using the `--host-id` collector CLI
argument: [master/collector/cmd/collector-metrics/collector-metrics.go#L180-L185](https://github.com/AnalogJ/scrutiny/blob/master/collector/cmd/collector-metrics/collector-metrics.go?rgh-link-date=2022-05-25T15%3A08%3A56Z#L180-L185)
- using the `COLLECTOR_HOST_ID` environmental variable.
See the [docs/INSTALL_HUB_SPOKE.md](/docs/INSTALL_HUB_SPOKE.md) guide for more information.
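For reference, the config-file variant is just a top-level `host.id` entry in `collector.yaml`; the label below is a made-up example:

```yaml
# collector.yaml -- hypothetical host label for hub/spoke deployments
host:
  id: 'basement-nas'
```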
## Collector DEBUG mode
You can use environmental variables to enable debug logging and/or log files for the collector:
```bash
DEBUG=true
COLLECTOR_LOG_FILE=/tmp/collector.log
```
Or if you're not using docker, you can pass CLI arguments to the collector during startup:
```bash
scrutiny-collector-metrics run --debug --log-file /tmp/collector.log
```
## Collector trigger on startup
When the `omnibus` docker image starts up, it will automatically trigger the collector, which will populate the Scrutiny
Webui with your disks.
This is not the case when running the collector docker image in **hub/spoke** mode, as the collector and webui are
running in different containers (and potentially different host machines), so
the web container may not be ready for incoming connections. By default the container will only run the collector at the
time specified in the cron schedule.
You can force the collector to run on startup using the following env variables:
- `-e COLLECTOR_RUN_STARTUP=true` - forces the collector to run on startup (cron will be started after the collector
completes)
- `-e COLLECTOR_RUN_STARTUP_SLEEP=10` - if `COLLECTOR_RUN_STARTUP` is enabled, you can use this env variable to
configure the delay before the collector is run (default: `1` second). Used to ensure the web container has started
successfully.
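In a hub/spoke docker-compose file this might look like the following sketch. The service names, endpoint, and sleep value are assumptions for illustration, not prescriptions:

```yaml
# Hypothetical collector service fragment for hub/spoke mode.
services:
  collector:
    image: ghcr.io/analogj/scrutiny:master-collector
    environment:
      COLLECTOR_API_ENDPOINT: 'http://web:8080'
      COLLECTOR_RUN_STARTUP: 'true'
      COLLECTOR_RUN_STARTUP_SLEEP: '10'
```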
# Docker Images `latest` vs `nightly`
> TL;DR; The `latest-omnibus`, `latest-collector`, and `latest-web` tags point to the most recent release. (`latest` points to `latest-omnibus`)
> The `nightly-omnibus`, `nightly-collector`, and `nightly-web` tags point to builds that are generated every night from the latest commit on the `master` branch.
The CD scripts used to orchestrate the docker image builds can be found here:
* https://github.com/AnalogJ/scrutiny/blob/master/.github/workflows/docker-build.yaml
* https://github.com/AnalogJ/scrutiny/blob/master/.github/workflows/docker-nightly.yaml
In general Scrutiny follows a feature-branch development process, which means that the `master` branch should ideally always be free of bugs.
This is driven by the requirement that every PR be reviewed and pass all tests. Unfortunately, bugs do make it through, especially because of the
enormous number of hard drives that Scrutiny must support.
This means that while the nightly builds should have the latest features and bug fixes, there may be things that sneak through. Unless you need a particular
feature or bug fix, we recommend sticking to releases. Also note that using `latest` tags is generally considered a bad practice; pin a specific version instead.
# Running Docker `rootless`
To keep the container(s) running when Docker is installed as `rootless`, you need to issue the following commands to allow the session to stay alive even after you close your (SSH) session:
`sudo loginctl enable-linger $(whoami)`
`systemctl --user enable docker`
# InfluxDB Troubleshooting
## Why??
Scrutiny has many features, but the relevant one to this conversation is the "S.M.A.R.T metric tracking for historical
trends". Basically Scrutiny not only shows you the current SMART values, but how they've changed over weeks, months (or
even years).
To efficiently handle that data at scale (and to make my life easier as a developer) I decided to add InfluxDB as a
dependency. It's a dedicated timeseries database, as opposed to the general-purpose SQLite DB I used before. I also did
a bunch of testing and analysis before I made the change. With InfluxDB the memory footprint for Scrutiny (at idle) is
~100MB, which is still fairly reasonable.
### Data Size
It's surprisingly easy to reach extremely large database sizes if you don't use downsampling, or if you downsample incorrectly.
The growth rate is pretty unintuitive -- see https://github.com/AnalogJ/scrutiny/issues/650#issuecomment-2365174940
> Fasten stores the SMART metrics in a timeseries database (InfluxDB), and automatically downsamples the data on a schedule.
>
> The expectation was that cron would run daily, and there would be:
>
> - 7 daily data points
> - 3 weekly data points
> - 11 monthly data points
> - and infinite yearly data points.
>
> These data points would be for each SMART metric, for each device.
> eg. in one year, (7+3+11)*80ish SMART attributes = 1680 datapoints for one device
>
> If you're running cron every 15 minutes, your browser will instead be attempting to display:
>
> - 96*7 daily data points
> - 3 weekly
> - 11 monthly
>
> so (96*7 + 3 + 11)*80 = 54,880 datapoints for each device 😭
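The arithmetic in the quote above is easy to sanity-check yourself:

```shell
# Datapoints per device when cron runs every 15 minutes (96 runs/day) for a week,
# assuming ~80 SMART attributes per device, per the quote above.
daily=$(( 96 * 7 ))                 # 672 daily-bucket points
total=$(( (daily + 3 + 11) * 80 ))  # plus 3 weekly + 11 monthly points
echo "$total"                       # 54880
```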
## Installation
InfluxDB is a required dependency for Scrutiny v0.4.0+.
https://docs.influxdata.com/influxdb/v2/install/
## Persistence
To ensure that all data is correctly stored, you must also persist the InfluxDB database directory
- If you're using the Official Scrutiny Omnibus image (`ghcr.io/analogj/scrutiny:master-omnibus`), the path is `/opt/scrutiny/influxdb`
- If you're deploying in Hub/Spoke mode with the InfluxDB maintained image (`influxdb:2.8`), the path is `/var/lib/influxdb2`
If you attempt to restart Scrutiny but you forgot to persist the InfluxDB directory, you will get an error message like follows:
```
scrutiny | time="2022-05-12T22:54:12Z" level=info msg="Trying to connect to scrutiny sqlite db: /opt/scrutiny/config/scrutiny.db\n"
scrutiny | time="2022-05-12T22:54:12Z" level=info msg="Successfully connected to scrutiny sqlite db: /opt/scrutiny/config/scrutiny.db\n"
scrutiny | ts=2022-05-12T22:54:12.240791Z lvl=info msg=Unauthorized log_id=0aQcVlOW000 error="authorization not found"
scrutiny | panic: unauthorized: unauthorized access
```
Unfortunately this may mean that your database is lost, and the previous Scrutiny data is unavailable.
You should fix the docker-compose/docker run command that you're using to ensure that your database folder is persisted correctly,
then delete the `web.influxdb.token` field in your `scrutiny.yaml` file, and then restart Scrutiny.
## First Start
The web/api service will trigger an InfluxDB onboarding process automatically when it first starts. After that, it will store the newly generated influxdb api token in the Scrutiny config file.
If this credential is not correctly stored in the Scrutiny config file, Scrutiny will fail to start (with an authentication error):
```
scrutiny | time="2022-05-12T22:52:55Z" level=info msg="Successfully connected to scrutiny sqlite db: /opt/scrutiny/config/scrutiny.db\n"
scrutiny | ts=2022-05-12T22:52:55.235753Z lvl=error msg="failed to onboard user admin" log_id=0aQcRnc0000 handler=onboard error="onboarding has already been completed" took=0.038ms
scrutiny | ts=2022-05-12T22:52:55.235816Z lvl=error msg="api error encountered" log_id=0aQcRnc0000 error="onboarding has already been completed"
scrutiny | panic: conflict: onboarding has already been completed
```
You can fix this issue by authenticating to the InfluxDB admin portal (the default credentials are username: `admin`, password: `password12345`),
then retrieving the API token, and writing it to your `scrutiny.yaml` config file under the `web.influxdb.token` field:
![influx db admin token](./influxdb-admin-token.png)
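The relevant portion of `scrutiny.yaml` would then look roughly like this sketch (the token value is obviously a placeholder):

```yaml
# scrutiny.yaml -- hypothetical sketch of the token field
web:
  influxdb:
    token: 'REPLACE_WITH_TOKEN_FROM_INFLUXDB_UI'
```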
## Upgrading from v0.3.x to v0.4.x
When upgrading from v0.3.x to v0.4.x, some users have noticed problems such as:
```
2022/05/13 14:38:05 Loading configuration file: /opt/scrutiny/config/scrutiny.yaml
time="2022-05-13T14:38:05Z" level=info msg="Trying to connect to scrutiny sqlite db:"
time="2022-05-13T14:38:05Z" level=info msg="Successfully connected to scrutiny sqlite db:"
panic: a username and password is required for a setup
```
or
```
Start the scrutiny server
time="2022-06-11T10:35:04-04:00" level=info msg="Trying to connect to scrutiny sqlite db: \n"
time="2022-06-11T10:35:04-04:00" level=info msg="Successfully connected to scrutiny sqlite db: \n"
panic: failed to check influxdb setup status - parse "://:": missing protocol scheme
```
As discussed in [#248](https://github.com/AnalogJ/scrutiny/issues/248) and [#234](https://github.com/AnalogJ/scrutiny/issues/234),
this is usually related to either:
- Upgrading from the LSIO Scrutiny image to the Official Scrutiny image, without removing LSIO specific environmental
variables
- remove the `SCRUTINY_WEB=true` and `SCRUTINY_COLLECTOR=true` environmental variables. They were used by the LSIO
image, but are unnecessary and cause issues with the official Scrutiny image.
- Change your volume mappings to `/opt/scrutiny` from `/scrutiny`
- Updated versions of the [LSIO Scrutiny images are broken](https://github.com/linuxserver/docker-scrutiny/issues/22),
as they have not installed InfluxDB which is a required dependency of Scrutiny v0.4.x
- You can revert to an earlier version of the LSIO image (`lscr.io/linuxserver/scrutiny:060ac7b8-ls34`), or just
change to the official Scrutiny image (`ghcr.io/analogj/scrutiny:master-omnibus`)
Here's a couple of confirmed working docker-compose files that you may want to look at:
- https://github.com/AnalogJ/scrutiny/blob/master/docker/example.hubspoke.docker-compose.yml
- https://github.com/AnalogJ/scrutiny/blob/master/docker/example.omnibus.docker-compose.yml
## Bring your own InfluxDB
> WARNING: Most users should not follow these steps. This is ONLY for users who have an EXISTING InfluxDB installation which contains data from multiple services.
> The Scrutiny Docker omnibus image includes an empty InfluxDB instance which it can configure.
> If you're deploying manually or via Hub/Spoke, you can just follow the installation instructions, Scrutiny knows how
> to run the first-time setup automatically.
The goal here is to create an InfluxDB API key with minimal permissions for use by Scrutiny.
- Create Scrutiny buckets (`metrics`, `metrics_weekly`, `metrics_monthly`, `metrics_yearly`) with placeholder config
- Create Downsampling tasks (`tsk-weekly-aggr`, `tsk-monthly-aggr`, `tsk-yearly-aggr`) with placeholder script.
- Create API token with restricted scope
- NOTE: Placeholder bucket & task configuration will be replaced automatically by Scrutiny during startup
The placeholder buckets and tasks need to be created before the API token can be created, as the resource IDs need to
exist for the scope restriction to work.
Scopes:
- `orgs`: read - required for scrutiny to find its configured org_id
- `tasks`: scrutiny specific read/write access - Scrutiny only needs access to the downsampling tasks you created above
- `buckets`: scrutiny specific read/write access - Scrutiny only needs access to the buckets you created above
### Setup Environmental Variables
```bash
# replace the following values with correct values for your InfluxDB installation
export INFLUXDB_ADMIN_TOKEN=pCqRq7xxxxxx-FZgNLfstIs0w==
export INFLUXDB_ORG_ID=b2495xxxxx
export INFLUXDB_HOSTNAME=http://localhost:8086
# if you want to change the bucket name prefix below, you'll also need to update the setting in the scrutiny.yaml config file.
export INFLUXDB_SCRUTINY_BUCKET_BASENAME=metrics
```
### Create placeholder buckets
<details>
<summary>Click to expand!</summary>
```bash
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/buckets \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"name": "${INFLUXDB_SCRUTINY_BUCKET_BASENAME}",
"orgID": "${INFLUXDB_ORG_ID}",
"retentionRules": []
}
EOF
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/buckets \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"name": "${INFLUXDB_SCRUTINY_BUCKET_BASENAME}_weekly",
"orgID": "${INFLUXDB_ORG_ID}",
"retentionRules": []
}
EOF
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/buckets \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"name": "${INFLUXDB_SCRUTINY_BUCKET_BASENAME}_monthly",
"orgID": "${INFLUXDB_ORG_ID}",
"retentionRules": []
}
EOF
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/buckets \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"name": "${INFLUXDB_SCRUTINY_BUCKET_BASENAME}_yearly",
"orgID": "${INFLUXDB_ORG_ID}",
"retentionRules": []
}
EOF
```
</details>
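The bucket and task IDs needed for the token scopes below come back in the JSON responses of `GET /api/v2/buckets` and `GET /api/v2/tasks`. A minimal sketch of pulling an ID out of that JSON with `python3` (the sample response below is a hypothetical stand-in; real IDs will differ):

```bash
# trimmed sample of the response shape from GET ${INFLUXDB_HOSTNAME}/api/v2/buckets
BUCKETS_JSON='{"buckets":[{"id":"1e0709aabbcc","name":"metrics"},{"id":"1af03deddeef","name":"metrics_weekly"}]}'

# bucket_id <name> - print the id of the named bucket found in $BUCKETS_JSON
bucket_id() {
  python3 -c 'import json, sys
d = json.loads(sys.argv[1])
print(next(b["id"] for b in d["buckets"] if b["name"] == sys.argv[2]))' "$BUCKETS_JSON" "$1"
}

bucket_id "metrics"          # -> 1e0709aabbcc
bucket_id "metrics_weekly"   # -> 1af03deddeef
```

Against a live server you would populate the variable with `BUCKETS_JSON=$(curl -sS -H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" ${INFLUXDB_HOSTNAME}/api/v2/buckets)`; the same pattern works for `/api/v2/tasks` (a `tasks` array instead of `buckets`).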
### Create placeholder tasks
<details>
<summary>Click to expand!</summary>
```bash
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/tasks \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"orgID": "${INFLUXDB_ORG_ID}",
"flux": "option task = {name: \"tsk-weekly-aggr\", every: 1y} \nyield now()"
}
EOF
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/tasks \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"orgID": "${INFLUXDB_ORG_ID}",
"flux": "option task = {name: \"tsk-monthly-aggr\", every: 1y} \nyield now()"
}
EOF
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/tasks \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"orgID": "${INFLUXDB_ORG_ID}",
"flux": "option task = {name: \"tsk-yearly-aggr\", every: 1y} \nyield now()"
}
EOF
```
</details>
### Create InfluxDB API Token
<details>
<summary>Click to expand!</summary>
```bash
# replace these values with placeholder bucket and task ids from your InfluxDB installation.
export INFLUXDB_SCRUTINY_BASE_BUCKET_ID=1e0709xxxx
export INFLUXDB_SCRUTINY_WEEKLY_BUCKET_ID=1af03dexxxxx
export INFLUXDB_SCRUTINY_MONTHLY_BUCKET_ID=b3c59c7xxxxx
export INFLUXDB_SCRUTINY_YEARLY_BUCKET_ID=f381d8cxxxxx
export INFLUXDB_SCRUTINY_WEEKLY_TASK_ID=09a64ecxxxxx
export INFLUXDB_SCRUTINY_MONTHLY_TASK_ID=09a64xxxxx
export INFLUXDB_SCRUTINY_YEARLY_TASK_ID=09a64ecxxxxx
curl -sS -X POST ${INFLUXDB_HOSTNAME}/api/v2/authorizations \
-H "Content-Type: application/json" \
-H "Authorization: Token ${INFLUXDB_ADMIN_TOKEN}" \
--data-binary @- << EOF
{
"description": "scrutiny - restricted scope token",
"orgID": "${INFLUXDB_ORG_ID}",
"permissions": [
{
"action": "read",
"resource": {
"type": "orgs"
}
},
{
"action": "read",
"resource": {
"type": "tasks"
}
},
{
"action": "write",
"resource": {
"type": "tasks",
"id": "${INFLUXDB_SCRUTINY_WEEKLY_TASK_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "tasks",
"id": "${INFLUXDB_SCRUTINY_MONTHLY_TASK_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "tasks",
"id": "${INFLUXDB_SCRUTINY_YEARLY_TASK_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "read",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_BASE_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_BASE_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "read",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_WEEKLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_WEEKLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "read",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_MONTHLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_MONTHLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "read",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_YEARLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
},
{
"action": "write",
"resource": {
"type": "buckets",
"id": "${INFLUXDB_SCRUTINY_YEARLY_BUCKET_ID}",
"orgID": "${INFLUXDB_ORG_ID}"
}
}
]
}
EOF
```
</details>
### Save InfluxDB API Token
After running the curl command above, you'll see a JSON response like the following:
```json
{
"token": "ksVU2t5SkQwYkvIxxxxxxxYt2xUt0uRKSbSF1Po0UQ==",
"status": "active",
"description": "scrutiny - restricted scope token",
"orgID": "b2495586xxxx",
"org": "my-org",
"user": "admin",
"permissions": [
{
"action": "read",
"resource": {
"type": "orgs"
}
},
{
"action": "read",
"resource": {
"type": "tasks"
}
},
{
"action": "write",
"resource": {
"type": "tasks",
"id": "09a64exxxxx",
"orgID": "b24955860xxxxx",
"org": "my-org"
}
},
...
]
}
```
You must copy the `token` field from the JSON response and save it in your `scrutiny.yaml` config file. Once that's
done, you can start the Scrutiny server.
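For reference, the copied token lives under the `web.influxdb` section of `scrutiny.yaml`; a sketch with placeholder values (host, org, and bucket must match your installation):

```yaml
web:
  influxdb:
    host: 'localhost'
    port: 8086
    org: 'my-org'
    bucket: 'metrics'
    token: 'ksVU2t5SkQwYkvIxxxxxxxYt2xUt0uRKSbSF1Po0UQ=='
```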
## Customize InfluxDB Admin Username & Password
The full set of InfluxDB configuration options is available
in [the code](https://github.com/AnalogJ/scrutiny/blob/master/webapp/backend/pkg/config/config.go?rgh-link-date=2023-01-19T16%3A23%3A40Z#L49-L51).
During first startup, Scrutiny will connect to the unprotected InfluxDB server, run the setup process (via API) using a
username and password of `admin`:`password12345`, and then create an API token named `scrutiny-default-admin-token`.
After that's complete, it will use the API token for all subsequent communication with InfluxDB.
You can configure the values for the admin username, password and token using the config file, or env variables:
#### Config File Example
```yaml
web:
influxdb:
token: 'my-custom-token'
init_username: 'my-custom-username'
init_password: 'my-custom-password'
```
#### Environmental Variables Example
`SCRUTINY_WEB_INFLUXDB_TOKEN` , `SCRUTINY_WEB_INFLUXDB_INIT_USERNAME` and `SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD`
It's safe to change the InfluxDB admin username/password after setup has completed; only the API token is used for
subsequent communication with InfluxDB.
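In a Docker deployment, the same overrides can be passed as container environment variables; a minimal `docker-compose.yml` fragment (values are placeholders):

```yaml
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus
    environment:
      - SCRUTINY_WEB_INFLUXDB_TOKEN=my-custom-token
      - SCRUTINY_WEB_INFLUXDB_INIT_USERNAME=my-custom-username
      - SCRUTINY_WEB_INFLUXDB_INIT_PASSWORD=my-custom-password
```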
@@ -0,0 +1,46 @@
# Notifications
As documented in [example.scrutiny.yaml](https://github.com/AnalogJ/scrutiny/blob/master/example.scrutiny.yaml#L59-L75)
there are multiple ways to configure notifications for Scrutiny.
Under the hood, Scrutiny uses a library called [Shoutrrr](https://github.com/nicholas-fedor/shoutrrr) to send notifications; consult its documentation if you run into
any issues: https://shoutrrr.nickfedor.com/services/overview/
# Script Notifications
While the Shoutrrr library supports many popular notification providers, Scrutiny also supports a "script"-based
notification system, allowing you to execute a custom script whenever a notification needs to be sent.
Data is provided to this script using the following environmental variables:
```
SCRUTINY_SUBJECT - eg. "Scrutiny SMART error (%s) detected on device: %s"
SCRUTINY_DATE
SCRUTINY_FAILURE_TYPE - EmailTest, SmartFail, ScrutinyFail
SCRUTINY_DEVICE_NAME - eg. /dev/sda
SCRUTINY_DEVICE_TYPE - ATA/SCSI/NVMe
SCRUTINY_DEVICE_SERIAL - eg. WDDJ324KSO
SCRUTINY_MESSAGE - eg. "Scrutiny SMART error notification for device: %s\nFailure Type: %s\nDevice Name: %s\nDevice Serial: %s\nDevice Type: %s\nDate: %s"
SCRUTINY_HOST_ID - (optional) eg. "my-custom-host-id"
```
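As a sketch, a script handler might simply append those fields to a log file (the script path and log location are arbitrary examples):

```bash
#!/bin/sh
# notify.sh - example Scrutiny script notification handler.
# Scrutiny invokes the script with the SCRUTINY_* variables set in its environment.
LOG_FILE="${LOG_FILE:-/tmp/scrutiny-notify.log}"

{
  echo "date:    ${SCRUTINY_DATE}"
  echo "type:    ${SCRUTINY_FAILURE_TYPE}"
  echo "device:  ${SCRUTINY_DEVICE_NAME} (${SCRUTINY_DEVICE_TYPE}, serial: ${SCRUTINY_DEVICE_SERIAL})"
  echo "subject: ${SCRUTINY_SUBJECT}"
  echo "message: ${SCRUTINY_MESSAGE}"
} >> "$LOG_FILE"
```

The script is wired up with a `script:///path/to/notify.sh` entry in the `notify.urls` list; make sure it is executable (`chmod +x`).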
# Special Characters
`Shoutrrr` supports special characters in the username and password fields, however you'll need to url-encode the
username and the password separately.
- if your username is `myname@example.com`
- and your password is `124@34$1`
Then your `shoutrrr` URL will look something like:
- `smtp://myname%40example%2Ecom:124%4034%241@ms.my.domain.com:587`
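A small helper for doing that encoding (assumes `python3` is available; note that `.` is already URL-safe, so encoding it as `%2E` like the example above is optional):

```bash
# urlenc <value> - percent-encode everything except unreserved characters
urlenc() {
  python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

USER_ENC=$(urlenc 'myname@example.com')   # myname%40example.com
PASS_ENC=$(urlenc '124@34$1')             # 124%4034%241
echo "smtp://${USER_ENC}:${PASS_ENC}@ms.my.domain.com:587"
```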
# Testing Notifications
You can test that your notifications are configured correctly by posting an empty payload to the notifications health
check API.
```bash
curl -X POST http://localhost:8080/api/health/notify
```
@@ -0,0 +1,139 @@
# Reverse Proxy Support
Scrutiny is designed so that it can be used with a reverse proxy, leveraging `domain`, `port` or `path` based matching to correctly route to the Scrutiny service.
For simple `domain` and/or `port` based routing, this is easy.
If your domain:port pair is similar to `http://scrutiny.example.com` or `http://localhost:54321`, just update your reverse proxy configuration
to route traffic to the Scrutiny backend, which is listening on `0.0.0.0:8080` by default.
```yaml
# default config
web:
listen:
port: 8080
host: 0.0.0.0
```
However, if you're using `path`-based routing to differentiate your reverse-proxy-protected services, things become more complicated.
If you'd like to access Scrutiny using a path like: `http://example.com/scrutiny/`, then we need a way to configure Scrutiny so that it
understands `http://example.com/scrutiny/api/health` actually means `http://localhost:8080/api/health`.
Thankfully this can be done by changing **two** settings (both are required).
1. The webserver has a `web.listen.basepath` key
2. The collectors have a `api.endpoint` key.
## Webserver Configuration
When setting the `web.listen.basepath` key in the web config file, make sure the `basepath` key is prefixed with `/`.
```yaml
# customized webserver config
web:
listen:
port: 8080
host: 0.0.0.0
# if you're using a reverse proxy like apache/nginx, you can override this value to serve scrutiny on a subpath.
# eg. http://example.com/custombasepath/* vs http://example.com:8080
basepath: '/custombasepath'
```
## Collector Configuration
Here's how you can update the collector `api.endpoint` key:
```yaml
# customized collector config
api:
endpoint: 'http://localhost:8080/custombasepath'
```
# Environmental Variables
You may also configure these values using the following environmental variables (both are required).
- `COLLECTOR_API_ENDPOINT=http://localhost:8080/custombasepath`
- `SCRUTINY_WEB_LISTEN_BASEPATH=/custombasepath`
# Real Examples
## Caddy
1. Create a Caddyfile
```yaml
# Caddyfile
:9090
# The `scrutiny` text in this file must match the service name in the docker-compose file below.
# The `/custom/` text is the custom base path scrutiny will be available on.
reverse_proxy /custom/* scrutiny:8080
```
2. Create a `docker-compose.yml` file
```yaml
# docker-compose.yml
version: '3.5'
services:
scrutiny:
container_name: scrutiny
image: ghcr.io/analogj/scrutiny:master-omnibus
cap_add:
- SYS_RAWIO
ports:
- "8086:8086" # influxDB admin
volumes:
- /run/udev:/run/udev:ro
- ./config:/opt/scrutiny/config
- ./influxdb:/opt/scrutiny/influxdb
devices:
- "/dev/sda"
- "/dev/sdb"
environment:
- SCRUTINY_WEB_LISTEN_BASEPATH=/custom
- COLLECTOR_API_ENDPOINT=http://localhost:8080/custom
caddy:
image: caddy
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
ports:
- "9090:9090"
```
3. Run `docker-compose up`
4. Visit [http://localhost:9090/custom/web](http://localhost:9090/custom/web) to access the Scrutiny container via the Caddy reverse proxy
## Traefik
Assuming that you have Traefik up and running with [AutoDiscovery Using Traefik For Docker](https://doc.traefik.io/traefik/providers/docker/),
here is an example of a `docker-compose.yml` file, with labels to enable the Traefik reverse proxy and basic auth
```yaml
version: '3.5'
services:
scrutiny:
container_name: scrutiny
image: ghcr.io/analogj/scrutiny:master-omnibus
cap_add:
- SYS_RAWIO
- SYS_ADMIN
volumes:
- /run/udev:/run/udev:ro
- ./config:/opt/scrutiny/config
- ./influxdb:/opt/scrutiny/influxdb
labels:
- traefik.enable=true
- traefik.http.routers.scrutiny.rule=Host(`example.com`)
- traefik.http.services.scrutiny.loadbalancer.server.port=8080
# 2 labels below are optional, in case you want basic auth in Traefik:
- traefik.http.routers.scrutiny.middlewares=auth
- "traefik.http.middlewares.auth.basicauth.users=user:$$2y$$05$$G11Wm/dlWpXHENK..m8se.zxvaE8USJBp1Ws56sSCrOcwWDjsYHni"
# Note: when used in docker-compose.yml all dollar signs in the hash need to be doubled for escaping.
# To create user:password pair, it's possible to use this command:
# echo $(htpasswd -nB user) | sed -e s/\\$/\\$\\$/g
devices:
- "/dev/sda"
- "/dev/sdb"
- "/dev/nvme0"
```
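The dollar-doubling mentioned in the comments above can be isolated into a small helper; generating the hash itself requires `htpasswd` (from `apache2-utils`), which is assumed to be installed:

```bash
# escape_dollars <string> - double every '$' so docker-compose does not
# treat the bcrypt hash as variable interpolation
escape_dollars() {
  printf '%s\n' "$1" | sed -e 's/\$/$$/g'
}

# generate a user:password pair and escape it for docker-compose (requires htpasswd):
#   escape_dollars "$(htpasswd -nbB user mypassword)"
escape_dollars 'user:$2y$05$G11Wm'   # -> user:$$2y$$05$$G11Wm
```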
@@ -0,0 +1,18 @@
# Operating systems without udev
Some operating systems, such as Alpine Linux, do not ship with `udev` out of the box. In these instances you will not be able to bind-mount `/run/udev` into the container for sharing device metadata. Some operating systems offer `udev` as a package that can be installed separately, or an alternative (such as `eudev` in the case of Alpine Linux) that provides the same functionality.
To install `eudev` in Alpine Linux (run as root):
```bash
apk add eudev
setup-udev
```
Once your `udev` implementation is installed, populate `/run/udev` with the following command:
```bash
udevadm trigger
```
On Alpine Linux, this also has the benefit of creating symlinks to device serial numbers in `/dev/disk/by-id`.
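A quick sanity check before starting the container (a sketch; the helper name is ours):

```bash
# check_udev <dir> - report whether the udev runtime metadata directory exists
check_udev() {
  if [ -d "$1" ]; then
    echo "present - safe to bind into the container"
  else
    echo "missing - install udev/eudev and run 'udevadm trigger' first"
  fi
}

check_udev /run/udev
```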
@@ -0,0 +1,89 @@
// SQLite Table(s)
Table Device {
Archived bool
//GORM attributes, see: http://gorm.io/docs/conventions.html
CreatedAt time
UpdatedAt time
DeletedAt time
WWN string
DeviceName string
DeviceUUID string
DeviceSerialID string
DeviceLabel string
Manufacturer string
ModelName string
InterfaceType string
InterfaceSpeed string
SerialNumber string
Firmware string
RotationSpeed int
Capacity int64
FormFactor string
SmartSupport bool
DeviceProtocol string // protocol determines which smart attribute types are available (ATA, NVMe, SCSI)
DeviceType string // device type is used for querying with -d/t flag, should only be used by collector.
// User provided metadata
Label string
HostId string
// Data set by Scrutiny
DeviceStatus enum
}
Table Setting {
//GORM attributes, see: http://gorm.io/docs/conventions.html
SettingKeyName string
SettingKeyDescription string
SettingDataType string
SettingValueNumeric int64
SettingValueString string
}
// InfluxDB Tables
Table SmartTemperature {
Date time
DeviceWWN string //(tag)
Temp int64
}
Table Smart {
Date time
DeviceWWN string //(tag)
DeviceProtocol string
//Metrics (fields)
Temp int64
PowerOnHours int64
PowerCycleCount int64
//Smart Status
Status enum
//SMART Attributes (fields)
Attr_ID_AttributeId int
Attr_ID_Value int64
Attr_ID_Threshold int64
Attr_ID_Worst int64
Attr_ID_RawValue int64
Attr_ID_RawString string
Attr_ID_WhenFailed string
//Generated data
Attr_ID_TransformedValue int64
Attr_ID_Status enum
Attr_ID_StatusReason string
Attr_ID_FailureRate float64
}
Ref: Device.WWN < Smart.DeviceWWN
Ref: Device.WWN < SmartTemperature.DeviceWWN
@@ -1,6 +1,6 @@
# Commented Scrutiny Configuration File
#
# The default location for this file is /opt/scrutiny/config/collector.yaml.
# In some cases to improve clarity default values are specified,
# uncommented. Other example values are commented out.
#
@@ -31,6 +31,10 @@ devices:
# - device: /dev/sda
# type: 'sat'
#
# # example for using `-d sat,auto`, notice the square brackets (workaround for #418)
# - device: /dev/sda
# type: ['sat,auto']
#
# # example to show how to ignore a specific disk/device.
# - device: /dev/sda
# ignore: true
@@ -53,8 +57,32 @@ devices:
# - 3ware,3
# - 3ware,4
# - 3ware,5
#
# # example to show how to override the smartctl command args (per device), see below for how to override these globally.
# - device: /dev/sda
# commands:
# metrics_info_args: '--info --json -T permissive' # used to determine device unique ID & register device with Scrutiny
# metrics_smart_args: '--xall --json -T permissive' # used to retrieve smart data for each device.
#log:
# file: '' #absolute or relative paths allowed, eg. web.log
# level: INFO
#
#api:
# endpoint: 'http://localhost:8080'
# endpoint: 'http://localhost:8080/custombasepath'
# if you need to use a custom base path (for a reverse proxy), you can add a suffix to the endpoint.
# See docs/TROUBLESHOOTING_REVERSE_PROXY.md for more info,
# example to show how to override the smartctl command args globally
#commands:
# metrics_smartctl_bin: 'smartctl' # change to provide custom `smartctl` binary path, eg. `/usr/sbin/smartctl`
# metrics_scan_args: '--scan --json' # used to detect devices
# metrics_info_args: '--info --json' # used to determine device unique ID & register device with Scrutiny
# metrics_smart_args: '--xall --json' # used to retrieve smart data for each device.
# metrics_smartctl_wait: 0 # time to wait in seconds between each disk's check
########################################################################################################################
# FEATURES COMING SOON
@@ -63,3 +91,10 @@ devices:
#
########################################################################################################################
#collect:
# long:
# enable: false
# command: ''
# short:
# enable: false
# command: ''
@@ -1,6 +1,6 @@
# Commented Scrutiny Configuration File
#
# The default location for this file is /opt/scrutiny/config/scrutiny.yaml.
# In some cases to improve clarity default values are specified,
# uncommented. Other example values are commented out.
#
@@ -20,13 +20,38 @@ web:
listen:
port: 8080
host: 0.0.0.0
# if you're using a reverse proxy like apache/nginx, you can override this value to serve scrutiny on a subpath.
# eg. http://example.com/scrutiny/* vs http://example.com:8080
# see docs/TROUBLESHOOTING_REVERSE_PROXY.md
# basepath: `/scrutiny`
# leave empty unless behind a path prefixed proxy
basepath: ''
database:
# can also set absolute path here
location: /opt/scrutiny/config/scrutiny.db
src:
# the location on the filesystem where scrutiny javascript + css is located
frontend:
path: /opt/scrutiny/web
# if you're running influxdb on a different host (or using a cloud-provider) you'll need to update the host & port below.
# token, org, bucket are unnecessary for a new InfluxDB installation, as Scrutiny will automatically run the InfluxDB setup,
# and store the information in the config file. If you're re-using an existing influxdb installation, you'll need to provide
# the `token`
influxdb:
# scheme: 'http'
host: 0.0.0.0
port: 8086
# token: 'my-token'
# org: 'my-org'
# bucket: 'bucket'
retention_policy: true
# if you wish to disable TLS certificate verification,
# when using self-signed certificates for example,
# then uncomment the lines below and set `insecure_skip_verify: true`
# tls:
# insecure_skip_verify: false
log:
file: '' #absolute or relative paths allowed, eg. web.log
@@ -34,61 +59,34 @@ log:
# Notification "urls" look like the following. For more information about service specific configuration see
# Shoutrrr's documentation: https://shoutrrr.nickfedor.com/services/overview/
#
# note, usernames and passwords containing special characters will need to be urlencoded.
# if your username is: "myname@example.com" and your password is "124@34$1"
# your shoutrrr url will look like: "smtp://myname%40example%2Ecom:124%4034%241@ms.my.domain.com:587"
#notify:
# urls:
# - "discord://token@channel"
# - "telegram://token@telegram?channels=channel-1[,channel-2,...]"
# - "pushover://shoutrrr:apiToken@userKey/?priority=1&devices=device1[,device2, ...]"
# - "slack://[botname@]token-a/token-b/token-c"
# - "smtp://username:password@host:port/?fromAddress=fromAddress&toAddresses=recipient1[,recipient2,...]"
# - "teams://token-a/token-b/token-c"
# - "gotify://gotify-host/token"
# - "pushbullet://api-token[/device/#channel/email]"
# - "ifttt://key/?events=event1[,event2,...]&value1=value1&value2=value2&value3=value3"
# - "mattermost://[username@]mattermost-host/token[/channel]"
# - "hangouts://chat.googleapis.com/v1/spaces/FOO/messages?key=bar&token=baz"
# - "zulip://bot-mail:bot-key@zulip-domain/?stream=name-or-id&topic=name"
# - "join://shoutrrr:api-key@join/?devices=device1[,device2, ...][&icon=icon][&title=title]"
# - "script:///file/path/on/disk"
# - "https://www.example.com/path"
########################################################################################################################
# FEATURES COMING SOON
#
# The following commented out sections are a preview of additional configuration options that will be available soon.
#
########################################################################################################################
#disks:
# include:
# # - /dev/sda
# exclude:
# # - /dev/sdb
#limits:
# ata:
# critical:
# error: 10
# standard:
# error: 20
# warn: 10
# scsi:
# critical: true
# standard: true
# nvme:
# critical: true
# standard: true
#collect:
# metric:
# enable: true
# command: '-a -o on -S on'
# long:
# enable: false
# command: ''
# short:
# enable: false
# command: ''
# - discord://token@id[?thread_id=threadid]
# - googlechat://chat.googleapis.com/v1/spaces/FOO/messages?key=bar&token=baz
# - hangouts://chat.googleapis.com/v1/spaces/FOO/messages?key=bar&token=baz
# - lark://host/token?secret=secret&title=title&link=url
# - matrix://username:password@host:port/[?rooms=!roomID1[,roomAlias2]]
# - mattermost://[username@]mattermost-host/token[/channel]
# - rocketchat://[username@]rocketchat-host/token[/channel|@recipient]
# - signal://[user[:password]@]host[:port]/source_phone/recipient1[,recipient2,...]
# - slack://[botname@]token-a/token-b/token-c
# - teams://group@tenant/altId/groupOwner?host=organization.webhook.office.com
# - telegram://token@telegram?chats=@channel-1[,chat-id-1,chat-id-2:message-thread-id,...]
# - wecom://key
# - zulip://bot-mail:bot-key@zulip-domain/?stream=name-or-id&topic=name
# - bark://devicekey@host
# - gotify://gotify-host/token
# - ifttt://key/?events=event1[,event2,...]&value1=value1&value2=value2&value3=value3
# - join://shoutrrr:api-key@join/?devices=device1[,device2, ...][&icon=icon][&title=title]
# - ntfy://username:password@ntfy.sh/topic
# - pushbullet://api-token[/device/#channel/email]
# - pushover://shoutrrr:apiToken@userKey/?devices=device1[,device2, ...]
# - opsgenie://host/token?responders=responder1[,responder2]
# - pagerduty://[host[:port]]/integration-key[?query-parameters]
# - smtp://username:password@host:port/?fromaddress=fromAddress&toaddresses=recipient1[,recipient2,...][&additional_params]
@@ -1,27 +1,92 @@
module github.com/analogj/scrutiny
go 1.25
require (
github.com/analogj/go-util v0.0.0-20210417161720-39b497cca03b
github.com/fatih/color v1.18.0
github.com/gin-gonic/gin v1.11.0
github.com/glebarez/sqlite v1.11.0
github.com/go-gormigrate/gormigrate/v2 v2.1.5
github.com/go-viper/mapstructure/v2 v2.5.0
github.com/influxdata/influxdb-client-go/v2 v2.14.0
github.com/jaypipes/ghw v0.21.2
github.com/nicholas-fedor/shoutrrr v0.13.2
github.com/samber/lo v1.52.0
github.com/sirupsen/logrus v1.9.4
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/urfave/cli/v2 v2.27.7
go.uber.org/mock v0.6.0
golang.org/x/sync v0.19.0
gorm.io/gorm v1.31.1
)
require (
github.com/apapsch/go-jsonmerge/v2 v2.0.0 // indirect
github.com/bytedance/gopkg v0.1.3 // indirect
github.com/bytedance/sonic v1.15.0 // indirect
github.com/bytedance/sonic/loader v0.5.0 // indirect
github.com/cloudwego/base64x v0.1.6 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.13 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/glebarez/go-sqlite v1.22.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.30.1 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/goccy/go-yaml v1.19.2 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf // indirect
github.com/jaypipes/pcidb v1.1.1 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/kvz/logstreamer v0.0.0-20221024075423-bf5cfbd32e39 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/oapi-codegen/runtime v1.1.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/quic-go/qpack v0.6.0 // indirect
github.com/quic-go/quic-go v0.59.0 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sagikazarmark/locafero v0.12.0 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.1 // indirect
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/arch v0.23.0 // indirect
golang.org/x/crypto v0.47.0 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/net v0.49.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/text v0.33.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
howett.net/plist v1.0.2-0.20250314012144-ee69052608d9 // indirect
modernc.org/libc v1.67.7 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.44.3 // indirect
)
@@ -1,586 +1,252 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d h1:G0m3OIz70MZUWq3EgK3CesDbo8upS2Vm9/P3FtgI+Jk=
github.com/StackExchange/wmi v0.0.0-20190523213315-cbe66965904d/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/agnivade/wasmbrowsertest v0.3.1/go.mod h1:zQt6ZTdl338xxRaMW395qccVE2eQm0SjC/SDz0mPWQI=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/analogj/go-util v0.0.0-20190301173314-5295e364eb14 h1:wsrSjiqQtseStRIoLLxS4C5IEtXkazZVEPDHq8jW7r8=
github.com/analogj/go-util v0.0.0-20190301173314-5295e364eb14/go.mod h1:lJQVqFKMV5/oDGYR2bra2OljcF3CvolAoyDRyOA4k4E=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chromedp/cdproto v0.0.0-20190614062957-d6d2f92b486d/go.mod h1:S8mB5wY3vV+vRIzf39xDXsw3XKYewW9X6rW2aEmkrSw=
github.com/chromedp/cdproto v0.0.0-20190621002710-8cbd498dd7a0/go.mod h1:S8mB5wY3vV+vRIzf39xDXsw3XKYewW9X6rW2aEmkrSw=
github.com/chromedp/cdproto v0.0.0-20190812224334-39ef923dcb8d/go.mod h1:0YChpVzuLJC5CPr+x3xkHN6Z8KOSXjNbL7qV8Wc4GW0=
github.com/chromedp/cdproto v0.0.0-20190926234355-1b4886c6fad6/go.mod h1:0YChpVzuLJC5CPr+x3xkHN6Z8KOSXjNbL7qV8Wc4GW0=
github.com/chromedp/chromedp v0.3.1-0.20190619195644-fd957a4d2901/go.mod h1:mJdvfrVn594N9tfiPecUidF6W5jPRKHymqHfzbobPsM=
github.com/chromedp/chromedp v0.4.0/go.mod h1:DC3QUn4mJ24dwjcaGQLoZrhm4X/uPHZ6spDbS2uFhm4=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/containrrr/shoutrrr v0.0.0-20200828202222-1da53231b05a h1:6ZMiughZYF6fJjFIf2X3D7AfImJeXnTMJ9qC2v75WPw=
github.com/containrrr/shoutrrr v0.0.0-20200828202222-1da53231b05a/go.mod h1:z3pUtEhu5zOpu+Q8wZWiEq+ZLL9hM0HiFNhttaI67Ks=
github.com/containrrr/shoutrrr v0.4.4 h1:vHZ4E/76pKVY+Jyn/qhBz3X540Bn8NI5ppPHK4PyILY=
github.com/containrrr/shoutrrr v0.4.4/go.mod h1:zqL2BvfC1W4FujrT4b3/ZCLxvD+uoeEpBL7rg9Dqpbg=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0 h1:EoUDS0afbrsXAZ9YQ9jdu/mZ2sXgT1/2yyNng4PGlyM=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/RaveNoX/go-jsoncommentstrip v1.0.0/go.mod h1:78ihd09MekBnJnxpICcwzCMzGrKSKYe4AqU6PDYYpjk=
github.com/analogj/go-util v0.0.0-20210417161720-39b497cca03b h1:Y/+MfmdKPPpVY7C6ggt/FpltFSitlpUtyJEdcQyFXQg=
github.com/analogj/go-util v0.0.0-20210417161720-39b497cca03b/go.mod h1:bRSzJXgXnT5+Ihah7RSC7Cvp16UmoLn3wq6ROciS1Ow=
github.com/apapsch/go-jsonmerge/v2 v2.0.0 h1:axGnT1gRIfimI7gJifB699GoE/oq+F2MU7Dml6nw9rQ=
github.com/apapsch/go-jsonmerge/v2 v2.0.0/go.mod h1:lvDnEdqiQrp0O42VQGgmlKpxL1AP2+08jFMw88y4klk=
github.com/bmatcuk/doublestar v1.1.1/go.mod h1:UD6OnuiIn0yFxxA2le/rnRU1G4RaI4UvFv1sNto9p6w=
github.com/bytedance/gopkg v0.1.3 h1:TPBSwH8RsouGCBcMBktLt1AymVo2TVsBVCY4b6TnZ/M=
github.com/bytedance/gopkg v0.1.3/go.mod h1:576VvJ+eJgyCzdjS+c4+77QF3p7ubbtiKARP3TxducM=
github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE=
github.com/bytedance/sonic/loader v0.5.0/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo=
github.com/cloudwego/base64x v0.1.6 h1:t11wG9AECkCDk5fMSoxmufanudBtJ+/HemLstXDLI2M=
github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gEHfghB2IPU=
github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/fatih/color v1.6.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.10.0 h1:s36xzo75JdqLaaWoiEHk767eHiwo0598uUxyfiPkDsg=
github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.6.3 h1:ahKqKTFpO5KTPHxWZjEdPScmYaGtLo8Y4DMHoEsnp14=
github.com/gin-gonic/gin v1.6.3/go.mod h1:75u5sXoLsGZoRN5Sgbi1eraJ4GU3++wFwWzhwvtwp4M=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-interpreter/wagon v0.5.1-0.20190713202023-55a163980b6c/go.mod h1:5+b/MBYkclRZngKF5s6qrgWxSLgE9F5dFdO1hAueZLc=
github.com/go-interpreter/wagon v0.6.0/go.mod h1:5+b/MBYkclRZngKF5s6qrgWxSLgE9F5dFdO1hAueZLc=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-ole/go-ole v1.2.4 h1:nNBDSCOigTSiarFpYE9J/KtEA1IOW4CNeqT9TQDqCxI=
github.com/go-ole/go-ole v1.2.4/go.mod h1:XCwSNxSkXRo4vlyPy93sltvi/qJq0jqQhjqQNIwKuxM=
github.com/go-playground/assert/v2 v2.0.1 h1:MsBgLAaY856+nPRTKrp3/OZK38U/wa0CcBYNjji3q3A=
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.13.0 h1:HyWk6mgj5qFqCT5fjGBuRArbVDfE4hi8+e8ceBS/t7Q=
github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8=
github.com/go-playground/universal-translator v0.17.0 h1:icxd5fm+REJzpZx7ZfpaD876Lmtgy7VtROAbHHXk8no=
github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA=
github.com/go-playground/validator/v10 v10.2.0 h1:KgJ0snyC2R9VXYN2rneOtQcw5aHQB1Vv0sFl1UcHBOY=
github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GOhaH6EGOAJShg8Id5JGkI=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gobwas/httphead v0.0.0-20180130184737-2c6c146eadee/go.mod h1:L0fX3K22YWvt/FAX9NnzrNzcI4wNYi9Yku4O0LKYflo=
github.com/gobwas/pool v0.2.0/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
github.com/gobwas/ws v1.0.2/go.mod h1:szmBTxLgaFppYjEmNtny/v3w89xOydFnnZMcgRRu/EM=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.3 h1:GV+pQPG/EUUbkh47niozDcADz6go/dUwhVzdUQHIVRw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3 h1:gyjaxf+svBWX08ZjK86iN9geUJF0H6gp2IRKX6Nf6/I=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/gabriel-vasile/mimetype v1.4.13 h1:46nXokslUBsAJE/wMsp5gtO500a4F3Nkz9Ufpk2AcUM=
github.com/gabriel-vasile/mimetype v1.4.13/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
github.com/gin-gonic/gin v1.11.0 h1:OW/6PLjyusp2PPXtyxKHU0RbX6I/l28FTdDlae5ueWk=
github.com/gin-gonic/gin v1.11.0/go.mod h1:+iq/FyxlGzII0KHiBGjuNn4UNENUlKbGlNmc+W50Dls=
github.com/glebarez/go-sqlite v1.22.0 h1:uAcMJhaA6r3LHMTFgP0SifzgXg46yJkgxqyuyec+ruQ=
github.com/glebarez/go-sqlite v1.22.0/go.mod h1:PlBIdHe0+aUEFn+r2/uthrWq4FxbzugL0L8Li6yQJbc=
github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
github.com/go-gormigrate/gormigrate/v2 v2.1.5 h1:1OyorA5LtdQw12cyJDEHuTrEV3GiXiIhS4/QTTa/SM8=
github.com/go-gormigrate/gormigrate/v2 v2.1.5/go.mod h1:mj9ekk/7CPF3VjopaFvWKN2v7fN3D9d3eEOAXRhi/+M=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190908185732-236ed259b199/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.5 h1:kxhtnfFVi+rYdOALN0B3k9UT86zVJKfBimRaciULW4I=
github.com/google/uuid v1.1.5/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jarcoal/httpmock v1.0.4 h1:jp+dy/+nonJE4g4xbVtl9QdrUNbn6/3hDT5R4nDIZnA=
github.com/jarcoal/httpmock v1.0.4/go.mod h1:ATjnClrvW/3tijVmpL/va5Z3aAyGvqU3gCT8nX0Txik=
github.com/jaypipes/ghw v0.6.1 h1:Ewt3mdpiyhWotGyzg1ursV/6SnToGcG4215X6rR2af8=
github.com/jaypipes/ghw v0.6.1/go.mod h1:QOXppNRCLGYR1H+hu09FxZPqjNt09bqUZUnOL3Rcero=
github.com/jaypipes/pcidb v0.5.0 h1:4W5gZ+G7QxydevI8/MmmKdnIPJpURqJ2JNXTzfLxF5c=
github.com/jaypipes/pcidb v0.5.0/go.mod h1:L2RGk04sfRhp5wvHO0gfRAMoLY/F3PKv/nwJeVoho0o=
github.com/google/pprof v0.0.0-20260115054156-294ebfa9ad83 h1:z2ogiKUYzX5Is6zr/vP9vJGqPwcdqsWjOt+V8J7+bTc=
github.com/google/pprof v0.0.0-20260115054156-294ebfa9ad83/go.mod h1:MxpfABSjhmINe3F1It9d+8exIHFvUqtLIRCdOGNXqiI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/influxdata/influxdb-client-go/v2 v2.14.0 h1:AjbBfJuq+QoaXNcrova8smSjwJdUHnwvfjMF71M1iI4=
github.com/influxdata/influxdb-client-go/v2 v2.14.0/go.mod h1:Ahpm3QXKMJslpXl3IftVLVezreAUtBOTZssDrjZEFHI=
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf h1:7JTmneyiNEwVBOHSjoMxiWAqB992atOeepeFYegn5RU=
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf/go.mod h1:xaLFMmpvUxqXtVkUJfg9QmT88cDaCJ3ZKgdZ78oO8Qo=
github.com/jarcoal/httpmock v1.4.1 h1:0Ju+VCFuARfFlhVXFc2HxlcQkfB+Xq12/EotHko+x2A=
github.com/jarcoal/httpmock v1.4.1/go.mod h1:ftW1xULwo+j0R0JJkJIIi7UKigZUXCLLanykgjwBXL0=
github.com/jaypipes/ghw v0.21.2 h1:woW0lqNMPbYk59sur6thOVM8YFP9Hxxr8PM+JtpUrNU=
github.com/jaypipes/ghw v0.21.2/go.mod h1:GPrvwbtPoxYUenr74+nAnWbardIZq600vJDD5HnPsPE=
github.com/jaypipes/pcidb v1.1.1 h1:QmPhpsbmmnCwZmHeYAATxEaoRuiMAJusKYkUncMC0ro=
github.com/jaypipes/pcidb v1.1.1/go.mod h1:x27LT2krrUgjf875KxQXKB0Ha/YXLdZRVmw6hH0G7g8=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.1 h1:g39TucaRWyV3dwDO++eEc6qf8TVIQ/Da48WmqjZ3i7E=
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9 h1:9yzud/Ht36ygwatGx56VwCZtlI/2AD15T1X2sjSuGns=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.10.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.11.7 h1:0hzRabrMN4tSTvMfnL3SCv1ZGeAP23ynzodBgaHeMeg=
github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.12.1 h1:/+xsCsk06wE38cyiqOR/o7U2fSftcH72xD+BQXmja/g=
github.com/klauspost/compress v1.12.1/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/knq/sysutil v0.0.0-20181215143952-f05b59f0f307/go.mod h1:BjPj+aVjl9FW/cCGiF3nGh5v+9Gd3VCgBQbod/GlMaQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2 h1:DB17ag19krx9CFsz4o3enTrPXyIXCl+2iCXH/aMAp9s=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kvz/logstreamer v0.0.0-20150507115422-a635b98146f0 h1:3tLzEnUizyN9YLWFTT9loC30lSBvh2y70LTDcZOTs1s=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kvz/logstreamer v0.0.0-20150507115422-a635b98146f0/go.mod h1:8/LTPeDLaklcUjgSQBHbhBF1ibKAFxzS5o+H7USfMSA=
github.com/leodido/go-urn v1.2.0 h1:hpXL4XnriNwQ/ABnpepYM/1vCLWNDfUNts8dX3xTG6Y=
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190620125010-da37f6c1e481/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/kvz/logstreamer v0.0.0-20221024075423-bf5cfbd32e39 h1:CBydX6UrnBjwD107XLBGulgvTezrrR38FAufUJr3dsk=
github.com/kvz/logstreamer v0.0.0-20221024075423-bf5cfbd32e39/go.mod h1:8/LTPeDLaklcUjgSQBHbhBF1ibKAFxzS5o+H7USfMSA=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-sqlite3 v1.14.3 h1:j7a/xn1U6TKA/PHHxqZuzh64CdtRc7rU9M+AvkOl5bA=
github.com/mattn/go-sqlite3 v1.14.3/go.mod h1:WVKg1VTActs4Qso6iwGbiFih2UIHo0ENGwNd0Lj+XmI=
github.com/mattn/go-sqlite3 v1.14.4 h1:4rQjbDxdu9fSgI/r3KN72G3c2goxknAqHHgPWWs8UlI=
github.com/mattn/go-sqlite3 v1.14.4/go.mod h1:WVKg1VTActs4Qso6iwGbiFih2UIHo0ENGwNd0Lj+XmI=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.6/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.2.2 h1:dxe5oCinTXiTIcfgmZecdCzPmAJKd46KsCWc35r0TV4=
github.com/mitchellh/mapstructure v1.2.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.6 h1:11TGpSHY7Esh/i/qnq02Jo5oVrI1Gue8Slbq0ujPZFQ=
github.com/nxadm/tail v1.4.6/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.2 h1:8mVmC9kjFFmA8H4pKMUhcblgifdkOIXPvbhN1T36q1M=
github.com/onsi/ginkgo v1.14.2/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.1 h1:foqVmeWDD6yYpK+Yz3fHyNIxFYNxswxqNFjSKe+vI54=
github.com/onsi/ginkgo v1.16.1/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.7.0 h1:7utD74fnzVc/cpcyy8sjrlFr5vYpypUixARcHIMIGuI=
github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nicholas-fedor/shoutrrr v0.13.2 h1:hfsYBIqSFYGg92pZP5CXk/g7/OJIkLYmiUnRl+AD1IA=
github.com/nicholas-fedor/shoutrrr v0.13.2/go.mod h1:ZqzV3gY/Wj6AvWs1etlO7+yKbh4iptSbeL8avBpMQbA=
github.com/oapi-codegen/runtime v1.1.2 h1:P2+CubHq8fO4Q6fV1tqDBZHCwpVpvPg7oKiYzQgXIyI=
github.com/oapi-codegen/runtime v1.1.2/go.mod h1:SK9X900oXmPWilYR5/WKPzt3Kqxn/uS/+lbpREv+eCg=
github.com/onsi/ginkgo/v2 v2.28.1 h1:S4hj+HbZp40fNKuLUQOYLDgZLwNUVn19N3Atb98NCyI=
github.com/onsi/ginkgo/v2 v2.28.1/go.mod h1:CLtbVInNckU3/+gC8LzkGUb9oF+e8W8TdUsxPwvdOgE=
github.com/onsi/gomega v1.39.1 h1:1IJLAad4zjPn2PsnhH70V4DKRFlrCzGBNrNaru+Vf28=
github.com/onsi/gomega v1.39.1/go.mod h1:hL6yVALoTOxeWudERyfppUcZXjMwIMLnuSfruD2lcfg=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1 h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.5/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.7/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.3/go.mod h1:jUMtyi0/lB5yZH/FjyGAoH7IMNrIhlBf6pXZmbMDvzw=
github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/spkg/bom v0.0.0-20160624110644-59b7046e48ad/go.mod h1:qLr4V1qq6nMqFKkMo8ZTx3f+BZEkzsRUY10Xsm2mwU0=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/twitchyliquid64/golang-asm v0.0.0-20190126203739-365674df15fc/go.mod h1:NoCfSFWosfqMqmmD7hApkirIK9ozpHjxRnRxs1l413A=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go v1.1.7 h1:/68gy2h+1mWMrwZFeD1kQialdSzAb432dtpeJ42ovdo=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v1.1.7 h1:2SvQaVZ1ouYrrKKwoSk2pzd4A9evlKJb9oTL+OaLUSs=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/urfave/cli/v2 v2.2.0 h1:JTTnM6wKzdA0Jqodd966MVj4vWbbquZykeX1sKbe2C4=
github.com/urfave/cli/v2 v2.2.0/go.mod h1:SE9GqnLQmjVa0iPEY0f1w3ygNIYcIJ0OKPMoW2caLfQ=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.coder.com/go-tools v0.0.0-20190317003359-0c6a35b74a16/go.mod h1:iKV5yK9t+J5nG9O3uF6KYdPEz3dyfMyB15MN1rbQ8Qw=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180426230345-b49d69b5da94/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59 h1:3zb4D3T4G8jdExgVU/95+vQXfpEPiMdCaZgmGVxjNHM=
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181102091132-c10e9556a7bc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e h1:3G+cUijn7XD+S4eJFddp53Pv7+slrESplyjG25HgL+k=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9 h1:SQFwaSi55rU7vdNs9Yr0Z324VNlrF+0wMqRXT4St8ck=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY=
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/urfave/cli/v2 v2.27.7 h1:bH59vdhbjLv3LAvIu6gd0usJHgoTTPhCFib8qqOwXYU=
github.com/urfave/cli/v2 v2.27.7/go.mod h1:CyNAG/xg+iAOg0N4MPGZqVmv2rCoP267496AOXUZjA4=
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342 h1:FnBeRrxr7OU4VvAzt5X7s6266i6cSVkkFPS0TuXWbIg=
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/arch v0.23.0 h1:lKF64A2jF6Zd8L0knGltUnegD62JMFBiCPBmQpToHhg=
golang.org/x/arch v0.23.0/go.mod h1:dNHoOeKiyja7GTvF9NJS1l3Z2yntpQNzgrjh1cU103A=
golang.org/x/crypto v0.0.0-20190228161510-8dd112bcdc25/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190306220234-b354f8bf4d9e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190618155005-516e3c20635f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190712062909-fae7ac547cb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190927073244-c990c680b611/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd h1:xhmwyvizuTgC2qz7ZlMluP20uW+C3Rm0FD/WLDX8884=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200409092240-59c9f1ba88fa h1:mQTN3ECqfsViCNBgq+A40vdwhkGykrrQlYe3mPj6BoU=
golang.org/x/sys v0.0.0-20200409092240-59c9f1ba88fa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210113181707-4bcb84eeeb78 h1:nVuTkr9L6Bq62qpUqKo/RnZCFfzDBL0bYo6w9OJUqZY=
golang.org/x/sys v0.0.0-20210113181707-4bcb84eeeb78/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7 h1:iGu644GcxtEcrInvDsQRCwJjtCIOlT2V7IRt6ah2Whw=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.55.0 h1:E8yzL5unfpW3M6fz/eB7Cb5MQAYSZ7GKo4Qth+N2sgQ=
gopkg.in/ini.v1 v1.55.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gorm.io/driver/sqlite v1.1.3 h1:BYfdVuZB5He/u9dt4qDpZqiqDJ6KhPqs5QUqsr/Eeuc=
gorm.io/driver/sqlite v1.1.3/go.mod h1:AKDgRWk8lcSQSw+9kxCJnX/yySj8G3rdwYlU57cB45c=
gorm.io/gorm v1.20.1/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw=
gorm.io/gorm v1.20.2 h1:bZzSEnq7NDGsrd+n3evOOedDrY5oLM5QPlCjZJUK2ro=
gorm.io/gorm v1.20.2/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw=
gosrc.io/xmpp v0.1.1 h1:iMtE9W3fx254+4E6rI34AOPJDqWvpfQR6EYaVMzhJ4s=
gosrc.io/xmpp v0.1.1/go.mod h1:4JgaXzw4MnEv2sGltONtK3GMhj+h9gpQ7cO8nwbFJLU=
gosrc.io/xmpp v0.5.1 h1:Rgrm5s2rt+npGggJH3HakQxQXR8ZZz3+QRzakRQqaq4=
gosrc.io/xmpp v0.5.1/go.mod h1:L3NFMqYOxyLz3JGmgFyWf7r9htE91zVGiK40oW4RwdY=
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/gotestsum v0.3.5/go.mod h1:Mnf3e5FUzXbkCfynWBGOwLssY7gTQgCHObK9tMpAriY=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
howett.net/plist v0.0.0-20181124034731-591f970eefbb h1:jhnBjNi9UFpfpl8YZhA9CrOqpnJdvzuiHsl/dnxl11M=
howett.net/plist v0.0.0-20181124034731-591f970eefbb/go.mod h1:vMygbs4qMhSZSc4lCUl2OEE+rDiIIJAIdR4m7MiMcm0=
mvdan.cc/sh v2.6.4+incompatible/go.mod h1:IeeQbZq+x2SUGBensq/jge5lLQbS3XT2ktyp3wrt4x8=
nhooyr.io/websocket v1.6.5/go.mod h1:F259lAzPRAH0htX2y3ehpJe09ih1aSHN7udWki1defY=
nhooyr.io/websocket v1.8.6 h1:s+C3xAMLwGmlI31Nyn/eAehUlZPwfYZu2JXM621Q5/k=
nhooyr.io/websocket v1.8.6/go.mod h1:B70DZP8IakI65RVQ51MsWP/8jndNma26DVA/nFSCgW0=
nhooyr.io/websocket v1.8.7 h1:usjR2uOr/zjjkVMy0lW+PPohFok7PCow5sDjLgX4P4g=
nhooyr.io/websocket v1.8.7/go.mod h1:B70DZP8IakI65RVQ51MsWP/8jndNma26DVA/nFSCgW0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/gorm v1.31.1 h1:7CA8FTFz/gRfgqgpeKIBcervUn3xSyPUmr6B2WXJ7kg=
gorm.io/gorm v1.31.1/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs=
howett.net/plist v1.0.2-0.20250314012144-ee69052608d9 h1:eeH1AIcPvSc0Z25ThsYF+Xoqbn0CI/YnXVYoTLFdGQw=
howett.net/plist v1.0.2-0.20250314012144-ee69052608d9/go.mod h1:fyFX5Hj5tP1Mpk8obqA9MZgXT416Q5711SDT7dQLTLk=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.7 h1:H+gYQw2PyidyxwxQsGTwQw6+6H+xUk+plvOKW7+d3TI=
modernc.org/libc v1.67.7/go.mod h1:UjCSJFl2sYbJbReVQeVpq/MgzlbmDM4cRHIYFelnaDk=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
@@ -1,18 +1,6 @@
---
engine_enable_code_mutation: true
engine_cmd_compile:
- go build -ldflags '-w -extldflags "-static"' -o scrutiny webapp/backend/cmd/scrutiny/scrutiny.go
- 'GOOS=linux GOARCH=amd64 go build -ldflags "-X main.goos=linux -X main.goarch=amd64" -o scrutiny-web-linux-amd64 -tags "static" webapp/backend/cmd/scrutiny/scrutiny.go'
- 'chmod +x scrutiny-web-linux-amd64'
- 'GOOS=linux GOARCH=amd64 go build -ldflags "-X main.goos=linux -X main.goarch=amd64" -o scrutiny-collector-metrics-linux-amd64 -tags "static" collector/cmd/collector-metrics/collector-metrics.go'
- 'chmod +x scrutiny-collector-metrics-linux-amd64'
mgr_keep_lock_file: true
engine_version_metadata_path: 'webapp/backend/pkg/version/version.go'
engine_cmd_test: 'go test -v -tags "static" $(go list ./... | grep -v /vendor/)'
engine_golang_package_path: 'github.com/analogj/scrutiny'
scm_enable_branch_cleanup: true
engine_disable_lint: true
scm_release_assets:
- local_path: scrutiny-web-linux-amd64
artifact_name: scrutiny-web-linux-amd64
- local_path: scrutiny-collector-metrics-linux-amd64
artifact_name: scrutiny-collector-metrics-linux-amd64
@@ -1,4 +1,4 @@
#!/usr/bin/with-contenv bash
#!/command/with-contenv bash
if [ -n "${TZ}" ]
then
@@ -0,0 +1,15 @@
#!/command/with-contenv bash
# Cron runs in its own isolated environment (usually using only /etc/environment )
# So when the container starts up, we will do a dump of the runtime environment into a .env file that we
# will then source into the crontab file (/etc/cron.d/scrutiny)
(set -o posix; export -p) > /env.sh
# adding ability to customize the cron schedule.
COLLECTOR_CRON_SCHEDULE=${COLLECTOR_CRON_SCHEDULE:-"0 0 * * *"}
# if the cron schedule has been overridden via env variable (eg docker-compose) we should make sure to strip quotes
[[ "${COLLECTOR_CRON_SCHEDULE}" == \"*\" || "${COLLECTOR_CRON_SCHEDULE}" == \'*\' ]] && COLLECTOR_CRON_SCHEDULE="${COLLECTOR_CRON_SCHEDULE:1:-1}"
# replace placeholder with correct value
sed -i 's|{COLLECTOR_CRON_SCHEDULE}|'"${COLLECTOR_CRON_SCHEDULE}"'|g' /etc/cron.d/scrutiny
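The quote-stripping in the hunk above can be exercised on its own. This is a minimal standalone sketch (not part of the repository): it reproduces the same `[[ ... ]]` pattern match and `${var:1:-1}` substring expansion against a value that arrives wrapped in literal quotes, as happens when docker-compose passes a quoted environment variable through verbatim.

```shell
#!/usr/bin/env bash
# Simulate a docker-compose override that arrives with literal quotes attached.
COLLECTOR_CRON_SCHEDULE='"*/15 * * * *"'

# Same check as the init script: if the value both starts and ends with a
# double quote (or a single quote), drop the first and last character.
# ${var:1:-1} requires bash >= 4.2.
[[ "${COLLECTOR_CRON_SCHEDULE}" == \"*\" || "${COLLECTOR_CRON_SCHEDULE}" == \'*\' ]] \
  && COLLECTOR_CRON_SCHEDULE="${COLLECTOR_CRON_SCHEDULE:1:-1}"

echo "${COLLECTOR_CRON_SCHEDULE}"
# prints: */15 * * * *
```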
@@ -11,5 +11,5 @@ MAILTO=""
# correctly route collector logs (STDOUT & STDERR) to Cron foreground (collectable by Docker STDOUT)
# cron schedule to run daily at midnight: '0 0 * * *'
# System environmental variables are stripped by cron, source our dump of the docker environmental variables before each command (/env.sh)
0 0 * * * root . /env.sh; /scrutiny/bin/scrutiny-collector-metrics run >/proc/1/fd/1 2>/proc/1/fd/2
{COLLECTOR_CRON_SCHEDULE} root . /env.sh; /opt/scrutiny/bin/scrutiny-collector-metrics run >/proc/1/fd/1 2>/proc/1/fd/2
# An empty line is required at the end of this file for a valid cron file.
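The `{COLLECTOR_CRON_SCHEDULE}` placeholder in the cron file above is filled in by the init script's `sed -i` call. A minimal standalone sketch of that substitution, run against a temp copy rather than the real `/etc/cron.d/scrutiny` (the temp file and schedule value here are illustrative, not from the repository):

```shell
#!/usr/bin/env bash
# Reproduce the placeholder substitution against a throwaway file.
tmpfile="$(mktemp)"
printf '%s\n' '{COLLECTOR_CRON_SCHEDULE} root . /env.sh; /opt/scrutiny/bin/scrutiny-collector-metrics run' > "$tmpfile"

COLLECTOR_CRON_SCHEDULE="*/30 * * * *"
# '|' is used as the sed delimiter so the '*' and spaces in the cron
# expression pass through as literal replacement text.
# Note: GNU sed syntax; BSD/macOS sed needs `sed -i ''`.
sed -i 's|{COLLECTOR_CRON_SCHEDULE}|'"${COLLECTOR_CRON_SCHEDULE}"'|g' "$tmpfile"

cat "$tmpfile"
rm -f "$tmpfile"
```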
