# frigate-proxmox-docker-openvino

*by jsapede*

Complete setup for OpenVINO hardware acceleration in Frigate, as an alternative to a Coral TPU.

This tutorial is adapted for the Docker version of Frigate installed in a Proxmox LXC, and deals mainly with GPU passthrough.

## Prerequisites

- An Intel Core (iX) CPU of 6th generation or newer (i.e. compatible with OpenVINO acceleration)
- A working Proxmox installation

Check in your PVE shell that `/dev/dri/renderD128` is available (for instance with `ls -l /dev/dri`).

Optionally, install the Intel GPU tools:

```
apt install intel-gpu-tools
```

Now you can check GPU access/usage with `intel_gpu_top`, which should lead to something like this:

*(screenshot: `intel_gpu_top` output)*
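The availability check above can also be wrapped in a small script, e.g. for reuse across nodes; a minimal sketch (the helper name is mine, not from the tutorial):

```shell
#!/bin/sh
# Report whether a DRI render node exists; /dev/dri/renderD128 is the
# node this tutorial relies on for OpenVINO acceleration.
check_render_node() {
    if [ -e "$1" ]; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

check_render_node /dev/dri/renderD128
```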
## Create a Docker LXC

The easiest way is to use [tteck's scripts](https://tteck.github.io/Proxmox/). First, in the PVE console, launch tteck's script to install a new Docker LXC:

```
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/docker.sh)"
```

During installation:

- switch to "advanced mode"
- select Debian 12
- make the LXC **PRIVILEGED**
- 8 GB of RAM and 2 or 4 cores are a good choice
- add Portainer if needed
- add Docker Compose

Once the LXC is created, you can also install intel-gpu-tools **inside** the LXC:

```
apt install intel-gpu-tools
```

Next, add GPU passthrough to the LXC so that Frigate can access the OpenVINO acceleration. In your LXC's "Resources" tab, add a "Device Passthrough":

*(screenshot: the LXC "Resources" tab with the Device Passthrough dialog)*

and specify the path you want to add: `/dev/dri/renderD128`

**Reboot.**

Now your LXC has access to the GPU.

## Frigate Docker

## Create folders

On the LXC shell, create folders to organize your Frigate storage for videos, captures, models and configs. Here are my usual settings:

```
mkdir /opt/frigate
mkdir /opt/frigate/media
mkdir /opt/frigate/config
```

Create the folders according to your needs. Next we will build the Docker container.

Create a `docker-compose.yml` at the root folder:

```
cd /opt/frigate
nano docker-compose.yml
```
or create a stack in Portainer:

*(screenshot: creating a new stack in Portainer)*

and add:

```yaml
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.14.1
    cap_add:
      - CAP_PERFMON
    shm_size: "256mb"
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /opt/frigate/config:/config
      - /opt/frigate/media:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1G
    ports:
      - "5000:5000"
      - "8971:8971"
      - "1984:1984"
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "****"
      PLUS_API_KEY: "****"
```

As you can see:

- the container is **privileged**
- `/dev/dri/renderD128` is passed through from the LXC to the container
- the folders created earlier are bound to Frigate's usual folders
- `shm_size` has to be set according to the [documentation](https://docs.frigate.video/frigate/installation/#calculating-required-shm-size)
- the tmpfs has to be adjusted to your configuration, see the [documentation](https://docs.frigate.video/frigate/installation/#storage)
- the ports for the UI, RTSP and WebRTC are forwarded
- define `FRIGATE_RTSP_PASSWORD` and `PLUS_API_KEY` if needed

From now on the Docker container is ready and has access to the GPU.

**Do not start it right now, as you still have to provide the Frigate configuration!**

## Setup Frigate for OpenVINO acceleration

Add your Frigate configuration:

```
cd /opt/frigate/config
nano config.yml
```
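If you are starting from an empty `config.yml`, the detector settings go alongside your camera definitions. As a minimal starting point (a sketch against the Frigate 0.14 config format; the camera name, credentials and IP below are placeholders, not from this tutorial):

```yaml
mqtt:
  enabled: false

cameras:
  front_door:                                            # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream # placeholder RTSP URL
          roles:
            - detect
```

Check the Frigate camera documentation for the specifics of your camera model.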
Edit it according to your setup; to enable OpenVINO you must add the [following lines](https://docs.frigate.video/configuration/object_detectors/#openvino-detector) to your Frigate config:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```

Once your `config.yml` is ready, build the container with either `docker compose up` or "Deploy stack" if you're using Portainer.

Reboot everything, then go to the Frigate UI to check that everything is working:

*(screenshot: the Frigate system metrics page)*

You should see:

- low inference time: ~20 ms
- low CPU usage
- GPU usage

You can also check with `intel_gpu_top` inside the LXC console and see that Render/3D shows load matching Frigate's detections:

*(screenshot: `intel_gpu_top` showing Render/3D load)*
And on your Proxmox host, you can see that the CPU load of the LXC is drastically lower:

*(screenshot: the LXC CPU usage graph in Proxmox)*

## Extra settings

## CPU load
I have experimentally found that running these two tteck scripts in the PVE console greatly reduces CPU consumption in "idle mode" (i.e. when Frigate only "observes" and has no detection running):

- [Filesystem Trim](https://tteck.github.io/Proxmox/#proxmox-ve-lxc-filesystem-trim)
- [CPU Scaling Governor](https://tteck.github.io/Proxmox/#proxmox-ve-cpu-scaling-governor): set the governor to **powersave**

Experiment on your own!

## YOLO NAS models

Besides the default SSDLite model, the [YOLO-NAS](https://github.com/Deci-AI/super-gradients) model is also [available for OpenVINO acceleration](https://docs.frigate.video/configuration/object_detectors/#yolo-nas).

To use it you have to build the model to make it compatible with Frigate. This can easily be done with the dedicated [Google Colab notebook](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).

The only thing to do is to define the dimensions of the input image shape. 320x320 leads to higher inference time; I'd use 256x256:

```
input_image_shape=(256,256),
```

Then select the base precision of the model. The **S** version is good enough; **M** induces much higher inference time:

```
model = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")
```

*NOTE: you can run some tests and find the right combination for your hardware. Try to keep inference time around 20 ms.*

Also specify the name of the model file you will generate:

```
files.download('yolo_nas_s.onnx')
```

Now simply run all the steps of the Colab notebook, one by one, and it will download the model file:

*(screenshot: the Colab notebook downloading the exported model)*
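Note that the notebook example downloads `yolo_nas_s.onnx`, while the detector config in the next step points at `/config/yolo_nas_s_256.onnx`, so it is worth renaming while copying to keep the input size visible in the filename. A sketch using throwaway paths (in practice the destination is your `/opt/frigate/config`):

```shell
#!/bin/sh
# Demonstrate the rename-on-copy with scratch directories; substitute the
# real Colab download and /opt/frigate/config on your system.
src_dir=$(mktemp -d)
dst_dir=$(mktemp -d)
: > "$src_dir/yolo_nas_s.onnx"    # stand-in for the downloaded model
cp "$src_dir/yolo_nas_s.onnx" "$dst_dir/yolo_nas_s_256.onnx"
ls "$dst_dir"                     # prints yolo_nas_s_256.onnx
```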
Copy the generated model file to your Frigate config folder `/opt/frigate/config`, then change your detector and adapt it according to your settings:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolonas
  width: 256 # <--- should match whatever was set in the notebook
  height: 256 # <--- should match whatever was set in the notebook
  input_tensor: nchw # <--- take care, this differs from the SSDLite model's setting!
  input_pixel_format: bgr
  path: /config/yolo_nas_s_256.onnx # <--- should match the path and name of your model file
  labelmap_path: /labelmap/coco-80.txt # <--- should match the name and location of the COCO-80 labelmap file
```

*NOTE: YOLO-NAS uses the COCO-80 labelmap instead of COCO-91.*

Restart ... and VOILA!
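One last rule of thumb for the "around 20 ms" target mentioned above: a single detector runs inferences serially, so the inference time bounds the total detections per second available across all cameras. A rough back-of-the-envelope helper (my own arithmetic sketch, not from the tutorial):

```shell
#!/bin/sh
# Upper bound on detections/second for a given inference time in ms;
# integer arithmetic is enough for a rough estimate.
max_detections_per_sec() {
    echo $(( 1000 / $1 ))
}

echo "at 20 ms: $(max_detections_per_sec 20) detections/s"
echo "at 40 ms: $(max_detections_per_sec 40) detections/s"
```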