a:6:{s:5:"child";a:1:{s:0:"";a:1:{s:3:"rss";a:1:{i:0;a:6:{s:4:"data";s:6:"
";s:7:"attribs";a:1:{s:0:"";a:1:{s:7:"version";s:3:"2.0";}}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:1:{s:7:"channel";a:1:{i:0;a:6:{s:4:"data";s:317:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:2:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:19:"LinuxServer.io Blog";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:31:"https://www.linuxserver.io/blog";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:13:"Posts n stuff";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"language";a:1:{i:0;a:5:{s:4:"data";s:2:"en";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:13:"lastBuildDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Mon, 16 Dec 2024 15:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"item";a:10:{i:0;a:6:{s:4:"data";s:231:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:25:"New and Improved For 2025";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:57:"https://www.linuxserver.io/blog/new-and-improved-for-2025";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:57:"https://www.linuxserver.io/blog/new-and-improved-for-2025";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Mon, 16 Dec 2024 15:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3644:"
As we approach the end of the year we thought we would take the opportunity to take a look back at some of the things we've achieved in 2024 and how they benefit you now and in 2025. A lot of the changes we make to our pipelines and images take a while to actually reach end users, both because of the time it takes for us to roll them out across our fleet and because some users don't regularly update their containers. So, what have we done and what does it mean for you? Well:
Software Bill of Materials (SBOM) Attestations for Docker images describe what software artifacts an image contains, and the artifacts used to create it. We have long published a package_versions.txt file in each image repo listing all the software included as part of the container, but it was hard for users to know for sure that it was an accurate reflection of the actual image, or to validate it for older images they might be using.
Now that we're publishing SBOM attestations as part of the images, you can view them for any image tag going forward by doing something like docker buildx imagetools inspect lscr.io/linuxserver/plex:latest --format '{{ json (index .SBOM "linux/amd64").SPDX }}', adjusting the architecture to match yours. If you're pulling an arch-specific tag you can instead do something like docker buildx imagetools inspect lscr.io/linuxserver/socket-proxy:arm64v8-latest --format '{{ json .SBOM.SPDX }}'. You can then use a tool like Syft to convert the SPDX JSON manifest into something more human-readable, e.g. syft convert SBOM.json -o table=./package_versions.txt.
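If you just want a quick table and don't have Syft to hand, the SPDX JSON is easy to post-process yourself. A minimal sketch (not our tooling; it assumes a standard SPDX document with a packages array of name/versionInfo entries, which you should verify against your own output):

```python
import json

def spdx_to_table(spdx_doc: dict) -> str:
    """Render the packages array of an SPDX JSON document as a name/version table."""
    rows = [(p.get("name", "?"), p.get("versionInfo", "?"))
            for p in spdx_doc.get("packages", [])]
    width = max((len(name) for name, _ in rows), default=4)
    lines = ["{}  VERSION".format("NAME".ljust(width))]
    lines += ["{}  {}".format(name.ljust(width), version)
              for name, version in sorted(rows)]
    return "\n".join(lines)

# A hand-written SPDX fragment purely for illustration:
doc = json.loads('{"packages": [{"name": "bash", "versionInfo": "5.2.21-r0"},'
                 ' {"name": "curl", "versionInfo": "8.5.0-r0"}]}')
print(spdx_to_table(doc))
```

Feed it the JSON from the imagetools command and you get roughly what syft's table output gives you, minus the extra metadata columns.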
Please note there are a few images that cannot currently support SBOM attestation but we're working to get them into a position where they can.
In a similar vein, Provenance Attestations for Docker images include facts about the build process, allowing you to see exactly how and when they were built. Again, we already made a lot of this public via our CI but it was hard to link a specific build run to any given image you might be using.
In the same way as the SBOM attestations you can view Provenance details for any image tag going forward by doing something like docker buildx imagetools inspect lscr.io/linuxserver/plex:latest --format '{{ json (index .Provenance "linux/amd64").SLSA }}', with a similar adjustment for arch-specific tags. Unfortunately we're not aware of any good tools to produce a nice human-readable output for SLSA.
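In the absence of a nice SLSA viewer, a few lines of scripting can at least pull out the headline facts. A rough sketch; the field names assume a SLSA v0.2-style predicate, so treat them as assumptions to check against your actual output:

```python
def summarize_provenance(stmt: dict) -> dict:
    """Pull a few human-useful facts out of a provenance statement.

    Field names assume a SLSA v0.2-style predicate; adjust to match your output.
    """
    pred = stmt.get("predicate", {})
    meta = pred.get("metadata", {})
    return {
        "builder": pred.get("builder", {}).get("id", "unknown"),
        "build_type": pred.get("buildType", "unknown"),
        "started": meta.get("buildStartedOn", "unknown"),
        "finished": meta.get("buildFinishedOn", "unknown"),
    }

# Illustrative input only; a real statement comes from the imagetools command above.
example = {"predicate": {"builder": {"id": "https://example.com/ci"},
                         "metadata": {"buildStartedOn": "2024-12-16T15:00:00Z"}}}
print(summarize_provenance(example))
```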
Please note there are a few images that cannot currently support Provenance attestation but we're working to get them into a position where they can.
Providing read-only running of our containers has involved a steep learning curve. It's amazing just how much stuff expects to be able to write to the container filesystem and doesn't provide you any practical way to redirect it. At the time of writing, after testing nearly 200 images, and making quite a few modifications to how we handle their init processes, 35...
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:5:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:4:"news";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:4:"sbom";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:4:"slsa";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:1;a:6:{s:4:"data";s:173:" ";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:38:"Better Practices For Docker Networking";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:70:"https://www.linuxserver.io/blog/better-practices-for-docker-networking";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:70:"https://www.linuxserver.io/blog/better-practices-for-docker-networking";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Tue, 08 Oct 2024 13:00:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3629:"
Many years ago we wrote a blog post on Docker networks and how to use them, but that was 2017 and this is now so surely everything has changed?
Well, not really, but a lot of this stuff is abstracted away by management interfaces, and people don't understand what's going on behind the scenes, so they make bad decisions that come back to haunt them 6 months down the line. With that in mind, we're going to try to present a clear idea of what good practice looks like for Docker networking so you can properly plan out your infrastructure and avoid the pain of having to rip everything out again when you run into problems.
There are lots of different types of Docker network available to you:
For our purposes today we only care about user-defined bridge networks; there are use cases for Macvlan/IPvlan but they're outside the scope of this post as 99% of the time they're the wrong choice. You can think of a bridge network as its own mini-LAN, with Docker passing traffic in and out via NAT the same way you would with your router.
The most important feature of a user-defined bridge network is that it enables DNS name resolution and communication between its containers; if you have your speedtest-tracker container and your speedtest-tracker-db Postgres container on the lsio bridge network you can tell Speedtest Tracker to use speedtest-tracker-db:5432 as the database host rather than having to use the host IP, and having to expose ports. You can also control the size of the network, and whether or not it can communicate with your wider LAN/WAN networks. One of the side effects of this is that you no longer have to worry about port conflicts. You can have 50 containers all on the same bridge network, all using port 80, and it doesn't matter. Your reverse proxy is the only thing that needs to expose port 80 (and 443) to the host.
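To make that concrete, here's a hedged compose sketch of the Speedtest Tracker example (the lsio network name comes from the text; the database environment variables are illustrative and should be checked against each image's documentation):

```yaml
services:
  speedtest-tracker:
    image: lscr.io/linuxserver/speedtest-tracker:latest
    environment:
      - DB_CONNECTION=pgsql
      - DB_HOST=speedtest-tracker-db   # resolved by Docker's embedded DNS
      - DB_PORT=5432
    networks:
      - lsio

  speedtest-tracker-db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme
    networks:
      - lsio

networks:
  lsio:
    driver: bridge
```

Note that neither service publishes any ports to the host; only containers attached to the lsio network can reach speedtest-tracker-db:5432.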
You can create bridge networks with the docker network create CLI command, but Docker Compose will do all the heavy lifting for you, so let's work with that for our examples.
If you don't define any networks in your compose project then it will automatically create a <compose project>_default network with default settings and attach all containers in the project to it, but...
Our infrastructure can be broadly split into three categories: Public Facing, Internal, and Builders. The builders should be pretty self-explanatory and are a mix of amd64 and aarch64 boxes that build all our images. Internal services include things like our wiki, monitoring and metrics, automation, Discord bots, etc. The bulk of our public facing services - our website, Discourse forums, Fleet, Info, and Status pages - are hosted on Digital Ocean droplets; however, we use a number of different hosting providers for our other services, in part because it's wise to distribute your eggs, in part because DO don't offer Arm droplets, and in part because running 24/7 droplets of the required number and spec for our builders would simply be more expensive than more traditional hosting providers.
One of the nice things about DO is that we can upload and store our own custom OS images, which makes life much easier when we want to deploy Alpine-based hosts, for example, and they also offer much more detailed metrics and monitoring than our other hosting providers, which really helps with making sure we've got everything sized correctly and aren't overpaying or underperforming. They do also offer a Container Registry service, but it's designed for private, internal use, and their largest offering is only 100 GB of storage, which we would burn through in no time (Webtop alone accounts for over 3 TB of images).
In the 4 years I've been part of Linuxserver I don't think we've had a single outage across our droplets, which is honestly quite impressive, and with weekly backups (daily now available) and on-demand snapshots we've got some pretty solid reliability for our services. However, even the best hosting provider can't protect you against someone screwing up while managing a box and so we've taken some additional steps to try and protect ourselves.
Beyond the metrics and alerting provided by DO, we use Gatus to monitor our services for outages, and it doubles as a public status page. If something goes down we know about it within a few minutes, rather than relying on chance or annoyed users to notify us. We also monitor a number of third-party services that we rely heavily on, such as the Docker registries and the GitHub API, so that when users complain about not being able to pull images we know if there's a wider issue.
We operate a pre-production clone of our live website that allows us to test upgrades and changes without making us look too stupid when everything breaks. We were "inspired" to set it up after an upgrade to Grav (our CMS) broke our custom CSS, and it's already paid dividends.
Did you know that Docker Compose can pull from a remote git repo? No? I'm not surprised as it's an experimental feature and very lightly documented. In fact this blog post was the only official place outside of the Docker git repos that I found reference to it....
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:4:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:7:"compose";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:12:"digitalocean";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:14:"infrastructure";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:3;a:6:{s:4:"data";s:173:" ";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:55:"Why Can't You Just Implement <Thing I Want>?";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:72:"https://www.linuxserver.io/blog/why-cant-you-just-implement-thing-i-want";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:72:"https://www.linuxserver.io/blog/why-cant-you-just-implement-thing-i-want";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Sat, 15 Jun 2024 15:56:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3312:"
It's a common refrain: "Why can't you just implement this feature that would make my life easier? It's not even that hard, it's just this environment variable or this bit of init logic. There are lots of people just like me who would benefit from it."

It should be pretty obvious that we can't implement every feature request we receive, but the important question is why didn't we implement yours?
As The Notorious B.I.G famously said: Mo Code Mo Problems. Almost every time we add functionality to an image, we're increasing its complexity, and the number of ways it can go wrong. Sure you made this change in your homelab and it's all working fine, but you are not representative of our user base. You're not running on a 3.x kernel, or on a prebuilt NAS that still ships version 17.x of Docker, or on Proxmox running a VM running LXC running Docker in Docker for no good reason. You also probably didn't copy/paste our docker compose example as-is and spin it up; a surprising number of people do, so every extra configuration option requires a level of planning and documentation to ensure that those copy/pasters don't end up with a broken install, which brings us to...
We're all volunteers - none of us get paid for this - and we do it in our own time because we enjoy it. Having people yell at you that you're shit because you won't do the things they want is not enjoyable. We have to be realistic about what we're able to support, which is why when images start to require too much ongoing work to keep them running, we have to take a hard look at whether they're worth continuing to maintain (RIP armhf). We maintain over 200 images, so while for you this is a quick fix for your particular use case, for us it could open the gates to a lot of extra work.
It's impossible for us to provide support for every use case, and every conceivable way of running our images, which is why we have a published Support Policy to try and limit the scope of what we're expected to cover. We're not doing it because we hate k8s or Podman, or think running rootless is a bad idea, we're doing it because we have to draw a reasonable support boundary that covers the majority of our users. What this means is that if you come to us and say "I'm using this thing you don't support and I want you to change your images to make my life easier doing it" we're probably going to say no, unless it's something trivial with no risk of impacting anything else.
We can't be all things to all people. Even if we had unlimited resources it just isn't practical to support everything with one image. We design our images with a particular audience in mind: homelab users running...
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:3:{i:0;a:5:{s:4:"data";s:10:"containers";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:4;a:6:{s:4:"data";s:347:" ";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:22:"Advanced Wireguard Hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:54:"https://www.linuxserver.io/blog/advanced-wireguard-hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:54:"https://www.linuxserver.io/blog/advanced-wireguard-hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 16 Nov 2023 11:50:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:4770:"

In a couple of prior articles (here and here) we showcased the capabilities of our WireGuard Docker container with some real world examples. At the time, our WireGuard container only supported one active tunnel at a time so the second article resorted to using multiple WireGuard containers running on the same host and using the host's routing tables to do advanced routing between and through them.
In October 2023, our WireGuard container received a major update and started supporting multiple WireGuard tunnels at a time, which made it much more versatile than before. In this article we'll take advantage of this new capability and showcase a setup that involves a single container that acts as both a server and a client that tunnels peers through multiple redundant VPN connections while maintaining access to the LAN.
Many VPN providers have a limit on the number of devices (or tunnels). This setup will allow you to have an unlimited number of devices tunneled through a single VPN connection while also supporting a fail-over backup connection!
DISCLAIMER: This article is not meant to be a step by step guide, but instead a showcase for what can be achieved with our WireGuard image. We do not officially provide support for routing whole or partial traffic through our WireGuard container (aka split tunneling) as it can be very complex and require specific customization to fit your network setup and your VPN provider's. But you can always seek community support on our Discord server's #other-support channel.
Tested on Ubuntu 23.04, Docker 24.0.5, Docker Compose 2.20.2, with Mullvad.
Configure a standard WireGuard server according to the WireGuard documentation.
wireguard:
  image: lscr.io/linuxserver/wireguard:latest
  container_name: wireguard
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/UTC
    - SERVERURL=wireguard.domain.com
    - SERVERPORT=51820
    - PEERS=1
    - PEERDNS=auto
    - INTERNAL_SUBNET=10.13.13.0
  volumes:
    - /path/to/appdata/config:/config
    - /lib/modules:/lib/modules
  ports:
    - 51820:51820/udp
  sysctls:
    - net.ipv4.conf.all.src_valid_mark=1
  restart: unless-stopped
Start the container and validate that docker logs wireguard contains no errors, and validate that the server is working properly by connecting a client to it.
Copy the 2 WireGuard configs that you get from your VPN providers into files under /config/wg_confs/wg1.conf and /config/wg_confs/wg2.conf.
Make the following changes:
Table = 55111 to distinguish rules for this interface.
PostUp = ip rule add pref 10001 from 10.13.13.0/24 lookup 55111 to forward traffic from the WireGuard server through the tunnel using table 55111 and priority 10001.
PreDown = ip rule del from 10.13.13.0/24 lookup 55111 to remove the previous rule when the interface goes down.
PersistentKeepalive = 25 to keep the tunnel alive.
AllowedIPs = with a value calculated using a WireGuard AllowedIPs Calculator: put 0.0.0.0/0 in the Allowed IPs field and your exclusions in the Disallowed IPs field, for example 192.168.0.0/24, 10.13.13.0/24, and make sure it doesn't include the VPN interface address (...
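Putting those edits together, wg1.conf might look something like the following hedged sketch (the keys, addresses, endpoint, and AllowedIPs are placeholders, not working values; the real ones come from your provider's config and the calculator output):

```ini
# /config/wg_confs/wg1.conf - provider config plus the additions described above
[Interface]
PrivateKey = <client-private-key>
Address = <address-from-provider-config>
Table = 55111
PostUp = ip rule add pref 10001 from 10.13.13.0/24 lookup 55111
PreDown = ip rule del from 10.13.13.0/24 lookup 55111

[Peer]
PublicKey = <provider-public-key>
Endpoint = <provider-endpoint>:51820
PersistentKeepalive = 25
AllowedIPs = <ranges-from-allowedips-calculator>
```

wg2.conf gets the same treatment with its own table number and rule preference so the two tunnels don't fight over routes.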
We maintain a lot of images, which are used by a lot of people, on a lot of platforms, using a lot of tools, and it's not always immediately clear which of those many combinations we support, and will provide support for. This post is an attempt to clarify that situation and provide links to our formal documentation on the matter.
Any exceptions to our support policy will be clearly called out in the readme for the relevant image.
The TL;DR is if you run up to date versions of our currently maintained images using a supported version of Docker, rootfully, on Linux, using docker compose or the docker CLI to create and update your containers, we will support you with any issues you encounter.
Our support policy can be grouped into 4 categories:
With the exception of the last category it's worth noting that unsupported does not mean it won't work, it just means we won't help you make it work. Additionally, if you do manage to get something in the last category working it doesn't change anything, it's still unsupported and a bad idea. Requests for help with anything outside of the Formally Supported category should use the #other-support channel on our Discord server.
Our general support philosophy can be summarised as follows:
With that out of the way, our current support policy can always be found at https://linuxserver.io/supportpolicy and we will make announcements via our usual channels if anything substantial changes.
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:4:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:7:"support";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"policy";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:6;a:6:{s:4:"data";s:231:" ";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:12:"Hello MkDocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:44:"https://www.linuxserver.io/blog/hello-mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:44:"https://www.linuxserver.io/blog/hello-mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Wed, 25 Oct 2023 23:50:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3224:"
As early as 2019, we started centralizing all our documentation for container images, informational snippets, Frequently Asked Questions, as well as full-blown user guides; all of this has lived on GitBook. The reason for going with GitBook at the time was simply its native ability to build off of a git repo, as well as its hosted nature (yes, we want to spend most of our time creating containers, not maintaining infrastructure). We also considered Read The Docs and Bookstack for this use case. The git integration was a killer feature, as it allowed us to add a step to our pipeline project to automatically push updated documentation with the same base as the readme.
As time went on, the LinuxServer team grew, and as an organization our skillset grew with it. Part of this skillset included experience with various other documentation tools. Since we always want to improve, our documentation has also seen multiple iterations. While doing these updates, certain pain points arose:
The sync from our GitHub repo to GitBook has been disabled for a couple of months, as we have been preparing, improving and testing MkDocs. The freeze has been necessary because we adapted the templates our jenkins-builder generates for MkDocs, and we didn't want the current docs to get formatted weirdly, as the syntaxes differ just enough.
The switch to MkDocs allows us to customize the build-output to our liking, with the knowledge we have within the team. It also resolves all the pain-points listed above.
We would just like to give a shout out to GitBook and say thank you for providing us with an OSS license.
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:5:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:13:"documentation";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:6:"readme";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:7:"gitbook";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:7;a:6:{s:4:"data";s:318:" ";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:41:"Docker Tags: So Many Tags, So Little Time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:71:"https://www.linuxserver.io/blog/docker-tags-so-many-tags-so-little-time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:71:"https://www.linuxserver.io/blog/docker-tags-so-many-tags-so-little-time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Mon, 07 Aug 2023 11:31:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:4601:"
As an organization, we maintain hundreds of Docker images and with each image having multiple tags and different naming conventions on different registries, things can become confusing. In this article we attempt to untangle that web and clarify how all the images and tags we push relate to each other.
The format of an image reference is <registry>/<repo>/<image>:<tag> (except for GitLab, which uses the format <registry>/organization/<repo>/<image>:<tag>). If <registry> is not provided, it defaults to docker.io, and if <tag> is not provided, latest is used, so attempting to pull linuxserver/swag will result in pulling docker.io/linuxserver/swag:latest. During a docker pull, the image manifest is first retrieved. The lscr.io/linuxserver/swag:latest tag is a dynamic one and it points to a different image every time a new stable build is pushed. Static tags on the other hand are pushed to the registry once and never updated; repulling the same static tag at a later time will pull the same image as before. lscr.io/linuxserver/swag:arm64v8-2.6.0-ls224 is a static tag as it contains the specific build number (ls224) and will not get overwritten, as the build number will get incremented in the next build and push. We push our images to four public registries. There are subtle differences between these registries in how the repos and images are structured and named.
Docker.io is the default registry. If the user does not define a registry in a command, the Docker client automatically prepends docker.io/. For instance, pulling linuxserver/swag is the same as pulling docker.io/linuxserver/swag.
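Those defaulting rules can be sketched in a few lines. This is an illustration of the rules as described here, not Docker's actual reference parser, and it deliberately ignores the library/ special case for official Docker Hub images:

```python
def normalize_ref(ref: str) -> str:
    """Expand a short image reference using the defaulting rules described above.

    Illustrative only: ignores Docker Hub's library/ prefix for official images.
    """
    parts = ref.split("/")
    # A leading component is treated as a registry if it looks like a
    # hostname (contains '.' or ':'), e.g. docker.io, lscr.io, localhost:5000.
    if len(parts) > 1 and ("." in parts[0] or ":" in parts[0]):
        registry, rest = parts[0], "/".join(parts[1:])
    else:
        registry, rest = "docker.io", ref
    # No tag on the final component means :latest.
    if ":" not in rest.rsplit("/", 1)[-1]:
        rest += ":latest"
    return "{}/{}".format(registry, rest)

print(normalize_ref("linuxserver/swag"))  # docker.io/linuxserver/swag:latest
```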
In the beginning of time, Linuxserver.io decided to set up multiple organizations on Docker Hub to host images. There were separate orgs for different arches such as armhf and aarch64, and there were separate orgs for baseimages and community images. Over time, the secondary arch images were brought under the same orgs as the amd64 ones through the use of multi-arch manifests and those additional orgs were deprecated. The community org that hosted community-provided and maintained images was also deprecated as we realized that the community did not contribute further to the support and maintenance of the images,...
It has been two years since Webtop and our accompanying base images were released with the goal of delivering a full Linux desktop to your web browser. If you were not aware, the backend technology enabling this was a combination of xrdp, Guacamole, and an in-house client, Gclient. Guacamole and xrdp are amazing pieces of software, but are held back by forcing a square peg into a round hole: most remote desktop software is built for a native desktop client and requires a significant amount of overhead to convert it into a format a modern web browser understands, and because of that, fidelity and performance are not great. The folks at noVNC have done a great job of creating an RFB-compliant native VNC client for the browser, but again they are bound by that protocol and can only do so much to optimize for the web.
This led me (TheLamer) down a rabbit hole of trying to find an open source project with the singular goal of delivering Linux to Web Browsers. I am happy to announce, after more than a year of work in the background, that not only have I found it but have joined the KasmVNC team to see this through. This is the fundamental technology driving the new containers that just went live. Some important notes before we get started:
Here is a quick comparison of our previous version vs now: (1080p capture)
On top of a drastic improvement in responsiveness and FPS there is also fidelity with fine grain control over compression, format, and frame-rate to suit your needs.
The real question though is how high can you go?

Lossless: not fake lossless or semi-lossless, but actual true 24-bit RGB, leveraging the Quite OK Image Format decoded client-side with threaded WebAssembly (more info here). Even better, this mode is capable of going over a gigabit at high FPS, so if you have been eyeballing that 10Gb switch you just found your excuse.
When you pair this with the 32-bit float audio and a fullscreen browser window you get that local feel all from the comfort of your browser.
It is difficult to show a demo of what lossless is like so why not try it yourself?
sudo docker run --rm -it --security-opt seccomp=unconfined --shm-size="1gb" -p 3001:3001 lscr.io/linuxserver/webtop:latest bash
Hop into https://yourhost:3001 and swap Settings > Stream Quality > Preset Modes > Lossless. Check Render Native Resolution if you use UI scaling.
As we wrote almost a year ago now, 32-bit Arm has been on life support for a while, and you may have noticed that none of the new images we've released in recent months have offered 32-bit Arm (armhf) versions, and a number of older images have dropped support over the same period. This has been part of our "soft deprecation" of the architecture, as it has become more and more difficult to support with contemporary applications.
Last week, Raspberry Pi OS started defaulting to a 64-bit kernel on boot, if the hardware supports it, which was possibly not the most graceful way to handle things, but here we are. What this means is that, essentially, 32-bit Arm has transitioned from "on life support" to "doomed"; there is obviously still hardware out there that doesn't support 64-bit, but the single biggest pool of users who can move to 64-bit is now having it (sort of) done for them.
A year ago, around 2/3 of our Arm users were still on 32-bit platforms, today it's less than 1/5, and consequently we have taken the difficult decision to formally deprecate 32-bit Arm builds from 2023-07-01. Due to the number of images and how our build pipelines work there's going to be some wiggle room here, but essentially from the 1st of July 2023 we will no longer support any 32-bit platforms.
Old images will continue to work, but will not receive application or OS updates, and we will not provide support for them. Additionally, the latest and arm32v7-latest tags will no longer work for 32-bit Arm, you will need to provide a specific version tag if you wish to pull one of the old images.
If you're currently using our 32-bit Arm images, what are your options?
Run uname -m from a terminal session - a response of armv7l or armhf means you're running a 32-bit kernel.
getconf LONG_BIT will give you a response of 32 if this is the case.
As before, we know this probably isn't what you want to hear, but unfortunately technology marches forward and 32-bit is doomed. Hopefully by providing as much notice as possible you'll have time to find a solution that works for you.
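If you're checking more than a couple of hosts, the same checks can be scripted. A small sketch using Python's standard library; it reports the kernel's machine string (what uname -m shows) and the word size of the running interpreter, which approximates getconf LONG_BIT:

```python
import platform
import sys

def arch_report():
    """Mirror the uname -m / getconf LONG_BIT checks from a script."""
    machine = platform.machine()  # e.g. armv7l/armhf = 32-bit kernel, aarch64 = 64-bit
    userland_bits = 64 if sys.maxsize > 2**32 else 32  # word size of this interpreter
    return machine, userland_bits

machine, bits = arch_report()
print("machine={} userland={}-bit".format(machine, bits))
```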
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:6:{i:0;a:5:{s:4:"data";s:5:"linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:5:"armhf";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:12:"announcement";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:11:"deprecation";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:3:"arm";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}}}s:27:"http://www.w3.org/2005/Atom";a:1:{s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:0:"";s:7:"attribs";a:1:{s:0:"";a:3:{s:4:"href";s:35:"https://www.linuxserver.io/blog.rss";s:3:"rel";s:4:"self";s:4:"type";s:19:"application/rss+xml";}}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}}}}}}}}s:4:"type";i:128;s:7:"headers";a:10:{s:6:"server";s:5:"nginx";s:4:"date";s:29:"Mon, 16 Dec 2024 21:44:34 GMT";s:12:"content-type";s:34:"application/rss+xml; charset=utf-8";s:10:"connection";s:10:"keep-alive";s:12:"x-powered-by";s:10:"PHP/8.3.13";s:10:"set-cookie";s:195:"linux-server-io-b4e2fbe59292f23ee327e4043b64b395=psutna32p255312hcb5cpi6co8; expires=Mon, 16 Dec 2024 22:14:34 GMT; Max-Age=1800; path=/; domain=www.linuxserver.io; secure; HttpOnly; SameSite=Lax";s:6:"pragma";s:8:"no-cache";s:13:"cache-control";s:14:"max-age=604800";s:7:"expires";s:29:"Mon, 23 Dec 2024 21:44:34 GMT";s:4:"etag";s:34:""d9cbdd56873a302ace6adc43c4f2ebe3"";}s:5:"build";s:14:"20240924100504";s:5:"mtime";i:1734385474;s:3:"md5";s:32:"a0b341c9ef194e1ef1e795ad7a75ba36";}