Files
docker-configs/freshRSS/data/cache/94de572833a209eb996461e37207ae0eddebaacf.spc

402 lines
58 KiB
Plaintext
Executable File

a:6:{s:5:"child";a:1:{s:0:"";a:1:{s:3:"rss";a:1:{i:0;a:6:{s:4:"data";s:6:"
";s:7:"attribs";a:1:{s:0:"";a:1:{s:7:"version";s:3:"2.0";}}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:1:{s:7:"channel";a:1:{i:0;a:6:{s:4:"data";s:317:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:2:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:19:"LinuxServer.io Blog";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:31:"https://www.linuxserver.io/blog";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:13:"Posts n stuff";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"language";a:1:{i:0;a:5:{s:4:"data";s:2:"en";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:13:"lastBuildDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 16 Nov 2023 11:50:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"item";a:10:{i:0;a:6:{s:4:"data";s:347:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:22:"Advanced Wireguard Hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:54:"https://www.linuxserver.io/blog/advanced-wireguard-hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:54:"https://www.linuxserver.io/blog/advanced-wireguard-hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 16 Nov 2023 11:50:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:4770:"
<img alt="" src="https://www.linuxserver.io/images/d/6/d/8/4/d6d84245c3136f50224e55df8ccfe9eab152762e-hub2.png" />
<p><img title="hub" alt="hub" src="/user/pages/03.blog/advanced-wireguard-hub/hub.png"></p>
<p>In a couple of prior articles (<a href="https://www.linuxserver.io/blog/routing-docker-host-and-container-traffic-through-wireguard"><strong>here</strong></a> and <a href="https://www.linuxserver.io/blog/advanced-wireguard-container-routing"><strong>here</strong></a>) we showcased the capabilities of <a href="https://github.com/linuxserver/docker-wireguard">our WireGuard Docker container</a> with some real world examples. At the time, our WireGuard container only supported one active tunnel at a time so the second article resorted to using multiple WireGuard containers running on the same host and using the host's routing tables to do advanced routing between and through them.</p>
<p>In October 2023, our WireGuard container received a <a href="https://github.com/linuxserver/docker-wireguard#versions">major update</a> and started supporting multiple WireGuard tunnels at a time, which made it much more versatile than before. In this article we'll take advantage of this new capability and showcase a setup that involves a single container that acts as both a server and a client that tunnels peers through multiple redundant VPN connections while maintaining access to the LAN.</p>
<p>Many VPN providers have a limit on the number of devices (or tunnels). This setup will allow you to have an unlimited amount of devices tunneled through a single VPN connection while also supporting a fail-over backup connection!</p>
<p>DISCLAIMER: This article is not meant to be a step by step guide, but instead a showcase for what can be achieved with our WireGuard image. We do not officially provide support for routing whole or partial traffic through our WireGuard container (aka split tunneling) as it can be very complex and require specific customization to fit your network setup and your VPN provider's. But you can always seek community support on our <a href="https://discord.gg/YWrKVTn">Discord server</a>'s #other-support channel. </p>
<p>Tested on Ubuntu 23.04, Docker 24.0.5, Docker Compose 2.20.2, with Mullvad.</p>
<h2>Requirements</h2>
<ul>
<li>A working instance of our <a href="https://github.com/linuxserver/docker-wireguard">Wireguard container</a> in server mode.</li>
</ul>
<h2>Initial WireGuard Server Configuration</h2>
<p>Configure a standard WireGuard server according to the <a href="https://github.com/linuxserver/docker-wireguard">WireGuard documentation</a>.</p>
<pre><code class="language-Yaml"> wireguard:
image: lscr.io/linuxserver/wireguard:latest
container_name: wireguard
cap_add:
- NET_ADMIN
- SYS_MODULE
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
- SERVERURL=wireguard.domain.com
- SERVERPORT=51820
- PEERS=1
- PEERDNS=auto
- INTERNAL_SUBNET=10.13.13.0
volumes:
- /path/to/appdata/config:/config
- /lib/modules:/lib/modules
ports:
- 51820:51820/udp
sysctls:
- net.ipv4.conf.all.src_valid_mark=1
restart: unless-stopped</code></pre>
<p>Start the container and validate that <code>docker logs wireguard</code> contains no errors, and validate that the server is working properly by connecting a client to it.</p>
<h2>VPN Client Tunnels Configuration</h2>
<p>Copy the 2 WireGuard configs that you get from your VPN providers into files under <code>/config/wg_confs/wg1.conf</code> and <code>/config/wg_confs/wg2.conf</code>.</p>
<h3>Example wg1.conf</h3>
<p>Make the following changes:</p>
<ul>
<li>Add <code>Table = 55111</code> to distinguish rules for this interface.</li>
<li>Add <code>PostUp = ip rule add pref 10001 from 10.13.13.0/24 lookup 55111</code> to forward traffic from the wireguard server through the tunnel using table 55111 and priority 10001.</li>
<li>Add <code>PreDown = ip rule del from 10.13.13.0/24 lookup 55111</code> to remove the previous rule when the interface goes down.</li>
<li>Add <code>PersistentKeepalive = 25</code> to keep the tunnel alive.</li>
<li>Add <code>AllowedIPs =</code> and calculate the value using a <a href="https://www.procustodibus.com/blog/2021/03/wireguard-allowedips-calculator/">Wireguard AllowedIPs Calculator</a>.
<ul>
<li>Write <code>0.0.0.0/0</code> in the <code>Allowed IPs</code> field.</li>
<li>Write your LAN subnet and Wireguard server subnet in the <code>Disallowed IPs</code> field, for example: <code>192.168.0.0/24, 10.13.13.0/24</code>, make sure it doesn't include the VPN interface address (...</li></ul></li></ul>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:9:{i:0;a:5:{s:4:"data";s:5:"linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:6:"guides";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"how to";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:3:"vpn";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:9:"wireguard";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:6;a:5:{s:4:"data";s:7:"routing";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:7;a:5:{s:4:"data";s:12:"split tunnel";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:8;a:5:{s:4:"data";s:7:"mullvad";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:1;a:6:{s:4:"data";s:202:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:18:"Our Support Policy";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:50:"https://www.linuxserver.io/blog/our-support-policy";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:50:"https://www.linuxserver.io/blog/our-support-policy";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 26 Oct 2023 11:00:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:2558:"
<img alt="" src="https://www.linuxserver.io/images/6/7/f/1/b/67f1bb22f86142002e43a5ea48edb6a96789e80e-167379978354b8499d246o.jpg" />
<p>We maintain a lot of images, which are used by a lot of people, on a lot of platforms, using a lot of tools, and it's not always immediately clear which of those many combinations we support, and will provide support for. This post is an attempt to clarify that situation and provide links to our formal documentation on the matter.</p>
<p>Any exceptions to our support policy will be clearly called out in the readme for the relevant image.</p>
<p>The TL;DR is if you run up to date versions of our currently maintained images using a supported version of Docker, rootfully, on Linux, using docker compose or the docker CLI to create and update your containers, we will support you with any issues you encounter.</p>
<p>Our support policy can be grouped into 4 categories:</p>
<ul>
<li>Formally Supported</li>
<li>Reasonable Endeavours Support</li>
<li>Unsupported</li>
<li>Unsupported and Known To Be Broken</li>
</ul>
<p>With the exception of the last category it's worth noting that unsupported does not mean it won't work, it just means we won't help you make it work. Additionally, if you <em>do</em> manage to get something in the last category working it doesn't change anything, it's still unsupported and a bad idea. Requests for help with anything outside of the Formally Supported category should use the <code>#other-support</code> channel on our <a href="https://discord.gg/linuxserver">Discord</a> server.</p>
<p>Our general support philosophy can be summarised as follows:</p>
<ul>
<li>If we build and test on it, we support you running on it.</li>
<li>If it's not formally supported but we use it ourselves or we know it works, we'll make reasonable endeavours to help you with issues.</li>
<li>If we don't have knowledge of, or the ability to test/replicate your configuration, we will not provide support for it (though other community members may still be able to help you).</li>
<li>If we know a configuration will not work, or has serious issues, we will not provide any support for it and will advise you of the risks.</li>
</ul>
<p>With that out of the way, our current support policy can always be found at <a href="https://linuxserver.io/supportpolicy">https://linuxserver.io/supportpolicy</a> and we will make announcements via our usual channels if anything substantial changes.</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:4:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:7:"support";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"policy";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:2;a:6:{s:4:"data";s:231:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:12:"Hello MkDocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:44:"https://www.linuxserver.io/blog/hello-mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:44:"https://www.linuxserver.io/blog/hello-mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Wed, 25 Oct 2023 23:50:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3224:"
<img alt="" src="https://www.linuxserver.io/images/2/b/8/1/7/2b81795e9aee3d9d6c9820513b44cff9528634e8-archive-18501701920.jpg" />
<p>As early as 2019, we started centralizing all our documentation for container images, informational snippets, Frequently Asked Questions as well as full-blown user-guides, this has all lived on GitBook. The reason for going with GitBook at the time was simply because of its native ability to build off of a git repo, as well as its hosted nature (yes, we want to spend most of our time creating containers, not maintaining infrastructure). We were also considering Read The Docs and Bookstack for this usecase. The git integration was a killer feature, as it allowed us to implement it as step in our <a href="https://www.linuxserver.io/blog/2019-02-21-the-lsio-pipeline-project">pipeline project</a> to automatically push updated documentation with the same base as the readme. </p>
<p>As time went on, the LinuxServer team grew. Which as an organization, the skillset also grew. A part of this skillset included various takes on other documentation tools. Since we always want improve, our documentation has also seen multiple iterations. While doing these updates certain pain points arose:</p>
<ul>
<li>No ability to easily test changes, other tools we knew for documenting let us spin up a development instance.
<ul>
<li>This caused multiple pushes to master in order to test i.e. GitBook specific markdown syntax. </li>
</ul></li>
<li>No automatic index generation. For some reason the git integration we depend on does not automatically update the index of pages, meaning that while we push a file for a new container image into the repo, it does not get included on the site.
<ul>
<li>For 3 years, listing the latest containers in the sidebar was a <a href="https://github.com/linuxserver/docker-documentation/commits/master?before=a75f02127ae30e06001aa32619d908f58dd906ad%2B35&amp;branch=master&amp;path%5B%5D=SUMMARY.md&amp;qualified_name=refs%2Fheads%2Fmaster">manual task</a>, which we automated in <a href="https://github.com/linuxserver/docker-documentation/pull/63">Nov 2022</a>. </li>
</ul></li>
<li>GitBook also doesn't build pages for markdown-files that are in the repo, but not defined in the index, which means we couldn't publish documentation for "pre-release" containers, which are applications we package despite there being no stable build upstream. </li>
</ul>
<h2>Freezing GitBook</h2>
<p>The sync from our GitHub repo to GitBook has been disabled for a couple of months, as we have been preparing, improving and testing MkDocs. The freeze has been necessary because we adapted the templates our jenkins-builder generates for MkDocs, and we didn't want the current docs to get formatted weirdly, as the syntaxes differ just enough.</p>
<p>The switch to MkDocs allows us to customize the build-output to our liking, with the knowledge we have within the team. It also resolves all the pain-points listed above.</p>
<h3>Thanks GitBook</h3>
<p>We would just like to give a shout out to GitBook and say thank you for providing us with a OSS license.</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:5:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:13:"documentation";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:6:"readme";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"mkdocs";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:7:"gitbook";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:3;a:6:{s:4:"data";s:318:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:41:"Docker Tags: So Many Tags, So Little Time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:71:"https://www.linuxserver.io/blog/docker-tags-so-many-tags-so-little-time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:71:"https://www.linuxserver.io/blog/docker-tags-so-many-tags-so-little-time";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Mon, 07 Aug 2023 11:31:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:4601:"
<img alt="" src="https://www.linuxserver.io/images/9/0/7/f/4/907f4e40abf3476189235d75be300e11b90c3774-angele-kamp-kaeauitiwnc-unsplash.jpg" />
<p>As an organization, we maintain hundreds of Docker images and with each image having multiple tags and different naming conventions on different registries, things can become confusing. In this article we attempt to untangle that web and clarify how all the images and tags we push relate to each other.</p>
<h2>Table Of Contents:</h2>
<ul>
<li><a href="#nomenclature">Nomenclature</a></li>
<li><a href="#registries-used-by-linuxserver-io">Registries Used By Linuxserver.io</a>
<ul>
<li><a href="#docker-hub-docker-io">Docker Hub (docker.io)</a></li>
<li><a href="#github-container-registry-ghcr-io">Github Container Registry (ghcr.io)</a></li>
<li><a href="#quay">Quay</a></li>
<li><a href="#gitlab">GitLab</a></li>
<li><a href="#lscr-io-honorable-mention">Lscr.io (Honorable Mention)</a></li>
</ul></li>
<li><a href="#branch-tags">Branch Tags</a></li>
<li><a href="#manifests">Manifests</a>
<ul>
<li><a href="#1-branch-tag-dynamic">Branch tag (dynamic)</a></li>
<li><a href="#2-build-tag-static">Build tag (static)</a></li>
<li><a href="#3-version-tag-dynamic">Version tag (dynamic)</a></li>
<li><a href="#4-pseudo-semver-tag-dynamic">Pseudo SemVer tag (dynamic)</a></li>
</ul></li>
<li><a href="#dev-and-pr-images-and-tags">Dev and PR Images and Tags</a></li>
<li><a href="#semver-info">SemVer Info</a></li>
</ul>
<h2>Nomenclature</h2>
<ul>
<li><strong>Registry:</strong> Docker registry is the location/server where the images are stored. The default registry is <code>docker.io</code>.</li>
<li><strong>Image tag:</strong> Each docker image is assigned a specific tag. If no tag is defined when pushing or pulling, the default tag <code>latest</code> is used. The format is <code>&lt;registry&gt;/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt;</code> (except for Gitlab, which uses the format <code>&lt;registry&gt;/organization/&lt;repo&gt;/&lt;image&gt;:&lt;tag&gt;</code>). If <code>&lt;registry&gt;</code> is not provided, it defaults to <code>docker.io</code>, so attempting to pull <code>linuxserver/swag</code> will result in pulling <code>docker.io/linuxserver/swag:latest</code>.</li>
<li><strong>Manifest:</strong> Docker image manifest contains information on the size, layers and digest of an image. A manifest can also contain a list of these items for multiple images such as a multi-arch image manifest. When issuing a <code>docker pull</code>, the image manifest is first retrieved.</li>
<li><strong>Dynamic tag:</strong> If a docker image tag is updated or overwritten by newer images over time, it is considered a dynamic image tag. Pulling that tag at different times may result in pulling different images. <code>lscr.io/linuxserver/swag:latest</code> tag is a dynamic one and it points to a different image every time a new stable build is pushed. Static tags on the other hand are pushed to the registry once and never updated. Repulling the same static tag at a later time will pull the same image as before. <code>lscr.io/linuxserver/swag:arm64v8-2.6.0-ls224</code> is a static tag as it contains the specific build number (<code>ls224</code>) and will not get overwritten as the build number will get incremented in the next build and push.</li>
</ul>
<h2>Registries Used By Linuxserver.io</h2>
<p>We push our images to four public registries. There are subtle differences between these registries in how the repos and images are structured and named.</p>
<h3>Docker Hub (docker.io)</h3>
<p>Docker.io is the default registry. If the user does not define a registry in a command, docker client automatically adds <code>docker.io/</code>. For instance pulling <code>linuxserver/swag</code> is the same as pulling <code>docker.io/linuxserver/swag</code></p>
<p>In the beginning of time, Linuxserver.io decided to set up multiple organizations on Docker Hub to host images. There were separate orgs for different arches such as <code>armhf</code> and <code>aarch64</code>, and there were separate orgs for baseimages and community images. Over time, the secondary arch images were brought under the same orgs as the <code>amd64</code> ones through the use of multi-arch manifests and those additional orgs were deprecated. The community org that hosted community provided and maintained images was also deprecated as we realized that the community did not contribute further into support and maintenance of the images,...</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:8:{i:0;a:5:{s:4:"data";s:5:"linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:3:"tag";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:5:"image";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:8:"registry";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:4:"GHCR";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:6;a:5:{s:4:"data";s:4:"quay";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:7;a:5:{s:4:"data";s:6:"gitlab";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:4;a:6:{s:4:"data";s:463:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:42:"Webtop 2.0 - The year of the Linux desktop";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:72:"https://www.linuxserver.io/blog/webtop-2-0-the-year-of-the-linux-desktop";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:72:"https://www.linuxserver.io/blog/webtop-2-0-the-year-of-the-linux-desktop";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Fri, 21 Jul 2023 19:40:48 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:4045:"
<img alt="" src="https://www.linuxserver.io/images/4/d/4/3/6/4d43690adcf68700937e797299fc38bf63fa92f4-registry.png" />
<p>It has been two years since <a href="https://github.com/linuxserver/docker-webtop">Webtop</a> and our accompanying base images were released with the goal of delivering a full Linux Desktop to your Web Browser.
If you were not aware the backend technology enabling this was a combination of <a href="https://github.com/neutrinolabs/xrdp">xrdp</a>, <a href="https://guacamole.apache.org/">Guacamole</a>, and an in house client <a href="https://github.com/linuxserver/gclient">Gclient</a>. Guacamole and xrdp are amazing pieces of software, but are held back by forcing a square peg into a round hole. Most remote desktop software is built for a native desktop client and requires a significant amount of overhead to convert it into a format a modern Web browser understands, because of that fidelity and performance is not great. The folks at <a href="https://novnc.com/info.html">noVNC</a> have done a great job of creating an RFB compliant native VNC client for the browser, but again they are bound by that protocol and can only do so much to optimize for web. </p>
<p>This led me (TheLamer) down a rabbit hole of trying to find an open source project with the singular goal of delivering Linux to Web Browsers. I am happy to announce, after more than a year of work in the background, that not only have I found it but have joined the <a href="https://www.kasmweb.com/kasmvnc">KasmVNC</a> team to see this through. This is the fundamental technology driving the new containers that just went live. Some important notes before we get started: </p>
<ul>
<li><strong>Armhf is being deprecated, it has simply become too difficult to continue to support the porting of everything that makes up desktop environments and the accompanying applications. Almost every single board computer supports aarch64 at this point, so please take the time to install an arm64 OS, it will be worth it.</strong></li>
<li><strong>For best results use a Chromium based browser. Firefox works, but their rendering engine and WebAssembly integration is simply not as fast.</strong></li>
</ul>
<h2>We are approaching reality</h2>
<p>Here is a quick comparison of our previous version vs now: (1080p capture)</p>
<p><video controls="controls" title="fidelity" alt="fidelity"><source src="/user/pages/03.blog/webtop-2-0-the-year-of-the-linux-desktop/fidelity.mp4?loading=auto">Your browser does not support the video tag.</source></video></p>
<p>On top of a drastic improvement in responsiveness and FPS there is also fidelity with fine grain control over compression, format, and frame-rate to suit your needs.<br>
The real question though is how high can you go?<br>
<img title="all-go-to-11" alt="all-go-to-11" src="/user/pages/03.blog/webtop-2-0-the-year-of-the-linux-desktop/all-go-to-11.gif"><br>
Lossless, not fake lossless, or semi lossless, but actual true 24 bit RGB leveraging the <a href="https://qoiformat.org/">Quite OK Image Format</a> decoded client-side with threaded webassembly, more info <a href="https://www.kasmweb.com/docs/latest/how_to/lossless.html">here</a>. Even better this mode is capable of going over a gigabit at high FPS so if you have been eyeballing that 10gb switch you just found your excuse.<br>
When you pair this with the 32-bit float audio and a fullscreen browser window you get that local feel all from the comfort of your browser. </p>
<p>It is difficult to show a demo of what lossless is like so why not try it yourself? </p>
<pre><code>sudo docker run --rm -it --security-opt seccomp=unconfined --shm-size="1gb" -p 3001:3001 lscr.io/linuxserver/webtop:latest bash</code></pre>
<p>Hop into https://yourhost:3001 and swap <strong>Settings &gt; Stream Quality &gt; Preset Modes &gt; Lossless</strong>. Check <strong>Render Native Resolution</strong> if you use UI scaling.</p>
<h2>New</h2>...
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:13:{i:0;a:5:{s:4:"data";s:6:"ubuntu";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:3:"VDI";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:4:"XFCE";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:3:"KDE";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:6:"Alpine";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:6;a:5:{s:4:"data";s:4:"MATE";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:7;a:5:{s:4:"data";s:10:"Arch Linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:8;a:5:{s:4:"data";s:6:"Debian";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:9;a:5:{s:4:"data";s:6:"Webtop";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:10;a:5:{s:4:"data";s:7:"KasmVNC";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:11;a:5:{s:4:"data";s:6:"Fedora";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:12;a:5:{s:4:"data";s:14:"Remote Desktop";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:5;a:6:{s:4:"data";s:260:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:21:"A Farewell To Arm(hf)";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:52:"https://www.linuxserver.io/blog/a-farewell-to-arm-hf";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:52:"https://www.linuxserver.io/blog/a-farewell-to-arm-hf";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 06 Apr 2023 12:00:00 +0100";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3239:"
<img alt="" src="https://www.linuxserver.io/images/5/3/e/f/5/53ef5f81117e764bc6d2279f2b2e19ed2ea17fd7-banner-armhf.png" />
<h3>The State of Play</h3>
<p>As we wrote <a href="https://www.linuxserver.io/blog/end-of-an-arch">almost a year ago now</a>, 32-bit Arm has been on life support for a while, and you may have noticed that none of the new images we've released in recent months have offered 32-bit Arm (armhf) versions, and a number of older images have dropped support over the same period. This has been part of our "soft deprecation" of the architecture, as it has become more and more difficult to support with contemporary applications.</p>
<p>Last week, Raspberry Pi OS <a href="https://github.com/raspberrypi/linux/issues/5402">started defaulting to a 64-bit kernel on boot</a>, if the hardware supports it, which was possibly not the most graceful way to handle things, but here we are. What this means is that, essentially, 32-bit Arm has transitioned from "on life support" to "doomed"; there is obviously still hardware out there that doesn't support 64-bit, but the single biggest pool of users who <em>can</em> move to 64-bit is now having it (sort of) done for them.</p>
<h3>What Now?</h3>
<p>A year ago, around 2/3 of our Arm users were still on 32-bit platforms, today it's less than 1/5, and consequently we have taken the difficult decision to formally deprecate 32-bit Arm builds from 2023-07-01. Due to the number of images and how our build pipelines work there's going to be some wiggle room here, but essentially from the 1st of July 2023 we will no longer support any 32-bit platforms.</p>
<p>Old images will continue to work, but will not receive application or OS updates, and we will not provide support for them. Additionally, the <code>latest</code> and <code>arm32v7-latest</code> tags will no longer work for 32-bit Arm, you will need to provide a specific version tag if you wish to pull one of the old images.</p>
<p>If you're currently using our 32-bit Arm images, what are your options?</p>
<ul>
<li>If you're not sure what architecture you're on, run <code>uname -m</code> from a terminal session - a response of <code>armv7l</code> or <code>armhf</code> means you're running a 32-bit kernel.</li>
<li>You may also have a 64-bit kernel with a 32-bit userspace, especially if you're running an OS like LibreELEC or OSMC, which likely means a 32-bit Docker install. Running <code>getconf LONG_BIT</code> will give you a response of <code>32</code> if this is the case.</li>
<li>If your hardware is Armv8 and offers support for 64-bit, such as the Pi 3 or 4, or Zero 2 W, then look to migrate to a 64-bit OS.</li>
<li>If your hardware is Armv7 or Armv6 you don't have a lot of options other than replacing it or accepting the risk and inconvenience of remaining on old versions of the images, the hardware simply doesn't support 64-bit.</li>
</ul>
<p>As before, we know this probably isn't what you want to hear, but unfortunately technology marches forward and 32-bit is doomed. Hopefully by providing as much notice as possible you'll have time to find a solution that works for you.</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:6:{i:0;a:5:{s:4:"data";s:5:"linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:5:"armhf";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:12:"announcement";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:11:"deprecation";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:3:"arm";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:6;a:6:{s:4:"data";s:202:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:19:"Docker Team Changes";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:51:"https://www.linuxserver.io/blog/docker-team-changes";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:51:"https://www.linuxserver.io/blog/docker-team-changes";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 16 Mar 2023 12:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3127:"
<img alt="" src="https://www.linuxserver.io/images/c/a/f/d/4/cafd48590df8337821148aadd17d6abd08ed5adc-45923164272d160bda67fc.jpg" />
<p>As you may already know, Docker Inc. is <a href="https://github.com/docker/hub-feedback/issues/2314">sunsetting Free team organizations</a>.</p>
<h2>What does this mean for Linuxserver?</h2>
<p>The Linuxserver organization is safe, as it is "Sponsored OSS". This is a program Docker introduced after enforcing pull-limits, being part of this program means our images do not count in the quota for the users' pull-limit. The perks this gives a sponsored organization are comparable to the "Team" organization plan.</p>
<p>In our daily operations we use 3 organizations, this is because we scope images three ways: <code>linuxserver</code> for our production images, <code>lsiodev</code> for non-live branch builds, and <code>lspipepr</code> for PR builds. This means that we need to figure out how to handle lsiodev and lspipepr, as they are currently both Free Team organizations.</p>
<p>Docker's comms on this change have been appalling, but they have at least made a commitment that they won't free up the namespace for any account they delete, which means that at least we don't have to worry about bad actors snapping up popular org names and hosting malicious images on them.</p>
<p>As an end-user, if you are using <a href="https://www.linuxserver.io/blog/wrap-up-warm-for-the-winter">lscr.io</a> for your images, you won't notice any changes regardless of what happens.</p>
<h2>What is a Team organization?</h2>
<p>It is an organizational unit which allows multiple users access to a shared "namespace" on Docker Hub. The namespace refers to the <code>linuxserver</code> part of <code>docker.io/linuxserver/swag:latest</code>. Being able to share this namespace between users is beneficial for multiple reasons; the linuxserver account is an organization primarily to reduce credential sharing, and for the ability to have a "bot" account, which is responsible for the actual pushing of images.</p>
<p>Not having to deal with shared credentials lowers the barrier of entry for onboarding new maintainers, especially if there is no release pipeline set up for the project. There is also quite a lot of trust involved with sharing credentials, you are essentially handing over ownership. A good organization implementation also includes permission management, Docker Hub has just enough roles to be able to say who can push images.</p>
<h2>I publish images under a Free Team organization, what can I do?</h2>
<ul>
<li>You <em>can</em> stop using Docker Hub, or at least not use it as the sole registry you push to. GHCR (GitHub), GitLab, and Quay are all alternatives to consider.</li>
<li>You can apply to the Docker-Sponsored Open Source (DSOS) program <a href="https://www.docker.com/community/open-source/application/">here</a>.</li>
<li>You can upgrade your Docker Hub organization to a paid subscription, although they currently have a minimum 5 user count.</li>
</ul>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:4:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:4:"news";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:10:"docker hub";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:7;a:6:{s:4:"data";s:202:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:9:"Brand New";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:41:"https://www.linuxserver.io/blog/brand-new";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:41:"https://www.linuxserver.io/blog/brand-new";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Thu, 02 Mar 2023 16:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:2645:"
<img alt="" src="https://www.linuxserver.io/images/9/c/a/2/3/9ca23cecc82db051a5cf64c3dc712e4688dd59ac-linuxservermedium.png" />
<p>If you use our base images for your own projects, or fork our downstream images to modify them, you're probably aware that we ask you to change the branding that appears in the container init logs to make it clear that your image is not associated with us. This is for your benefit as much as ours: we aren't well-equipped to provide your users with support, and you don't want them crediting us for your work.</p>
<p>As part of some recent changes, we realised that the current approach doesn't work very well. Most people don't bother, or realise they need, to change the branding, and for those who do it's a bit of a pain. So we've changed things around.</p>
<p>From today, if you build from one of our modernised base images and don't change anything, your init logs will look something like this:</p>
<p><img title="Container Init with Custom Build branding" alt="custombuild" src="/user/pages/03.blog/brand-new/custombuild.png" /></p>
<p>If you want to add your own branding, when using our base images <em>or</em> a forked downstream one, just place a file called <code>branding</code> containing the text you want to use into the <code>/etc/s6-overlay/s6-rc.d/init-adduser</code> folder of your image. The branding file will replace the highlighted section of the init:</p>
<p><img title="Container Init with Custom Branding section highlighted" alt="custombuild_highlight" src="/user/pages/03.blog/brand-new/custombuild_highlight.png" /></p>
<p>On start-up, the base image will automatically load the branding into its init, allowing you to inflict whatever ASCII art you like on your users:</p>
<p><img title="Container init with some horrible Custom Branding ASCII art" alt="myimages" src="/user/pages/03.blog/brand-new/myimages.png" /></p>
<p>The affected bases are:</p>
<ul>
<li>Alpine 3.16, 3.17, Edge</li>
<li>Ubuntu Focal, Jammy</li>
<li>Arch</li>
<li>Fedora 37</li>
</ul>
<p>As well as any derivative base images that use them, such as our nginx and rdesktop bases. We'll be slowly phasing out our older base images over the next few months.</p>
<p>Hopefully this makes it simpler for everyone to manage the branding of your images when using our bases.</p>
<p>A final note: if you're already overriding the adduser init to do custom branding, it'll keep working, but we'd recommend switching to the new approach so that you don't miss out on any future changes to that init step.</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:4:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:12:"announcement";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:8:"branding";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:8;a:6:{s:4:"data";s:347:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:36:"Advanced Wireguard Container Routing";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:68:"https://www.linuxserver.io/blog/advanced-wireguard-container-routing";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:68:"https://www.linuxserver.io/blog/advanced-wireguard-container-routing";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Tue, 14 Feb 2023 06:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3880:"
<img alt="" src="https://www.linuxserver.io/images/4/2/8/f/6/428f6418f8b4df71d49fc9b845451def53939a29-ralph-ravi-kayden-rwgphuj4l0e-unsplash.jpg" />
<h2>Introduction</h2>
<p>WireGuard at this point needs no introduction as it became quite ubiquitous especially within the homelab community due to its ease of use and high performance. We previously showcased several ways to route host and container traffic through <a href="https://github.com/linuxserver/docker-wireguard">our WireGuard docker container</a> in a <a href="https://www.linuxserver.io/blog/routing-docker-host-and-container-traffic-through-wireguard">prior blog article</a>. In this article, we will showcase a more complex setup utilizing multiple WireGuard containers on a VPS to achieve split tunneling so we can send outgoing connections through a commercial VPN while maintaining access to homelab when remote.</p>
<p><img title="wireguard" alt="wireguard" src="/user/pages/03.blog/advanced-wireguard-container-routing/wireguard.svg"></p>
<h2>Background/Motivation</h2>
<p>The setup showcased in this article was born out of my specific need to a) tunnel my outgoing connections through a VPN provider like Mullvad for privacy, b) have access to my homelab, and c) maintain as fast of a connection as possible, while on the go (ie. on public wifi at a hotel or a coffeeshop). Needs a) and b) would be easily achieved with a WireGuard server and a client running on my home router (OPNsense), which I already have. However, my cable internet provider's anemic upload speeds translate into a low download speed when connected to my home router remotely. To achieve c), I had to rely on a VPS with a faster connection and split the tunneling between Mullvad and my home. Alternatively, I could split the tunnel on each client's WireGuard config, but that is a lot more work, and each client would use up a separate private key, which becomes an issue with commercial VPNs (Mullvad allows up to 5 keys).</p>
<p>In this article, we will set up 3 WireGuard containers, one in server mode and 2 in client modes, on a <a href="https://contabo.com/en">Contabo</a> VPS server. One client will connect to <a href="https://mullvad.net/">Mullvad</a> for most outgoing connections, and the other will connect to my <a href="https://opnsense.org/">OPNsense</a> router at home for access to my homelab.</p>
<p>Before we start, I should add that while having other containers use the WireGuard container's network (ie. <code>network_mode: service:wireguard</code>) seems like the simplest approach, it has some major drawbacks:</p>
<ul>
<li>If the WireGuard container is recreated (or in some cases restarted), the other containers also need to be restarted or recreated (depending on the setup).</li>
<li>Multiple apps using the same WireGuard container's network may have port clashes.</li>
<li>With our current implementation, a WireGuard container (ie. server) cannot use another WireGuard container's (ie. client) network due to interface clash (<code>wg0</code> is hardcoded).</li>
</ul>
<p>The last point listed becomes a deal breaker for this exercise. Therefore we will rely on routing tables to direct connections between and through WireGuard containers.</p>
<p><strong> DISCLAIMER: This article is not meant to be a step by step guide, but instead a showcase for what can be achieved with our WireGuard image. We do not officially provide support for routing whole or partial traffic through our WireGuard container (aka split tunneling) as it can be very complex and require specific customization to fit your network setup and your VPN provider's. But you can always seek community support on <a href="https://discord.gg/YWrKVTn" target="_blank">our Discord server</a>'s #other-support</strong>...</p>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:9:{i:0;a:5:{s:4:"data";s:5:"linux";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:6:"guides";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"how to";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:3:"vpn";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:5;a:5:{s:4:"data";s:9:"wireguard";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:6;a:5:{s:4:"data";s:7:"routing";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:7;a:5:{s:4:"data";s:12:"split tunnel";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:8;a:5:{s:4:"data";s:7:"mullvad";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}i:9;a:6:{s:4:"data";s:231:"
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";s:5:"child";a:1:{s:0:"";a:6:{s:5:"title";a:1:{i:0;a:5:{s:4:"data";s:24:"How Is Container Formed?";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:55:"https://www.linuxserver.io/blog/how-is-container-formed";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:4:"guid";a:1:{i:0;a:5:{s:4:"data";s:55:"https://www.linuxserver.io/blog/how-is-container-formed";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:7:"pubDate";a:1:{i:0;a:5:{s:4:"data";s:31:"Fri, 03 Feb 2023 13:00:00 +0000";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:11:"description";a:1:{i:0;a:5:{s:4:"data";s:3369:"
<img alt="" src="https://www.linuxserver.io/images/7/2/8/6/8/72868f05fe3f8505881029a03107cf8462515b1f-31744142112a26c56ff04k.jpg" />
<h2>Introduction</h2>
<p>One of the questions we get asked pretty regularly is: "How do I customise/modify/otherwise make use of one of your images?". Now, some of this is already covered in our documentation, and reading that always helps, but I thought it might be instructive to run through the details of how our containers actually hang together and what the various options for extending and customising them are. This isn't going to be a hugely technical post, but it does assume a basic level of understanding of both Linux and containers generally, and you might struggle to follow it without that.</p>
<h2>Container Design</h2>
<p>There are 3 main schools of thought when it comes to container design: One says that a container should run a single <em>process</em>, and you should have as many containers as you need to run the processes for your application. The second says that a container should run a single <em>application</em>, and you should have as many processes as you need to do so in a single container (excluding databases, KV caches, etc.). The third says that a container should run <em>everything</em> you need for an application; front end, back end, database, cache, kitchen sink.</p>
<p>We subscribe to the second approach, in part because our target audience wants straightforward setups and doesn't want to run 9 separate containers for a password manager, and in part because sometimes it just doesn't make sense to split things out into their own containers just for the sake of ideological purity. In addition, our containers don't really need to be highly scalable because they're mostly used in homelab environments where you're looking at tens of users, not tens of thousands. That doesn't mean you <em>can't</em> scale our containers, but there are some inherent limitations when you move beyond One Container, One Process that need careful planning to work around.</p>
<h2>Keeping Track of Your Processes</h2>
<p>If you're going to be running more than one process, you need a process manager, just like you would on a native host. There are a number of options depending on your needs; everything from full-on systemd if you're completely mad, to options like SysVinit and supervisord, all the way down to our init of choice s6. Specifically, we make use of <a href="https://github.com/just-containers/s6-overlay">s6-overlay</a>, which is a bundle of tarballs and init scripts designed to make it easy to run s6 as your process manager in a container. We recently went through a complete overhaul of our init process to take advantage of the new features available in version 3 of s6-overlay, and that's what I'm going to focus on in this post.</p>
<h2>Init Basics</h2>
<p>The very short version of how our container init works is as follows:</p>
<ul>
<li>On build, our base image installs s6-overlay and sets its entrypoint to the s6 /init script</li>
<li>On container start, s6 sets up the basic container environment</li>
<li>We run our docker-mods logic to download and extract any Mods that users are installing</li>
<li>s6 iterates through our init scripts setting...</li></ul>
";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}s:8:"category";a:5:{i:0;a:5:{s:4:"data";s:6:"docker";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:1;a:5:{s:4:"data";s:11:"open source";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:2;a:5:{s:4:"data";s:11:"linuxserver";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:3;a:5:{s:4:"data";s:6:"images";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}i:4;a:5:{s:4:"data";s:9:"deep dive";s:7:"attribs";a:0:{}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}}}s:27:"http://www.w3.org/2005/Atom";a:1:{s:4:"link";a:1:{i:0;a:5:{s:4:"data";s:0:"";s:7:"attribs";a:1:{s:0:"";a:3:{s:4:"href";s:35:"https://www.linuxserver.io/blog.rss";s:3:"rel";s:4:"self";s:4:"type";s:19:"application/rss+xml";}}s:8:"xml_base";s:0:"";s:17:"xml_base_explicit";b:0;s:8:"xml_lang";s:0:"";}}}}}}}}}}}}s:4:"type";i:128;s:7:"headers";a:9:{s:6:"server";s:5:"nginx";s:4:"date";s:29:"Sat, 02 Dec 2023 08:58:01 GMT";s:12:"content-type";s:34:"application/rss+xml; charset=utf-8";s:12:"x-powered-by";s:10:"PHP/8.2.12";s:10:"set-cookie";s:195:"linux-server-io-b4e2fbe59292f23ee327e4043b64b395=n8s4vc8iudeb1cg7u8goq7rti5; expires=Sat, 02 Dec 2023 09:28:01 GMT; Max-Age=1800; path=/; domain=www.linuxserver.io; secure; HttpOnly; SameSite=Lax";s:6:"pragma";s:8:"no-cache";s:13:"cache-control";s:14:"max-age=604800";s:7:"expires";s:29:"Sat, 09 Dec 2023 08:58:01 GMT";s:4:"etag";s:34:""915be8eb0615c7e46f8894f43cd96487"";}s:5:"build";s:14:"20231030185604";s:5:"mtime";i:1701507481;s:3:"md5";s:32:"8eb708a6490b43eb5b69a6764079722d";}