Selfhosting and your Homeserver - E02 - What's running on my Server
This is the second episode in this mini-series. Before reading this article, make sure you have read the first one.
The previous article provided a comprehensive review of the process of building a server, from the hardware setup to the available options for operating systems. It concluded with an explanation of my personal choices in this matter.
Requirements #
I want to start by stating my software requirements and detailing my approach to solving and implementing them.
Also, since my base implementation is from 2019, technology has advanced,
my expertise has improved, and some problems now have better solutions.
Hence, we will investigate additional solutions that are yet to be implemented.
Typical NAS Use Cases #
The handling of files over the network is a fundamental use case for my server.
Ideally, it should be possible to meet all of these requirements remotely, even outside my local network.
The following requirements are relevant to me in this category:
Backups #
I wanted to perform network backups for my computers, ideally with an automated process that includes the entire boot drive. This will enable me to quickly restore my system at a later stage. Additionally, these backups should be incremental.
File Synchronisation #
I currently utilise four network-attached devices: laptop, desktop, smartphone, and server. I require selected directories to be automatically shared across these devices in real-time. This synchronisation must also function beyond my home network.
Remote File Access #
As my laptop and smartphone lack sufficient storage to hold all my files simultaneously, I require the ability to access
these folders over a network, even beyond my home network.
For certain file types, I want to view them in the most suitable way.
For instance, movies should be accessed through a web GUI similar to Netflix rather than a file explorer, providing
better options for browsing, sorting, and streaming the content.
Secure Storage #
As the server holds my most sensitive documents, it is crucial that both storage and transmission of data are secure.
Thus, storage must be encrypted, and file access should require authentication.
In addition, it is essential to establish and follow the best backup policies and practices.
Hosting #
I intend to host two types of applications.
Firstly, FOSS tools will be hosted as an alternative to paying for cloud solutions.
For instance, I plan to host an RSS reader and a finance manager.
By doing so, I can avoid subscription costs and have complete control over my private data.
Secondly, I want to host my web presence comprising my website and the blog you are reading at the
moment.
Additionally, I wish to be able to quickly spin up hosting for pet projects or other small self-hosted programs.
Security #
In terms of security, there are three specific requirements that I must highlight as relevant to my server setup.
Firstly, I aim to restrict access to the server's contents where necessary, to prevent unauthorised users from
accessing data.
Secondly, the server should be available with minimal downtime.
Ideally, it should only be unavailable during updates and reboots, barring the extremely rare blackouts in my area.
When it comes to uptime, trust is good, but control is better: to verify that the services perform as intended,
I want a monitoring system that checks each component's functionality.
Implementation #
Now that the requirements have been established, we can focus on my implementation decisions and areas for further improvement.
Local Network Setup #
While the server itself is the most important part of this article, we also need to talk a little bit about the network
that it resides in.
My local home network completely segregates my trusted VLAN, which includes all my LAN and WLAN equipment, from the
"Guest" VLAN.
The guest VLAN cannot interact with any of my devices, and a firewall limits its traffic to allowed ports
such as HTTPS, SMTP and IMAP.
The server is therefore completely invisible to guests and untrusted devices by design.
Initially, an OpenVPN server ran as a Docker container to enable remote access to the network.
This posed a problem, as connecting to the VPN was not possible while the server was rebooting or unreachable.
Consequently, I migrated to a WireGuard VPN running on the router for network access.
In my experience, WireGuard offers a more stable solution with better performance.
In conjunction with the PiKVM, which I deployed as a total overkill measure, I can access my server's BIOS from any
location in the world.
As an added benefit, my smartphone remains permanently connected to the WireGuard VPN, allowing for
ad blocking, which will be explained later.
This also ensures secure access even when using public Wi-Fi hotspots.
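For illustration, a minimal WireGuard peer configuration for such a roaming client could look like the sketch below; the keys, addresses, endpoint and the DNS entry pointing at the ad blocker are placeholders, not my actual setup.

```bash
# Hypothetical client-side wg0.conf for a phone or laptop; all values are placeholders.
cat <<'EOF' > wg0.conf
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24
DNS = 10.8.0.1              # e.g. point clients at the local DNS/ad blocker

[Peer]
PublicKey = <router-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 0.0.0.0/0      # route all traffic through the tunnel
PersistentKeepalive = 25    # keep the connection alive behind NAT
EOF
```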
Before deploying your first server, it is crucial to ensure that you can access it from the outside if you want to
host web pages or applications, or reach your home network via a VPN.
Consequently, you need to obtain a static IPv4 address, as using IPv6 comes with constraints, or at the very least a
public dedicated IPv4 address.
Since my ISP no longer has enough IPv4 addresses, they only provide Dual-Stack Lite, which uses shared IPv4s. However,
after contacting their support and asking nicely, I was granted a real public IPv4, although it is not static.
Consequently, I enabled the Dynamic DNS service on my router and pointed my domain to my server.
Additionally, I use a ddclient Docker container to provide dynamic DNS for different domains.
The relevant ports are opened in the firewall, namely 80, 443 and a random SSH port.
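As a rough sketch, a ddclient configuration for this kind of setup could look like the following; the protocol, provider, credentials and hostnames are placeholders and depend entirely on your DNS provider.

```bash
# Hypothetical ddclient.conf; adapt protocol, server and credentials to your provider.
cat <<'EOF' > /etc/ddclient/ddclient.conf
daemon=300                          # re-check the public IP every 5 minutes
use=web, web=checkip.dyndns.org     # discover the current public IPv4
protocol=dyndns2                    # depends on your DNS provider
server=members.dyndns.org
login=<provider-login>
password=<provider-password>
blog.example.org, www.example.org   # domains to keep pointed at the changing IP
EOF
```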
Machine Setup #
With that said, let's proceed to setting up the machine.
The initial step after installing Debian was to set up an encrypted mirrored RAID over my HDDs using LUKS and Linux
software RAID.
The drives are unlocked after booting, either via SSH or the KVM.
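As a minimal sketch, one possible layering (software RAID first, LUKS on top) could look like this; the device names and mount point are examples, so do not run this against disks holding data.

```bash
# Sketch: encrypted mirrored RAID from two empty disks; adjust device names first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cryptsetup luksFormat /dev/md0        # encrypt the RAID device
cryptsetup open /dev/md0 data         # this unlock step is repeated after every boot
mkfs.ext4 /dev/mapper/data
mount /dev/mapper/data /srv/data
```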
At present, I utilise Vorta on desktops and native Borg Backup on the server for backing up my files and operating
system.
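For illustration, an incremental Borg backup boils down to a few commands like the ones below; the repository path, source directories and retention policy are placeholders rather than my actual configuration.

```bash
borg init --encryption=repokey /srv/backups/laptop        # one-time repository setup
borg create --stats --compression zstd \
    /srv/backups/laptop::'{hostname}-{now}' /home /etc    # only changed chunks are stored
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /srv/backups/laptop
```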
The RAID and backup function correctly, but in the long term, transitioning to a ZFS-based system would be optimal.
This migration would significantly streamline RAID management and drive replacement, and provide built-in integrity
checking and encryption.
Additionally, it would introduce the ability to create snapshots and encrypted network backups.
Moreover, ZFS natively supports SMB sharing as a bonus.
You can find further information regarding the features of ZFS in
my dedicated blogpost.
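To give an idea of what that hypothetical migration could look like, the following ZFS commands sketch an encrypted mirror with snapshots and an encrypted send to another machine; the pool, dataset and host names are made up.

```bash
zpool create tank mirror /dev/sda /dev/sdb                       # mirrored pool
zfs create -o encryption=on -o keyformat=passphrase tank/data    # encrypted dataset
zfs set sharesmb=on tank/data                                    # native SMB sharing
zfs snapshot tank/data@nightly
# stays encrypted in transit; assumes a pool called "backup" already exists remotely
zfs send --raw tank/data@nightly | ssh backuphost zfs receive backup/data
```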
After configuring SSH login for remote management, I restricted access to known SSH keys only.
In addition, I use fail2ban for SMB and other public-facing services, which blocks access after three consecutive
failed login attempts.
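A minimal version of that hardening could look like the snippet below; the exact paths and ban time are illustrative, not my real configuration.

```bash
# Key-only SSH logins
cat <<'EOF' >> /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin prohibit-password
EOF

# fail2ban jail banning an IP after three failed SSH attempts
cat <<'EOF' > /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
bantime  = 3600
EOF

systemctl restart ssh fail2ban
```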
I keep installed packages up to date using APT-Cron, which applies updates automatically.
For all the above-mentioned features and programmes, I utilise centralised email notifications to inform me of issues
and provide regular updates.
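For readers who want to replicate the automatic updates without a dedicated tool, a plain cron job achieves a similar effect; the schedule and mail address below are examples, and this is not the exact mechanism APT-Cron uses.

```bash
cat <<'EOF' > /etc/cron.d/apt-auto-update
MAILTO=admin@example.org                 # cron mails the output, serving as a notification
30 4 * * * root apt-get update -qq && apt-get -y -qq upgrade
EOF
```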
"NAS Framework" #
For ease of use, I utilize OMV as a NAS framework, which offers a management Web-GUI. This enables me to configure the SMB shares I utilize and perform maintenance tasks, such as monitoring the HDDs' S.M.A.R.T. values to ensure their functionality and data integrity. All of my files are shared via SMB shares and require authentication within my local network.
Docker #
I have chosen to use Docker for my deployments.
This decision was driven by the ability to build CI/CD pipelines that deploy my code directly from
GitHub to my server (I might write a dedicated article about this technique in the future).
A noteworthy advantage of Docker is the ease of having multiple instances of the same program - such as a web server -
bound to different ports.
It's also worth mentioning that Docker isolates your programs in separate container environments,
provided you adhere to security best practices.
Also, it is possible to rapidly create program instances for testing purposes.
All my Docker applications store their state on disk through volume mounts, which are frequently backed up.
Docker-compose configurations are used for all multi-container projects.
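As a bare-bones example of that pattern, a compose file with a bind-mounted state directory might look like this; the image, port and path are placeholders.

```bash
cat <<'EOF' > docker-compose.yml
services:
  someapp:
    image: example/someapp:latest
    ports:
      - "8080:80"                        # each instance can bind to a different host port
    volumes:
      - /srv/appdata/someapp:/config     # state lives on disk and is included in backups
    restart: unless-stopped
EOF
docker compose up -d
```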
To enhance usability, particularly when remoting into my server with my smartphone, I utilise Portainer as a Docker
management graphical user interface.
The roughly 30 deployed containers are split into three segments, which I will explain in their corresponding
chapters.
Management #
I currently use the S.W.A.G. project, which comprises an nginx reverse proxy and certificate management based on Let's
Encrypt.
I made this decision when the alternative solution, Traefik, was still emerging.
Today, I would prefer Traefik: despite its steeper initial learning curve, it offers automated configuration down the
line and can conveniently route local traffic as well.
The reverse proxy uses an nginx-based configuration to direct traffic arriving from the router to its respective
container, based on the provided hostname.
For example, traffic to blog.jeujeus.de is directed to the blog webserver container.
The certificate manager ensures that Let's Encrypt certificates for the specified domains are issued and renewed,
thus enabling TLS encryption of the traffic.
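Conceptually, each proxied service boils down to an nginx server block like the following simplified sketch; the hostname, container name and certificate paths are examples rather than my actual SWAG configuration.

```bash
cat <<'EOF' > /config/nginx/site-confs/blog.conf
server {
    listen 443 ssl;
    server_name blog.example.org;

    # certificates issued and renewed via Let's Encrypt
    ssl_certificate     /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        proxy_pass http://blog-webserver:80;   # route by hostname to the matching container
    }
}
EOF
```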
For monitoring purposes, I utilise Changedetection to visually track specified websites, which provides more in-depth
monitoring of container functions.
For instance, the Nextcloud container may return a 200 response but fail to render the login page due to PHP problems.
Additionally, I use Uptime Kuma to monitor the uptime, certificates, and response times of both external and internal services.
If any of the mentioned services observe an irregularity, a notification email is triggered.
As previously mentioned, I utilise ddclient via a container to facilitate automated DynDNS updates for multiple domains.
One of my more controversial decisions is to use Watchtower to automatically update my containers.
Watchtower automatically updates images via a cronjob, dependent on a flag that is set for specific containers.
While this approach effectively addresses security concerns, it can increase the risk of container crashes, particularly if manual configuration is required during version transitions.
I read container updates via RSS regularly to stay on top of issues.
In my assessment, the occasional downtime, for example Nextcloud (a self-hosted cloud storage service) breaking
roughly twice a year, is a smaller drawback than not updating at all or manually updating around 30 private-use containers.
This has functioned effectively for me during the past four years, and my monitoring system notifies me promptly if any issues arise.
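The per-container opt-in flag mentioned above is a Docker label; assuming Watchtower is started with its label-enable option, marking a container for automatic updates could look like this (the image name is a placeholder).

```bash
# Only containers carrying this label get auto-updated when Watchtower
# runs with label-based filtering enabled.
docker run -d --label com.centurylinklabs.watchtower.enable=true example/someapp:latest
```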
Tools #
When it comes to file synchronisation, I employ Syncthing as a service on all of my devices.
This enables me to securely synchronise all of my files from anywhere and has proven to be one of the best tools I've used in recent years.
Syncthing reliably synchronises almost two terabytes of storage for me.
In relation to the adblocking mentioned earlier, I utilize Pihole as a container with its dedicated IP address.
By configuring Pihole as the primary DNS server through my router's DHCP, I am able to block almost 20% of DNS requests and have an almost entirely ad-free mobile device experience.
To enhance this approach further, it would be beneficial to relocate my setup from, for example, http://192.168.178.5:9443 to https://portainer.local.
While not strictly necessary, implementing local DNS and HTTPS certificates can greatly enhance usability and increase security for network traffic.
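One low-effort way to get such friendly names is a local DNS record in Pi-hole mapping a hostname to the server; the name and IP below are examples, and the file path assumes a Pi-hole v5 style setup.

```bash
echo "192.168.178.5 portainer.local" >> /etc/pihole/custom.list   # local DNS record
pihole restartdns                                                 # reload the resolver
```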
To share files in a more conventional, cloud-like manner, I use Nextcloud to share files with friends and family through temporary links. I also offer the option of file uploads into my secure personal cloud.
I keep track of my finances via a self-hosted instance of Firefly III, which lets me manage them on any desktop as well as via a PWA on my smartphone.
As a bonus, it allows for 2FA login.
In line with my self-hosting plan, I use Home Assistant for managing smart-home devices, though I generally try to avoid smart-home tech.
One area for improvement would be shifting my Zigbee appliances from their current hub to a locally controlled Zigbee USB receiver, running everything on my server.
As mentioned earlier, it is more appropriate to present movies as streaming content rather than listing them on an SMB share.
Therefore, I prefer to use Jellyfin as a self-hosted alternative to Netflix for streaming my movies, TV series and music.
For several years, I used an Android RSS reader application.
Recently, I moved to Miniflux, a self-hosted alternative.
By migrating to Miniflux, I can access my feeds on any device using a single account.
What sets Miniflux apart is its support for a web view and a PWA.
Hosting #
I do host public-facing services as well as several small self-hosted private projects; I will not go into further detail about the latter here. One of the public-facing ones is the blog you are currently reading, while another is my website.
This was an overview of my server configuration. Due to its length and extensive scope, I kept it rather high-level; I may go into more detail later on. Until then, make sure to subscribe via RSS.