Tiago Portugal
me@tiagoportugal.top
Bye, bye cloud!!!
2024-05-23
Three weeks ago, I decided that I wanted to create a blog/portfolio and selfhost it.
While I may have ended up doing something different in order to hide my public IP, the little selfhosting itch behind my neck stayed.
So here you have my journey into this niche.
Having made up my mind, I began my search for a platform to bring the concept to life.
Before diving into the outer wilds of marketplaces, I began a treasure hunt at my own house for potential candidates to live as a server, going through closets and drawers to see what I had and organize it.
Sadly, I came to the conclusion that all my compute devices were in one of the following three states:
In the last category I had an Intel Classmate running Linux Mint with a dying disk, which would be pretty hard to replace since my wee self decided to try opening it with whatever was at hand and ended up bashing the screws, and an old Toshiba Satellite that crashes all the time for an unknown reason.
While I could have done something with them with a little bit of work, I discarded them right away for not matching my two and only requirements:
I might have gotten around the second point by playing with C-states, but the first one is a hard no, so I did not even bother to try.
Now, the reason the first point is so important is that, for security reasons and ease of setup, I want to run any major services/applications inside containers.
With my search for reasonable hardware at home done, I began scouring used marketplaces for them sweet, sweet deals.
I started by searching for 4th-gen Intel pre-builts, old enough to be cheap but still with proper VT-x support, which led me mostly to used corporate machines from brands like HP, Dell, Lenovo, Fujitsu, etc., for about 60–70 €.
While that is a great price and all, I was looking for something to upgrade down the line when my budget allowed it; however, most of these, with some small exceptions, are rigged to the teeth with proprietary mechanisms that would be a pain in the ass when upgrading.
So with all that said, the only options left were either turning the difficulty up to 100x and building the server from used parts, for about the same price, or . . . getting a Raspberry Pi.
As I did not feel like being scammed, I went with the second option, though a bit undecided between the Raspberry Pi 4 8GB and the Raspberry Pi 5 4GB: same price, but one with more memory and the other with faster USB speeds, a PCIe slot, and faster components overall. I ended up ponying up 20 € more and buying the Raspberry Pi 5 8GB, which, looking back, was the best decision, since all my services combined use about 4 GB of RAM.
I had a week to think about it, but the decision was mostly made on the spot. Not great, not terrible, but above all usable.
With two domains (one for my selfhosting stuff and the other for my website) already in my back pocket, I began working.
To start off, I completely ignored remote access and began filling my machine with containers.
Most tutorials mention Docker, but I like Podman better, so I used it. The cool thing that used to set Podman apart from Docker was rootless containers, which is what I intended to use. However, after a couple of hours trying to understand why my services only worked while I had an SSH session open with my user, I realised that for the containers to be maintained and restarted automagically they had to run under root (at least that's what I think, not sure though), so there is that.
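For what it's worth, the behaviour I hit is usually caused by systemd stopping a user's services when their last session ends. If you want to retry rootless containers, the commonly cited fix is to enable "lingering" for that user; a sketch, assuming an example user called pi:

```shell
# Keep pi's systemd user instance (and its containers) running
# even when no login session is open
sudo loginctl enable-linger pi

# Verify; this should report Linger=yes
loginctl show-user pi --property=Linger
```

I have not gone back to test this on my own setup, so treat it as a pointer rather than a guarantee.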
One thing I also realized, after using x86 all my life, is that some Docker images don't support ARM or require a specific tag to do so. Nothing egregious, but a "Hey, this image does not support ARM" would be more useful than the generic "Failed Exec" error.
An additional aspect of my setup that I have not seen much in the wild is the use of Podman's systemd container integration, quadlets, which lets me pull images and auto-start containers with nothing but configuration files and systemd.
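As a rough sketch of what a quadlet looks like (the service name and image here are illustrative, not from my actual setup):

```ini
# /etc/containers/systemd/whoami.container
# (or ~/.config/containers/systemd/ for rootless setups)
[Unit]
Description=Demo whoami web service

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After a systemctl daemon-reload, Podman generates a whoami.service unit from this file that pulls the image if needed and starts the container, reachable at localhost:8080.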
That's pretty much it; after 2–3 days I had all my services accessible at localhost:port.
With all done, I just needed a uniform way to access all my services from in and out of my home network.
Well, to solve this problem I began by setting a static IP for my server in the router settings and pointing a domain I had previously acquired at my public IP, by adding an A record with the address in it.
Still in the router settings, I opened ports 80 and 443, for HTTP and HTTPS respectively.
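With the record and port forwards in place, a quick sanity check looks something like this (example.com standing in for your own domain):

```shell
# Resolve the A record; it should match the router's public IP
dig +short home.example.com A

# Confirm the forwarded HTTP port answers (run from outside your LAN,
# since some routers do not support hairpin NAT)
curl -I http://home.example.com
```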
As my ISP, like most, changes the public IP regularly, for security reasons and, I think, the scarcity of IPv4 addresses, a Dynamic DNS client is needed. Despite the scary name, a DDNS client is nothing more than a daemon that uses your domain registrar's API to update the A records of your domains.
So, after creating all the needed subdomains, I installed ddclient and configured it to do updates every so often.
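The exact ddclient configuration depends on your registrar; as a rough sketch (Cloudflare shown as an example, with a placeholder token and domains, not my real config):

```
# /etc/ddclient.conf
daemon=300                          # re-check the public IP every 5 minutes
use=web, web=checkip.dyndns.org     # discover the current public IP
protocol=cloudflare
zone=example.com
login=token
password=YOUR_API_TOKEN
home.example.com, cloud.example.com
```

Check your registrar's protocol name and credential format in the ddclient docs; they vary quite a bit between providers.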
After that, I just spun up Nginx Proxy Manager (aka NPM), set up the domain-to-ip:port pointers, generated the Let's Encrypt certificates, and just like that, I, and you all (though a password would still be needed), could access my services from my home and anywhere else.
Yeah, but that would not let me sleep at night, so I followed the advice I kept finding in my research on the topic and decided to run a VPN server to access all my services instead.
I went with WireGuard since it's the hot stuff nowadays, and it was quite easy to set up. The only problems appeared when I decided to use an interface to interact with it, WireGuard-UI, where it took me some time to realize that I had to do all the configuration in the web interface and run a couple of systemd services; otherwise, there would be conflicts between the original config and the generated one.
Done with that, I opened the VPN port in the router settings, closed the HTTP/HTTPS ports, generated a config for each of my devices, and abracadabra . . . nothing was accessible, because the domains and subdomains were still pointing to an external address. After fixing that, all worked fine.
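For reference, a bare-bones WireGuard server config looks something like this (addresses, keys, and port are placeholders, not my real values):

```ini
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24          # the server's VPN-internal address
ListenPort = 51820             # the one UDP port forwarded on the router
PrivateKey = <server-private-key>

[Peer]                         # one block per device
PublicKey = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

The "fixing" at the end amounted to making the domains and subdomains resolve to the server's VPN-internal address (10.8.0.1 in this sketch) instead of the public IP, since ports 80/443 are no longer reachable from outside.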
Here is a sketch of what my architecture looks like; if you want to replicate it, go to my repo and check the configs.
Besides having learned a thing or two, and the benefits of having a homelab, one of the most important outcomes of achieving this setup is being a step closer to being free, to being self-sufficient.
In a time when ensh*ttification is the norm, having something local, private, and with the assurance that we will not be asked for 2 more bucks just because a stockholder wants infinite growth in a finite world is more than essential; I risk saying it's necessary.
Anyway, I am just happy to say goodbye to someone else's computer, aka the cloud.