I have been wanting to self-host recently. I have an old laptop sitting around, a Toshiba Satellite M100-221, but it only has 4 GB of RAM, and I don’t know what a good starting-point OS for a home lab would be. I discovered YunoHost, but heard mixed opinions about it when searching, so I would like Lemmy’s opinion on a good OS for a beginner wanting to start a home lab. I would prefer a simple solution like YunoHost, but I’d like it to be configurable; it’s fine if it needs a bit of tinkering.
I’ve been using YUNOhost for the last 3 years and it’s been great. I had no prior experience and would have had no chance of self-hosting without it.
XKCD 2501 applies in this thread.
OP, get CasaOS or Yunohost. Very very simple. Your laptop is fine (you’ll probably want to upgrade the ram soon).
XKCD 2501 applies in this thread.
I agree. There are so many layers of complexity in self-hosting that most of us tend to forget the most basic option: a simple bare-metal OS and Docker.
you’ll probably want to upgrade the ram soon
That hardware maxes out at 4 GB of RAM, so the only practical upgrade would be a SATA SSD. Even so, I’m running around 15 Docker containers on similar specs, so as a starting point it’s totally fine.
Without knowing what you actually want to do, I’d put Debian on it. Very good, very stable, very widespread OS with plenty of tutorials around for whatever you decide to do with it. Do a minimal installation, and 4 GB of RAM is plenty to play around with.
I use debian for my ancient media server. It’s great.
I use DietPi, which is built on a minimal Debian.
Even better when you’re already familiar with it. And I’d consider a media server to already be “selfhosting”.
Edit: Sorry, thought you were OP.
Yup Debian would also be my way of getting started.
I used yunohost for a bit and while it was easy setup, it wasn’t easy to troubleshoot weird errors because hardly anyone uses it.
I’d recommend setting up:
- Debian with a desktop environment to start with
- figure out how to SSH into it from your main machine, and maybe how to use tmux
- Docker and how it works
- self-hosting services using Docker
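For the Docker steps above, a minimal sketch on a fresh Debian install might look like this (package names are my assumption from Debian’s own repositories; `docker.io` is Debian’s Docker package, not Docker’s upstream one):

```shell
# Install Docker from Debian's repositories
sudo apt update && sudo apt install -y docker.io docker-compose

# Let your user run docker without sudo (takes effect on next login)
sudo usermod -aG docker "$USER"

# Sanity check: run a throwaway container
docker run --rm hello-world
```

Once `hello-world` prints its message, you’re ready to try a real service.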
4 GB isn’t much RAM, but it can be surprisingly useful if you configure zswap. Lots of guides out there. Here’s one of them.
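For reference, enabling zswap usually comes down to a kernel boot parameter. A sketch, assuming a stock Debian/GRUB setup (the compressor and pool size are example choices, and the `zstd` compressor needs a reasonably recent kernel):

```shell
# Add zswap parameters to the kernel command line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20"
sudo nano /etc/default/grub
sudo update-grub
sudo reboot

# After rebooting, confirm zswap is active (should print Y)
cat /sys/module/zswap/parameters/enabled
```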
Step 1: be psychologically prepared to break it all. Don’t depend on your services, at first, and don’t host stuff for others, for the same reason.
Yunohost? Good for trying out stuff, I suppose. I haven’t tried it myself. You could also try Debian, Alpine, or any other. They’re approximately equivalent. Any differences between distros will be minuscule compared to differences between software packages (Debian is much more similar to Alpine than Nextcloud to Syncthing).
4 GB of RAM? Don’t set up a graphical interface. You don’t need a desktop environment to run a server. Connect to it via SSH from your regular PC or phone. Set up pubkey auth and then disable password auth.
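The pubkey-then-disable-passwords step, sketched out (user and host names are placeholders; only disable password auth after confirming that key login works, or you can lock yourself out):

```shell
# On your main machine: generate a keypair and copy the public half over
ssh-keygen -t ed25519
ssh-copy-id user@server-ip

# On the server: turn off password logins
# (the drop-in directory assumes a reasonably recent OpenSSH)
echo 'PasswordAuthentication no' | sudo tee /etc/ssh/sshd_config.d/no-passwords.conf
sudo systemctl restart ssh
```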
I recommend setting up SSH login first, then a webserver serving plain HTTP only, accessible via IP address.
Next comes DNS - get a name at https://freedns.afraid.org/
Then add HTTPS, get the certs from LetsEncrypt.
Finally, Nextcloud. It runs kind of “inside” your webserver. Now you can back up your phone, and share photos with family, etc.
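One way to walk that HTTP-then-HTTPS progression is with Caddy, which obtains Let’s Encrypt certificates automatically once you give it a domain. The webserver choice and paths here are my assumption, not the parent’s setup, and on Debian stable you may need Caddy’s own package repository rather than `apt`:

```shell
sudo apt install -y caddy
echo 'hello from my server' | sudo tee /var/www/html/index.html

# /etc/caddy/Caddyfile -- step 1: ":80" serves plain HTTP by IP address.
# Step 2: replace ":80" with your freedns name (e.g. myname.mooo.com) and
# Caddy will obtain and renew the Let's Encrypt cert on its own.
sudo tee /etc/caddy/Caddyfile <<'EOF'
:80 {
    root * /var/www/html
    file_server
}
EOF
sudo systemctl reload caddy
```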
I’ll add a vote to all the people suggesting YunoHost. YunoHost is a perfect place to get your feet wet with basically no experience required. I’ve played with it myself, and it does a good job of simplifying and holding your hand without oversimplifying or keeping you on a strict, tight leash. It even helps you deal with common newbie issues like dynamic IPs, so you can be more reliably reachable on the internet. A lot of other guides just assume you’ll have a static IP assigned by your ISP or VPS, and handwave away the complexity of what you’ll have to do if you have a dynamic IP like most home connections. (Experienced self-hosters gradually discover that having access to a static IP somewhere, anywhere, makes life a lot easier, but don’t worry, you’ll get there too eventually; it’s not important when getting started.)
You can get started by working your way through the process here.
I agree, but I do have to say that YunoHost is becoming bloated. Successive versions run noticeably slower than they used to; I know that’s partly the price of security and ease of use, but it’s becoming hard to ignore. You can also use some Docker things in YunoHost, but for an absolute beginner, YunoHost is by far the best way to do it.
Nobody is saying it, so I will. The most important thing is to just get started!
It doesn’t matter if you go for a plain Debian server or a fancy proxmox installation with high availability. I believe the most important thing is just to start and experiment. And enjoy!
Take a few steps back and ask yourself what needs you’re trying to fill. I had never heard of Yunohost before, but it sounds high-level and abstract. Are you a programmer? Are you familiar with Linux? Are you comfortable in a terminal? Are you familiar with networking?
Find out what you want to do before installing “everything and the kitchen sink” solutions.
I am familiar with Linux and comfortable in a terminal, but I am not comfortable with networking, and I am not a programmer.
So what do you want to do? What need are you trying to fill?
I want to host a couple of things like email, Immich, and Nextcloud, for privacy, security, and to save money. I currently use Google Photos and Nextcloud hosted by adminforge.de; it’s good, privacy-respecting, and has a nice owner, but I would like more than 2 GB.
don’t do email until you know what you’re doing imo
and even when you do know what you’re doing, you’re probably choosing not to host your own. at least not one that faces the public. a private mail ‘server’ that consolidates mail for you from multiple providers (and sends mail back out the same way) is different.
Don’t host email from home. Many ISPs block that to combat spam, and most email servers don’t accept mail from home IPs for the same reason.
Most people will recommend not hosting email at all, because it is a pain in the arse to set up so that other servers actually accept your mail.
As you want to do multiple different things, I recommend you install a hypervisor on the laptop, such as Proxmox. It’ll make it easier for you, as a non-programmer, to manage containers and virtual machines.
You will have to deal with some high-level networking concepts regardless of what level of self-hosting you do, so you should familiarize yourself with them: IP addresses, ports, basic firewalling, etc.
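A concrete starting point for the firewalling part, sketched with ufw on a Debian-family system (the ports shown are the usual SSH/HTTP/HTTPS trio; adjust to whatever you actually run):

```shell
sudo apt install -y ufw

# Default stance: drop unsolicited inbound, allow all outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the ports you use
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS

sudo ufw enable
sudo ufw status verbose
```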
I won’t stop you, but I will strongly discourage you from trying to host your own Email. It is a complicated mess of new standards stacked on top of ancient standards and it’s miserable to work with even when it works. If you misconfigure your email server, you’ll get blocked by every major email provider and there’s no way back from that except starting over with a whole new IP and domain.
Thank you very much, I will try out Proxmox.
If you’re comfortable in the terminal, you’ll be fine just starting out and figuring it out as you go. Be ready for a few reinstalls, but it becomes part of the fun, albeit sometimes frustrating! Go for a mainstream server OS like Ubuntu or Debian (if you google any issue with them, you’re likely to find an answer). Get SSH up and running with keys for security, install Tailscale, and don’t expose anything to the internet until you feel more comfortable. Install Docker, then start on one piece of software you think will be useful; get it up and running, then move on to the next. I would recommend Homepage as a front end; keep it up to date with new software so you can quickly see what you have and what ports are in use. Vaultwarden is useful for the admin passwords. I use Authentik for SSO but would try Caddy if I was starting now.
Start with Docker. Any OS will do. Most Linux distros are better, but I run Docker on Mac, Linux, and Windows (not a lot on Windows, since I despise Microsoft, but it does work).
The great thing about Docker is that it’s very portable and modular, and it’s easy to get back to a known state. Say you screw something up: just revert and start over. It’s also very easy to understand, in my opinion. It’s like all the benefits of virtualization with much less overhead.
Yeah, my only note is that Docker on Windows is… Kinda fucky? It uses WSL to run Linux in the background, which means that the volumes it creates aren’t easily accessible by Windows. If your container requires editing a config.json, for instance… That can be daunting for a newbie on Windows, because they won’t even know how to find the file.
You can work around this by mounting your volumes directly to a C:\ folder instead, but that’s something that many tutorials just completely skip past because they assume you already know that.
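For example, instead of letting Docker create a named volume inside the WSL VM, you can bind-mount a plain Windows folder (the image and paths here are just illustrations):

```shell
# The config files now live in C:\docker\nginx-conf, editable from Explorer,
# instead of being buried inside the WSL virtual disk
docker run -d -p 8080:80 -v C:\docker\nginx-conf:/etc/nginx/conf.d nginx
```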
I’ve never understood the reason for WSL. If you want Linux, run Linux. At the very least in a VM.
I used to run Linux VMs in HyperV. It felt dirty.
don’t expect a 19 year old laptop to perform all the tricks something more ‘modern’ can do, such as transcoding video for a streaming media server. also note that a t5600 is not a ulv chip (draws as much as 34w under load, on its own)–so probably not a candidate to run ‘lid down’ without some outside help for cooling.
it’s not fast, it’s not power efficient, it has slow networking (10/100 and 22-year old ‘g’ wifi), and lacks usb3 for ‘tolerable’ speed on extra external storage space—but it will be ‘ok enough’ for learning on.
if you go with something like yunohost or even dietpi, you will pretty much restrict yourself to what it can run and do and how it does it. if you want more ‘control’ or to install things they don’t offer themselves, you’ll need to ‘roll your own’. a base (console only) debian would be a great place to start. popular, stable, and tons of online resources and tutorials.
I don’t know about YunoHost, but DietPi doesn’t feel restrictive. You can use the DietPi software manager, but you can also install whatever else you want next to it using apt, Docker, etc., and adjust systemd, cron, rsync, and so on outside of it. They just don’t guarantee that DietPi updates won’t occasionally break things you run outside of what they offer.
Step 1: Install proxmox
Step 2: run the post install script here, disable anything enterprise, test or related to high availability.
Step 3: check out the other scripts on the link. I suggest starting with a pi hole and experimenting from there.
That sounds overly complicated, why get VMs involved? Just install Debian or something and get things working.
Proxmox is good if you know you want multiple VMs running for specialized needs. But multiple VMs isn’t happening on 4GB RAM.
Easily can have multiple LXCs, and being able to take snapshots for backup is probably a nice thing to have if you’re just learning.
And if they get more hardware, moving VMs to other clustered proxmox instances is a snap.
If you just want LXCs, use Docker or Podman on whatever Linux distro you’re familiar with. If you get extra hardware, it’s not hard to have one be the trunk and reverse proxy to the other nodes (it’s like 5 lines of config in Caddy or HAProxy).
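For the record, that Caddy config really is only a few lines. A sketch with made-up hostnames and backend addresses:

```
# Caddyfile on the front node; each site proxies to a service on another box
photos.example.com {
    reverse_proxy 192.168.1.11:2283
}
cloud.example.com {
    reverse_proxy 192.168.1.12:8080
}
```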
If you end up wanting what Proxmox offers, it’s pretty easy to switch, but I really don’t think most people need it unless they’re going to run server grade hardware (i.e. will run multiple VMs). If you’re just running a few services, it’s overkill.
LXCs are not comparable to Docker, they do different things.
It’s the same underlying technology. Yes they’re different, but they are comparable.
They use some of the same kernel functions but they are not the same. They are not comparable. LXCs are used to host a whole separate system that shares kernel with its host, docker is used to bundle external requirements and configs for a piece of software for ease of downstream setup. Docker is portable, LXCs much less so.
Sure, Docker is more or less an abstraction layer on top of LXC. It’s the same tech underneath, just a different way of interacting with it.
If you’re just running a few services, and will only ever be running a few services, I agree with you.
The additional burden of starting with Proxmox (which is really just Debian) is minimal, and it sets you up for the inevitable deluge of additional services you’ll end up wanting to run, in a way that’s extensible and trivially snapshottable.
I was pretty bullish on “I don’t need a hypervisor” for a long time. I regret not jumping all-in on hypervisors earlier, regardless of the services I plan to run. Is the physical machine’s purpose to run services and be headless? Hypervisor. That is my conclusion as to what is the least work overall. I am very lazy.
For snapshots, you can use filesystem features, like BTRFS or ZFS snapshots. If you make sure to encapsulate everything in the container, disaster recovery is as simple as putting configs onto the new system and starting services (use specific versions to keep things reasonable).
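As a sketch of that snapshot workflow, assuming your compose files and container data live on a btrfs subvolume at /srv (the layout and paths are examples):

```shell
# Read-only snapshot before an upgrade -- instant and nearly free on btrfs
sudo btrfs subvolume snapshot -r /srv /srv/.snapshots/pre-upgrade-$(date +%F)

# For real backups, replicate a read-only snapshot to another disk
sudo btrfs send /srv/.snapshots/pre-upgrade-2025-01-01 | sudo btrfs receive /mnt/backup
```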
I think that’s also really lazy, it’s just a different type of lazy from virtualization.
My main issue with virtualization is maintenance. Most likely, you’re using system dependencies, and if you upgrade the system, there’s a very real chance of breakage. If you use containers, you can typically upgrade the host without breaking the containers, and you can upgrade containers without touching the host. So upgrades become a lot less scary since I have pretty fine-grained control and can limit breakage to only the part I’m touching, and I get all of that with minimal resource overhead (with VMs, each VM needs the whole host base system, containers don’t).
Obviously use what works for you, I just think it’s a bit overwhelming for a new user to jump to Proxmox vs a general purpose Linux distro.
I have a Dell Inspiron 1545, which has similar specs to yours, running Debian with Docker and around 15 services in containers, so my recommendation would be to run Debian server (with no DE), install Docker, and start from there.
I would not recommend proxmox or virtual machines to a newbie, and would instead recommend running stuff on a bare metal installation of Debian.
There are a bunch of alternatives you could choose from to ease the management of apps, like YunoHost, CasaOS, Yacht, Cosmos Cloud, Infinite OS, Cockpit, etc., that you can check out and use on top of Debian if you prefer. But I would still recommend spending time learning how to do stuff yourself directly with Docker (using Docker Compose files), and you can use something like Portainer or Dockge to help you manage your containers.
My last recommendation: when you are testing and trying stuff, don’t put your only copy of important data on the server; if something breaks, you will lose it. Invest time in learning how to properly back up, sync, and restore your data, so that if something happens, you have a safety net and a way to recover.
As a counterpoint to no proxmox, I get a lot of utility in being able to entirely destroy and reprovision VMs. I get it adds a layer of complexity, but it’s not without its merits!
I get your point, and I know it has its merits. I would actually recommend Proxmox for a later stage, once you’re familiar with handling the basics of a server and have hardware that can properly handle virtualization. For OP, who has a machine that is fairly old and low-spec, and who is also a newbie, I think fewer layers of complexity make a better starting point, so they don’t get overwhelmed and just quit; in the future they can build on top of that.
No disagreement here. :)
So, it might not sound like much, but 4 GB of RAM is plenty to do quite a bit with self-hosting.
If you want to self-host and use it as an opportunity to learn, I recommend you install Debian and get your hands dirty. If you just want to self-host without much of a headache, YunoHost seems cool, but I’ve never used it, so I can’t recommend it.
Install Debian as a server with no GUI, install docker on it and start playing around.
You can use Komodo or Portainer if you want a webUI to manage containers easily.
If you put any important data on it, set up backups first. Follow the 3-2-1 rule: at least 3 copies of your data, on 2 different types of media, with 1 copy offsite.
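A minimal backup sketch with restic that covers the 3-2-1 idea (paths, hostnames, and the inline password are placeholders; keep the real password somewhere safer than an environment variable in your shell history):

```shell
sudo apt install -y restic
export RESTIC_PASSWORD='change-me'

# Copy 2: a second medium, e.g. a USB drive
restic -r /mnt/usb-drive/backups init
restic -r /mnt/usb-drive/backups backup /srv

# Copy 3: offsite, e.g. over SFTP to a friend's box or a cheap VPS
restic -r sftp:user@offsite-host:/backups init
restic -r sftp:user@offsite-host:/backups backup /srv
```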
The problem with stuff like YunoHost is that when it breaks, you have no idea how to fix it, because it hides everything in the background.