

You can absolutely run your own CA and even get your friends to trust it.


My fear is that if I don’t document well or don’t use Ansible, I will be hating my life once my server dies and I have to restore my data and set up my services again in a few years.
I’ve been there plenty of times; you’re not alone. There are two solutions to that problem, really, and it boils down to the classic pets vs. cattle distinction.
Pets mean you care about every server. If it breaks, it’s cheaper for you to fix it than redeploy. The overwhelming majority of your setup will be pets. Why? It’s simpler. Things don’t break that often, and when they do, it’s okay to be low-effort in fixing them.
Write docs for yourself, even if it’s just notes on the sequences of commands to run to redeploy things. You will thank yourself when the server finally dies in two years and you have notes on how to bring everything back.
Cattle means there’s no difference between server A and B. Everything is replaceable. Ultimately, whatever you run can run to the same extent in AWS, your basement NAS, or on your desk PC.
Cattle is also a lot of work. You will learn an excruciating amount about storage, networking, virtualisation, workload scheduling, and such. And it’s easy to get demotivated because of how much there is to learn.
So take it easy. Accept that your hobby world is full of pets, but learn the cattle approach at your leisure. You’ll realise that every practical cattle setup still has pets in it, and that automating complexity away only means adding layers of it somewhere else.


I don’t think that’s plausible.


I’m in the same boat, honestly.
Matrix has decent clients, but managing a Matrix instance is a world of pain, especially if you federate. Its resource use is really bad then: a single-user instance can easily demand 4 GB of RAM if you’re in a couple of popular chatrooms. Key propagation is often broken. Clients all have mixed support for features.
XMPP is a joy to host, but there are no decent clients for iOS.
IRC is easy to host, but IRCv3 coverage across clients is also meh.
I was looking for something I could throw at casual people with relative ease, and there’s just no such thing. Even the “techy” chat is in Discord nowadays.


I’ll chime in: simplicity. It’s much easier to keep a few patches that apply to local OS builds. I use Nix, so my Mastodon microVM config just has an extra patch line; when there’s a new Mastodon update, the patch will most probably still apply.
Yes, I could build my own Docker container, but you can’t easily build one with a patch (for Mastodon specifically, you need to patch the JS before minification). It’s doable, but quite annoying. And then you need to keep track of upstream and update your Dockerfile with new versions.
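For the curious, that “extra patch line” is just an ordinary nixpkgs overlay; a minimal sketch, where the patch filename is made up:

```nix
# hypothetical overlay: add a local patch on top of the stock Mastodon package
final: prev: {
  mastodon = prev.mastodon.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [ ./my-mastodon-tweak.patch ];
  });
}
```

When nixpkgs bumps Mastodon, the overlay reapplies the patch against the new source, and the build fails loudly if it no longer applies.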


OP should have vibecoded the title; chatbots know how to use apostrophes.


Let’s be fair: OAuth is very hard. And it requires a web server to work :-)
This is not a password manager; it’s an IdP, roughly like Authelia, Auth0, etc.


While it’s nice, lightweight, and simple, it still blows my mind that a security product has no means of auditing its logs, and that the logs themselves are very hard to deal with programmatically.


That’s not the best example, because CP2077 has its own launcher (at least the Steam version does).
If you want to go the “packaging way”, you could use Nix’s nixCats-nvim to make a fully hermetic Neovim installation where you track the origin of all the dependencies (LSPs too) and plugins, all with receipts and hashes and the other good stuff of a reproducible build system. The security industry likes reproducible builds because there’s only one way to go from source to artifact.
Then you package that in e.g. a Docker container (which Nix can build for you, too) and ship it where you need it, roughly as sketched below.
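The container half might look something like this plain-nixpkgs sketch (the image name is illustrative, and a real nixCats build would stand in for pkgs.neovim here):

```nix
{ pkgs ? import <nixpkgs> { } }:

# hedged sketch: bundle a Nix-built Neovim plus an example LSP into an OCI image
pkgs.dockerTools.buildImage {
  name = "hermetic-nvim"; # illustrative name
  copyToRoot = pkgs.buildEnv {
    name = "nvim-env";
    # every path here is pinned and hashed by nixpkgs
    paths = [ pkgs.neovim pkgs.nil ];
  };
  config.Cmd = [ "/bin/nvim" ];
}
```

Everything that ends up in the image comes from the Nix store, so the provenance story carries over to the container unchanged.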
One thing about Grafana, though, is that you get logs, metrics, and dashboards in the same package. You can use Loki as the actual log store, and it’s easy to integrate with the likes of journald and Docker (see the sketch below).
Yes, you will have to spend more time learning LogQL, but it can be very handy where you don’t have metrics (or don’t want to implement them) and still want some useful data out of logs.
After all, text logs are just very raw, unstructured events in time. You may think you only look into them occasionally, when things break, and you would be correct. But if you want to alert on them, that usually means going from raw logs to structured data. Loki’s LogQL does exactly that, and it’s still ten times easier to manage than the Elastic stack.
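On NixOS, the journald side of that integration is mostly just the stock promtail module; a minimal sketch, assuming a Loki instance is already listening on localhost:3100:

```nix
{
  # hedged sketch: ship the systemd journal to an assumed local Loki instance
  services.promtail = {
    enable = true;
    configuration = {
      server.http_listen_port = 9080;
      # positions path is an assumption; point it at any writable location
      positions.filename = "/var/cache/promtail/positions.yaml";
      clients = [ { url = "http://127.0.0.1:3100/loki/api/v1/push"; } ];
      scrape_configs = [ {
        job_name = "journal";
        journal = {
          max_age = "12h";
          labels = { job = "systemd-journal"; };
        };
        # keep the unit name as a label so LogQL can filter per service
        relabel_configs = [ {
          source_labels = [ "__journal__systemd_unit" ];
          target_label = "unit";
        } ];
      } ];
    };
  };
}
```

From there, a query like {unit="sshd.service"} in Grafana’s Explore view is enough to start slicing the journal.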
VictoriaMetrics has its own logging product now too, and while I haven’t tried it yet, VM for metrics is probably the best thing to happen since Prometheus, especially for resource-constrained homelabs.


I’m curious how it compares to Babashka, which is a scripting/task-runner tool for Clojure that uses SCI (the Small Clojure Interpreter).


Storage Box networking can be hit and miss. It’s OK for incremental uploads, but I went through hell and back to get the initial backup to finish, which makes me wonder what it would take to download it in case I ever have to.
scp breaks off once in a while, and WebDAV terminates the session. I didn’t try SMB, as it feels like a rather weird protocol for the public internet. In the end, I figured it’s not the networking per se, it’s something with the timeouts on the remote end, and I was able to finish the backup using a Hetzner-hosted server as a jumpbox.
But it’s cheap, yeah.


Voyager pulls /.well-known/nodeinfo now; if you don’t proxy that to your backend (I didn’t), it will fail.


Isn’t Kagi’s point that they store very little about you, to the point that there’s no search history, and that you have to pay for the service provided?


That’s not exactly true; Synology doesn’t do anything you can’t access from an off-the-shelf Linux (it’s your usual mdraid and btrfs). But you’d better know what you’re doing if you go that route.


What’s going to pay for the search part, then?


Conduit is in no way compact either. I tuned its caches because two gigs of RAM seemed ridiculous for a single-user instance, but all I got for it was sync lag in the mobile client.
XMPP used to be so much nicer…


I think the point here is moving away from long-lived SSH keys and using whatever IdP you have (enterprise cloud or local OIDC) to provide short-lived SSH keys. It generally improves the security posture, as it’s similar to SSH with certs but less painful to set up.


Next step is discovering atuin! https://atuin.sh/