farcaller

joined 1 year ago
[–] farcaller@fstab.sh 3 points 1 month ago

I’ll second conduit. You can tune its caching, reducing the ram usage significantly. It does become a bit painful to sync the mobile clients, but at least it's not gigabytes of ram wasted.
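Roughly the knobs I mean, in conduit.toml (option names here are from memory and may differ between conduit versions, so check the example config that ships with your build):

```toml
# hedged sketch: shrink the database cache and the in-memory caches.
# the defaults are much larger; exact names and defaults depend on your version.
[global]
db_cache_capacity_mb = 64.0
conduit_cache_capacity_modifier = 0.3
```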

[–] farcaller@fstab.sh 2 points 1 month ago (1 children)

In the context of my comments here, any mention of "S3" means "S3-compatible" in the way that's implemented by Garage. I hope that clarifies it for you.

[–] farcaller@fstab.sh 2 points 1 month ago (3 children)

Clearly I mean Garage in here when I write "S3." It is significantly easier and faster to run hugo deploy and let it talk to Garage than to figure out where on a remote node the nginx k8s pod has its data PV mounted and scp files into it. Yes, I could automate that. Yes, I could pin the blog's pod to a single node. Yes, I could use a stable host path for that and use rsync, and I could skip the whole kubernetes insanity for a static html blog.

But I somewhat enjoy poking at the tech, and yes, using Garage makes deploys faster and provides me with a stable, well-known API endpoint for both data transfers and for serving the content, with very little maintenance required to make it work.
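For reference, the deploy side is just a [deployment] target in the hugo config pointed at the S3-compatible endpoint; the bucket, region and endpoint below are placeholders, not my actual setup:

```toml
[deployment]
[[deployment.targets]]
name = "garage"
# Garage speaks the S3 API, so hugo's S3 deployment target works against it;
# s3ForcePathStyle is usually needed for non-AWS endpoints
URL = "s3://blog?region=garage&endpoint=https://s3.example.net&s3ForcePathStyle=true"
```

After that, a plain hugo deploy --target garage pushes the rendered site.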

[–] farcaller@fstab.sh 2 points 1 month ago (6 children)

S3 storage is simpler than running scp -r to a remote node, because you can copy files to S3 in a massively parallel way, while scp is generally sequential. It's very easy to protect the API too, as it's just HTTP (and, at that, it's also significantly faster than WebDAV).
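As a concrete example of the parallel copy (endpoint and bucket names are placeholders; any S3-compatible client works the same way against Garage):

```sh
# aws-cli uploads changed files concurrently, unlike a sequential scp -r
aws s3 sync ./public s3://blog --endpoint-url https://s3.example.net
```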

[–] farcaller@fstab.sh 4 points 1 month ago

Of course it does AI now!

But seriously, the easiest guide for a minio setup meant using their operator. The garage guide was: spin up this single deploy and it works from there.
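To give an idea of what "single deploy" means in practice, here's roughly the single-container flavor (image tag, ports and paths are illustrative; the actual settings live in garage.toml):

```sh
docker run -d --name garage \
  -v /srv/garage/garage.toml:/etc/garage.toml \
  -v /srv/garage/meta:/var/lib/garage/meta \
  -v /srv/garage/data:/var/lib/garage/data \
  -p 3900:3900 \
  dxflrs/garage:v0.9.4
```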

[–] farcaller@fstab.sh 17 points 1 month ago (14 children)

I remember when minio just started and it was small and easy to run. Nowadays, though, it's a full-blown enterprise product, full of features you'll never care about in a homelab, all eating your cpu and ram.

Garage is small and easy to run. I’ve been toying with it for several months and I’m more than happy with its simple API and tiny footprint. I even run my (static html) blog off it, because it's just easier to deploy to an S3-compatible API.

[–] farcaller@fstab.sh 6 points 2 months ago

Specifically, use home.arpa if you must use a private domain.
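e.g. with unbound you can answer home.arpa locally and never leak it upstream (the host and address below are made up):

```
server:
  # serve home.arpa from local data only; anything else under it gets NXDOMAIN
  local-zone: "home.arpa." static
  local-data: "nas.home.arpa. IN A 192.168.1.10"
```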

[–] farcaller@fstab.sh 1 points 2 months ago (1 children)

There’s a whole bunch of “it loses all your data” bugs in OpenZFS too, ironically, although it’s way way less fragile than btrfs in general.

That said, the latter is pretty much solid too, unless you do raid5-like things.

[–] farcaller@fstab.sh 1 points 2 months ago (1 children)

FWIW that java app isn’t that memory-hungry, and it's not cpu-intensive at all. There are no issues with running java apps if you spend 5 minutes figuring out the basic flags for setting memory limits, or run it in a memory-limited cgroup via some container runtime.
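The flags I have in mind are along these lines (heap size and image name are placeholders):

```sh
# cap the heap explicitly...
java -Xmx512m -jar app.jar

# ...or run it under a memory-limited cgroup and let the JVM size its heap
# as a fraction of the container limit (modern JDKs are container-aware)
docker run --rm --memory=1g myapp java -XX:MaxRAMPercentage=75.0 -jar /app.jar
```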

[–] farcaller@fstab.sh 9 points 2 months ago (1 children)

I run k3s in my homelab as a single node cluster. I’m very familiar with kubernetes in general, so it's just easier for me to reason with a control plane.

Some of the benefits I find useful:

  • ArgoCD set to fire-and-forget will automatically update software versions as they're released. I use nix to lower the burden of maintaining my chart forks. Sometimes they break, but
  • VictoriaMetrics easily collects all the metrics from everything in the cluster with very little manual tinkering, so I am notified when things break, and
  • zfs-localpv provides in-cluster management for data snapshots, so when things do break I can easily roll back to a known good state (rough sketch below).
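For the zfs-localpv part, the setup is roughly a StorageClass plus a VolumeSnapshotClass; the pool name and object names below are illustrative, not my actual manifests:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfs-local
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "tank"   # the zpool on the node
  fstype: "zfs"
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: zfs-snap
driver: zfs.csi.openebs.io
deletionPolicy: Delete
```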

k3s is, of course, a memory hog; I'd estimate it and cilium (my CNI of choice) eat up about 2Gb of ram and a bit under one core. It's something you can tune to some extent, though. But then, I can easily do pod routing via VPN and create services that will automatically get a public IP from my endless IPv6 pool and get that address assigned a DNS name in like 10 lines of Yaml.
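Those 10 lines look more or less like this (hostname and labels are placeholders; it assumes cilium's LB-IPAM hands out the address and external-dns publishes the record):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog
  annotations:
    external-dns.alpha.kubernetes.io/hostname: blog.example.net
spec:
  type: LoadBalancer
  ipFamilies: [IPv6]
  ipFamilyPolicy: SingleStack
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 8080
```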

[–] farcaller@fstab.sh 0 points 5 months ago (1 children)

That’s what their docs say:

At an absolute minimum, Dendrite will expect 1GB RAM. For a comfortable day-to-day deployment which can participate in federated rooms for a number of local users, be prepared to assign 2-4 CPU cores and 8GB RAM — more if your user count increases.

That’s not accounting for Postgres.

[–] farcaller@fstab.sh 0 points 5 months ago (3 children)

I looked into matrix servers the other day for an unrelated reason and tbh the amount of resources they ask for is way more than you need for a webpage (dendrite asks for 1gb ram minimum for a number of users, and that's without accounting for postgres)
