tvcvt

joined 1 year ago
[–] tvcvt@lemmy.ml 2 points 6 days ago

The thing that immediately came to mind was mailpiler.org. It’s been on my list to stand up for a while, but I’ve never got around to it.

[–] tvcvt@lemmy.ml 1 points 1 week ago

Awesome. I’m glad it helps. I’d be a little wary of using the same directory in multiple containers. File systems may or may not behave well with multiple machines writing to them. Not saying anything bad will happen, but do keep an eye out for issues.

[–] tvcvt@lemmy.ml 3 points 1 week ago (2 children)

I’m making some assumptions, namely that you’re using an unprivileged LXC container and the mount point is a bind mount.

Unprivileged LXC containers shift user ID numbers so that an escape won’t result in root access to the host. The root user (uid 0) in the container is actually uid 100000 from the perspective of the Proxmox host.

What I usually do is set ownership of my bind mounts to that high-numbered ID (so something like chown -R 100000:100000 /path/to/bind/mount) from Proxmox. Then the root user in the container will be able to set whatever permissions you need directly.
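
For example, if whatever runs inside the container uses uid 1000, the matching owner on the host side would be 101000 (this assumes the default subuid offset of 100000; check /etc/subuid on the host if you’ve customized it):

chown -R 101000:101000 /path/to/bind/mount   # run on the Proxmox host, not in the container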

[–] tvcvt@lemmy.ml 14 points 2 weeks ago (1 children)

Check out ersatztv.org. That software lets you create custom, continuously playing channels that you can watch from Jellyfin as a live TV channel.

[–] tvcvt@lemmy.ml 1 points 3 weeks ago

Since you're interested in this kind of DIY approach, I'd seriously consider thinking the whole process through and writing a simple script for this that runs from your desktop. That will make it trivial to do an automatic backup whenever you're active on the network.

Instead of cron, look into systemd timers: you can fire off your script after, say, one minute of your desktop being up with a monotonic timer like OnBootSec=1min, and add OnUnitActiveSec=24h if you want it to keep repeating while the machine stays on.
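
A rough sketch of what the pair of units could look like (the unit names and script path are just placeholders I’m making up):

# /etc/systemd/system/pull-backup.service
[Unit]
Description=Pull backup from the server
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pull-backup.sh

# /etc/systemd/system/pull-backup.timer
[Unit]
Description=Pull backup shortly after boot, then daily

[Timer]
OnBootSec=1min
OnUnitActiveSec=24h

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now pull-backup.timer.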

Thinking through the script in pseudocode, it could look something like:

rsync -avzh $server_source $desktop_destination || curl -d "Backup failed" ntfy.sh/mytopic

This would pull the backup from your server to your desktop and, if the backup failed, use a service such as ntfy.sh to notify you of the problem.

I think that would pretty much take care of all of your requirements and if you ever decided to switch systems (like using zfs send/recv instead of rsync), it would be a matter of just altering that one script.

[–] tvcvt@lemmy.ml 4 points 1 month ago (1 children)

I had never heard of this, but it sounds fascinating — thanks for sharing! Definitely going to try to set this up this weekend.

[–] tvcvt@lemmy.ml 3 points 1 month ago

Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them directly in a pinch. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.

For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
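
To give a flavor of it (the backend name and address here are made up for illustration), a bare-bones config that proxies one service looks roughly like:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend www
    bind *:80
    default_backend wiki

backend wiki
    server dokuwiki 192.168.1.50:8080 check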

[–] tvcvt@lemmy.ml 4 points 1 month ago

I can’t give direct experience here, but this is exactly the use case I’ve been meaning to spin up mailpiler for: https://www.mailpiler.org/. One of these days that will rise to the top of the priority list.

[–] tvcvt@lemmy.ml 0 points 5 months ago (1 children)

I see a ton of price fluctuation in used drives. One way I’ve had some success is in seeking out drives sold in lots. Often I’ll also see SAS drives sell for less than a SATA drive of the same size.

[–] tvcvt@lemmy.ml 0 points 9 months ago (1 children)

A bind mount kind of shares a directory on the host with the container. To do it, unless something’s changed in the UI that I don’t remember, you have to edit the LXC config file and add something like:

mp0: /path/on/host,mp=/path/in/container

I usually make a sharing dataset and use that as the target.
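
If you’d rather not edit the file by hand, the same thing can be done from the Proxmox host’s shell with pct (101 here is just an example container ID):

pct set 101 -mp0 /path/on/host,mp=/path/in/container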

[–] tvcvt@lemmy.ml 0 points 9 months ago (3 children)

How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive passthrough.

TrueNAS and OMV are great, and I went that same VM NAS route when I first started setting things up many years ago. It’s totally robust and doable, but it’s also a pretty inefficient way to use storage.

Here’s how I’d do it in this situation: make your zpools in Proxmox, create one dataset for the stuff you’ll use for VMs and another for the stuff you’ll use for file sharing, and then make an LXC container that runs Cockpit with 45Drives’ file sharing plugin. Bind mount the file-sharing dataset you made and then you have the best of both worlds: incredibly flexible storage and a great UI for managing Samba shares.
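
Roughly, the storage side of that looks like this (the pool name, paths, and container ID are just examples):

zfs create tank/vmdata    # dataset to add to Proxmox as VM/CT disk storage
zfs create tank/shares    # dataset for file sharing
pct set 105 -mp0 /tank/shares,mp=/mnt/shares    # bind mount into the Cockpit container
chown -R 100000:100000 /tank/shares    # let the unprivileged container's root own it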

[–] tvcvt@lemmy.ml 1 points 1 year ago (1 children)

This promises to be a fun project!

It sounds to me like you have above-average demands on your network, and I’d agree that UniFi (and therefore probably Omada) isn’t what I’d consider great as a router/firewall.

I’m a fan of pfSense/OPNsense for that purpose, which you can install on pretty much any x86_64 hardware. They’re both wonderful and you can fine-tune to your heart’s content or get them set the way you like and leave them.

If you really like a dedicated router appliance, I do like the Mikrotiks, too, but you’d have to study their sometimes-peculiar way of doing things.

To my tastes, UniFi does great at switching and wireless, but if you're unhappy with that direction, I've heard good things about Omada and the Aruba stuff is fantastic. I've recently been playing with some used IAP-325s from eBay. I picked them up for $25 and they've been terrific.
