Selfhosted

38834 readers
141 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report them using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
76
 
 

We are changing our system. We have settled on git (but are open to alternatives) as long as we can self-host it on our own machines.

Specs

Must have

  • hosted on-premises
  • reliable
  • unlikely to be discontinued in the next 5+ years
  • for a group of at least 20 people

Plus

  • GUI / Windows integration
77
 
 

78
 
 

Many times I find myself on a computer that is totally isolated (a friend's computer, a freshly installed VM, a corporate remote desktop where I can't install anything) and I need to transfer some information to it.

Ideally what I would have is:

  • some sort of web chat
  • self hosted (so that I can spin it up only when I need it, and that I can “destroy” all the data after each session)
  • a simple URL with the room name in it (e.g. domain.com/qck-321)
  • on opening you only specify a username (no other authentication)
  • the first person who joins has to confirm everyone else (so people can't just “drop in”)
  • no fancy technology (WebSockets, …)

What I've found so far:

  • https://tlk.io/ - quite close, but doesn't hit all the points
  • https://chitchatter.im/ - this one is quite promising, but unfortunately it failed me on the first Windows remote machine (probably due to some firewall rule)

I'm more than prepared to develop something myself, but first I would like to check whether there really is nothing out there that solves this.

Bonus question: do you have any other approach? How do you transfer (potentially sensitive) information to an “isolated” machine?
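
One shape this could take, as a minimal sketch rather than a finished tool: spin up a throwaway HTTP server on a machine I control, open the URL in the isolated machine's browser, copy the text out, and tear everything down afterwards. Only the Python 3 standard library is used; the port, the token, and the payload are placeholders.

```python
#!/usr/bin/env python3
"""Throwaway one-page text drop: serve a snippet of text over HTTP, then destroy it.

Run it on a machine you control, open http://<host>:8000/<token> in the
isolated machine's browser, copy the text, and stop the server (Ctrl+C).
The random token only makes the path hard to guess; it is not real auth.
"""
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

TEXT = "the (potentially sensitive) information to hand over"  # placeholder payload
TOKEN = secrets.token_urlsafe(8)   # random path segment, printed below
PORT = 8000                        # arbitrary example port

class DropHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/{TOKEN}":
            body = TEXT.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    print(f"Serving on http://0.0.0.0:{PORT}/{TOKEN} (Ctrl+C to stop and destroy)")
    HTTPServer(("0.0.0.0", PORT), DropHandler).serve_forever()
```

Over an untrusted network this would still want TLS and some kind of join confirmation in front of it, which is exactly why I'd prefer an existing project.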

79
 
 

I bought a VPS and configured wg-easy, but the download speed through the tunnel is very slow, while the upload speed is not so bad. Why might this be happening?

80
 
 
81
 
 

I'll be doing a self-hosting workshop at FOSSY 2024 tomorrow.

Details: https://2024.fossy.us/schedule/presentation/219/

If you bring a book, I'll sign it.

If you're stuck getting started with your homelab, see https://selfhostbook.com/videos/ . Any suggestions on other videos I might create? Should I stick with short and sweet, or do something longer? How much longer?

82
 
 

Hello, I'm trying to host a backup solution on my k8s cluster for my Linux and Windows clients. I would like it to use HTTPS so it's easy to manage with an ingress. Does anyone have any recommendations? Thanks

EDIT: a requirement I forgot is that it's meant for multiple users, but I don't know if that's possible

83
 
 

I actually do this, minus rainloop, and it works pretty great.

84
 
 

It's fairly obvious why stopping a service while backing it up makes sense. Imagine backing up Immich while it's running. You start the backup: the database is backed up first, then the image assets are copied. That could take an hour. While the assets are being backed up, a new image is uploaded. The live database knows about it but the one you've backed up doesn't. Then your backup process reaches the new image asset and copies it. If you restore this backup, Immich will contain an asset that isn't known by the database. In order to avoid scenarios like this, you'd stop Immich while the backup is running.

Now consider a system that can do instant snapshots, like ZFS or LVM. Immich is running, you stop it, take a snapshot, then restart it. Then you back up Immich from the snapshot while Immich is running. This should reduce the downtime to roughly the time it takes to take the snapshot. The state of the Immich data in the snapshot should be equivalent to backing up a stopped Immich instance.
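
A minimal sketch of that workflow, assuming Immich runs under Docker Compose and its data sits on a ZFS dataset (the compose path, dataset name, and the use of restic are made-up examples, not Immich specifics):

```python
#!/usr/bin/env python3
"""Sketch: stop -> snapshot -> restart -> back up from the snapshot.

Assumptions: Immich runs via `docker compose` in /opt/immich, its data
lives on the ZFS dataset tank/immich, and restic is the backup tool.
Only the stop/snapshot/start window causes downtime.
"""
import subprocess

COMPOSE_DIR = "/opt/immich"                          # hypothetical compose project
DATASET = "tank/immich"                              # hypothetical ZFS dataset
SNAPSHOT = f"{DATASET}@backup"
SNAPSHOT_PATH = "/tank/immich/.zfs/snapshot/backup"  # ZFS exposes snapshots read-only here
REPO = "/backups/immich-restic"                      # hypothetical restic repository

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Short downtime window: quiesce, snapshot, resume.
run("docker", "compose", "--project-directory", COMPOSE_DIR, "stop")
run("zfs", "snapshot", SNAPSHOT)
run("docker", "compose", "--project-directory", COMPOSE_DIR, "start")

# The long part reads from the frozen snapshot while Immich is live again.
try:
    run("restic", "-r", REPO, "backup", SNAPSHOT_PATH)
finally:
    run("zfs", "destroy", SNAPSHOT)
```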

Now consider the case above without stopping Immich while taking the snapshot. In theory the data you're backing up should still represent the complete state of Immich at a single point in time, eliminating the possibility of divergent data between the database and the assets. It would, however, represent the state of a live Immich instance (lock files, etc.). Wouldn't restoring from such a backup be equivalent to kill -9 or pulling the cable and then restarting the service? If a service can recover from a cable pull, is it reasonable to assume it should recover from restoring a snapshot taken while live? If so, is there much point in stopping services during snapshots?

85
 
 

I'm proud to share a major development status update of XPipe, a new connection hub that allows you to access your entire server infrastructure from your local desktop. It works on top of your installed command-line programs and does not require any setup on your remote systems. So if you normally use CLI tools like ssh, docker, kubectl, etc. to connect to your servers, it will automatically integrate with them.

Here is how it looks if you haven't seen it before:

Hub

Hub Alt

Browser

Local forwarding for services

Many systems run a variety of services, such as web services. There is now support for detecting, forwarding, and opening these services. For example, if you are running a web service on a remote container, you can automatically forward the service port via SSH tunnels, allowing you to access it from your local machine, e.g., in a web browser. These service tunnels can be toggled at any time. The port forwarding supports specifying a custom local target port and also works for connections with multiple intermediate systems through chained tunnels. For containers, services are automatically detected via their exposed mapped ports. For other systems, you can manually add services via their port.

Markdown notes

Another commonly requested feature was the ability to create and share notes for connections. As Markdown is everywhere nowadays, it made sense to implement the note-taking functionality with it, so you can now add Markdown notes to any connection. The full spec is supported. Editing is delegated to a local editor of your choice, so you have access to advanced editing features and syntax highlighting there.

Markdown

Proxmox improvements

You can now automatically open the Proxmox dashboard website through the new service integration. This will also work with the service tunneling feature for remote servers.

You can now open VNC sessions to Proxmox VMs.

The Proxmox support has been reworked to support one non-enterprise PVE node in the community edition.

Scripting improvements

The scripting system has been reworked. The old system was clunky and not fun to use. The new system allows you to assign each script one of multiple execution types. Based on these execution types, you can make scripts active or inactive with a toggle. If they are active, the scripts will apply in the selected use cases. These are the current types:

  • Init scripts: When enabled, they will automatically run on init in all compatible shells. This is useful for setting things like aliases consistently
  • Shell scripts: When enabled, they will be copied over to the target system and put into the PATH. You can then call them in a normal shell session by their name, e.g. myscript.sh, also with arguments.
  • File scripts: When enabled, you can call them in the file browser with the selected files as arguments. Useful to perform common actions with files

Scripts

Native window styles

The application styling has been improved to fit in better with native window decorations:

Windows style

macOS style

A new HTTP API

For a programmatic approach to managing connections, XPipe 10 comes with a built-in HTTP server that can handle all kinds of local API requests. There is an openapi.yml spec file that contains all API definitions and code samples for sending the requests.

To start off, you can query connections based on various filters. With the matched connections, you can start remote shell sessions for each one and run arbitrary commands in them. You get the command exit code and output as a response, allowing you to adapt your control flow based on command outputs. Any kind of passwords and other secrets are automatically provided by XPipe when establishing a shell connection. You can also access the file systems via these shell connections to read and write remote files.

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

The system is designed to allow for unlimited usage in non-commercial environments and only requires a license for more enterprise-level environments. This system is never going to be perfect, as there is no clear-cut separation between the kinds of systems used in, for example, homelabs and enterprises. But I try my best to give users as many free features as possible for their personal environments.

Outlook

If this project sounds interesting to you, you can check it out on GitHub! There are more features to come in the near future.

Enjoy!

86
 
 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendations for a voice changer to process these audio files?
Preferably it would have a CLI so I can include it in my pipeline for processing RSS feeds, but I don't mind having to work through a UI.
Bonus points if it can process the audio streams.
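
To illustrate the kind of CLI step I'd want to slot into the pipeline, here is a rough sketch that runs a piper-generated WAV through a crude pitch shift with ffmpeg (not a proper voice changer; the sample rate and shift factor are my assumptions, adjust them to the voice model used):

```python
#!/usr/bin/env python3
"""Rough sketch: crude pitch shift on a WAV file via ffmpeg.

RATE is the sample rate of the input WAV (many piper voices output
22050 Hz, but check the model). SHIFT > 1 raises the pitch.
Usage: pitch_shift.py input.wav output.wav
"""
import subprocess
import sys

RATE = 22050   # assumed sample rate of the piper output
SHIFT = 1.08   # assumed shift factor; >1 raises pitch, <1 lowers it

def shift_pitch(src: str, dst: str) -> None:
    # asetrate changes pitch and speed together, aresample restores the
    # original rate, atempo compensates the speed so the length is kept.
    audio_filter = f"asetrate={int(RATE * SHIFT)},aresample={RATE},atempo={1 / SHIFT:.6f}"
    subprocess.run(["ffmpeg", "-y", "-i", src, "-af", audio_filter, dst], check=True)

if __name__ == "__main__":
    shift_pitch(sys.argv[1], sys.argv[2])
```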

87
 
 

Hello,

I am looking for recommendations for an immutable backup service provider with options suitable for a homelab user.

My research has led me to services with expensive options, or no pricing at all unless you ask for a quote.

Thank you

88
 
 

So I've got a Consul cluster running for service discovery on a set of servers, some of which have public IP addresses. On some of these nodes I want to run Traefik instances (dynamically registered), which end up under tfk.service.consul, a name that holds a number of A and AAAA records. I want my public address tfk.example.com to point at those A records without revealing the Consul name.

How would I do this?

Example:

Some application maps internal A-records to public A-records.

public             | internal               / xxx.xxx.xxx.xxx
tfk.example.com -- | -- tfk.service.consul -- yyy.yyy.yyy.yyy
                   |                        \ zzz.zzz.zzz.zzz
Expected result:

Public DNS resolvers never see the consul query.

public           / xxx.xxx.xxx.xxx
tfk.example.com -- yyy.yyy.yyy.yyy
                 \ zzz.zzz.zzz.zzz

I know I could use consul-template for this purpose by rendering config files for BIND or similar, but I was wondering whether there is some way to do this purely via DNS, like some kind of bridge application.
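
Something in the direction of that bridge application, as a rough sketch: a tiny DNS server that answers queries for tfk.example.com by asking Consul for tfk.service.consul and relabeling the answers. It assumes dnslib is installed, Consul's DNS interface is on 127.0.0.1:8600, and that this process (or something forwarding to it) is what the public zone for tfk.example.com points at.

```python
#!/usr/bin/env python3
"""Sketch of a DNS bridge: serve tfk.example.com with the records behind
tfk.service.consul, so public resolvers never see the Consul name.

Assumptions: `pip install dnslib`, Consul DNS on 127.0.0.1:8600, and the
names/ports below taken from the example in the post.
"""
from dnslib import DNSRecord, QTYPE, RCODE
from dnslib.server import BaseResolver, DNSServer

PUBLIC_NAME = "tfk.example.com"
INTERNAL_NAME = "tfk.service.consul."
CONSUL_DNS = ("127.0.0.1", 8600)

class BridgeResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        qname = str(request.q.qname).rstrip(".")
        if qname == PUBLIC_NAME:
            qtype = QTYPE[request.q.qtype]              # e.g. "A" or "AAAA"
            upstream = DNSRecord.question(INTERNAL_NAME, qtype)
            raw = upstream.send(*CONSUL_DNS, timeout=2)
            for rr in DNSRecord.parse(raw).rr:
                rr.rname = request.q.qname              # relabel under the public name
                reply.add_answer(rr)
        else:
            reply.header.rcode = RCODE.NXDOMAIN
        return reply

if __name__ == "__main__":
    # Port 5353 for testing; a real deployment would answer on 53.
    DNSServer(BridgeResolver(), port=5353, address="0.0.0.0").start()
```

CoreDNS with a rewrite rule in front of Consul's DNS could do a similar job without custom code; the sketch is just to show the shape of the bridge.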

89
 
 

I have a Proxmox + Debian + Docker server and I'm looking to set up my backups so that they get copied (duh) to my Linux PC whenever it comes online on the local network.

I'm not sure whether it's best to back up locally and have something else handle the copying, how to have those backups run only if they haven't run in a while (regardless of the availability of the PC), or whether the PC should run the logic versus keeping control of it on the server.

Mostly I don't want to waste space on my server because it's limited...

Right now I don't know the what or the how, so any input is appreciated.
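
One shape this could take, sketched from the PC side with made-up hostnames and paths: the PC checks how long ago the last successful run was, checks whether the server is reachable, and only then pulls with rsync. Run from a cron job or systemd timer on the PC, this keeps the "only if it hasn't run in a while" logic independent of when the PC happens to show up.

```python
#!/usr/bin/env python3
"""Sketch: pull backups from the server whenever this PC is online,
but at most once per interval. Hostname, paths, and interval are
placeholders.
"""
import socket
import subprocess
import time
from pathlib import Path

SERVER = "homeserver.lan"                  # hypothetical hostname
SOURCE = f"{SERVER}:/srv/backups/"         # what the server exposes over SSH/rsync
DEST = Path.home() / "server-backups"      # where the PC keeps the copies
STAMP = DEST / ".last-pull"                # marker file for the last good run
MIN_INTERVAL = 24 * 3600                   # at most one pull per day

def recently_pulled() -> bool:
    return STAMP.exists() and time.time() - STAMP.stat().st_mtime < MIN_INTERVAL

def server_online() -> bool:
    try:
        with socket.create_connection((SERVER, 22), timeout=3):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if recently_pulled() or not server_online():
        raise SystemExit(0)
    DEST.mkdir(parents=True, exist_ok=True)
    subprocess.run(["rsync", "-a", "--delete", SOURCE, str(DEST) + "/"], check=True)
    STAMP.touch()
```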

90
 
 

I saw this post today on Reddit and was curious to see whether views here are similar to those over there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
91
 
 

I have multiple things running through a reverse proxy and I've never had trouble accessing them until now. The two hospitals are part of the same company, so their network setup is probably identical.

Curiously, it's not that the sites can't be found; instead my browser complains that the connection is not secure.

So I don't think it's a DNS problem, but I wonder what the hospital is doing to the data.

All I could come up with in my research is this article about various methods of intercepting traffic. https://blog.cloudflare.com/performing-preventing-ssl-stripping-a-plain-english-primer/

Since my domain uses a TLD that requires HTTPS (.app is HSTS-preloaded), the browser doesn't allow me to bypass the warning.

Is this just some sort of super strict security rules at the hospital? I doubt they're doing anything malicious, but it makes me wonder.
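
If the hospital network is terminating TLS itself, the certificate my browser sees there should differ from the one my reverse proxy actually serves. A quick check I could run from both networks (the hostname is a placeholder for my real domain):

```python
#!/usr/bin/env python3
"""Print the SHA-256 fingerprint of the certificate a host presents.

Run it once from home and once from the hospital network; differing
fingerprints mean something in between is re-terminating TLS.
"""
import hashlib
import ssl

HOST = "service.example.app"   # placeholder hostname

pem = ssl.get_server_certificate((HOST, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha256(der).hexdigest())
```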

Thanks!

Also, if you know of any good networking Lemmy communities, feel free to share them.

92
 
 

Hi there,

What SFF machines do you recommend for a server to basically run OPNsense (with a 4-port expansion NIC) plus a bunch of extra disks to serve as a NAS? I was looking through the ThinkCentre M720, M800, et al. I believe these allow for up to 3 disks.

I know you'd usually run OPNsense on a dedicated machine, but I'm a bit constrained on space, so I'm trying to fit it all in one box. I won't be streaming Linux ISOs from this NAS, just storing my own files.

93
 
 

I'm strongly considering adding another backup location in the form of an old Raspberry Pi and a USB HDD.

I want the Pi to use the available network exclusively to connect to my WireGuard server, so that other devices (both local to the WireGuard server and remote ones connected to it) can use the Pi as a secondary backup location.

I'm kind of worried about a scenario where my network is compromised and, through the Pi's VPN connection into the external network, the external network gets compromised as well.

What are the best practices to secure such a setup?

94
 
 

Any YUNOhost people here? What are your thoughts about it? Does it fulfil its purpose of broadening access to self-hosting? Is it secure? Interested to hear what you think.

95
 
 

Background: I am migrating from a Gen 1 Google WiFi mesh router and pulled the trigger and bought this router on prime day. TP-Link Tri-Band BE19000 WiFi 7 Router (Archer BE800) - https://a.co/d/en9OlMz

Huge upgrade, aside from a few spots in my house where coverage is pretty spotty. I cannot easily move the router: there's no basement, and no approval from the wife to break through a bunch of walls to wire it up how I want.

So the question is... Do I get the BE11000 range extender that is currently $300

Or

TP-Link Tri-Band BE9300 WiFi 7 Router Archer BE550 - https://a.co/d/bUat5G4, which is currently $250. The speed difference isn't a deal breaker for me on the other devices. My computers are hardwired and happy next to the router.

Or do I just say screw it and return it and go back to a mesh system.

I am currently unable to connect the second node to a wired connection, but I have a plan on getting that done this coming year once I get wife buy-in...

Any help is appreciated, thanks in advance!

96
 
 

Here's what I currently have:

  • Ryzen 1700 w/ 16GB RAM
  • GTX 750 ti
  • 1x SATA SSD - 120GB, currently use <50GB
  • 2x 8TB SATA HDD
  • runs openSUSE Leap, considering switch to microOS

And the main services I run (total disk usage for the OS + services, not counting data, is the <50GB noted above):

  • NextCloud - possibly switch to ownCloud infinite scale
  • Jellyfin - transcoding is nice to have, but not required
  • samba
  • various small services (Unifi Controller, vaultwarden, etc)

And services I plan to run:

  • CI/CD for Rust projects - infrequent builds
  • HomeAssistant
  • maybe speech to text? I'm looking to build an Alexa replacement
  • Minecraft server - small scale, only like 2-3 players, very few mods

HW wishlist:

  • 16GB RAM - 8GB may be a little low longer term
  • 4x SATA - may add 2 more HDDs
  • m.2 - replace my SATA SSD; ideally 2x for RAID, but I can do backups; performance isn't the concern here (1x sata + PCIe would work)
  • dual NIC - not required, but would simplify router config for private network; could use USB to Eth dongle, this is just for security cameras and whatnot
  • very small - mini-ITX at the largest; I want to shove this under my bed
  • very quiet
  • very low power - my Ryzen 1700 is overkill, this is mostly for the "quiet" req, but also paying less is nice

I've heard good things about N100 devices, but I haven't seen anything w/ 4x SATA or an accessible PCIe for a SATA adapter.

The closest I've seen is a ZimaBlade, but I'm worried about:

  • performance, especially as a CI server
  • power supply - why couldn't they just do regular USB-C?
  • access to extra USB ports - they're hidden in the case

I don't need x86 for anything, ARM would be fine, but I'm having trouble finding anything with >8GB RAM and SATA/PCIe options are a bit... limited.

Anyway, thoughts?

97
98
 
 

Hello all,

I have started experimenting again with a local server and I am facing a few issues, here is my case.

I run Debian on an old HP prebuilt without a GUI. I do everything over SSH from my laptop (a basic ssh user@addr connection).

I have installed Docker and a few containers. I also installed Portainer for easier management.

All good so far because everything is local.

I have purchased a domain through Cloudflare and set up a tunnel, so as to avoid exposing any ports and to have an easier time managing and deploying stuff.

I have set up Jellyfin and Vaultwarden, but when I tried to install Nextcloud AIO, it was advised to add a local reverse proxy to avoid many problems.

My questions are:

Is the tunnel solution appropriate for Jellyfin?

I suppose it's OK for Vaultwarden, as there isn't much data being transferred?

Would it be better to run Nginx Proxy Manager for everything, or can I run both solutions side by side?

Any general recommendations on the above and in general are appreciated!

99
 
 

When I go on vacation I prefer to keep my smartphone's WiFi and mobile data off, but I really don't like the way Spotify handles offline content. Most of the time it doesn't download everything, and when I search, it even shows me content that isn't available offline (how can it do that?). Is there a self-hosted service that I can use to download my playlists and play them with an Android app that can also download them?

100
 
 

Apologies if this post ain't right for this community! I'm admittedly not interested in self-hosting myself, but I've a close buddy who's wanting to get back to streaming, but rightfully hates Amazon. He's wanting to self-host with Owncast to do video streaming with his pals, but lives in a very small flat with very little free space - hence the request for a laptop.

Ideally he's needing something great for video encoding, and Linux friendly to boot. No Windows. Mate's got a budget of ~£1,000.

If there's a better community for this lemme know!
