Selfhosted

37927 readers
419 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS
1
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2
 
 

cross-posted from: https://lemmy.dbzer0.com/post/24154583

A new stable release is here with three major improvements and numerous smaller changes. Let's dive into the highlights:

Built-in Theme Explorer

Users can now browse, preview, and download themes directly within Kavita. Uploaded themes will update automatically.

Smart Collections for Kavita+

Users can import Interest Stacks/Restacks from their MAL accounts into Kavita, syncing every two days. These collections are read-only but can be promoted if the user has the promotion role.

Scanner Changes

Optimizations have been made to improve scanning performance for larger libraries, reducing the processing time significantly.

Misc Changes

  • WAL Enabled by Default: Fixes common database lock issues.
  • Double Scrollbar on Mobile: Fixed on all pages.
  • OPDS Improvements: Enhanced metadata and reading list support.
  • Manga Reader Tweaks: Improved fit-to-height/width functionality.

New Features

  • Smart Collections: Sync with MAL every 2 days.
  • Theme Downloads: Direct from the Theme Repo, with automatic updates.
  • Book Series Specials: Classified with specific filename markers.
  • OPDS-PS: Convert PDF files to images.
  • Random Sort Option: New sorting method for streams.
  • Manual Width Override: For manga/webtoon reader.

Changes

  • Password Reset: Works without email setup.
  • Reduced Memory Usage: When adding series to a collection.
  • Manga Reader Scaling: Improved to meet user expectations.
  • Search Improvements: Faster for larger libraries by default.
  • Scanner Optimizations: Less work on lower-level folders.
  • Updated Cover Generation: Better handling for webtoons.

Fixes

  • Hangfire Access: Corrected unauthorized access.
  • Theme Deletion: Admins can no longer delete themes in use.
  • Manga Reader Double Setting: Fixed cover and last page positioning.
  • Series Parsing: Improved handling of special cases.
  • Double Scrollbar: Fixed in various components.
  • Metadata Access: Corrected access issues for restricted libraries.
  • Event Widget: Enhanced responsiveness and localization.

@DieselTech has joined the Kavita team, contributing significant improvements for comic users.

Looking Ahead

Plans for the next release include a PDF rework, considering user feedback and holiday schedules.

Enjoy the new features and improvements, and please provide feedback for further enhancements.

3
 
 

Referencing: https://lemmy.world/post/17588348

I want to make a NAS with a 500GB boot drive and 2x16TB HDDs. Based on my previous post, btrfs is a good option. It also looks easy to get started. My plan for the NAS would be to purchase several 16TB drives, and only use 2 of them.

My first question is about different drives. Could I purchase two different brand drives and use them with btrfs? (I assume yes)

Second question: how does the replacement process work? Say drive A dies, so I remove it and put in a brand-new replacement. What do I have to do with btrfs to get the RAID 1 going again? Any links or guides would be amazing.
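On the first question: btrfs doesn't care whether the two drives are the same brand, though equal sizes keep the full capacity usable in RAID 1. For the second, the replacement flow is typically handled with `btrfs replace`. A hedged sketch of both steps, with device names and mount point as placeholders (don't run these against real data without checking them first):

```shell
# Create the RAID1 array: both data and metadata mirrored across two drives
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb

# Later, after a failure: find the devid of the dead drive
btrfs filesystem show /mnt/nas

# If the dead drive is completely gone, mount degraded first
mount -o degraded /dev/sdb /mnt/nas

# Replace missing devid 1 with the new drive, then watch progress
btrfs replace start 1 /dev/sdc /mnt/nas
btrfs replace status /mnt/nas
```

Once the replace finishes, the mirror is rebuilt onto the new drive; a `btrfs scrub` afterwards is a common sanity check.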

4
 
 

Hi guys, I recently stumbled upon this website where you can get an eu.org subdomain (example.eu.org, for instance).

I noticed, though, that domains aren't created instantly. I'm curious whether there is human review before domains are processed, and whether it generally takes a long time to create domains with them.

5
 
 

I'll start by saying that I really love Tube Archivist. It works flawlessly in doing what it does (archiving YouTube videos), and the UI and UX are great.

However, no matter what browser I use (Edge, FF, Opera, Samsung mobile, FF mobile, etc...), I run into issues where the video will play, but the interface freezes... I can't do anything on the screen until I refresh.

I don't have it set to any strange codecs, so videos are in vp9. But I also tried a few different codecs to see if the quality/size could be better optimized, and had the same issues with freezing UI then.

If I run the videos through Jellyfin, they work fine. It's only through the TA web interface where things lock up.

Is this normal? Does anyone have any suggestions on how to get this working better?

6
 
 

Hello self hosters! I am hoping some of you wizards can help me troubleshoot my setup with authentik and traefik.

First, about my setup. I have a Synology NAS running a Docker Compose stack. Synology is notoriously bad at keeping their Docker version fresh, but hopefully that isn't relevant to this issue. I'm running Traefik for reverse proxy and Authentik for auth. In Authentik land, I've split the outpost work into its own container, named authentikproxy. Any request to a service with the authentik-basic@file or authentik@file middleware labels applied should be routed through the authentikproxy service for auth. If it detects that a user isn't authenticated, it will in turn send them to the Authentik frontend for SSO.

The issue is that authentik randomly stops working for random routes, or randomly fails to start working for random routes. Every time this happens I need to restart my authentikproxy and traefik containers over and over until it randomly decides to work for all my routes. When this happens I am just sent straight to the app unauthenticated. I'll have to either input http basic credentials or use the app's login page, whichever it has. I have found nothing in the logs after months of this going on, neither authentik nor traefik seem to be aware that anything is amiss.

I suspect the issue is to do with the docker networks but that's honestly just a hunch.

My docker-compose file is hundreds of lines long, so I've stripped environment and volume info while preserving traefik labels to try to keep the info more or less concise. It is certainly still too much info but I did not want to accidentally delete something crucial. Here follows my setup.

docker-compose.yml

services:
  traefik:
    profiles:
      - prod
    container_name: traefik
    image: traefik:v2.11
    command:
      - "--entrypoints.websecure.http.tls.domains[0].main=${BASE_DOMAIN}"
      - "--entrypoints.websecure.http.tls.domains[0].sans=*.${BASE_DOMAIN}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/middlewares.yml:/app/myconf/middlewares.yml
      - ./traefik/traefik.yml:/traefik.yml
    restart: unless-stopped
    networks:
      default:
        aliases:
          # Allow xcontainernet services to resolve authentik
          - "authentik.${BASE_DOMAIN-home}"
    ports:
      - 80:80
      - 443:443
    labels:
      - "traefik.enable=true"
      - "traefik.http.middlewares.redirectssl.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.rule=Host(`traefik.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.traefik.middlewares=redirectssl@docker"
      - "traefik.http.routers.traefiksecure.rule=Host(`traefik.${BASE_DOMAIN-home}`)"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"

  transmission:
    image: lscr.io/linuxserver/transmission
    container_name: transmission
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.torrents.rule=Host(`torrents.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.torrents.middlewares=redirectssl@docker"
      - "traefik.http.routers.torrentssecure.rule=Host(`torrents.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.torrentssecure.entrypoints=websecure"
      - "traefik.http.routers.torrentssecure.middlewares=authentik@file"

  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    container_name: sabnzbd
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nzb.rule=Host(`nzb.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.nzb.middlewares=redirectssl@docker"
      - "traefik.http.routers.nzbsecure.rule=Host(`nzb.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.nzbsecure.entrypoints=websecure"
      - "traefik.http.routers.nzbsecure.middlewares=authentik@file"
      - "traefik.http.services.nzb.loadbalancer.server.port=8080"

  sonarr:
    image: ghcr.io/linuxserver/sonarr:latest
    container_name: sonarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.sonarr.rule=Host(`sonarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.sonarr.middlewares=redirectssl@docker"
      - "traefik.http.routers.sonarrsecure.rule=Host(`sonarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.sonarrsecure.entrypoints=websecure"
      - "traefik.http.routers.sonarrsecure.middlewares=authentik-basic@file"
      - "traefik.http.services.sonarr.loadbalancer.server.port=8989"

  radarr:
    image: ghcr.io/linuxserver/radarr:latest
    container_name: radarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.radarr.rule=Host(`radarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.radarr.middlewares=redirectssl@docker"
      - "traefik.http.routers.radarrsecure.rule=Host(`radarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.radarrsecure.entrypoints=websecure"
      - "traefik.http.routers.radarrsecure.middlewares=authentik-basic@file"
      - "traefik.http.services.radarr.loadbalancer.server.port=7878"

  readarr:
    image: lscr.io/linuxserver/readarr:nightly
    container_name: readarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.readarr.rule=Host(`readarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.readarr.middlewares=redirectssl@docker"
      - "traefik.http.routers.readarrsecure.rule=Host(`readarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.readarrsecure.entrypoints=websecure"
      - "traefik.http.routers.readarrsecure.middlewares=authentik-basic@file"
      - "traefik.http.services.readarr.loadbalancer.server.port=8787"

  bazarr:
    image: ghcr.io/linuxserver/bazarr:latest
    container_name: bazarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.bazarr.rule=Host(`bazarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.bazarr.middlewares=redirectssl@docker"
      - "traefik.http.routers.bazarrsecure.rule=Host(`bazarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.bazarrsecure.entrypoints=websecure"
      - "traefik.http.routers.bazarrsecure.middlewares=authentik-basic@file"
      - "traefik.http.services.bazarr.loadbalancer.server.port=6767"

  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prowlarr.rule=Host(`prowlarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.prowlarr.middlewares=redirectssl@docker"
      - "traefik.http.routers.prowlarrsecure.rule=Host(`prowlarr.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.prowlarrsecure.entrypoints=websecure"
      - "traefik.http.routers.prowlarrsecure.middlewares=authentik-basic@file"
      - "traefik.http.services.prowlarr.loadbalancer.server.port=9696"

  jellyfin:
    image: linuxserver/jellyfin:latest
    container_name: jellyfin
    networks:
      default:
      xcontainernet:
        ipv4_address: 192.168.0.201
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`tv.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.jellyfin.middlewares=redirectssl@docker"
      - "traefik.http.routers.jellyfinsecure.rule=Host(`tv.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.jellyfinsecure.entrypoints=websecure"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"

  authentikserver:
    image: ghcr.io/goauthentik/server:2024.2.2
    command: server
    depends_on:
      - postgresql
      - redis
    labels:
      - "traefik.enable=true"
      ## HTTP Routers
      - "traefik.http.routers.authentik.rule=Host(`authentik.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.authentik.entrypoints=web"
      - "traefik.http.routers.authentik.middlewares=redirectssl@docker"
      - "traefik.http.routers.authentiksecure.rule=Host(`authentik.${BASE_DOMAIN:-home}`)"
      - "traefik.http.routers.authentiksecure.entrypoints=websecure"
      ## HTTP Services
      - "traefik.http.routers.authentiksecure.service=authentik-svc"
      - "traefik.http.services.authentik-svc.loadbalancer.server.port=9000"

  authentikproxy:
    image: ghcr.io/goauthentik/proxy:2024.2.2
    labels:
      - "traefik.http.routers.authentik-proxy-outpost.rule=HostRegexp(`{subdomain:[a-z0-9-]+}.${BASE_DOMAIN:-home}`) && PathPrefix(`/outpost.goauthentik.io/`)"
      - "traefik.http.routers.authentik-proxy-outpost.entrypoints=websecure"
      - "traefik.http.services.authentik-proxy-outpost.loadbalancer.server.port=9000"

  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    depends_on:
      - redis
      - immich-database
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.immich.rule=Host(`photos.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.immich.middlewares=redirectssl@docker"
      - "traefik.http.routers.immichsecure.rule=Host(`photos.${BASE_DOMAIN-home}`)"
      - "traefik.http.routers.immichsecure.entrypoints=websecure"
      - "traefik.http.services.immich.loadbalancer.server.port=3001"

networks:
  default:
    ipam:
      config:
        - subnet: 172.22.0.0/24
  xcontainernet:
    name: xcontainernet
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      config:
        - subnet: "192.168.0.0/24"
          ip_range: "192.168.0.200/29"
          gateway: "192.168.0.1"

traefik/traefik.yml

providers:
  docker:
    exposedByDefault: false
    network: homeservices_default
  file:
    directory: /app/myconf
    watch: true

entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
    http:
      tls:
        certResolver: dnsresolver

traefik/middlewares.yml

http:
  middlewares:
    https-redirect:
      redirectScheme:
        scheme: https
        permanent: true

    authentik-basic:
      forwardAuth:
        address: "http://authentikproxy:9000/outpost.goauthentik.io/auth/traefik"
        trustForwardHeader: true
        authResponseHeaders:
          - Authorization

    authentik:
      forwardAuth:
        address: "http://authentikproxy:9000/outpost.goauthentik.io/auth/traefik"
        trustForwardHeader: true
        authResponseHeaders:
          - X-authentik-email
          - X-authentik-groups
          - X-authentik-jwt
          - X-authentik-meta-app
          - X-authentik-meta-jwks
          - X-authentik-meta-outpost
          - X-authentik-meta-provider
          - X-authentik-meta-version
          - X-authentik-name
          - X-authentik-uid
          - X-authentik-username
7
 
 

Note: I am not affiliated with this project in any way. I think it’s a very promising alternative to things like MinIO and deserves more attention.

8
 
 

Anyone want to help?

9
 
 

I've been around selfhosting most of my life and have seen a variety of different setups and reasons for selfhosting. For myself, I don't really self-host as many services as I do infrastructure. I like to build out the things that are usually invisible to people. I host some stuff that's relatively visible, but most of my time is spent building an over-engineered backbone for all the services I could theoretically host. For instance, full domain authentication and oversight with Kerberized network storage, and both internal and public DNS.

The actual services I host? Mail and vaultwarden, with a few (i.e. < 3) more to come.

I absolutely do not need the level of infrastructure I have, but I honestly prefer building it to the majority of things I could host. That's the fun stuff to me; the meat and potatoes. But I know some people focus more on the actual useful services they can host, or on achieving specific things with their selfhosting. What types of things do you host, and why?

10
 
 

Goal:

  • 16TB mirrored on 2 drives (RAID 1)
  • Hardware RAID?
  • Immich, Jellyfin and Nextcloud (all Docker)
  • N100, 8+ GB RAM
  • 500 GB SSD boot drive
  • 4 HDD bays, start with 2

Questions:

  • Which OS?
    • My thought was to use hardware RAID, set that up for the 2 HDDs, then boot off an SSD with Debian (I'm very familiar with it and use it for my current server, which has 30+ Docker containers. Basically I like and am good at Docker, so I'd like to stick with Debian + Docker). But if hardware RAID isn't the best option for HDDs nowadays, I'll learn the better thing.
  • Which drives? Renewed or refurbished ones are half the cost, so should I buy extra used ones and just be ready to swap when they fail?
  • Which motherboard?
  • Which case?
11
 
 

Added

  • Create book share links with expiration (admin users only) #1768
  • Email settings option to enable/disable rejecting unauthorized certificates (default enabled) #3030
  • Support for disabling SSRF request filter with env variable (DISABLE_SSRF_REQUEST_FILTER=1) #2549
  • Support for custom backup path on backups config page or with env variable (BACKUP_PATH=/path/to/backups) #2973
  • Epub ereader setting for font boldness #3020 by @BimBimSalaBim in #3040
  • Finnish translations

Fixed

  • Casting podcast episodes #3044
  • Match all authors hitting rate limit #1570 by @jfrazx in #2188
  • Scheduled library scans using old copy of library #3079 #2894
  • Changing author name in edit author modal not updating metadata JSON files #3060
  • AB merge tool not working in Debian pkg due to ffmpeg v7 #3029
  • Download file ssrfFilter URL by @dbrain in #3043
  • Overdrive mediamarkers incorrect timestamp parsing #3068 by @nichwall in #3078
  • Unhandled exception syncing user progress by @taxilian in #3086
  • Server crash from library scanner race condition by @taxilian in #3107
  • UI/UX: PDF reader flickering #2279
  • UI/UX: Audio player long author name overflowing #3038
  • UI/UX: Audio player long chapter name overflowing

Changed

  • Replace Tone with Ffmpeg for embedding metadata by @mikiher in #3111
  • Playback sessions are closed after 36 hours of inactivity
  • User agent string for podcast RSS feed and file download requests by @mattbasta in #3099
  • Increased time delay between when watcher detects a file and when it scans the folder
  • Prevent editing backup path if it is set using env variable by @nichwall in #3122
  • UI/UX: Show publish date in changelog modal #3124 by @nichwall in #3125
  • UI/UX: Chapters table "End" column changed to a "Duration" column #3093
  • UI/UX: Bookshelf refactor for consistent scaling by @mikiher in #3037
  • UI/UX: Cleaner error page for 404s
12
 
 

GoDaddy really lived up to its bad reputation and recently changed their API rules. The rules are simple: either you own 10 (or 50) domains, or you pay $20/month, or you don't get the API. I personally didn't get any communication, and this broke my DDNS setup. Judging from what I found online, I am clearly not the only one. A company this big gating an API behind such a steep price... So I will repeat what many people have rightly said before me: don't. use. GoDaddy.

13
 
 

I'm new to selfhosting and I find myself rarely using the server, only when I need to retrieve a document or something.

I was thinking of implementing something to make it power on on demand, but I'm not sure whether that might be harmful for the HDDs, and I'm not sure how to implement it.

What's your recommendation? I'm running a Dell OptiPlex 3050.
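The usual answer for power-on on demand is Wake-on-LAN: the NIC listens for a "magic packet" while the machine is off. It generally needs enabling in the BIOS, and sometimes on the OS side too (e.g. `ethtool -s eth0 wol g`). As for the drives, a daily power cycle is generally considered fine; it's rapid, frequent spin-up/down cycling that adds wear. A minimal Python sketch of the packet, to be run from some always-on device on the same LAN (the MAC address here is a placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# e.g. send_wol("00:11:22:33:44:55")
```

Pair this with a sleep/shutdown timer on the server and you get an on-demand box; tools like `wakeonlan` or `etherwake` do the same thing if you'd rather not script it.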

14
submitted 3 days ago* (last edited 3 days ago) by shiftymccool@programming.dev to c/selfhosted@lemmy.world
 
 

Hey all! I'm having an issue that's probably simple but I can't seem to work it out.

For some history (just in case it matters): I have a simple server running Docker, with all services defined in docker-compose files. It probably doesn't matter, but I've switched between a few management UIs (Portainer, Dokemon, currently Dockge). Initially, I set everything up in Portainer (including the main network) and migrated everything over to Dockge. I was using Traefik labels, but it was getting a bit annoying since I tend to tinker on a tablet. I wanted something a bit more UI-focused, so I switched to NPM.

Now I'm going through all of my compose files and cleaning up a bunch of things like Traefik labels, homepage labels, etc... but I'm also trying to clean up my Docker network situation.

My containers are all on the same network, and I want to slice things up a little better, e.g. I have the Cloudflared container and want to be selective about what containers it has access to network-wise.

So, the meat of my issue is that my original network (call it old_main) seems to be the only one that can access the internet outbound. I added a new network called cloudflared and put just my Cloudflared container and another service on it and I get the 1033 ARGO Tunnel error when accessing the service and Cloudflare says the tunnel is down. Same thing for other containers I try to move from old_main, SearXNG can't connect, Audiobookshelf can't search for author info, etc... I can connect to these services but they can't reach anything on the web.

I have my docker daemon.json set to use my Pi-hole for DNS and I only see my services like audiobookshelf.old_main coming through. I also see the IP address of the old_main gateway coming into Pi-hole as docker-host. My goal is to add all of my services to new, more-specific networks then remove old_main but I don't want to drop the only network that seems to be able to communicate with the web until I have another that can.

I'm not sure what else to look for, any suggestions? Let me know if you need more info.
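Not a full diagnosis, but one thing worth checking while slicing things up: a container can join several user-defined networks at once, so a service can stay on a network with outbound access while also joining the restricted one. A hypothetical compose sketch (service and network names are placeholders, not your actual stack):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    networks:
      - tunnel_net        # can reach only services that also join tunnel_net

  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    networks:
      - tunnel_net        # reachable by cloudflared
      - egress_net        # regular bridge with outbound internet access

networks:
  tunnel_net:
    internal: true        # no outbound internet from this network
  egress_net: {}          # plain user-defined bridge, NAT'd to the internet
```

If a freshly created network has no outbound at all, comparing `docker network inspect` output for old_main and the new network (driver, `Internal` flag, subnet) is a cheap first step.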

15
 
 

I've been using Cloudflare tunnels in my homelab. I'm wondering how well they resist subdomain discovery/enumeration by bots or malicious actors. I'm aware that security through obscurity isn't a real strategy, but I am curious about this from a purely academic standpoint. Aside from brute-force guessing, are there any other strategies that could be used to find the subdomains of services tunneled through Cloudflare?

16
 
 

Being a noob and all, I was wondering: what's the real benefit of a monolithic, let's say Proxmox, instance with router, DNS, and VPN, but also Home Assistant and NAS functionality, all in one server? I always thought dedicated devices were simpler to maintain or replace, and some services are also more critical than others, I guess?

17
 
 

A decentralized P2P todo list app to demo the P2P framework used in the chat app.

https://github.com/positive-intentions/chat

This is a wrapper around peerjs. peerjs is good, but it can become complicated to use on larger projects. This implementation is an attempt to create something like a framework/guideline for decentralized messaging and state management.

https://positive-intentions.github.io/p2p/?path=/story/demo-todo-list--basic

how it works:

  1. crypto-random ids are generated and used to connect to peerjs-server (to broker a webrtc connection)
  2. peer1 shares this ID to another browser/tab/person (use the storybook props)
  3. peers are then automatically connected.
  4. add todo item
  5. edit todo item

There are several things here to improve like:

  • general cleanup throughout (it's early days for this project, and it's missing all the nice things like good code and unit tests)
  • adding extra encryption keys for messages coming in and going out (WebRTC mandates encryption already)
  • handling message callbacks
  • key rotation

The goal is to create a private and secure messaging library in JavaScript running in a browser.

18
 
 

I have been using Nextcloud for over a year now. I started with it on bare metal, then switched to the basic Docker container with Collabora in its own container. That was tricky to get running nicely. Now I have been using Nextcloud AIO for a couple of months and am pretty happy. But it feels a little weird with all those containers and all that overhead.

How do you guys host Nextcloud + Collabora? Is there an easy, solid solution?

19
 
 

After 3 years in the making I'm excited to announce the launch of Games on Whales, an innovative open-source project that revolutionizes virtual desktops and gaming. Our mission is to enable multiple users to stream different content from a single machine, with full HW acceleration and low latency.

With Games on Whales, you can:

  • Multi-user: Share a single remote host hardware with friends or colleagues, each streaming their own content (gaming, productivity, or anything else!)
  • Headless: Create virtual desktops on demand, with automatic resolution and FPS matching, without the need for a monitor or dummy plug
  • Advanced Input Support: Enjoy seamless control with mouse, keyboard, and joypads, including Gyro and Acceleration support (a first in Linux!)
  • Low latency: Uses the Moonlight protocol to stream content to a wide variety of supported clients.
  • Linux and Docker First: Our curated Docker images include popular applications like Steam, Firefox, Lutris, Retroarch, and more!
  • Fully Open Source: MIT licensed, and we welcome contributions from the community.

Interested in how this works under the hood? You can read more about it in our developer guide or deep dive into the code.

20
submitted 3 days ago* (last edited 3 days ago) by HumanPerson@sh.itjust.works to c/selfhosted@lemmy.world
 
 

I am currently out of town, and my server went down. All my services go through nginx, and they suddenly started giving error 502. SSH won't let me in. I had my sister reboot the server, and it still doesn't work. I apologize for the lack of details, but that is all I know, and I can't access logs. I've cleared the cache and used a VPN in case fail2ban got me. I recently got a TP-Link router, so it could be something with that, but it was working for a while. I will have her do another reboot, and if that doesn't work I will have her power off and unplug the server in case it was hacked.

Edit: I have absolutely no clue why, but it works now. I literally did nothing. As far as I know, my sister hasn't touched it today. It just started working. Computers, man...

Edit 2: Actually she said she did something. Not sure what, but it works now.

21
 
 

Just a bit of a wandering mind on my part, but one of the issues in the back of my mind is what happens to whatever selfhosting I set up if something happens to me.

Ideally, I'd like to know that in case of emergency I could rely on a good friend or two to keep things going.

My thought is that this would require some common design patterns/processes and standardisation.

I also have these thoughts because eventually I'd like to support other family members with self-hosted services at their places. Standardising hardware, configurations, etc. makes that much simpler.

How have others approached this?

22
 
 

I have the arr stack and Immich running on a Beelink S12 Pro, based on geekau mediastack on GitHub. Basically (and I'm sure my understanding is a bit flawed), it uses docker-proxy to detect containers and passes that to SWAG, which then sets up subdomains via a tunnel to Cloudflare. I have access to my services outside of my LAN without any port forwarding on my router. If I'm not mistaken, that access is via the encrypted tunnel between SWAG and Cloudflare (please correct me if I'm wrong).

That little Beelink is running out of resources! It's running 20 containers, and when Immich has to make any changes, it quickly runs low on memory. What I would like to do is set up a second box that would also run the same "infrastructure" containers (SWAG, docker-proxy) and connect to the same Cloudflare account. I'm guessing I need to set up a second tunnel? I'm not sure how to proceed.

23
 
 

I have a home server running Docker for all my self-hosted apps. But sometimes I accidentally trigger earlyoom by remotely starting expensive Docker builds, which kills Docker.

I don't have access to my server outside of my home network, so I can't manually restart docker in those situations.

What would be the best way to restart it automatically? I don't mind doing a full system restart if needed
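For the restart itself, systemd can usually handle this without extra tooling. Docker's packaged unit typically already sets Restart=always, but systemd's start-rate limiting can still leave it down if earlyoom kills it repeatedly in a short window. A hedged sketch of a drop-in override (assumes a systemd-based distro):

```ini
# /etc/systemd/system/docker.service.d/override.conf
[Unit]
# Don't give up after several restarts in a short window
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=10s
```

Apply with `sudo systemctl daemon-reload && sudo systemctl restart docker`. A blunter fallback is a cron job running `systemctl is-active --quiet docker || systemctl restart docker` every few minutes.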

24
 
 

I've run my own email server for a few years now without too many troubles. I also pay for a ProtonMail account that's been very good. But I've always struggled with PGP keys for encrypting messages to non-Proton users, which is basically everyone. The PGP key distribution setup just seemed half-baked and a bit broken, relying on central key servers.

Then I noticed that emails I sent from my personal address to my company-provided address were being encrypted, even though I wasn't doing anything to achieve this. That got me curious as to why it was happening, which led me to WKD (Web Key Directory). It's such a simple idea for providing discoverable downloads of public keys, and it works really well now that I've set it up for my own emails.

It's basically a way of discovering the public key for someone's email address by making it available over HTTPS at an address that can be calculated from the email address itself. So if your email is name@example.com, then the public key can be hosted at (in this case) https://openpgpkey.example.com/.well-known/openpgpkey/example.com/hu/pmw31ijkbwshwfgsfaihtp5r4p55dzmc?l=name. This is derived using a command like gpg-wks-client --print-wkd-url name@example.com. You just need an email client that can do this and find the key for you automatically. When setting up your own server, you generate the content from the keys in your GPG keyring using env GNUPGHOME=$(mktemp -d) gpg --locate-keys --auto-key-locate clear,wkd,nodefault name@example.com. Move the generated folder structure to your webserver and you're basically good to go.
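The URL derivation described above can be sketched in code. This is an unofficial Python sketch of the WKD "advanced method" URL: z-base-32 encoding of the SHA-1 hash of the lowercased local part (the function names here are mine, not from any library; `gpg-wks-client` remains the authoritative tool):

```python
import hashlib

# z-base-32 alphabet used by WKD
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    # Encode MSB-first, 5 bits per output character
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 5)  # pad to a multiple of 5 bits
    return "".join(ZB32[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

def wkd_hash(local_part: str) -> str:
    # SHA-1 is 20 bytes = 160 bits, so this is always exactly 32 characters
    return zbase32(hashlib.sha1(local_part.lower().encode()).digest())

def wkd_advanced_url(email: str) -> str:
    local, domain = email.split("@")
    return (f"https://openpgpkey.{domain}/.well-known/openpgpkey/"
            f"{domain}/hu/{wkd_hash(local)}?l={local}")
```

The mail client fetches that URL over HTTPS and gets the binary public key back, with no key server in the loop.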

I have this working with Thunderbird, which now prompts me to do the discovery step when I enter an email address that doesn't have an associated key. On Android, I've found that OpenKeychain can also search based on just the email address, which apps like K-9 Mail (soon to be Thunderbird for Android) can then use.

Anyway, I thought this was pretty cool and was excited to see such an improvement in seamless encryption integration. It'd be nicer if, on Thunderbird and K-9, it all happened as soon as you enter an email address, rather than having a few extra steps to jump through to perform the search and confirm the keys. But it's a major improvement.

Does your email provider have WKD set up and working, or do you use it already?
