Hi all,

I found a hobby in trying to secure my Linux server, maybe even beyond reasonable means.

Currently, my system is heavily locked down with user permissions. Every file has a group owner, and every server application has its own user. Each user will only have access to files it is explicitly added to.

My server is only accessible from LAN or VPN (though I've been interested in hosting publicly accessible stuff). I have TLS certs for almost everything that can use them (albeit self-signed certs, which some people don't like), and SSH is only via SSH keys that are passphrase protected.

What are some suggestions for things I can do to further improve my security? It doesn't have to be super useful, as this is also fun for me.

Some things in mind:

  • 2-factor auth for SSH (and maybe all shell sessions if I can)
  • look into firejail, nsjail, etc.
  • look into access control lists (a small example follows this list)
  • network namespaces and VLANs to prevent server applications from accessing the internal network when they don't need to
  • considering containerization, but so far, I find it not worth forgoing the benefits I get from a single package manager for the entire server
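
For the ACL item, this is roughly the kind of thing I mean; a minimal sketch where the user name and path are just examples:

```bash
# Give a single service user read-only access to one directory tree via POSIX ACLs,
# without adding it to the owning group ("navidrome" and /srv/music are placeholders).
setfacl -R -m u:navidrome:rX /srv/music      # rX = read files, traverse directories
setfacl -R -d -m u:navidrome:rX /srv/music   # default ACL so newly created files inherit it
getfacl /srv/music                           # inspect the result
```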

Other questions:

  • Is there a way for me to be "notified" if shell access of any form is gained by someone? Or somehow block all shell access that is not 2FA'd? (one idea is sketched after this list)
  • my system currently secures files on the device. But all applications can see all process PIDs. Do I need to protect against this?
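
For the notification question, one idea is a pam_exec hook that fires on every interactive session; a rough sketch where the script path and the mail command are placeholders:

```bash
# Added to /etc/pam.d/sshd (and/or /etc/pam.d/login):
#   session optional pam_exec.so /usr/local/bin/login-notify.sh
#
# pam_exec exports PAM_USER, PAM_RHOST, PAM_SERVICE and PAM_TYPE to the script below.

#!/bin/sh
# /usr/local/bin/login-notify.sh -- send a note whenever a session opens
if [ "$PAM_TYPE" = "open_session" ]; then
    printf 'login: user=%s rhost=%s service=%s\n' \
        "$PAM_USER" "$PAM_RHOST" "$PAM_SERVICE" \
        | mail -s "shell login on server" admin@example.com
fi
exit 0
```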

threat model

  • attacker gains shell access
  • attacker influences server application to perform unauthorized actions
  • not in my threat model: physical access
[–] scott@lem.free.as 15 points 2 months ago
[–] jjlinux@lemmy.ml 10 points 2 months ago

Maybe not 100% on the subject, but I just deployed a Wazuh instance to let me know whether any of my hosts, containers, and computers have vulnerabilities. I found a crapload of holes in my services, and I'm halfway through squashing all of them.

If this is a hobby, that's sure to keep you entertained for quite some time.

[–] wildbus8979@sh.itjust.works 8 points 2 months ago* (last edited 2 months ago) (1 children)

AppArmor or SELinux, OSSEC, TPM and SecureBoot boot chain.
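
For the AppArmor side, a minimal sketch of working with profiles (requires apparmor-utils; the nginx profile and the custom binary path are just examples):

```bash
sudo aa-status                                  # what's loaded, enforce vs complain mode
sudo aa-complain /etc/apparmor.d/usr.sbin.nginx # log violations without blocking
sudo aa-enforce  /etc/apparmor.d/usr.sbin.nginx # actually block anything outside the profile
sudo aa-genprof  /usr/local/bin/my-service      # draft a profile for a custom binary (placeholder)
```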

[–] possiblylinux127@lemmy.zip 0 points 2 months ago (1 children)

Skip the TPM and secureboot as those are pretty useless

[–] wildbus8979@sh.itjust.works -1 points 2 months ago (1 children)
[–] possiblylinux127@lemmy.zip 0 points 2 months ago* (last edited 2 months ago) (1 children)

How so? They clearly say physical access is not in their threat model. If someone has root it is game over.

[–] wildbus8979@sh.itjust.works 0 points 2 months ago

It can still prevent vectors of persistence.

[–] epyon22@programming.dev 6 points 2 months ago (2 children)

I would reconsider Docker, because if a specific application leaks some sort of shell access or system file access, you'll be protected unless the attacker can also escalate out of the container to the host.

Unrelated to security, I prefer docker because it leaves the server very clean if you remove different apps. Can also save time configuring more complex applications or applications that conflict with system libraries.

Add fail2ban to your list of applications. It watches logs for invalid logins and adds the offending IPs to firewall block rules after so many failed attempts.
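
A minimal sshd jail looks something like this (the retry and ban values are just examples):

```bash
# /etc/fail2ban/jail.local -- minimal sshd jail
cat <<'EOF' | sudo tee /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

sudo systemctl restart fail2ban   # or: rc-service fail2ban restart on OpenRC
sudo fail2ban-client status sshd  # shows currently banned IPs
```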

[–] henfredemars@infosec.pub 2 points 2 months ago* (last edited 2 months ago)

Docker performs some syscall filtering as well, which may reduce the kernel attack surface. It can be a pain to set up services this way, but it could help frustrate an attacker moving laterally in the system.

For example, processes in the container cannot see external processes, which I think is what interested the OP.
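
Easy to see for yourself, and worth tightening further with the usual flags (the image, capability set, and profile path are just examples):

```bash
# A container only sees its own processes (separate PID namespace)
docker run --rm alpine ps

# Tighten further: drop all capabilities, block privilege escalation,
# and optionally swap in a stricter seccomp profile than Docker's default
docker run --rm \
  --cap-drop ALL \
  --security-opt no-new-privileges:true \
  --security-opt seccomp=/path/to/profile.json \
  alpine sh -c 'echo hello'
```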

[–] matcha_addict@lemy.lol 2 points 2 months ago (2 children)

I really wish there was a system-wide package manager for Docker containers, which would update software in all your containers at once, similar to how a typical package manager would.

I have not completely ruled out Docker, but I wonder if I can obtain most of its benefits without this major con around package management. I know it's possible, since it's mostly kernel features, but it would be difficult to replicate and the tooling is probably lacking (maybe nsjail can get me closer).
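
The closest rough approximation seems to be re-pulling everything in one pass, e.g. (this assumes Compose-managed services; obviously not a real package manager):

```bash
# Pull newer images for every service in the compose file, then recreate
# only the containers whose images actually changed
docker compose pull
docker compose up -d

# Or, without Compose: re-pull every image currently on the host
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -v '<none>' \
  | xargs -r -L1 docker pull
```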

[–] wildbus8979@sh.itjust.works 1 points 2 months ago (1 children)

You can have a look at systemd-nspawn and machinectl actually. Sounds like exactly what you're looking for :)
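
A minimal sketch of that workflow (the machine name, distro, and paths are just examples):

```bash
# Put an OS tree under /var/lib/machines (debootstrap is one way to get one)
sudo debootstrap stable /var/lib/machines/web https://deb.debian.org/debian

# One-off shell inside the tree
sudo systemd-nspawn -D /var/lib/machines/web

# Boot it as a full machine and manage it with machinectl
sudo systemd-nspawn -b -D /var/lib/machines/web
sudo machinectl start web    # runs systemd-nspawn@web.service under the hood
machinectl list              # running machines
sudo machinectl shell web    # shell into it (needs systemd inside the container)
```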

[–] matcha_addict@lemy.lol 1 points 2 months ago (2 children)

I am really interested in systemd-nspawn. Unfortunately I use OpenRC now (I liked its simplicity), so I can't try out systemd yet.

Is machinectl tied to systemd also?

[–] 486@lemmy.world 1 points 2 months ago* (last edited 2 months ago)

You could give bubblewrap a try instead. It is quite similar to systemd-nspawn.
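
A tiny example of a bubblewrap sandbox: read-only system dirs, private /tmp, no network (the bind mounts here are just a starting point to tighten further):

```bash
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib64 /lib64 \
  --symlink usr/lib /lib \
  --ro-bind /etc /etc \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --unshare-all \
  --die-with-parent \
  /usr/bin/bash
```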

[–] wildbus8979@sh.itjust.works 0 points 2 months ago

Yes, machinectl is the interface for nspawn.

[–] ramenu@lemmy.ml 4 points 2 months ago (1 children)

Absolutely essential: use a firewall and set it as strict as possible. Use MAC like SELinux or AppArmor. This is extremely overkill for a personal server, but you could also compile everything yourself with as many hardening flags as possible, and build your own kernel with as many mitigations and hardening options enabled as you can (and stripped of features you don't need).
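
For the firewall part, a minimal default-deny nftables sketch (the allowed ports are just examples):

```bash
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
sudo nft add rule inet filter input iif lo accept
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input tcp dport '{ 22, 443 }' accept   # example services
sudo nft add rule inet filter input ip protocol icmp accept
sudo nft list ruleset   # review, then persist it (e.g. /etc/nftables.conf on Debian)
```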

[–] matcha_addict@lemy.lol 4 points 2 months ago

This is extremely overkill...

I actually do all of that, thanks to Gentoo :')

[–] LunchMoneyThief@links.hackliberty.org 3 points 2 months ago (1 children)

Consider running some kind of file integrity monitoring. samhain, tiger, tripwire, to name a few.
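
If you just want the core idea before picking one of those, a bare-bones version with coreutils looks like this (paths are examples; the real tools add signed databases, policy, and reporting):

```bash
# Build a baseline of checksums for paths that should rarely change
sudo sh -c 'find /etc /usr/bin /usr/sbin -type f -exec sha256sum {} + \
  | sort -k2 > /root/integrity-baseline.sha256'

# Later (e.g. from cron), re-check and report anything that changed
sudo sha256sum --quiet -c /root/integrity-baseline.sha256 \
  || echo "integrity check found modified or missing files"
```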

considering containerization, but so far, I find it not worth forgoing the benefits I get from a single package manager for the entire server

Just do MAC with either AppArmor or SELinux.

[–] Neon@lemmy.world 5 points 2 months ago

just do MAC

cries in NixOS

[–] just_another_person@lemmy.world 2 points 2 months ago* (last edited 2 months ago) (1 children)

There are entire books dating back to the '80s that go into this and are still fairly valid to this day.

If you want to take things further at your own risk, look into how to use TPM and Secure Boot to your advantage. It's tricky, but worth a delve.

For network security, you're only going to be as effective as the attack hitting you, and self-hosting is not where you want to get tested. Cloudflare is a fine and cheap solution for that. VLANs won't save you, and being on-prem won't save you here either. Look into Crowdsec.

Disable any wireless comms. Use your BIOS to ensure things like Bluetooth are disabled... you get the idea. Use rfkill to ensure the OS respects the disablement of wireless devices.

At the end of the day, every single OS in existence is only as secure as the attack vectors you allow it to have. Eventually, somebody can get in. Just removing the obvious entry points is the best you can do.
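
The wireless part is quick to do and verify with rfkill:

```bash
rfkill list                  # show every radio and its soft/hard block state
sudo rfkill block bluetooth  # soft-block Bluetooth
sudo rfkill block wlan       # soft-block Wi-Fi
rfkill list                  # confirm everything reports "Soft blocked: yes"
```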

[–] matcha_addict@lemy.lol 1 points 2 months ago (2 children)

What's the issue with VLAN?

[–] just_another_person@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (1 children)

VLANs are for organizing traffic, not authorizing it.

They can be pretty easily spoofed at the packet level.

[–] possiblylinux127@lemmy.zip 3 points 2 months ago

Only if you don't set it up correctly. You should configure which devices are allowed to use which VLANs, and make sure client devices aren't authorized to send or receive tagged packets.

You then combine that with a firewall that allows only the needed traffic.

[–] possiblylinux127@lemmy.zip 1 points 2 months ago

If you set it up incorrectly, an attacker can perform an attack called VLAN hopping.

You also need to set up firewall rules to properly isolate zones.

[–] kurikai@lemmy.world 2 points 2 months ago

CIS Level 2 hardening

[–] pyrosis@lemmy.world 2 points 2 months ago

Get your firewall right then maybe add fail2ban.

You could also consider an IDS/IPS on your primary router/firewall if this is internal. If not, you can install Suricata on a public server. Obviously, if you go with something as powerful as Suricata, you no longer need fail2ban.

Keep a sharp eye on any users with sudo. Beyond that consider docker as others have mentioned.

It does add to security because it allows the developers a bit more control over which packages are utilized for their applications. It creates a more predictable environment.
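
A couple of quick checks for the sudo part (admin group names vary by distro; the account name is a placeholder):

```bash
getent group sudo wheel                # who is in the usual admin groups
sudo -l -U someuser                    # exactly what a given account may run
sudo cat /etc/sudoers.d/* 2>/dev/null  # any stray drop-in rules
```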

[–] jlh@lemmy.jlh.name 2 points 2 months ago

Is there a way for me to be "notified" if shell access of any form is gained by someone?

Falco is a very powerful tool for this.

[–] possiblylinux127@lemmy.zip 1 points 2 months ago
  • SELinux

  • monitoring

  • proper containers (ideally rootless)

  • separate accounts for each function and permission set. Your containers should run as a low privileged user

[–] funtrek@discuss.tchncs.de 1 points 2 months ago

Some people suggest SELinux, which is great. But if you really want to take it to the maximum, use its MLS (multi-level security) policy.

[–] ctr1@fl0w.cc 1 points 2 months ago (1 children)

Like others have mentioned, SELinux could be a great addition. It can be a massive pain, but it's really effective at locking things down (if configured properly).

However, the difficulty will depend on the distro. I use it with Gentoo, which has plenty of support/docs for it and provides policies for many packages. Although (when running strict policy types) I usually end up needing to adjust them or write my own.

Obviously Red Hat would be another good choice, but I haven't tried it. Fedora also has good support, but I've only ever used the OOTB targeted policies.

That said, I've started relying on users/groups more often lately, since SELinux really gets in the way of everything.

[–] matcha_addict@lemy.lol 2 points 2 months ago (1 children)

A fellow Gentoo user in the wild! Do you have any thoughts on using containers with Gentoo? The idea of forgoing all the awesome features of Portage by using containers pains me.

What exactly does SELinux provide over users/groups?

[–] ctr1@fl0w.cc 1 points 2 months ago

👋 Right on! I've actually used containers as a key part of my security layout before too, but yeah, you miss out on all the benefits of Portage.

I was doing something crazy and actually running Gentoo inside each one! It was very difficult to stay up to date. But I basically kept my host as barebones as possible and used libvirt containers for everything, attempting to make a few templates that I could keep updated and base other VMs on. I was able to keep this up for about two years, then I had to relax (it was my main PC). But it was really secure, and it does work.

The benefit of encapsulation is that you have a lot of freedom inside each container, like installing a different distro if you need to. Also, as long as they are isolated, you don't need to worry as much about their individual security. But it's still good to. I ran SELinux on the host and non-SELinux (but hardened) in the guests.

SELinux has a lot of advantages over users/groups, but I think the latter can be just as secure if you know what you're doing. For example, with SELinux you can prevent certain applications from accessing the network, or restrict access to certain ports, etc. It's also useful for desktop environments where a lot of GUI apps run under one user: e.g. neither my main user nor any other program can access my keepassxc directory; only the keepassxc process (and root) can, even though the application is running under my main user. You can also restrict root quite a bit, especially if you compile in the option to prevent disabling SELinux at boot (I need to recompile my kernel to disable it).

But again, while it is fun to learn, it is quite a pain, and I've relaxed the setup on my new computer to use a different user for everything (including GUI apps), which I think is secure enough for me. But this style relies on my ability to adhere to it, whereas with SELinux you can set it up so that you're forced to.
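
For a taste of what that looks like on a targeted-policy distro (Fedora/RHEL style; the boolean and module name here are examples, and the tooling comes from policycoreutils):

```bash
getenforce                                        # Enforcing / Permissive / Disabled
sudo setsebool -P httpd_can_network_connect off   # e.g. deny httpd outbound connections
sudo semanage port -l | grep ssh                  # which ports a domain may bind to
sudo ausearch -m AVC -ts recent                   # review recent denials
sudo audit2allow -a -M mylocalmod                 # draft a local policy module from them
```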

[–] ancoraunamoka@lemmy.dbzer0.com 1 points 2 months ago

Great that you included your threat model, but you should have specified the type of services that you host/provide.

One thing I would look into is closing any port that is not necessary (keeping only the likes of 80 and 443) and disabling SSH on the wider network.

Host a WireGuard endpoint in the internal network that acts as a bastion and lets you SSH-jump to any other host and VM on the network.

WireGuard is more secure than SSH (assuming sound crypto and hygiene for both) because you can't probe a host from the outside and tell whether WireGuard is running or not.
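
A minimal sketch of that layout (keys, addresses, and host names are placeholders):

```bash
# Bastion side: /etc/wireguard/wg0.conf
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <bastion-private-key>

[Peer]
PublicKey  = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
EOF

sudo wg-quick up wg0   # bring the tunnel up

# Client side, once connected: jump through the bastion to anything internal
ssh -J admin@10.8.0.1 admin@internal-host.lan
```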

[–] ikidd@lemmy.world 0 points 2 months ago (2 children)

That sounds extremely painful to manage and prone to error if you aren't using containers.

[–] matcha_addict@lemy.lol 3 points 2 months ago

It does require some effort to manage, but I would argue it's easier to keep all packages (including dependencies) up to date across the system, which is a huge security benefit IMO.

The permission system, once you set it up, rarely needs to change unless you're changing something else.

[–] ancoraunamoka@lemmy.dbzer0.com 1 points 2 months ago

I am not sure what you are talking about. None of the stuff OP talked about is related to containers. Also, containers complicate networking a lot, so I would avoid them at all costs and use VMs.