this post was submitted on 23 Nov 2024
88 points (92.3% liked)

Selfhosted


About a year ago I switched my Proxmox machines to ZFS so that I wouldn't be running a technology preview (Proxmox still labels its btrfs support as one).

Btrfs gave me no trouble for years; I even replaced a dying disk without issue. I use RAID 1 on my Proxmox machines. Anyway, the move to ZFS has been less than ideal: the out-of-tree kernel modules mean I can't downgrade the kernel, and the performance on my hardware is abysmal. I get only around 50–100 MB/s versus the several hundred I saw with btrfs.

Any reason I shouldn't go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplained errors. That is sad to hear, as btrfs has had plenty of time to mature over the last 8 years. I would never have considered it 5–6 years ago, but now it seems like a solid choice.

Anyone else pondering or using btrfs?

[–] suzune@ani.social 8 points 2 days ago (1 children)

The question is: how do you get bad performance with ZFS?

I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.

The fourth run (obviously cached) gave me over 3.8 GB/s.
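
Something like this reproduces an uncached read test (a sketch, not necessarily the exact commands used; the file path is a placeholder, and the file should be larger than your RAM):

    # as root: flush the Linux page cache. Note this does not fully
    # clear the ZFS ARC -- exporting and re-importing the pool (or a
    # reboot) is more thorough.
    sync && echo 3 > /proc/sys/vm/drop_caches

    # sequential read; dd reports throughput when it finishes
    dd if=/path/to/bigfile of=/dev/null bs=1M status=progress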

[–] possiblylinux127@lemmy.zip -2 points 2 days ago* (last edited 2 days ago) (3 children)

I have never heard of anyone getting those speeds without dedicated high-end hardware.

Also, writes will always be your bottleneck.

[–] Moonrise2473@feddit.it 5 points 2 days ago (1 children)

I get similar speeds on a TrueNAS box I installed on a simple i3-8100.

[–] possiblylinux127@lemmy.zip 1 points 2 days ago (1 children)

How much RAM, and what size are the drives?

I suspect this could also be an issue with SSDs; I have seen a lot of posts around describing similar performance on them.

[–] Moonrise2473@feddit.it 1 points 2 days ago (1 children)

64 GB of ECC RAM (48 GB used by the ZFS cache) with three 2 TB drives.

[–] possiblylinux127@lemmy.zip 0 points 1 day ago (1 children)

Yeah, it sounds like I don't have enough RAM.

[–] sugar_in_your_tea@sh.itjust.works 1 points 1 day ago (1 children)

ZFS really likes RAM, so if you're running anything less than 16 GB, that could be your issue.

[–] possiblylinux127@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

From the Proxmox documentation:

As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.

I changed the ARC size on all my machines to 4 GB and I'm getting much better performance. I thought I had changed it before, but I hadn't regenerated the initramfs, so it never applied (see the sketch below). I am still having issues with VM transfers locking up the cluster, but that might be fixable by tweaking some settings.
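
In case it helps anyone, a minimal sketch of that procedure on Proxmox VE, with the 4 GiB cap I used (zfs_arc_max takes bytes; 4 GiB = 4294967296):

    # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 4 GiB
    options zfs zfs_arc_max=4294967296

    # as root: apply immediately, without waiting for a reboot
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

    # rebuild the initramfs so the limit survives a reboot
    update-initramfs -u -k all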

16 GB might be overkill or underkill depending on what you are doing.

[–] suzune@ani.social 4 points 2 days ago* (last edited 2 days ago)

This is an old PC (Intel i7-3770K) with two 16 TB HDDs attached to the onboard SATA3 controller, 16 GB of RAM, and one 120 GB SSD. Nothing special. And it's quite busy, since it's my home server running a VM and containers.

[–] stuner@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (1 children)

I'm seeing very similar speeds on my two-HDD RAID1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file of /dev/urandom data (larger than the cache) gives me the following (rough commands sketched after the numbers):

  • 169 MB/s write
  • 254 MB/s read
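
Roughly, a test like that looks like this (a sketch; /tank/test is a placeholder path on the pool):

    # write: 50 GiB of urandom data -- incompressible, so ZFS
    # compression can't inflate the numbers. Note that on some CPUs
    # /dev/urandom itself tops out at a few hundred MB/s, which can
    # mask the true disk write speed.
    dd if=/dev/urandom of=/tank/test bs=1M count=51200 status=progress

    # read it back after flushing caches (as root; see the caveat
    # about the ARC in the earlier sketch)
    sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/tank/test of=/dev/null bs=1M status=progress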

What's your setup?

[–] possiblylinux127@lemmy.zip 1 points 2 days ago (1 children)

Maybe I am CPU-bottlenecked. I have a mix of i5-8500 and i7-6700K machines.

The drives are a mix, but I get almost the same performance across machines.

[–] stuner@lemmy.world 2 points 2 days ago (1 children)

It's possible, but you should be able to see it quite easily. In my case, the CPU utilization was very low, so the same test should also not be CPU-bottlenecked on your system.
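
A quick way to check while the benchmark runs (both tools are standard; zpool ships with ZFS):

    # per-vdev disk throughput, refreshed every second
    zpool iostat -v 1

    # in another terminal: watch for a single core pinned at 100%,
    # which would point to a single-threaded CPU bottleneck
    top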

[–] possiblylinux127@lemmy.zip 0 points 1 day ago (1 children)

Is your machine part of a cluster, by chance? If so, what performance do you see when you do a VM transfer?

[–] stuner@lemmy.world 1 points 1 day ago

Unfortunately, I can't help you with that. The machine is not running any VMs.