this post was submitted on 16 Jul 2024
29 points (93.9% liked)

Selfhosted

Referencing: https://lemmy.world/post/17588348

I want to make a NAS with a 500GB boot drive and 2x16TB HDDs. Based on my previous post, btrfs is a good option. It also looks easy to get started. My plan for the NAS would be to purchase several 16TB drives, and only use 2 of them.

My first question is about different drives. Could I purchase two different brand drives and use them with btrfs? (I assume yes)

Second question: how does the replacement process go? Say drive A dies, so I remove it and put a brand-new replacement in. What do I have to do with btrfs to get the raid 1 going again? Any links or guides would be amazing.

top 20 comments
[–] poVoq@slrpnk.net 7 points 1 month ago

This is a good guide: https://wiki.tnonline.net/w/Btrfs/Replacing_a_disk

Usually you want to replace drives before they fail (SMART monitoring will give you ample warning in most cases). Then it is better to have an additional free SATA port, so you can temporarily turn the failing raid into a three-way raid and use the btrfs built-in replace function to swap out the disk in situ.
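
A rough sketch of what that looks like from the command line, assuming the filesystem is mounted at /mnt/pool and the device names and devid are placeholders for your setup:

```
# failing disk (/dev/sdb) still attached, new disk (/dev/sdd) on the spare SATA port
btrfs replace start /dev/sdb /dev/sdd /mnt/pool

# the replace runs in the background; check progress with
btrfs replace status /mnt/pool

# if the new disk is larger, grow the filesystem onto it afterwards (devid 2 assumed here)
btrfs filesystem resize 2:max /mnt/pool
```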

[–] tobogganablaze@lemmus.org 5 points 1 month ago* (last edited 1 month ago) (1 children)

My first question is about different drives. Could I purchase two different brand drives and use them with btrfs? (I assume yes)

You can.

Second question: how does the replacement process go? Say drive A dies, so I remove it and put a brand-new replacement in. What do I have to do with btrfs to get the raid 1 going again? Any links or guides would be amazing.

Depends on what NAS/Software you have. If your NAS supports hot-swaps you can just pull out the defective drive and plug in another. Otherwise you'll have to shut it down, swap the drive and turn it back on.

If you already have the spare drive ready and you have slots available, you can run a "hot spare". This way you can even start the raid rebuild if you're not physically near your NAS (like when a drive fails while you're on holiday or something).

[–] Dust0741@lemmy.world 2 points 1 month ago (2 children)

Hm okay. I was thinking of using Debian and likely a 4 bay case.

So the process for a dead HDD: Power off. Pull out dead drive and replace. Power on. Now what? Does Debian/a specific motherboard support auto rebuilding the raid 1? Or what are the commands to rebuild?

[–] Anafabula@discuss.tchncs.de 3 points 1 month ago

Btrfs has its own built-in raid. From what I understand you should mount the filesystem with -o degraded and then use btrfs replace to switch to the new drive. I've never had to do that myself yet though.
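
Roughly, that sequence would look like this; the device names, mount point, and devid below are assumptions for the sake of the example:

```
# mount the surviving member writable in degraded mode
mount -o degraded /dev/sdb /mnt/pool

# find the devid of the missing disk
btrfs filesystem show /mnt/pool

# replace the missing device (devid 1 assumed here) with the new drive
btrfs replace start 1 /dev/sdc /mnt/pool
btrfs replace status /mnt/pool
```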

[–] tobogganablaze@lemmus.org 2 points 1 month ago* (last edited 1 month ago) (1 children)

I'm using Synology/DSM and there you have a pretty neat GUI that lists newly detected drives and lets you assign them to your storage pool and rebuild the raid. I'd expect it to be quite similar on software like ~~TrueNAS~~ Rockstor.

[–] Tywele@lemmy.dbzer0.com 5 points 1 month ago (1 children)
[–] tobogganablaze@lemmus.org 1 points 1 month ago

Ah, my bad.

[–] eco_game@discuss.tchncs.de 5 points 1 month ago* (last edited 1 month ago) (2 children)

Could I purchase two different brand drives and use them with btrfs?

I don't quite remember the source for this, but I believe I read some time ago that it's actually a good thing to have separate drives. The reasoning is that if you buy two identical drives (at the same time), the likelihood of both drives failing around the same time is significantly higher.

This is then amplified by the fact that rebuilding a RAID puts a lot of strain on the non-dead drive, so if, say, drive 1 dies and drive 2 is about to die, the strain you put on drive 2 in order to rebuild your RAID onto drive 3 might kill drive 2 before you even finish rebuilding your RAID.

Again, this is just from my memory, it might be worth doing some more research on.

[–] mal3oon@lemmy.world 1 points 1 month ago (1 children)

if you buy two identical drives (at the same time), the likelihood of both drives failing around the same time is significantly higher.

I need sources; this sounds extremely unlikely. That's basically two "independent" probabilities.

[–] tobogganablaze@lemmus.org 2 points 1 month ago* (last edited 1 month ago)

The reasoning is that drives are produced and shipped in batches, and if you order multiple at once there is a higher chance you'll get drives from the same batch. If that batch had some fault during production or was damaged during shipping, all your drives might be affected.

I don't have a source, but it's something multiple experienced people have mentioned to me.

[–] WbrJr@lemmy.ml 1 points 1 month ago

I have read the same, but also read that it is not very true anymore, especially with dedicated server drives. I would not worry too much about it, honestly.

[–] Decronym@lemmy.decronym.xyz 4 points 1 month ago* (last edited 1 month ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| NAS | Network-Attached Storage |
| RAID | Redundant Array of Independent Disks for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |

5 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #873 for this sub, first seen 16th Jul 2024, 15:35] [FAQ] [Full list] [Contact] [Source code]

[–] hjpoijnerflkjn@feddit.de 3 points 1 month ago

Just if somebody else needs this for ZFS: https://blog.juhefa.de/posts/zfs-replace-disk/
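
For comparison, the ZFS side is also short; the pool and device names here are just example placeholders:

```
# identify the failed disk
zpool status tank

# swap in the new disk and let the pool resilver
zpool replace tank /dev/sdb /dev/sdc
zpool status tank    # shows resilver progress
```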

[–] floofloof@lemmy.ca 1 points 1 month ago* (last edited 1 month ago) (3 children)

Is btrfs RAID stable yet? This article is three years old, so maybe things have improved, but it contains some pretty strong warnings about the dangers of btrfs RAID:

https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/

To summarize, the article argues that btrfs is great for single-disk usage but its RAID implementations are idiosyncratic and unreliable.

(I use btrfs daily on several single-disk computers and it has been great, but I have never tried its RAID.)

[–] poVoq@slrpnk.net 7 points 1 month ago

Mirror (raid1) btrfs raids are fine and are more convenient anyway.
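
For a two-disk mirror like the OP describes, creating it is a one-liner; device names and mount point here are placeholders:

```
# mirror both data and metadata across the two disks
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb

# mounting either member brings up the whole filesystem
mount /dev/sda /mnt/pool
```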

[–] czardestructo@lemmy.world 3 points 1 month ago

Been using it for over a year on two 8TB SSDs in a stripe and 14TB drives as a mirror. This is on Debian and it's flawless and wonderful. I run btrbk hourly for snapshots, backups to remote locations, and housekeeping, with 6 months of hourly snaps. Life is great.
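
A sketch of how an hourly btrbk run could be wired up, assuming btrbk is already configured in /etc/btrbk/btrbk.conf:

```
# /etc/cron.d/btrbk : run btrbk quietly every hour
0 * * * *  root  /usr/bin/btrbk -q run
```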

[–] theit8514@lemmy.world 1 points 1 month ago

In days past some drive vendors had different sector layouts for their drives, which could cause issues with raid. Pretty sure most drives nowadays use the same layout and you won't run into any issues. I still try to get the same drive model anyway, just to be perfectly sure there are no issues.

Even then you may run into weird issues: one of my 1.2 TB enterprise SSD drives was reporting 1.12 TiB rather than the 1.09 TiB the other 7 drives had. TrueNAS refused to build a vdev with that drive and I had to return it to get a new one.

[–] possiblylinux127@lemmy.zip 0 points 1 month ago (1 children)

It is better to have more drives and fewer spares. However, btrfs is only stable in raid 1. With data that big I would go ZFS raidz2, as you can lose up to 2 drives.

From a btrfs perspective it is pretty easy, as you can just run btrfs replace with the path of the new drive. Btrfs also has the benefit of being native to the Linux kernel.
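
If you went the raidz2 route, pool creation would look roughly like this; the pool name and devices are placeholders, and raidz2 wants at least four disks to be worthwhile:

```
# two-disk redundancy: any two of these drives can fail without data loss
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zpool status tank
```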

[–] saiarcot895@programming.dev 7 points 1 month ago

BTRFS is stable for all RAID levels except for RAID 5 and 6 (because of the write hole). I'm using it with RAID 10.