this post was submitted on 10 Jun 2024

datahoarder


Are they worth considering or only worth it at certain price points?

top 4 comments
[–] jet@hackertalks.com 2 points 4 months ago* (last edited 4 months ago)

What is a renewed drive? Do they have a datasheet with MTBF defined?

Spinning disks, or consumable flash?

What is the use case? RAID 5? Ceph? JBOD?

What is your human capital cost of monitoring and replacing bad disks?

Let's say you have a data lake running Ceph or something similar: it costs you $2-5 a month to monitor all your disks for errors, predictive failure, slow I/O debugging, etc. The human cost of identifying a bad disk, pulling it, replacing it, and then destroying it is something like 15-30 minutes. The cost of destroying a drive is $5-50 (depending on your vendor, onsite destruction, etc.).

A higher predictive failure rate for "used" drives has to factor in your fixed costs and human costs. If the drive only lasts 70% as long as a new drive, the math is fairly easy.

If the drive gets progressively slower (e.g. older SSDs), then the actual cost of the used drive becomes more difficult to model (you need a metric for service responsiveness, etc.).

  • If it's a hobby project and you're throwing drives into a self-healing system, then take any near-free disks you can get and just watch your power bill.

  • If you make money from this, or the downside of losing data is bad, then build the higher failure rate into your cost model.
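The break-even math described above can be sketched in a few lines. This is a rough model, not the commenter's actual method: every dollar figure and the 70%-lifetime factor are illustrative assumptions you would replace with your own numbers.

```python
# Rough sketch of amortized drive cost, new vs. used.
# All figures below are illustrative assumptions, not vendor data.

def effective_annual_cost(price, expected_life_years,
                          monitor_cost_per_month=3.0,  # assumed $2-5/mo monitoring
                          swap_labor_cost=25.0,        # assumed ~15-30 min of labor
                          destroy_cost=20.0):          # assumed $5-50 destruction fee
    """Amortized yearly cost of one drive over its expected life."""
    fixed = price + swap_labor_cost + destroy_cost  # one-time costs per drive
    return fixed / expected_life_years + monitor_cost_per_month * 12

new = effective_annual_cost(price=180.0, expected_life_years=5.0)
used = effective_annual_cost(price=60.0, expected_life_years=5.0 * 0.7)  # lasts 70% as long

print(f"new:  ${new:.2f}/yr")   # → new:  $81.00/yr
print(f"used: ${used:.2f}/yr")  # → used: $66.00/yr
```

Under these made-up numbers the used drive still wins, but the gap narrows fast as labor and destruction costs rise relative to the purchase price, which is the point of modeling it.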

[–] MangoPenguin@lemmy.blahaj.zone 1 points 4 months ago* (last edited 4 months ago)

I'm running several used ("renewed") enterprise SAS HDDs and enterprise SATA SSDs. They've been solid so far.

The HDDs came with about 30k hours each, which is not bad at all, and the SSDs had only around 100 TB written out of their 6.2 PB endurance rating.

I'm not sure I would buy used standard consumer HDDs; they typically don't last as long and are likely abused a lot more in a desktop PC than in a datacenter server.

As always, have proper backups in place; all drives fail eventually, no matter where you buy them.

[–] Nogami@lemmy.world 1 points 4 months ago

I've been using renewed (refurbished) 8TB SAS drives off eBay for $50-60 each. Not a single failure in over a year across the dozen or so drives I'm running right now. I'm running unRAID with a combination of unRAID's native array drives (for media and "disposable" stuff) in a dual-parity config, and ZFS (with snapshots replicated to a live backup on a secondary server) for important personal stuff (also backed up off-site a few times a year).

Even if something were to perish, I have enough spares to just chuck one in and let it resilver without worrying at all. I'm content with this as a homelabber, since I'm not supplying a critical service for a business, etc.

[–] punkcoder@lemmy.world -1 points 4 months ago

Purchased 5 renewed drives from Amazon. 10 months in, 3 have had to be replaced because of escalating bad sectors, and all three failed outside the refurbishment guarantee… one by only a week. Save your money and go with new drives.