this post was submitted on 21 Jan 2025
209 points (98.6% liked)

Technology

[–] [email protected] 4 points 6 hours ago

Wonderful. Storage is a great thing, and I'm happy to have it.

[–] [email protected] 4 points 11 hours ago* (last edited 11 hours ago)

me: torrents the entire spn series

[–] [email protected] 12 points 16 hours ago (4 children)

I would not risk 36TB of data on a single drive, let alone a Seagate. I've never had a good experience with them.

[–] [email protected] 4 points 12 hours ago (3 children)

Ignoring the Seagate part, which makes sense... Is there a specific reason 36TB is a problem?

I recall IT people losing their minds when we hit 1TB, back when the average hard drive was like 80GB.

So this growth seems right.

[–] [email protected] 6 points 10 hours ago (1 children)

It's raid rebuild times.

The bigger the drive, the longer the time.

The longer the time, the more likely the rebuild will fail.

That said, modern RAID is much more robust against this kind of fault, but still: if you have one parity drive, one dead drive, and a RAID rebuild in progress, losing another drive means you're fucked.
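The rebuild-risk argument above can be sketched with a back-of-envelope model: during a single-parity rebuild, every byte on every surviving drive must be read back, so the odds of hitting an unrecoverable read error (URE) grow with drive size. The 1e-15 errors-per-bit rate below is a typical enterprise spec-sheet figure, assumed here rather than taken from Seagate's published numbers.

```python
# Rough model of hitting an unrecoverable read error (URE) while
# rebuilding a single-parity array: every surviving byte is read back.
# ure_per_bit=1e-15 is an assumed enterprise-class spec, not a
# manufacturer figure for this specific drive.

def rebuild_failure_probability(drive_tb, surviving_drives, ure_per_bit=1e-15):
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    # Probability that at least one read fails during the rebuild.
    return 1 - (1 - ure_per_bit) ** bits_read

# Single-parity array with 7 surviving drives: 8 TB vs. 36 TB models.
small = rebuild_failure_probability(8, 7)
big = rebuild_failure_probability(36, 7)
print(f"8 TB drives:  {small:.0%} chance of a URE during rebuild")
print(f"36 TB drives: {big:.0%} chance of a URE during rebuild")
```

Under these assumptions the 36 TB rebuild is far more likely than not to hit a URE, which is exactly why wide arrays of huge drives lean on double parity or better.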

[–] [email protected] 1 points 6 hours ago

Just rebuilt onto Ceph and it's a game changer. Drive fails? Who cares, replace it with a bigger drive and go about your day. If the total drive count is large enough, and depending on whether you're using EC or replication, a rebuild can pull data from tons of drives instead of a handful.
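The EC-versus-replication trade-off mentioned above mostly comes down to storage overhead: replication stores N full copies, while erasure coding with k data + m parity chunks stores (k+m)/k. A minimal sketch (generic math, not tied to any particular Ceph cluster):

```python
# Usable fraction of raw capacity under Ceph-style redundancy schemes.
# Replication keeps N full copies; erasure coding (EC) splits each
# object into k data chunks plus m parity chunks.

def usable_fraction_replication(copies):
    return 1 / copies

def usable_fraction_ec(k, m):
    return k / (k + m)

print(f"3x replication: {usable_fraction_replication(3):.0%} usable")
print(f"EC 4+2:         {usable_fraction_ec(4, 2):.0%} usable")
```

Both of these survive two simultaneous drive losses, but the 4+2 EC pool doubles the usable fraction, at the cost of rebuilds that fan out reads across many more drives, which is the behavior the comment describes.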

[–] [email protected] 4 points 11 hours ago

I recall IT people losing their minds when we hit the 1TB

1TB? I remember when my first computer had a state of the art 200MB hard drive.

[–] [email protected] 2 points 11 hours ago (1 children)

It's so consistent it has a name: Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. https://en.m.wikipedia.org/wiki/Moore%27s_law

I heard that we were at the theoretical limit but apparently there's been a breakthrough: https://phys.org/news/2020-09-bits-atom.html

[–] [email protected] 10 points 10 hours ago

Quick note: HDD storage doesn't use transistors to store the data, so it's not really directly related to Moore's law. SSDs do use transistors/nano structures (NAND) for storage, and their capacity is more closely tied to Moore's law.

[–] [email protected] 9 points 14 hours ago* (last edited 14 hours ago) (1 children)

You couldn't afford this drive unless you're an enterprise, so there's nothing to worry about. They don't sell them singly; you have to buy enough for a rack at once.

[–] [email protected] 3 points 10 hours ago

100%. 36 TB is peanuts for data centres

[–] [email protected] 3 points 11 hours ago

The only thing I want is reasonably cheap 3.5" SSDs. SATA is fine, just let me pay $500 for a 12TB SSD please.

[–] [email protected] 8 points 16 hours ago (3 children)

They seem to be very hit and miss, in that there are some models with very low failure rates, but then there are some with very high ones.

That said, the 36 TB drive is most definitely not meant to be used as a single drive without any redundancy. I have no idea what the big guys like Backblaze, for example, are doing, but I'd want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me. Still, I'd likely go with smaller drives, because however much a 36 TB drive costs, I don't wanna feel like I'm spending 2x the cost of one of those just for redundancy lmao
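The "2x the cost just for redundancy" worry above depends entirely on array width: RAID 6 always spends two drives on parity, so the overhead shrinks as the array grows. A quick sketch (the 36 TB figure is just the drive from the article):

```python
# RAID 6 usable capacity: two drives' worth of space goes to parity,
# regardless of how wide the array is, so wider arrays amortize the cost.

def raid6_usable(total_drives, drive_tb=36):
    assert total_drives >= 4, "RAID 6 needs at least 4 drives"
    return (total_drives - 2) * drive_tb

for n in (4, 6, 8, 12):
    usable = raid6_usable(n)
    overhead = 2 / n
    print(f"{n} x 36 TB in RAID 6: {usable} TB usable, {overhead:.0%} spent on parity")
```

At 4 drives the parity cost really is 50%, which matches the commenter's hesitation; at 12 drives it drops to about 17%.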

[–] [email protected] 2 points 10 hours ago (1 children)

I'd want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me.

Repeat after me: RAID is not a backup solution, RAID is a high-availability solution.

The point of RAID is not to safeguard your data, you need proper backups for that (3-2-1 rule of backups: 3 copies of the data on 2 different storage media, with 1 copy off-site). RAID will not protect your data from deletion from user error, malware, OS bugs, or anything like that.

The point of RAID is so everyone can keep working if there is a hardware failure. It’s there to prevent downtime.

[–] [email protected] 2 points 10 hours ago (1 children)

It's 36 TB drives. Most people aren't planning on keeping anything legal or self-produced there. It's going to be pirated media, and idk about you but I'm not uploading that to any cloud provider lmao

[–] [email protected] 2 points 10 hours ago (1 children)

These are enterprise drives, they aren’t going to contain anything pirated. They are probably going to one of those cloud providers you don’t want to upload your data to.

[–] [email protected] 1 points 10 hours ago

I can easily buy enterprise drives for home use. What are you on about?

[–] [email protected] 1 points 10 hours ago

I use mirrors, so RAID 1 right now and likely RAID 10 when I get more drives. That's the safest IMO, since you don't need the rest of the array to resilver your new drive, only the ones in its mirror pool, which reduces the likelihood of a cascading failure.
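The resilver-scope point above can be made concrete: rebuilding a failed disk in a striped-mirror (RAID 10) pool only reads its mirror partner, while a RAID 6 rebuild has to read every surviving drive. A minimal sketch of that difference (the layouts and counts are illustrative, not from any specific pool):

```python
# How many drives get hammered with reads while rebuilding one failed
# disk. Fewer drives under load means less chance of a cascading failure.

def drives_read_during_rebuild(layout, total_drives):
    if layout == "raid10":
        return 1                 # only the surviving half of the mirror pair
    if layout == "raid6":
        return total_drives - 1  # every remaining drive supplies parity data
    raise ValueError(f"unknown layout: {layout}")

print(drives_read_during_rebuild("raid10", 8))  # 1
print(drives_read_during_rebuild("raid6", 8))   # 7
```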

[–] [email protected] 2 points 14 hours ago

Could you imagine the time it would take to resilver one drive? Crazy.

[–] [email protected] 20 points 20 hours ago (1 children)

I’m going to remind you that these fuckers are LOUD, like ROARING LOUD, so might not be suitable for your living room server.

[–] [email protected] 6 points 10 hours ago

DON'T TELL ME WHAT I CAN HANDLE!! I HOPE YOU CAN HEAR ME, MY PC'S FANS ARE A LITTLE NOISY!!

[–] [email protected] 14 points 21 hours ago* (last edited 21 hours ago) (1 children)

Now you can store even more data unsafely!

[–] [email protected] 5 points 15 hours ago (2 children)

You are not supposed to use these in a non-redundant config.

[–] [email protected] 1 points 10 hours ago

Even in an array, I'd be terrified of more drives failing during a rebuild that's gonna take a long time.

[–] [email protected] 3 points 14 hours ago

Especially these, ye

[–] [email protected] 16 points 23 hours ago (3 children)

What about the writing and reading speeds?

[–] [email protected] 11 points 18 hours ago (1 children)

If you care about that, spinning rust is not the right solution for you.

[–] [email protected] 5 points 16 hours ago

I mean, newer server-grade models with independent actuators can easily saturate a SATA 3 connection. As far as speeds go, a RAID 5 or RAID 6 setup or equivalent should be pretty damn fast, especially if they start rolling out those independent actuators into the consumer market.

As far as latency goes? Yeah, you should stick to solid state...but this breathes new life into the HDD market for sure.
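A quick sanity check on the "independent actuators can saturate SATA 3" claim above. The per-actuator sustained rate below is an assumed ballpark for a modern high-capacity drive, not a measured spec for any particular model:

```python
# Back-of-envelope: does doubling the actuators get a hard drive close
# to the SATA 3 ceiling? per_actuator_mbps is an assumption.

SATA3_LIMIT_MBPS = 600    # ~6 Gb/s line rate after 8b/10b encoding
per_actuator_mbps = 280   # assumed sustained sequential rate per actuator

for actuators in (1, 2):
    total = actuators * per_actuator_mbps
    near_limit = total >= SATA3_LIMIT_MBPS * 0.9
    print(f"{actuators} actuator(s): ~{total} MB/s, near SATA 3 limit: {near_limit}")
```

Under these numbers a single actuator sits well under the interface limit, while a dual-actuator drive lands within striking distance of it, consistent with the comment.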

[–] [email protected] 6 points 18 hours ago

It has some.

[–] [email protected] 2 points 16 hours ago

The speed usually increases with capacity, but this drive uses HAMR rather than conventional recording, so it will be interesting to see what effect that has on speed. The fastest HDDs available now can max out SATA 3 on sequential transfers, but they use dual actuators.

[–] [email protected] 23 points 1 day ago (1 children)

OK...what's this HAMR technology and how does it play compared to the typical CMR/SMR performance differences?

[–] [email protected] 19 points 1 day ago (1 children)

Heat-Assisted Magnetic Recording. It uses a laser to heat the drive platter, allowing for higher areal density and increased capacity.

I am ignorant of the CMR/SMR differences in performance

[–] [email protected] 6 points 20 hours ago (2 children)

I fear HAMR sounds like a variation on the idea of using a coarser method to prepare the data to be written, just like SMR. These kinds of hard drives are good for slow, predictable sequential storage, but they suck at random writes. They're good for surveillance storage and things like that, but no good for daily use in a computer.
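The random-write penalty described above is easy to see in a toy model of SMR: a small write into a shingled band forces the drive to read-modify-write the rest of the band, because the overlapping tracks can't be updated in place. The band size below is an assumption (real SMR zones are typically tens to hundreds of megabytes):

```python
# Toy model of SMR write amplification: updating a small chunk inside
# a shingled band rewrites the whole band. band_mib=256 is assumed,
# not a spec for any particular drive.

def write_amplification(write_kib, band_mib=256):
    band_kib = band_mib * 1024
    # The entire band is rewritten to service the small update.
    return band_kib / write_kib

print(f"4 KiB random write into a 256 MiB band: {write_amplification(4):,.0f}x")
print(f"Full-band sequential write:             {write_amplification(256 * 1024):.0f}x")
```

Sequential workloads that fill whole bands see no amplification at all, which is exactly why these drives do fine for surveillance-style streaming but fall over on random writes.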

[–] [email protected] 3 points 15 hours ago* (last edited 2 hours ago) (1 children)

That sounds absolutely fine to me.

Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning disk drive is glacially slow. So it really doesn't make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

In fact I wish tape drives weren't so expensive because I'm pretty sure I'd rather have one of those.

If you need high R/W performance and huge capacity at the same time (like for editing gigantic high resolution videos) you probably want some kind of RAID array.

[–] [email protected] 1 points 11 hours ago

These are still not good for a RAID array, was my point. Unless you're just storing sequentially, at a kinda slow rate. At least for SMR. I fear HAMR might be similar (it reminds me of Sony's MiniDisc idea but applied to a hard drive).
