Say Hello to the World's Largest Hard Drive, a Massive 36TB Seagate
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my zfs pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
High capacity storage pools for enterprises.
Space is at a premium. Denser drives could mean better pricing and availability.
-
I too, am old.
I'm older than that but didn't want to self-report. The first hard disk I remember my father buying was 40MB.
-
there was a time i asked this question about 500 megabytes
I am not questioning the need for more storage, but the need for more storage without increased speeds.
-
no thanks Seagate. the trauma of losing my data because of a botched firmware with a ticking time bomb kinda put me off your products for life.
see you in hell.
Some of Seagate's drives have terrible scores in reports like Backblaze's. They are probably the worst brand, but also generally the cheapest.
I have been running a RAID of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because TrueNAS has issues or whatnot, and for some fucking reason they won't die.
I have had a WD Green SSD that I used for the TrueNAS boot drive die, I had a WD external drive's controller die (the drive inside still worked), and I had some crappy mismatched WD drives in a RAID 0 for my Linux ISOs fail as well.
Whenever the Seagates start to die, I guess I'll be replacing them with Toshibas, unless somebody has another suggestion.
-
What drives do you have exactly? I have 7x6TB WD Red Pro drives in raidz2 and I can do a scrub in less than 24 hours.
I have 2×12TB white-label WD drives (harvested from external drives, but datacenter drives according to the serial numbers) and one 16TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about 2/3rds full, which I think impacts scrubbing.
I also frequently access the pool, which delays scrubbing.
-
That's a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it's hard to actually make use of it all.
For example, let's say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At 190MB/s max sustained transfer rate (figure for a 28TB Seagate Exos; I assume this new one is similar), you're talking about over two days just to copy over the parity information and get the array out of degraded mode! At some point these big drives stop being suitable for that use-case just because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
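The arithmetic above can be sketched quickly (the capacities and the 190MB/s sustained rate are the figures from this comment; a real rebuild usually runs slower than a pure sequential copy, so these are best-case numbers):

```python
# Back-of-the-envelope rebuild window for swapping one failed drive,
# assuming the array can stream at the drive's max sustained rate.

def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to sequentially write one full replacement drive."""
    bytes_total = capacity_tb * 1e12          # drive vendors use decimal TB
    seconds = bytes_total / (rate_mb_s * 1e6)
    return seconds / 3600

print(f"28TB @ 190MB/s: {rebuild_hours(28, 190):.1f} h")  # ~40.9 h
print(f"36TB @ 190MB/s: {rebuild_hours(36, 190):.1f} h")  # ~52.6 h, over two days
```

That whole window is time spent in degraded mode, which is the vulnerability the comment is describing.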
That's exactly what I wanted to say, yes :D.
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my zfs pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
Jesus, my pool takes a little over a day, but I've only got around 100TB. How big is your pool?
-
This post did not contain any content.
my qbittorrent is gonna love that
-
Jesus, my pool takes a little over a day, but I've only got around 100TB. How big is your pool?
The pool is about 20 usable TB.
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my zfs pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
Sounds like something is wrong with your setup. I have 20TB drives (x8, RAID 6, 70+TB in use), and scrubbing takes less than 3 days.
-
This post did not contain any content.
finally i'll be able to self-host one piece streaming
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my zfs pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
Data centers???
-
I have 2×12TB white-label WD drives (harvested from external drives, but datacenter drives according to the serial numbers) and one 16TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about 2/3rds full, which I think impacts scrubbing.
I also frequently access the pool, which delays scrubbing.
It's like 90% full, and scrubbing my pool is always super fast.
Two weeks to scrub the pool sounds like something is wrong, tbh.
-
This post did not contain any content.
Finally, a hard drive which can store more than a dozen modern AAA games
-
The pool is about 20 usable TB.
Something is very wrong if it's taking 2 weeks to scrub that.
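For a rough sanity check (hypothetical throughput figures of my choosing; a scrub has to read every allocated block once, with the disks working in parallel):

```python
# Rough lower bound on scrub time for a pool of spinning disks,
# assuming the scrub streams sequentially from all disks at once.
# Real scrubs are slower under fragmentation and concurrent load.

def scrub_days(data_tb: float, disks: int, per_disk_mb_s: float) -> float:
    """Days to read data_tb once, spread across disks reading in parallel."""
    seconds = (data_tb * 1e12) / (disks * per_disk_mb_s * 1e6)
    return seconds / 86400

# ~20TB of data on a 3-disk raidz1, even at a pessimistic 50MB/s per disk:
print(f"{scrub_days(20, 3, 50):.1f} days")  # ~1.5 days, nowhere near two weeks
```

Even with very pessimistic assumptions, a two-week scrub on a pool that size points at something else: heavy concurrent I/O, SMR drives, or a failing disk.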
-
That's exactly what I wanted to say, yes :D.
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
-
High capacity storage pools for enterprises.
Space is at a premium. Denser drives could mean better pricing and availability.
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back on if the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
-
Well, it depends. If they were dropped just because they are SMR and were writing slowly, I think they are fine. But otherwise...
What array system do you use? Some RAID software, or ZFS?
Windows Server storage solutions. I took them out of the array and they still weren't recognized in Disk Management, so I assume they're shot. It was just weird having two fail the same way.
-
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back on if the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
I would assume that with drives this large, arrays will use a different way to calculate parity, or carry higher redundancy, to compensate for the risk.
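For what it's worth, classic single-parity RAID 5 computes parity as a plain XOR across the data drives, which is exactly why it survives only one failure at a time; dual-parity schemes (RAID 6, raidz2) add a second, independent parity so a second failure during the long rebuild isn't fatal. A toy sketch of the XOR case (made-up two-byte blocks):

```python
from functools import reduce

# Toy single-parity (RAID 5-style) stripe: parity is the byte-wise XOR
# of the data blocks, so any ONE missing block can be reconstructed.
data = [b"\x01\x02", b"\xff\x00", b"\x10\x20"]  # three "drives"

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data)

# "Drive" 1 fails; rebuild its contents from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Lose a second block before the rebuild finishes and the XOR no longer has enough information, which is the scenario driving the nervousness about multi-day rebuilds.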
-
This post did not contain any content.
Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive
-