Say Hello to the World's Largest Hard Drive, a Massive 36TB Seagate
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
Sounds like something is wrong with your setup. I have 20TB drives (x8, raid 6, 70+TB in use) .... scrubbing takes less than 3 days.
-
Finally I'll be able to self-host One Piece streaming
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
Data centers???
-
I have 2x12TB white-label WD drives (harvested from external drives, but datacenter drives according to the SN) and one 16TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about 2/3rds full, which impacts scrubbing I think. I also frequently access the pool, which delays scrubbing.
It's like 90% full; scrubbing my pool is always super fast.
Two weeks to scrub the pool sounds like something is wrong tbh.
-
Finally, a hard drive which can store more than a dozen modern AAA games
-
The pool is about 20 usable TB.
Something is very wrong if it's taking 2 weeks to scrub that.
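For a rough sense of scale, a back-of-envelope estimate (the ~200 MB/s throughput is an assumed average for 12TB-class drives, and the pool size is taken from the comment above, not measured):

```python
# Back-of-envelope: even reading ~20TB strictly sequentially at ~200 MB/s
# (assumed average drive throughput) takes about a day. A scrub reads the
# member drives in parallel, so in practice it should finish sooner, not
# take two weeks.
data_tb = 20              # approximate data in the pool
throughput_mb_s = 200     # assumed average sequential read speed

seconds = data_tb * 1e12 / (throughput_mb_s * 1e6)
print(f"~{seconds / 3600:.0f} hours")   # ~28 hours
```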
-
That's exactly what I wanted to say, yes :D
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
-
High capacity storage pools for enterprises.
Space is at a premium. Saving space should/could equal better pricing/availability.
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back about whether the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead anyway, but the industry may be reluctant to buy this.
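To put rough numbers on the rebuild worry, a quick back-of-envelope sketch (the ~250 MB/s figure is an assumed average for current high-capacity drives, not a published spec):

```python
# Back-of-envelope: minimum resilver/rebuild time is bounded by how fast the
# replacement drive can be written end to end. Assumes ~250 MB/s average
# throughput; real rebuilds with parity math and concurrent load run slower.
for size_tb in (20, 36):
    hours = size_tb * 1e12 / (250e6 * 3600)
    print(f"{size_tb}TB drive: ~{hours:.0f} h (~{hours / 24:.1f} days) at best")
```

Even in the ideal case that's getting on for two days of degraded redundancy with a 36TB drive, which is exactly the window sysadmins get nervous about.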
-
Well, it depends. If they were dropped just because they are SMR and were writing slowly, I think they are fine. But otherwise...
What array system do you use? Some RAID software, or ZFS?
Windows Server storage solutions. I took them out of the array and they still weren't recognized in Disk Management so I assume they're shot. It was just weird having 2 fail the same way.
-
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back about whether the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead anyway, but the industry may be reluctant to buy this.
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
-
Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive
-
This hard drive is so big that astronomers thought it was a planet.
This hard drive is so big, it's got its own area code
-
Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive
if I'm still alive
That goes without saying, unless you anticipate something.
-
A ZFS Scrub validates all the data in a pool and corrects any errors.
I'm not in the know when it comes to having your own personal data center, so I have no idea... But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2TB drive in my home PC. Is the equivalent of a scrub something like a disk cleanup?
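For anyone curious what running one actually looks like, here is a minimal sketch, assuming a Linux host with OpenZFS installed and a hypothetical pool named tank; it just wraps the standard zpool CLI (needs root):

```python
# Minimal sketch: start a ZFS scrub and poll until it finishes.
# Assumes OpenZFS on Linux, root privileges, and a pool named "tank"
# (hypothetical name).
import subprocess
import time

POOL = "tank"  # hypothetical pool name

def start_scrub(pool: str) -> None:
    # `zpool scrub <pool>` walks every allocated block, verifies checksums,
    # and repairs from redundancy (mirror/raidz) where it can.
    subprocess.run(["zpool", "scrub", pool], check=True)

def scrub_in_progress(pool: str) -> bool:
    # While a scrub runs, `zpool status` shows a "scrub in progress" line.
    out = subprocess.run(["zpool", "status", pool],
                         capture_output=True, text=True, check=True).stdout
    return "scrub in progress" in out

if __name__ == "__main__":
    start_scrub(POOL)
    while scrub_in_progress(POOL):
        time.sleep(600)  # re-check every 10 minutes
    print("Scrub finished; `zpool status` shows repaired bytes and errors.")
```

In practice most people just schedule it (monthly is a common cadence) with cron or a systemd timer rather than polling like this.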
-
Hello!
Howdy!
-
I wanna fuck this HDD. To have that much storage on one drive when I currently have ~30TB shared between 20 drives makes me very erect.
twenty!?
-
Windows Server storage solutions. I took them out of the array and they still weren't recognized in Disk Management so I assume they're shot. It was just weird having 2 fail the same way.
I don't have experience with Windows Server, but that does indeed sound like they're dead. You could check them with a bootable live Linux on a pen drive, like GParted Live, to see whether it detects them at all, in case Windows is just hiding them because it blacklisted them or something.
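A minimal sketch of that check (assuming a live Linux environment with smartmontools installed; the /dev/sdX names are placeholders):

```python
# From a live Linux session: list the block devices the kernel can see and
# ask each suspect drive for its overall SMART health verdict. A truly dead
# drive usually won't show up in lsblk at all.
import subprocess

subprocess.run(["lsblk", "-o", "NAME,SIZE,MODEL,SERIAL"], check=True)

for dev in ["/dev/sdb", "/dev/sdc"]:  # placeholder device names
    subprocess.run(["smartctl", "-H", dev])  # needs root; part of smartmontools
```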
-
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
If there's higher redundancy, then they are already giving up on density.
We've pretty much covered the likely ways to calculate parity.
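To make the density/redundancy tradeoff concrete, a quick back-of-envelope for a hypothetical 8-drive array of 36TB disks (ignores filesystem overhead and TB-vs-TiB differences):

```python
# Usable capacity vs. parity level for a hypothetical 8 x 36TB array.
drives, size_tb = 8, 36

for scheme, parity in [("raidz1 / RAID 5", 1), ("raidz2 / RAID 6", 2), ("raidz3", 3)]:
    usable = (drives - parity) * size_tb
    print(f"{scheme}: {usable} TB usable ({usable / (drives * size_tb):.0%} of raw)")
```

Every extra parity drive added to ride out the long rebuild window costs another 36TB of usable space, which is exactly the density-vs-worry tradeoff described above.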
-
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
It's like the Petronas Towers: every time they're finished cleaning the windows, they have to start again.