Say Hello to the World's Largest Hard Drive, a Massive 36TB Seagate
-
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it takes a long time to rebuild the array after shoving a new one in. Sysadmins will be nervous about another failure taking out the whole array until that process completes, and it can take days. There was some debate a while back about whether the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
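Napkin math, assuming ~250 MB/s sustained sequential throughput (a guess for a drive in this class, not a vendor spec) and an otherwise idle array, which is the absolute best case:

```python
# Back-of-envelope rebuild floor for a 36TB drive. The throughput figure
# is an assumption; a real rebuild also competes with live I/O, so
# actual times run much longer than this.
CAPACITY_TB = 36
SEQ_MBPS = 250                      # assumed sustained sequential rate

seconds = CAPACITY_TB * 1_000_000 / SEQ_MBPS
print(f"{seconds / 3600:.0f} h (~{seconds / 86400:.1f} days) minimum")
# -> 40 h (~1.7 days) minimum, with zero other load on the array
```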
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
-
Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive
-
This hard drive is so big, that astronomers thought it was a planet.
This hard drive is so big, it's got its own area code
-
Can't wait to see this bad boy on serverpartdeals in a couple years if I'm still alive
if I'm still alive
That goes without saying, unless you're anticipating something.
-
A ZFS Scrub validates all the data in a pool and corrects any errors.
I'm not in the know about running your own personal data center, so I have no idea. ... But how often is this necessary? Does accessing the data on your hard drive require a scrub? I just have a 2TB drive in my home PC. Is a scrub the equivalent of a disk cleanup?
-
Hello!
Howdy!
-
I wanna fuck this HDD. To have that much storage on one drive when I currently have ~30TB shared between 20 drives makes me very erect.
twenty!?
-
Windows Server storage solutions. I took them out of the array and they still weren't recognized in Disk Management, so I assume they're shot. It was just weird having two fail the same way.
I don't have experience with Windows Server, but that does sound like they're dead. You could check them from a bootable live Linux on a USB stick, like GParted Live, to see whether it detects them, in case Windows is just hiding them because it blacklisted them or something.
-
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate the risk.
If there's higher redundancy, then they are already giving up on density.
We've pretty much covered the likely ways to calculate parity.
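For illustration only: single parity (RAID 5 / raidz1 style) is just an XOR across the stripe, which is exactly why a second failure mid-rebuild is fatal. A toy sketch, not any real controller's implementation:

```python
# Toy single-parity stripe: parity = XOR of the data blocks, so any ONE
# missing block can be recomputed from the survivors.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x0f\x0f", b"\xf0\x00"]   # three data "drives"
parity = xor_blocks(data)

# drive 1 dies; rebuild its block from the other drives plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
# Lose a second block before the rebuild finishes and the XOR no longer
# holds enough information, hence dual parity (RAID 6 / raidz2) and
# higher for drives this size.
```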
-
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
-
What is the use case for drives that large?
I 'only' have 12TB drives, and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
It's like the Petronas Towers: every time they finish cleaning the windows, they have to start again.
-
I'm not in the know about running your own personal data center, so I have no idea. ... But how often is this necessary? Does accessing the data on your hard drive require a scrub? I just have a 2TB drive in my home PC. Is a scrub the equivalent of a disk cleanup?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub, it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as when you wrote it. This is primarily to protect against things like bit rot. There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built around redundant storage. It spreads the data over multiple drives in a special way, which allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
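Loosely, a scrub on a two-way mirror boils down to the following. This is a toy sketch only; real ZFS stores checksums in block pointers and the repair logic is far more involved:

```python
# Toy "scrub" of a 2-way mirror: read every block on both sides, verify
# against the stored checksum, and heal a bad copy from its good twin.
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def scrub(side_a: list, side_b: list, sums: list) -> None:
    for i, want in enumerate(sums):
        ok_a = checksum(side_a[i]) == want
        ok_b = checksum(side_b[i]) == want
        if ok_a and not ok_b:
            side_b[i] = side_a[i]               # self-heal side B
        elif ok_b and not ok_a:
            side_a[i] = side_b[i]               # self-heal side A
        elif not (ok_a or ok_b):
            print(f"block {i}: unrecoverable")  # both copies rotted

blocks = [b"hello", b"world"]
sums = [checksum(b) for b in blocks]
a, b = list(blocks), list(blocks)
b[1] = b"w0rld"                                 # simulate bit rot on one side
scrub(a, b, sums)
assert b[1] == b"world"                         # healed from the good copy
```

Because it reads every allocated block on every drive, a scrub's runtime scales with pool size, which is why huge drives stretch it to days or weeks.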
-