Say Hello to the World's Largest Hard Drive, a Massive 36TB Seagate
-
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
If there's higher redundancy, then they are already giving up on density.
We've pretty much covered the likely ways to calculate parity.
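For anyone following along, here's a toy sketch of the simplest parity scheme (single XOR parity, RAID 5 style). It's only an illustration; real arrays work on fixed-size stripes and use stronger codes like Reed-Solomon for double parity:

```python
def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together; the result is the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]   # contents of three data drives
parity = xor_blocks(data)            # stored on a fourth drive

# Simulate losing drive 1 and rebuilding its block from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

The point is that single parity only survives one failure per stripe; anything beyond that needs more redundancy, which is exactly the density trade-off being argued about.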
-
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my zfs-pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
It's like the Petronas Towers: every time they're finished cleaning the windows they have to start again.
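For a rough sense of scale, here's a back-of-the-envelope estimate. The 150 MB/s sustained read speed is an assumption; real scrubs run slower on busy or fragmented pools:

```python
def scrub_hours(capacity_tb: float, throughput_mb_s: float = 150.0) -> float:
    """Hours to read the whole capacity once at a sustained throughput."""
    return capacity_tb * 1e12 / (throughput_mb_s * 1e6) / 3600

for tb in (12, 36):
    print(f"{tb} TB at 150 MB/s: ~{scrub_hours(tb) / 24:.1f} days of pure reading")
# 12 TB -> ~0.9 days, 36 TB -> ~2.8 days per drive; a two-week scrub means the
# pool is reading far below the drives' raw sequential speed.
```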
-
I'm not in the know about running your own personal data center, so I have no idea. ... But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2TB drive on my home PC. Is the equivalent of a scrub like a disk cleanup?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub; it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot. There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built on redundant storage. Storing the data spread over multiple drives in a special way allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
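If the conceptual picture helps, here is a very rough sketch of what a scrub does (an illustration only, not how ZFS is actually implemented): read each block back, check it against the checksum recorded when it was written, and rebuild it from a redundant copy if it no longer matches.

```python
import hashlib

def scrub(blocks, checksums, mirror):
    """Verify every block against its stored checksum; repair from the mirror."""
    repaired = 0
    for i, block in enumerate(blocks):
        if hashlib.sha256(block).hexdigest() != checksums[i]:
            blocks[i] = mirror[i]   # bit rot or a torn write: restore the copy
            repaired += 1
    return repaired

data   = [b"family-photos", b"tax-records"]
sums   = [hashlib.sha256(b).hexdigest() for b in data]
mirror = list(data)                 # second copy kept on another drive
data[1] = b"tax-rec\x00rds"         # simulate a flipped byte on disk
print(scrub(data, sums, mirror))    # -> 1 corrupt block found and repaired
```

The load comes from the fact that every single block on the pool has to be read and checksummed, which is why bigger drives mean longer scrubs.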
-
Does it really matter that much if the first copy takes a while, though? You only do it once, and you don't even have to do it all in one go. Just letting it run over the weekend would do.
It matters to me. I got stuff to back up regularly, and I ain't got all weekend.
-
And I’ve only ever had WD hard drives and SanDisk flash drives die on me.
Maybe it's confirmation bias, but almost all memory that failed on me has been SanDisk flash storage. The only exception was a Corsair SSD, which failed after 3 years as the main laptop drive plus another 3 as a server boot and log drive.
The only flash drive I ever had fail me that wasn’t made by SanDisk was a generic Micro Center one, which was so cheap I couldn’t bring myself to care about it.
-
twenty!?
Yeah, lots of drives of varying capacity.
-
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub; it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot. There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built on redundant storage. Storing the data spread over multiple drives in a special way allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
Thank you for all this information. One day, when my ADHD forces me into making myself a home server, I'll remember this and keep it in mind. I've always wanted to store movies, but these days it's just family pictures and stuff. Definitely don't have terabytes, but I'm getting up to 100s of GB.
-
I'm older than that but didn't want to self-report. The first hard disk I remember my father buying was 40MB.
I remember renting a game, and it was on a high-density 5.25" floppy at a whopping 1.2MB; but our family computer only had a standard-density 5.25" drive.
So we went to the neighbor's house (he was one of the first computer nerds; I'm not sure he's still alive now), and he copied the game to a 3.5" high-density 1.44MB disk. Then we returned the rental because we couldn't play it on the 1.2MB HD 5.25" floppy.
.... And that was the first time I was party to piracy.
-
It matters to me. I got stuff to back up regularly, and I ain't got all weekend.
It's only the first copy that takes such a long time. After that you only copy the changes.
-
It's only the first copy that takes such a long time. After that you only copy the changes.
That depends entirely on your use case.
-
That depends entirely on your use case.
Sure, if you have many TBs of data changes per day you probably want a different solution. But that would also suggest you don't need to keep it for very long.
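To make the "only copy the changes" idea concrete, here's a minimal sketch of an incremental copy that skips files whose size and modification time already match the backup, roughly the default heuristic tools like rsync use. The paths are placeholders:

```python
import shutil
from pathlib import Path

def incremental_backup(src: Path, dst: Path) -> int:
    """Copy only files whose size or mtime differs from the existing backup."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        s = f.stat()
        if target.exists():
            t = target.stat()
            if t.st_size == s.st_size and t.st_mtime >= s.st_mtime:
                continue                          # unchanged since the last run
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)                   # copy2 preserves the mtime
        copied += 1
    return copied

print(incremental_backup(Path("/data"), Path("/mnt/backup/data")))
```

The first run copies everything and is slow; every later run only touches what changed, which is why the initial multi-day copy mostly doesn't matter.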
-