Say Hello to the World's Largest Hard Drive, a Massive 36TB Seagate
-
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
If there's higher redundancy, then they are already giving up on density.
We've pretty much covered the likely ways to calculate parity.
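For anyone following along, the simplest of those schemes is single parity as in RAID 5, which is basically just XOR across the data drives: lose any one drive and the missing data can be rebuilt from the rest. A toy sketch of the idea, not how any real controller or ZFS actually implements it:

```python
# Toy sketch of single-parity (RAID 5 style) protection: the parity
# block is the XOR of all data blocks, so any one missing block can be
# rebuilt by XOR-ing the surviving blocks with the parity.

def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data blocks on three drives, parity stored on a fourth.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# If the drive holding d2 dies, the survivors plus parity recover it.
assert xor_blocks([d1, d3, parity]) == d2
```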
-
I get it. But the moment we invoke RAID or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving Docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
-
What is the use case for drives that large?
I 'only' have 12TB drives and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
It's like the Petronas Towers: every time they're finished cleaning the windows, they have to start again.
-
I'm not in the know about having your own personal data center, so I have no idea. ... But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2TB drive on my home PC. Is the equivalent of a scrub like a disk cleanup?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub; it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot. There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built around redundant storage. Spreading the data over multiple drives in a special way allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
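For anyone wondering what a scrub actually does under the hood: conceptually it just re-reads everything and checks it against checksums recorded when the data was written, repairing from a redundant copy if something doesn't match. A toy Python sketch of the detection half of that idea (ZFS does this per block inside the filesystem, not per file, and the paths here are made up):

```python
# Conceptual sketch of a scrub: re-read data and compare it against
# checksums recorded earlier. ZFS works on blocks inside the filesystem
# and can repair from redundancy; this toy version only detects
# corruption in files under a directory.
import hashlib
import json
from pathlib import Path

def checksum(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(root: Path, index_file: Path) -> None:
    """The 'write time' bookkeeping: store a checksum for every file."""
    index = {str(p): checksum(p) for p in root.rglob("*") if p.is_file()}
    index_file.write_text(json.dumps(index))

def scrub(index_file: Path) -> None:
    """Re-read every file and flag anything that no longer matches."""
    index = json.loads(index_file.read_text())
    for name, expected in index.items():
        if checksum(Path(name)) != expected:
            print(f"possible bit rot: {name}")

# record(Path("/tank/photos"), Path("index.json"))  # hypothetical paths
# scrub(Path("index.json"))
```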
-
Does it really matter that much if the first copy takes a while, though? You only do it once, and you don't even have to do it all in one go. Just letting it run over the weekend would do.
It matters to me. I got stuff to back up regularly, and I ain't got all weekend.
-
And I've only ever had WD hard drives and SanDisk flash drives die on me
Maybe it's confirmation bias, but almost all memory that has failed on me has been SanDisk flash storage. The only exception is a Corsair SSD, which failed after 3 years as the main laptop drive plus another 3 as a server boot and log drive.
The only flash drive I ever had fail me that wasn't made by SanDisk was a generic Micro Center one, which was so cheap I couldn't bring myself to care about it.
-
twenty!?
Yeah, lots of drives of varying capacity.
-
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub; it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot. There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built around redundant storage. Spreading the data over multiple drives in a special way allows it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
Thank you for all this information. One day, when my ADHD forces me into making myself a home server, I'll remember this and keep it in mind. I've always wanted to store movies, but these days it's just family pictures and stuff. I definitely don't have terabytes, but I'm getting up to 100s of GB.
-
I'm older than that but didn't want to self-report. The first hard disk I remember my father buying was 40MB.
I remember renting a game, and it was on a high-density 5.25" floppy at a whopping 1.2MB; but our family computer only had a standard-density 5.25" drive.
So we went to the neighbor's house; he was one of the first computer nerds (I'm not sure he's still alive now), and he copied the game to a 3.5" high-density 1.44MB disk. Then we returned the rental, because we couldn't play it on the 1.2MB HD 5.25" floppy anyway.
.... And that was the first time I was party to piracy.
-
It matters to me. I got stuff to back up regularly, and I ain't got all weekend.
It's only the first copy that takes such a long time. After that you only copy the changes.
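If it isn't obvious how "only copy the changes" works in practice: backup tools compare what is already on the target against the source and skip anything unchanged, so only new or modified files cost time on later runs. A stripped-down sketch of that idea (real tools like rsync also compare checksums, handle deletions, keep snapshots, etc.; the paths are made up):

```python
# Stripped-down incremental copy: files that already exist on the
# target with the same size and an mtime at least as new are skipped,
# so after the first full run only changes get copied.
import shutil
from pathlib import Path

def incremental_copy(src: Path, dst: Path) -> None:
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists():
            s, t = f.stat(), target.stat()
            if s.st_size == t.st_size and s.st_mtime <= t.st_mtime:
                continue  # unchanged since the last run
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy2 preserves the modification time

# incremental_copy(Path("/data"), Path("/backup/data"))  # hypothetical paths
```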
-
It's only the first copy that takes such a long time. After that you only copy the changes.
That depends entirely on your usecase.
-
That depends entirely on your usecase.
Sure, if you have many TBs of data changes per day you probably want a different solution. But that would also suggest you don't need to keep it for very long.
-
Sure, if you have many TBs of data changes per day you probably want a different solution. But that would also suggest you don't need to keep it for very long.
Write speeds on SMR drives start to stagnate after mere gigabytes written, not after terabytes. As soon as the CMR cache is full, you're fucked: it drops to utterly unusable speeds as it desperately tries to balance writing out blocks to the persistent area of the disk against accepting new incoming writes. I have 25-year-old consumer-level IDE drives that perform better than an SMR drive in this thrashing state.
Also, I often use hard drives as a temporary holding area for stuff that I'm transferring around for one reason or another, and that absolutely sucks when an operation that normally takes an hour or two suddenly becomes a multi-day endeavour tying up my computing resources. I was burned once when Seagate submarined SMR drives into the Barracuda line, and I got a drive that was absolutely unfit for purpose. Never again.
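If anyone wants to see that cliff for themselves, it shows up with nothing fancier than writing sequential chunks to a scratch file on the drive and printing the throughput of each one; on a drive-managed SMR disk the rate drops off hard once the CMR cache region fills. A rough sketch, with a made-up mount point and a total size you would adjust to the drive:

```python
# Rough throughput probe: write sequential 1 GiB chunks of incompressible
# data to a scratch file and print MiB/s per chunk. On a drive-managed
# SMR disk the rate collapses once the CMR cache region is exhausted.
import os
import time

CHUNK = 1024 * 1024                      # 1 MiB write buffer
SCRATCH = "/mnt/testdrive/scratch.bin"   # hypothetical mount point

buf = os.urandom(CHUNK)
with open(SCRATCH, "wb") as f:
    for gib in range(64):                # 64 GiB total, adjust as needed
        start = time.monotonic()
        for _ in range(1024):            # 1024 x 1 MiB = 1 GiB
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())             # push it to the disk, not the page cache
        rate = 1024 / (time.monotonic() - start)
        print(f"GiB {gib + 1}: {rate:.0f} MiB/s")

os.remove(SCRATCH)
```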
-