This new 40TB hard drive from Seagate is just the beginning—50TB is coming fast!
-
Having been burned many times in the past, I won't trust even 40 GB to a Seagate drive, let alone 40 TB.
Even in enterprise arrays where they're basically disposable when they fail, I'm still wary of them.
Same here. I've been burned by SSDs too, though - a Samsung Evo Pro drive crapped out on me just months after I bought it. It was under warranty and replaced at no cost, but I still lost all my data and config/settings.
-
Still, it's a good thing if it means energy savings at data centers.
For home and SMB use there's already a notable absence of backup and archival technologies to match available storage capacities. Developing one without the other seems short-sighted.
I still wonder: what's stopping vendors from producing "chonk store" devices - slow but reliable bulk-storage SSDs?
Just in terms of physical space, you could easily fit 200 microSD cards in a 2.5" drive, have everything replicated five times, and end up with a reasonably reliable device (extremely simplified, I know).
I just want something for lukewarm storage that doesn't require a datacenter and/or a 500W continuous power draw.
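A rough sketch of the replication math in the idea above, assuming a hypothetical per-card annual failure rate and fully independent failures - an optimistic assumption, since cards in one enclosure share power, heat, and firmware:

```python
# Probability that all replicas of a given block fail within a year,
# assuming independent failures. Real failures correlate, so treat
# this as a lower bound on the actual risk.
def all_replicas_fail(per_device_fail_prob: float, replicas: int) -> float:
    return per_device_fail_prob ** replicas

# Hypothetical 10% annual failure rate per card, replicated five times:
p = all_replicas_fail(0.10, 5)
print(f"{p:.6f}")  # prints "0.000010" -- one in 100,000 per year
```

Even with cheap, flaky media, brute replication drives the per-block loss probability down fast; the real engineering problem is the correlated-failure part this sketch ignores.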
-
Don’t look at Backblaze drive reports then. WD is pretty much all good, Seagate has some good models that are comparable to WD, but they have some absolutely unforgivable ones as well.
Not every Seagate drive is bad, but nearly every chronically unreliable drive in their reports is a Seagate.
Personally, I’ve managed hundreds of drives in the last couple of decades. I won’t touch Seagate anymore due to their inconsistent reliability from model to model (and when it’s bad, it’s bad).
Don’t look at Backblaze drive reports then
I have.
But after personally suffering 4 complete disk failures of WD drives in less than 3 years, it's really more of a "fool me once" situation.
-
I deal with large data chunks, and 40TB drives are an interesting idea... until you consider one failing.
RAIDs and arrays of smaller drives still make more sense for these large data sets than putting all the eggs in one giant basket.
-
I deal with large data chunks, and 40TB drives are an interesting idea... until you consider one failing.
RAIDs and arrays of smaller drives still make more sense for these large data sets than putting all the eggs in one giant basket.
You'd still put the 40TB drives in a RAID? But eventually you'll be limited by the number of bays, so larger drives are better.
-
I deal with large data chunks, and 40TB drives are an interesting idea... until you consider one failing.
RAIDs and arrays of smaller drives still make more sense for these large data sets than putting all the eggs in one giant basket.
I guess the idea is you'd still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.
-
I’ll finally have enough space for my meme screenshots.
Or the 8k photos of vacation dinners.
-
You'd still put the 40TB drives in a RAID? But eventually you'll be limited by the number of bays, so larger drives are better.
They're also ignoring how many times this conversation has been had...
We never stopped using RAID at any previous increase in drive density; there's no reason to pick this one as the time to stop.
-
I still wonder: what's stopping vendors from producing "chonk store" devices - slow but reliable bulk-storage SSDs?
Just in terms of physical space, you could easily fit 200 microSD cards in a 2.5" drive, have everything replicated five times, and end up with a reasonably reliable device (extremely simplified, I know).
I just want something for lukewarm storage that doesn't require a datacenter and/or a 500W continuous power draw.
Cost. The speed of flash storage is an inherent quality, not something manufacturers typically select for. I assure you, if they knew how to make some sort of "Super MLC" they absolutely would.
-
I remember bragging when my computer had 40GB of storage.
-
So all the other hard drives will be cheaper now, right? Right?
-
Don’t look at Backblaze drive reports then
I have.
But after personally suffering 4 complete disk failures of WD drives in less than 3 years, it's really more of a "fool me once" situation.
It used to be pertinent to check the color of WD drives. I can't remember all of them, but off the top of my head I remember Blue dying the most. They used to have Black, Red, and maybe a Green model; now they have Purple and Gold as well. Each was designated for certain purposes / reliability levels.
Source: I used to be a certified Apple/Dell/HP repair tech, so I was replacing hard drives daily.
-
I remember bragging when my computer had 40GB of storage.
I bought my first HDD second-hand. It was advertised as 40MB, but it was actually 120MB. How happy was young me?
-
It used to be pertinent to check the color of WD drives. I can't remember all of them, but off the top of my head I remember Blue dying the most. They used to have Black, Red, and maybe a Green model; now they have Purple and Gold as well. Each was designated for certain purposes / reliability levels.
Source: I used to be a certified Apple/Dell/HP repair tech, so I was replacing hard drives daily.
Gold is the enterprise line. Black is enthusiast, Blue is desktop, Red is NAS, Purple is NVR, Green is external. Green you almost certainly don't want (they do their own power management), and Red is likely to be SMR. But otherwise they're not too different. If you saw a lot of Blues failing, it's probably because the systems you supported used Blue almost exclusively.
-
You'd still put the 40TB drives in a RAID? But eventually you'll be limited by the number of bays, so larger drives are better.
Of course, because you don't want to lose the data if one of the drives dies. And backing up that much data is painful.
-
Same here. I've been burned by SSDs too, though - a Samsung Evo Pro drive crapped out on me just months after I bought it. It was under warranty and replaced at no cost, but I still lost all my data and config/settings.
Any disk can and will fail at some point. Backups are your best friend. Some sort of disk redundancy is your second-best friend.
-
Cost. The speed of flash storage is an inherent quality, not something manufacturers typically select for. I assure you, if they knew how to make some sort of "Super MLC" they absolutely would.
It's not inherent in the sense of "more store = more fast".
You could absolutely use older, more established production nodes to produce higher-quality, longer-lasting flash storage. The limitation is hardly ever space, but heat. So putting that kind of flash, with intentionally slowed-down controllers, into regular 2.5" or even 3.5" form factors should be possible.
Cost could be an issue, because the market isn't seen as very large.
-
They're also ignoring how many times this conversation has been had...
We never stopped using RAID at any previous increase in drive density; there's no reason to pick this one as the time to stop.
RAID 5 is becoming less viable due to increasing rebuild times, necessitating RAID 1 instead. But new drives have better IOPS too, so maybe it's not as severe as predicted.
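To put a number on why rebuild times are the worry: a rebuild must read or write every sector of the failed drive's replacement, so the time scales with capacity over sustained throughput. A minimal sketch, assuming a 250 MB/s sustained rate (a plausible but assumed figure for a large modern HDD):

```python
# Back-of-envelope RAID rebuild time: capacity divided by sustained
# sequential throughput. Real rebuilds are slower under load, so this
# is a best case; the array runs degraded the entire time.
def rebuild_hours(capacity_tb: float, throughput_mb_s: float = 250) -> float:
    seconds = capacity_tb * 1e12 / (throughput_mb_s * 1e6)
    return seconds / 3600

print(f"{rebuild_hours(40):.0f} h")  # prints "44 h" for a 40TB drive
```

Nearly two days of degraded operation per failed 40TB drive is why single-parity schemes get nervous-making at these capacities.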
-
I still wonder: what's stopping vendors from producing "chonk store" devices - slow but reliable bulk-storage SSDs?
Just in terms of physical space, you could easily fit 200 microSD cards in a 2.5" drive, have everything replicated five times, and end up with a reasonably reliable device (extremely simplified, I know).
I just want something for lukewarm storage that doesn't require a datacenter and/or a 500W continuous power draw.
They make bulk-storage SSDs with QLC for enterprise use.
The reason they're not used for consumer use cases yet is that raw NAND chips are still more expensive than hard drives. People don't want to pay $3k for a 50TB SSD if they can buy a $500 50TB HDD and they don't need the speed.
For what it's worth, 8TB TLC PCIe 3 U.2 SSDs are only $400 used on eBay these days, which is a pretty good option if you're trying to move away from noisy, slow HDDs. Four of those in RAID 5 plus a DIY NAS would get you 24TB of formatted, super-fast Nextcloud/Immich storage for ~$2k.
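The arithmetic behind that suggestion, as a sketch (drive count, price, and RAID 5 overhead as stated above; `raid5_usable_tb` is just an illustrative helper, not any real tool):

```python
# RAID 5 keeps (n-1)/n of raw capacity: one drive's worth goes to
# distributed parity, and the array survives any single-drive failure.
def raid5_usable_tb(drives: int, drive_tb: float) -> float:
    return (drives - 1) * drive_tb

usable = raid5_usable_tb(4, 8)  # 4 x 8TB drives, as suggested above
cost = 4 * 400                  # assumed $400 each, used
print(usable, cost)             # prints "24 1600" -- 24TB for $1600 in drives
```

The remaining ~$400 of the ~$2k budget would cover the DIY NAS chassis, board, and RAM.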
-
Gold is the enterprise line. Black is enthusiast, Blue is desktop, Red is NAS, Purple is NVR, Green is external. Green you almost certainly don't want (they do their own power management), and Red is likely to be SMR. But otherwise they're not too different. If you saw a lot of Blues failing, it's probably because the systems you supported used Blue almost exclusively.
I thought Green was "eco." At least the higher-end external ones tend to be Red drives, which is famously why people shuck them to use internally - they're often cheaper than just buying a bare Red drive directly, for some reason.