|
So... can we be jerks to Alereon again?
|
# ¿ Sep 1, 2016 12:51 |
|
|
Malcolm XML posted:i have a 256gb sandisk ultra II that i rediscovered

Just think about those SD cards for a moment and then think what you can do with them.
|
# ¿ Sep 8, 2016 22:28 |
|
Ynglaur posted:That's...pretty cool.

It's a cool idea in theory, but SD cards are slow. I'd hate to see how slow that setup would be.
|
# ¿ Sep 9, 2016 01:01 |
|
Saukkis posted:For a long time I've wanted to know: what is the failure mode when SSDs reach their maximum write cycles? I would hope that the available writable space slowly decreases, but that the blocks that can't be written to still remain readable? How much variation is there in how many cycles different blocks can endure? Could one block last only 1,000 cycles while the block right next to it manages 5,000?

When a block dies, it's lost. You don't get to read what was there anymore. When the SSD's firmware detects a failure, it saves the data to a reserved block that isn't accessible to anything but the drive. When it runs out of those spare blocks, it starts using the non-reserved portion of the drive, and the usable size of the drive decreases. At that point the file system on top begins to get compromised and data loss is likely.

In terms of cycles, blocks tend to last about as long as their neighbors. So expect 90% to 110% of the observed cycle count from other parts of the NAND once you see reallocated counts start to rise. Theoretically, one portion of the drive could last five times the write cycles of another portion, but I haven't read anything that suggests this happens in the real world.
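Here's a toy sketch of that remapping behavior in Python. The block counts, endurance numbers, and spare percentage are all made up for illustration; real firmware is vastly more involved:

```python
import random

# Invented numbers for illustration only.
TOTAL_BLOCKS = 1000     # blocks visible to the OS
SPARE_BLOCKS = 73       # hidden reserve
MEAN_ENDURANCE = 3000   # average write cycles per block

class ToySSD:
    def __init__(self):
        n = TOTAL_BLOCKS + SPARE_BLOCKS
        # Endurance varies block to block, roughly 90-110% of the mean.
        self.endurance = [int(MEAN_ENDURANCE * random.uniform(0.9, 1.1))
                          for _ in range(n)]
        self.writes = [0] * n
        self.dead = set()
        self.spares_left = SPARE_BLOCKS
        self.visible_blocks = TOTAL_BLOCKS

    def write(self, block):
        if block in self.dead:               # already remapped elsewhere
            return
        self.writes[block] += 1
        if self.writes[block] >= self.endurance[block]:
            self.dead.add(block)             # block worn out; data there is gone
            if self.spares_left > 0:
                self.spares_left -= 1        # swap in a hidden spare
            else:
                self.visible_blocks -= 1     # no spares left: drive shrinks

ssd = ToySSD()
for _ in range(4_000_000):
    ssd.write(random.randrange(TOTAL_BLOCKS + SPARE_BLOCKS))
print(ssd.spares_left, ssd.visible_blocks)   # watch capacity fall off a cliff
```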
|
# ¿ Sep 10, 2016 14:30 |
|
BobHoward posted:The model you describe is roughly how hard drives handle bad sectors, but no SSD I am aware of works quite like this. Unlike HDDs the extra capacity is substantial (most MLC consumer drives have 7.3% extra capacity over what the label claims, and most TLC drives even more than that). Also unlike HDDs, none of the extra space is treated as a special pool which goes unused until it's time to replace a bad block.

I've had several SSDs that exhibited exactly that behavior. I have a 64GB SSD on my desk right now that reports only 48GB of available space. You can continue to write to it, and it doesn't have a disaster mode. It does, of course, largely depend on the firmware, but conceptually that's how it works.

The fluidity of the actual block space is true, but it really brings nothing to the discussion of how an SSD handles failing blocks. Also, if it has 7.3% extra capacity over what's reported to the operating system, why do you think that's not hidden from the user? Just because it uses it during wear leveling?
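For what it's worth, that oddly specific 7.3% figure seems to fall out of NAND being built in binary sizes (GiB) while drives are sold in decimal GB. A quick sanity check, assuming that's where the number comes from:

```python
# 64 GiB of physical NAND vs. 64 GB advertised to the OS.
raw_bytes        = 64 * 2**30
advertised_bytes = 64 * 10**9

extra = raw_bytes / advertised_bytes - 1
print(f"hidden extra capacity: {extra:.1%}")   # -> hidden extra capacity: 7.4%
```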
|
# ¿ Sep 19, 2016 04:02 |
|
BobHoward posted:Count me surprised. Is it one of the really old early-generation consumer SSDs from circa 2010? Some of those were pretty bad and/or weird by modern standards.

BobHoward posted:I think there is some miscommunication here. What I am saying is that both the OS and user think a SSD stores as much data as it reports it can. However, behind the scenes there is much more raw capacity, and the SSD doesn't split it into one fixed zone that serves as the actual storage and another that is spare. Instead it's one big pool, and over time as the drive processes write commands your data will end up stored anywhere and everywhere in this pool, even if the drive hasn't mapped out any bad blocks yet.

Firmware has come a long way, and I admit I didn't get into the tricks that firmware does these days to protect the data - mainly because I was talking about an older drive and because I was giving the worst-case scenario. Otherwise, you and I are saying the same thing, except you're getting into minute details that I was glossing over for a general overview.
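A minimal sketch of that "one big pool" idea - a toy flash translation layer where each logical write lands on whatever physical block is free, so there's no fixed data/spare split. The structure and numbers are invented for illustration, not how any real controller is implemented:

```python
class ToyFTL:
    """Toy flash translation layer: logical writes land on any free
    physical block, so there is no fixed 'data' vs 'spare' zone."""
    def __init__(self, physical_blocks):
        self.l2p = {}                          # logical -> physical map
        self.free = list(range(physical_blocks))

    def write(self, logical_block):
        old = self.l2p.get(logical_block)
        new = self.free.pop(0)                 # take the next free block
        self.l2p[logical_block] = new
        if old is not None:
            self.free.append(old)              # old copy reclaimed for reuse

ftl = ToyFTL(physical_blocks=1073)             # 1000 visible + 73 extra
for _ in range(5):
    ftl.write(0)                               # same logical block...
    print(ftl.l2p[0])                          # ...lands somewhere new each time
```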
|
# ¿ Sep 20, 2016 16:18 |
|
|
I just came across this article about 1TB SDXC card prototypes being shown off. I'll bet they're not fast, but they sure would allow for extremely dense storage.
|
# ¿ Sep 20, 2016 16:59 |