Arsten
Feb 18, 2003

So....Can we be jerks to Alereon again?

Arsten
Feb 18, 2003

Malcolm XML posted:

i have a 256gb sandisk ultra II that i rediscovered

today i found out about 256GB microSDXC cards

the future is now

Just think about those SD cards for a moment, and then think about what you could do with them. :v:

Arsten
Feb 18, 2003

Ynglaur posted:

That's...pretty cool. :stare:

It's a cool idea in theory, but SD cards are slow. I'd hate to see how slow that setup would be.

Arsten
Feb 18, 2003

Saukkis posted:

For a long time I've wanted to know: what is the failure mode when SSDs reach their maximum write cycles? I would hope that the available writable space slowly decreases, but that the blocks which can no longer be written to still remain readable. How much variation is there in how many cycles different blocks can handle? Could one block last only 1,000 cycles while the block right next to it manages 5,000?

When a block dies, it's lost; you don't get to read what was there anymore. When the SSD's firmware detects a failing block, it moves the data to a reserved block that isn't accessible to anything but the drive. When it runs out of those spare blocks, it starts using the non-reserved portion of the drive, and the usable size of the drive decreases. At that point, the file system on top begins to get compromised and data loss is likely.

In terms of cycles, blocks tend to last about as long as their neighbors, so once you see reallocated counts start to rise, expect the rest of the NAND to be at roughly 90% to 110% of its usable write cycles. Theoretically, one portion of the drive could last five times as many write cycles as another, but I haven't read anything that suggests this happens in the real world.
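To make that concrete, here's a toy sketch of the spare-block model I just described (the names and numbers are made up for illustration, not any real drive's firmware):

```python
# Toy model: failed blocks get remapped into a hidden spare pool; once the
# pool is exhausted, the user-visible capacity starts shrinking.

class ToyDrive:
    def __init__(self, user_blocks, spare_blocks):
        self.user_blocks = user_blocks   # blocks the host can see
        self.spares = spare_blocks       # reserved pool the host can't see
        self.remap = {}                  # failed block -> where it went

    def retire_block(self, block):
        """Called when a write to `block` fails."""
        if self.spares > 0:
            self.spares -= 1
            self.remap[block] = ("spare", self.spares)
        else:
            self.user_blocks -= 1        # no spares left: capacity shrinks
            self.remap[block] = None     # and whatever was there is gone

drive = ToyDrive(user_blocks=1000, spare_blocks=20)
for bad in range(25):                    # 25 failures against 20 spares
    drive.retire_block(bad)
print(drive.user_blocks)                 # 995 -- the drive "got smaller"
```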

Arsten
Feb 18, 2003

BobHoward posted:

The model you describe is roughly how hard drives handle bad sectors, but no SSD I am aware of works quite like this. Unlike HDDs the extra capacity is substantial (most MLC consumer drives have 7.3% extra capacity over what the label claims, and most TLC drives even more than that). Also unlike HDDs, none of the extra space is treated as a special pool which goes unused until it's time to replace a bad block.
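(Aside: that 7.3% figure is roughly what you get if the raw NAND comes in binary gigabytes while the label counts decimal gigabytes - my guess at where the number comes from, not something stated above.)

```python
raw = 256 * 2**30        # 256 GiB of physical NAND
labeled = 256 * 10**9    # 256 GB reported to the host
print((raw - labeled) / labeled)   # ~0.0737, i.e. about 7.3% extra capacity
```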

HDDs work that way because the mapping between host visible sector numbers and physical sector addresses on disk platters is mostly fixed, with a small exception list. This works ok because most sectors will never go bad; the drive firmware only needs to handle on the order of 100-1000 bad blocks that have a special mapping.

SSDs have to do wear leveling, so no host visible block has a fixed flash media location, ever. Anything can be stored anywhere at any time. The point of having extra capacity over the user visible capacity is no longer to handle errors gracefully, it's now to provide both an any-to-any host-to-physical address mapping table and the minimum amount of guaranteed free (or quickly free-able) media capacity the wear leveling algorithm needs to avoid making GBS threads itself when the user is using all the user visible capacity. Dealing with bad blocks falls out of that; when a block goes bad and is marked as unusable, the drive just has slightly less free space to work with.
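A rough way to picture the two mapping schemes side by side (purely illustrative Python, not any vendor's actual firmware):

```python
# HDD-style: logical address == physical address, plus a small exception list.
hdd_exceptions = {40211: 9_000_001}          # the rare remapped sector
def hdd_lookup(lba):
    return hdd_exceptions.get(lba, lba)      # identity mapping by default

# SSD-style: every logical block goes through a table, and every write can
# move the data to a new physical location.
ssd_map = {}                                 # logical -> physical
free_pages = list(range(128))                # all raw NAND starts out free
media = {}                                   # physical page -> contents

def ssd_write(lba, data):
    phys = free_pages.pop(0)                 # any free page will do
    media[phys] = data
    if lba in ssd_map:
        free_pages.append(ssd_map[lba])      # old copy is stale, reuse later
    ssd_map[lba] = phys
    # A bad page is handled by simply never putting it back on free_pages.
```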

Any competently written SSD firmware should put itself into a read-only disaster recovery mode long before losing so many blocks that the usable media capacity drops even close to the user visible capacity. (And if that event ever did happen, it would be quite surprising if the drive didn't just brick itself, because the firmware is unlikely to handle not having enough media capacity very gracefully.)

I've had several SSDs that exhibited exactly that behavior. I have a 64GB SSD on my desk right now that reports only 48GB of available space. You can continue to write to it, and it never went into any read-only disaster mode. It does, of course, largely depend on the firmware, but conceptually that's how it works. The fluidity of the actual block mapping is true, but it doesn't really add anything to the discussion of how an SSD handles failing blocks.

Also, if it has 7.3% extra capacity over the capacity reported to the operating system, why do you think that's not hidden from the user? Just because the drive uses it during wear leveling?

Arsten
Feb 18, 2003

BobHoward posted:

Count me surprised. Is it one of the really old early generation consumer SSDs from circa 2010? Some of those were pretty bad and/or weird by modern standards.

The bad block strategy you described has been used with flash storage media, but usually only in older CF/SD cards and USB sticks. It's not good enough for (good) SSDs.


BobHoward posted:

I think there is some miscommunication here. What I am saying is that both the OS and the user think an SSD stores as much data as it reports it can. However, behind the scenes there is much more raw capacity, and the SSD doesn't split it into one fixed zone that serves as the actual storage and another that is spare. Instead it's one big pool, and over time, as the drive processes write commands, your data will end up stored anywhere and everywhere in this pool, even if the drive hasn't mapped out any bad blocks yet.

Maybe it's best to give an example of how that might happen in practice. Let's say you take a SSD out of the box, hook it up, and run a program which writes from the first (visible) sector to the last, in order, and then writes to the first visible sector once more. This hypothetical drive has 128 physical sectors (label these P1 through P128, P for physical), but it tells the outside world there are 16 fewer sectors, labeled V(isible)1 through V112. During the initial full drive write, the SSD stores V1 in P1, V2 in P2, and so on, up to V112 in P112. But when the second write to V1 happens, the SSD does not store the new contents of V1 in P1. Instead it writes them to P113 (the first free physical sector), adjusts its mapping table so it knows V1 is now stored in P113, and marks P1 as what I'm going to call a "zombie": a sector which contains stale user data, and is now safe to erase and reuse whenever convenient.

During the first write pass there's actually no requirement that the SSD uses P1 to store V1, and so on. I just described it happening that way for clarity.
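That walkthrough can be restated as a few lines of Python with the same 128-physical / 112-visible numbers (this is just the example above in code form, not real firmware logic):

```python
PHYS, VISIBLE = 128, 112
mapping = {}                        # visible sector -> physical sector
free = list(range(1, PHYS + 1))     # P1..P128, all free out of the box
zombies = set()                     # stale copies, erasable when convenient

def write(v):
    p = free.pop(0)                 # grab the next free physical sector
    if v in mapping:
        zombies.add(mapping[v])     # the old location now holds stale data
    mapping[v] = p

for v in range(1, VISIBLE + 1):     # first pass: V1->P1 ... V112->P112
    write(v)
write(1)                            # second write to V1

print(mapping[1])                   # 113 -- V1 now lives in P113
print(sorted(zombies))              # [1] -- P1 is a "zombie", safe to erase
```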

If you are having a "why the gently caress would they do that, it can't be true" reaction, you probably need to familiarize yourself with some key properties of NAND flash, and their ramifications. In particular, the distinction between program (write) and erase, and how that interacts with the block/page hierarchy.

It's a SanDisk from 2012.

Firmware has come a long way, and I admit I didn't get into the tricks that firmware uses these days to protect the data - mainly because I was talking about an older drive and because I was describing the worst-case scenario. Otherwise, you and I are saying the same thing, except you're getting into minute details that I was glossing over for a general overview.

Arsten
Feb 18, 2003

I just came across this article about 1TB SDXC card prototypes being shown off.

I'll bet it's not fast, but it sure would allow for extremely dense information storage.
