BobHoward
Feb 13, 2012

Here is why HDDs can easily have horrible load times where SSDs do not.

The average rotational latency to reach a random sector on a 15K RPM enterprise performance HDD is (1/(15000/60))/2 = 0.002 s = 2.0 ms
The average seek latency on such a disk is about 3 ms
Add these numbers up and a random access takes 5 ms
The inverse of 5 ms is 200 Hz => enterprise 15K HDDs can do about 200 random 4K IOPS

What you would actually buy for your desktop PC, a consumer 7200 RPM performance HDD, has an average rotational latency of 4.1667 ms plus an average seek time of about 8 ms, for a total of ~12.17 ms, giving a throughput of ~80 IOPS.

These numbers get much, much worse for 5400 RPM drives, especially laptop drives.

Any halfway decent SSD should be able to hit 5000 4K IOPs at QD1, good ones more like 10000. That's two orders of magnitude faster than a 7200 RPM 3.5" HDD.

If a game needs to load lots of small assets in groupings which aren't predictable enough to optimize on-disk layout for (or just because the developers didn't bother optimizing load times), it can have essentially random read patterns. Maybe not as random as a 4K random IOPs benchmark, but in the end, yes it is possible to see enormous performance differences. Decent performance on HDDs absolutely requires each seek to do a lot of work. You look at an HDD spec sheet and it'll give you these numbers for linear throughput that sound so good. I pulled up some WD and Seagate data sheets while writing this post and they're pushing 200 MB/s or more at the outer diameter now. But if each seek results in reading or writing 1MB, you'll actually get at most 80 IOPs * 1MB = 80 MB/s, and that calculation gets worse and worse as the average IO size goes down.
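If you want to fiddle with the arithmetic yourself, here's a back-of-the-envelope sketch in C (same ballpark numbers as above, nothing measured) that turns RPM plus average seek time into random IOPS, then into effective throughput at a given average IO size:

code:
/* iops.c -- back-of-envelope HDD random IOPS and effective throughput.
 * Numbers are the ballpark figures from the post above, not measurements. */
#include <stdio.h>

static double random_iops(double rpm, double avg_seek_ms)
{
    double rotational_ms = (60.0 / rpm) / 2.0 * 1000.0; /* average half revolution */
    double access_ms = rotational_ms + avg_seek_ms;     /* one random access */
    return 1000.0 / access_ms;                          /* accesses per second */
}

int main(void)
{
    double enterprise = random_iops(15000, 3.0);  /* ~200 IOPS */
    double desktop    = random_iops(7200, 8.0);   /* ~80 IOPS */

    printf("15K enterprise HDD:   %.0f random IOPS\n", enterprise);
    printf("7200 RPM desktop HDD: %.0f random IOPS\n", desktop);

    /* Effective throughput = IOPS * average IO size, which is why small IOs
     * murder HDDs even when the spec sheet brags about 200+ MB/s sequential. */
    double io_kb[] = { 4, 64, 1024 };
    for (int i = 0; i < 3; i++)
        printf("7200 RPM @ %4.0f KB per seek: %6.1f MB/s\n",
               io_kb[i], desktop * io_kb[i] / 1024.0);
    return 0;
}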


BobHoward
Feb 13, 2012

^^ lol what a useless piece of trash

I'm sure it will separate some fools from their money though

BobHoward
Feb 13, 2012


Klyith posted:

I think that you must have had power plugged in and it got dislodged somehow. the sata data cable does not have a power pin. not even for something like standby -- just 2 pairs of signal and 3 grounds. but a quick google shows you're not completely alone, so who knows!

It's much more likely that OP and the person you googled were confused and something else was going on. SATA data connections don't pass power. Take a look at these high res shots of a couple different SSD PCBs from Anandtech reviews:

[images: SSD PCB close-up photos from Anandtech reviews]

There are four traces linking the four data pins on the SATA data connector to the SSD controller IC. These are two differential pairs, one for transmit and the other for receive. You will always find four passive components in series with these traces (clearly visible in both photos above). These are capacitors, to provide DC isolation, and they're required by the SATA spec (and cheap as dirt so nobody is likely to bother leaving them out). Gigabit serial data zips right through a series coupling capacitor, but DC power won't.

In principle, it's possible to build a circuit to try to tap some AC power from the receive pair, but it's very doubtful anything like "mysteriously, the drive worked without a power connection!" could happen by accident. Even if you handwave something like "well there are some body diodes in the controller IC that end up rectifying the data signal into DC" etc, there's still only something like a 600mV differential swing available at the transmitter, and not much power being transmitted either.

BobHoward
Feb 13, 2012


Lambert posted:

Also, SSDs inherently don't profit from defragging. The sector alignment the OS sees doesn't correspond to the physical alignment of the sectors.

Yep. The only thing you accomplish by running a defrag tool on a SSD is to wear it out sooner. The layout of your data on physical media could actually become more fragmented, not less, and even if it works it's hard to imagine how it could improve performance.

BobHoward
Feb 13, 2012


isndl posted:

Is this the SSD equivalent of an audiophile? Do we need to do double blind tests?

Their phrasing is awkward and over the top (it is not a direct pipe to NAND) but there are real performance benefits to NVME.

Basically the thing to understand is that the design for how a SATA command is created, submitted to a device, processed, and results returned to the software stack owes a great deal to the ancient 1980s Seagate ST-506 HDD controller, a card designed for CPUs that ran at single digit MHz and 5.25” full height HDDs. This design legacy means the interaction between device and host software is more complicated than it needs to be, and the protocol is inherently high latency.

In the SATA era the ATA standards body came up with bandaids to partially address these problems, such as NCQ. However, it’s still all designed for HDDs, e.g. NCQ can’t track more than 32 commands in flight and only supports a single queue. At the end of the day SATA has a theoretical upper limit of 200K IOPS, with a relatively high minimum latency.

NVME radically simplifies device to host communication, supports 64K commands per queue and up to 64K queues, and more. You can push IOPs to 1M and beyond, and command latency to 1us (ie limited by PCIe latency). It is a vastly superior protocol for talking to a SSD. Which it ought to be since that’s its whole reason to exist.

How relevant is all this to everyday desktop computing? It helps, but it’s not as big a leap as getting off a HDD and onto a SSD in the first place. It’s most beneficial for enterprise use of SSDs where people tend to care way more about super high IOPs.

In the end it’ll all be NVME. There isn’t anything inherently more expensive about NVME as an interface, it’s only more expensive to the end user due to economies of scale and extracting more profit by segmenting the market. Once there’s a high enough density of PCs with M.2 sockets the volume will start to shift towards NVME and prices should come down. (Part of that will just be manufacturers bothering to make cheap models for the NVME interface. Today they don’t because too many people who want a cheap SSD have nowhere to attach a m.2 PCIe.)

BobHoward
Feb 13, 2012


Combat Pretzel posted:

Heh, was the 840 Evo really that bad? I take it it's only related to the 840 Pro in name then, unlike the models after them? The Pro fared well over here in the past.

Afaik it’s closely related, it’s just that the 840 EVO was Samsung’s first TLC drive and they didn’t get the scrubbing algorithms completely right in the original firmware.

BobHoward
Feb 13, 2012


DrDork posted:

As for the speed difference between the 960 and the 1920GB versions, I can only speculate that it's a difference in the performance of the NAND chips due to density: both versions use 4 NAND chips (two on each side), so the 1920GB version is using chips twice as dense, and as with most other forms of memory, going dense vs wide usually has speed penalties.

Fyi NAND "chips" are actually multi-die assemblies, so the number of NAND packages on the board doesn't tell you how many NAND die are attached. There could be a density difference, or there could be twice as many die per package.

BobHoward
Feb 13, 2012


Potato Salad posted:

fuuuuuuuck opal/tcg

Vulnerability and poo poo

Fuuuuuuuck fuuuuuuuck gently caress

The MX100 and MX200 are comically bad. Holy poo poo.

BobHoward
Feb 13, 2012


Potato Salad posted:

guess whose clients almost exclusively use those drives

fffffffff_fffffffffff_fffffffffff

Do they use Crucial or Samsung? Maybe I missed something while skimming the paper but it sounded like they found no method of decrypting Samsung SATA if using TCG Opal.

I always suspected that fear of / knowledge of bad vendor implementations was why Apple chose to avoid Opal and roll their own FDE. I’m still a bit floored by how bad some of those are. Wish someone would fund that research team to analyze a lot more drives...

BobHoward
Feb 13, 2012


endlessmonotony posted:

The slow decay of the cells is impossible to fix and will eat all drives, including mine, in a matter of years. I invested my entire fortune into a top-of-the-line SSD only to hear it will not even outlast me, nevermind be the family heirloom it was supposed to be.

Samsung didn't know how their cell technology worked in the long term, and hosed up on both cell durability and ability to store charge.

You are being far too dramatic about it. Literally all NAND flash suffers from fade, since there is no known way to construct a perfect charge trap.

Fade becomes more significant with TLC since now you need to discriminate between eight different levels, not four as with MLC. What Samsung hosed up was failing to make the firmware’s scrubbing algorithm (finds and rewrites faded blocks in the background) sufficiently aggressive to maintain full read performance. Unfortunate, but it was their first attempt at implementing TLC, and you need to understand that scrubbing is A Thing on all SSDs, not just patched 840 EVO.

SSDs aren’t heirlooms. They have a finite lifespan and are not in any way guaranteed to retain data forever with power turned off. Just the opposite, hiding somewhere in every SSD’s datasheet is a maximum power off retention time. (Enterprise SSDs are typically rated for much worse power off retention times than consumer drives, btw, so don’t make the mistake of thinking enterprise is better than consumer in all ways.)

BobHoward
Feb 13, 2012

NAND's power off retention gets degraded with each program/erase cycle. So, in reality, retention time is a downwards sloping curve where the x axis is P/E cycles and the y axis is retention time.

However, the guaranteed ratings are typically just two numbers, without reference to the curve. This means the foundry can (and does) choose to rate the exact same product one of two ways, depending on what market they're selling it into: (numbers are bullshit, for illustration purposes only)

1. 10000 P/E cycle write endurance, 30 day power off retention (the enterprise choice)
2. 3000 P/E cycle write endurance, 365 day power off retention (the consumer choice)

Enterprise NAND and drives mostly go into datacenters, where they're powered 24/7, and are definitely going to be backed up. It makes sense to target higher P/E endurance and lower retention (and to optimize the SSD's firmware for same). Consumer drives see way less write load, and are expected to be powered off quite a bit, so they get the opposite tradeoff.

Also note that flash with 0 P/E cycles has a retention time way better than the rating. The rating has to be valid all the way until the end of P/E lifespan, so it's very pessimistic on brand new flash.
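If it helps to picture that tradeoff, here's a toy model in C. Every number and the curve shape are invented for illustration (same disclaimer as the ratings above); real retention curves are vendor- and process-specific and not published. It just interpolates retention between a fresh part and a fully worn one and reads off the two rating points:

code:
/* retention_model.c -- purely illustrative model of NAND power-off retention
 * vs accumulated P/E cycles.  All numbers are made up; real curves are
 * vendor-specific and not published.  Build with -lm. */
#include <stdio.h>
#include <math.h>

/* Assume retention decays roughly exponentially with wear: interpolate
 * log(retention) between a fresh part (~3 years, invented) and a fully worn
 * part at 10000 cycles rated for 30 days. */
static double retention_days(double pe_cycles)
{
    const double fresh_days = 3.0 * 365.0;  /* invented "brand new" retention */
    const double worn_days  = 30.0;         /* enterprise rating at end of life */
    const double max_cycles = 10000.0;

    double t = pe_cycles / max_cycles;      /* 0 = new, 1 = worn out */
    return exp((1.0 - t) * log(fresh_days) + t * log(worn_days));
}

int main(void)
{
    /* The two rating styles are just two points picked off the same curve. */
    printf("    0 cycles: %6.0f days retention (fresh, well above any rating)\n",
           retention_days(0));
    printf(" 3000 cycles: %6.0f days retention (consumer-style rating point)\n",
           retention_days(3000));
    printf("10000 cycles: %6.0f days retention (enterprise-style rating point)\n",
           retention_days(10000));
    return 0;
}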

BobHoward
Feb 13, 2012


isndl posted:

Sometimes it's someone else's problems. I'd much rather overprovision a computer that I'm handing off to parents/spouse/family etc. than get a complaint about things running slowly followed by working tech support into my schedule for a preventable problem.

If their operating system supports TRIM, which it should because 2019, manual overprovisioning is completely pointless. The drive will never slow down under any load a consumer will generate. Unless maybe their computer gets hacked by someone who wants to run an enterprise DBMS on botnet computers, for Reasons???

All SSDs have a built in overprovision that can never be turned off (*). It should be more than enough as long as TRIM is on. It's only when there is no TRIM that you might want to consider doing extra overprovisioning.


* note: Samsung Magician is not a user interface for what I'm talking about, Magician just partitions the drive short of advertised capacity so that some of the advertised capacity will never be written to by the OS. What I'm talking about is internal capacity above the advertised capacity.
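To put rough numbers on the built-in margin I'm talking about (example capacities, not any particular drive's figures): raw NAND comes in binary sizes while the advertised capacity is decimal, and that gap alone is spare area before the vendor reserves anything extra.

code:
/* op_margin.c -- the inherent overprovision hiding in the binary-NAND vs
 * decimal-advertised-capacity gap.  Example values, not a specific drive. */
#include <stdio.h>

int main(void)
{
    double raw_nand_bytes   = 512.0 * (1ULL << 30);  /* 512 GiB of physical NAND */
    double advertised_bytes = 500.0 * 1e9;           /* "500 GB" on the box */

    double spare = raw_nand_bytes - advertised_bytes;
    printf("raw NAND:   %.1f GB (decimal)\n", raw_nand_bytes / 1e9);
    printf("advertised: %.1f GB (decimal)\n", advertised_bytes / 1e9);
    printf("built-in overprovision: %.1f GB (%.1f%% of advertised)\n",
           spare / 1e9, 100.0 * spare / advertised_bytes);
    return 0;
}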

BobHoward
Feb 13, 2012


SlayVus posted:

So they're using MLC as advertising for Multi Level Cells. It's a '3-Bit MLC' drive, so really it's just TLC.

TBH, "3-bit MLC" and "4-bit MLC" are a more logical system of descriptive terms than TLC/QLC. "Triple" and "Quad" combined with "Level Cell" implies three and four levels, but actually those would be eight and sixteen levels respectively. Quad is especially bad because hey guess what 2-bit MLC needs 4 levels.

BobHoward
Feb 13, 2012

Most of the M.2 key specs which are Not B and Not M are for internal laptop peripherals like modems which manufacturers like to modularize because they need to ship a different modem in certain countries, or some similar reason. Vanishingly few end users will ever install or remove a card in a non-B/M slot.

I'm not sure that stuff needed to be part of the same standard as the variants of M.2 which are used for SSDs, but whatever.

BobHoward
Feb 13, 2012


Atomizer posted:

You can still get these things; I see one at Monoprice for $5. One of those and a CF card is probably sufficient for his application, actually.

I'm skeptical about the write lifespan of CF cards, USB sticks, SD cards, basically all the cheap removable flash drives. If it's some kind of expensive scientific instrument, better to do it the right way IMO.

If you aren't booting an OS from it, and it's only temporary storage, CF etc are fine of course. There's this product niche for devices which emulate a 3.5" floppy disk drive (same connectors on the back, same mechanical form factor) and provide sockets for a CF or SD card where the floppy opening would have been. The controller built into it accomplishes the interesting job of electrically emulating a FDD and redirecting the accesses to a floppy disk image file stored on the flash media. Very useful for anyone maintaining old equipment that had a floppy built in for data interchange, such as oscilloscopes which could save traces to a floppy, or old MIDI keyboards that used floppies to load and save things.

BobHoward
Feb 13, 2012


Naffer posted:

I had actually thought about this option, but in the end convenience won out over saving $20-30 since this transcend "SSD" could be sourced directly from one of our approved suppliers. I hadn't noticed the abysmal rated random read and write speeds. Still, it should feel like a dream compared to a 15 year old 40GB drive.

That's actually really cool and appears to have a real SSD controller. Unfortunately it'd be a bit of a hassle to order it with our procurement system.

Depending on how much space there is to accommodate the adapter, something like this plus any SATA SSD could have been an option:

https://www.amazon.com/dp/B01MU023LO

BobHoward
Feb 13, 2012

SSDs which claim 512-byte sectors never actually use them underneath; they just count on the OS having 4K file system allocation granularity and alignment. If the SSD controller "knows" writes will always be done in multiples of eight 512 byte LBAs, it can coalesce them into 4K and operate nearly the same as if it was 4Kn. There doesn't have to be protocol overhead for this either: the OS is going to issue multi-LBA commands to read or write contiguous groups of eight 512 byte LBAs rather than eight commands per group.

The number 4K isn't because that's what's best for the flash media, 4K is just the most common file system allocation unit by a huge margin. NTFS, HFS+, ZFS, APFS, UFS, EXTn, XFS, they all default to 4K allocation blocks, meaning that all accesses will be in chunks of 4K. There are many reasons for 4K, one of the most important being that decades ago most computers (including the PC) converged on a 4K virtual memory page size, and it's super convenient for an OS if the FS allocation unit is exactly the same as VM page size. So, when SSDs got rolling and they needed a number to optimize around, 4K was it. In a more storage device centric universe, the host OS would know about and exactly match the SSD's media page size, but we aren't in that universe so the people who engineer SSDs make sure they handle the common case (4K) efficiently.

Because of all this, it's still true that a SSD which advertises 512 will not like it (bad performance, write amplification) if you use it such that writes are not done with 4K granularity and alignment. The only reason 4Kn hasn't taken over already is that (a) they did too good a job at making there be no penalty for advertising 512 as long as the FS is 4K anyway, and (b) the PC industry is loving terrible at moving forward to new standards whenever a viable hack to let the old one keep limping along exists.
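For the curious, here's the arithmetic a 512e drive effectively leans on, as a minimal sketch (not any real controller's firmware): a host write expressed in 512 byte LBAs maps cleanly onto 4K internal units only when the starting LBA and the length are both multiples of eight.

code:
/* aligned_4k.c -- is a host write, given as (start LBA, LBA count) with
 * 512-byte LBAs, aligned to and sized in whole 4K units?  Pure arithmetic,
 * not any real controller's firmware. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define LBAS_PER_4K 8u   /* 4096 / 512 */

static bool write_is_4k_clean(uint64_t start_lba, uint32_t lba_count)
{
    return (start_lba % LBAS_PER_4K == 0) && (lba_count % LBAS_PER_4K == 0);
}

int main(void)
{
    /* A 4K-aligned filesystem issues requests like the first one; a misaligned
     * partition or sub-4K write looks like the others and forces the drive
     * into read-modify-write (hello write amplification). */
    struct { uint64_t lba; uint32_t count; } req[] = {
        { 2048, 64 },   /* 1 MiB-aligned partition, 32 KiB write: clean */
        { 63,   64 },   /* old-style partition starting at LBA 63: misaligned */
        { 2048, 1  },   /* single 512-byte write: sub-4K */
    };
    for (int i = 0; i < 3; i++)
        printf("LBA %5llu, %2u sectors -> %s\n",
               (unsigned long long)req[i].lba, req[i].count,
               write_is_4k_clean(req[i].lba, req[i].count) ? "4K clean" : "needs RMW");
    return 0;
}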


BobHoward
Feb 13, 2012


Naffer posted:

Bonus photograph of the embarrassing solution to the Molex power fiasco described above.

[photograph omitted]

FYI as a rule you should avoid sticking packing tape to circuit boards like that. It’s an excellent generator of static electricity when being unrolled from a spool, handled, or pulled off a surface. (As with many other things there are ESD-safe tapes which try to minimize this sort of problem.)

BobHoward
Feb 13, 2012


Klyith posted:

If that's true it seems clunky as hell.

They're extrapolating all that off a mockup image photoshopped up by Intel's marketing department. I'd wait until the real product is released before drawing any conclusions.

BobHoward
Feb 13, 2012


Atomizer posted:

Alright here you go, you tell me what's going on:

[screenshot of SMART attributes omitted]

What sticks out here is that CDI's clearly interpreting the raw value of attribute 0xF1, Lifetime Writes from Host, as a count of the number of gigabytes written. But raw values are not standardized at all in SMART, so any SMART tool that tries to interpret them needs to look up the drive model in a database to get the scale right. Some tools just blindly guess what the values mean if they don't have a DB entry, other tools do the right thing and don't try to interpret what they don't know how to.

We can get something out of the current / worst / threshold values for 0xF1. C/W/T is the result of the drive's own firmware normalizing the raw value to a simple, crude health scale. The scale is usually (but not always, because the designers of SMART weren't smart enough to fully standardize the normalization scale) 100 to 0, with 100 being perfectly healthy.

In this case, the drive is normalizing its 0xF1 raw value to 100, and judging by the rest of the attributes, it's uniformly using a scale of 100=perfect. This drive is actually reporting very little write wear, and CDI's interpretation of the raw value is wrong.

To get an idea of just how all over the map this kind of thing is, have a look at this. It's smartmontools' database of known drives, with attribute interpretation quirks:

https://www.smartmontools.org/static/doxygen/drivedb_8h_source.html

I see a bunch of different ideas about how to encode things in the raw value of attribute 241 (aka 0xF1). Gigabytes is common, which is perhaps why CDI assumes gigabytes. However, there's a bunch of drives that report the total number of LBAs written (an LBA presumably being 512 bytes). Intel seems to like units of 32 MB.

(As far as I've ever been able to tell, a lot of the data in the smartmontools DB comes from people who get a new drive that's not in the DB, experiment on it, and report the results to the project. Sometimes if you're lucky the manufacturer publishes a technical manual which actually documents this poo poo, but that's usually only enterprise-y SSDs.)
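To see why the raw value is useless without the per-model quirk, here's the same (invented) raw 0xF1 value decoded under three unit conventions that show up in that DB:

code:
/* smart_f1.c -- one raw value of SMART attribute 0xF1 (241) decoded under
 * three unit conventions seen in the smartmontools drive DB.  The raw value
 * is invented for illustration. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t raw = 1234567;   /* hypothetical raw value read from a drive */

    printf("raw 0xF1 = %llu\n", (unsigned long long)raw);
    printf("  as gigabytes written:     %.2f TB total\n", raw / 1000.0);
    printf("  as 512-byte LBAs written: %.2f GB total\n", raw * 512.0 / 1e9);
    printf("  as 32 MB units written:   %.2f TB total\n", raw * 32e6 / 1e12);
    return 0;
}

Same counter, three wildly different answers, which is why a tool that guesses the units wrong produces nonsense.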

BobHoward
Feb 13, 2012


Binary Badger posted:

I still wanna know how they came up with 1700 hours on the nose..

A garbage collection routine that overwrites volume directories?

Usually things like this are numeric overflow. If you have an event counter variable (or in this case a cumulative idle time counter?), and consumers of the counter are written to assume it can only go up, and it hits integer overflow and thus wraps around to zero (or highly negative if it’s a signed value), poo poo can get very real.

If something like that is the culprit, 1700 hours is simply how much idle time is required to advance the counter from 0 to the maximum positive value the variable can hold.
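Minimal illustration of that failure mode below; the counter width and units are assumptions made up for the example, nothing here is known about the actual firmware:

code:
/* counter_wrap.c -- how a too-narrow signed counter "goes backwards".
 * The variable width and units are assumptions for illustration only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t idle_minutes = 0;   /* hypothetical cumulative idle-time counter */

    for (int32_t true_minutes = 0; true_minutes <= 33000; true_minutes += 1000) {
        idle_minutes = (int16_t)true_minutes;   /* what the firmware stores */
        printf("true idle: %6d min   stored: %7d min%s\n",
               true_minutes, idle_minutes,
               idle_minutes < 0 ? "   <-- wrapped, 'always increases' is now false" : "");
    }
    /* 32767 is the most a signed 16-bit counter can hold; one tick past that
     * and any code assuming the value only ever goes up is handed -32768. */
    return 0;
}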

BobHoward
Feb 13, 2012


lordfrikk posted:

Some local guy is selling 4 pieces of "MTFDDAK1T9TBY" for the equivalent of $150 a piece with a 2 year warranty. Looking it up it looks like a 2TB enterprise drive. Is it ok to buy or is there some catch to these drives that's not obvious?

Assuming you’re getting new unopened packages and they’re not hot, no catches. At work we have used a bunch of these Micron 5100 ECO drives (both the 1.92 and 3.84TB versions) with no issues.

They are enterprise 2-bit MLC drives with extra overprovisioning, which is why they’re 1.92 rather than 2.00 or 2.048TB as you might see on a more consumer oriented MLC SSD.

E: as a point of comparison to more familiar consumer SSDs, I would say 5100 ECO is similar to Samsung’s 860 Pro series. Should be better at things like handling power loss gracefully or being run without TRIM enabled, slightly worse capacity.


BobHoward
Feb 13, 2012


BobHoward posted:

Micron 5100 ECO drives

They are enterprise 2-bit MLC drives

Hey it turns out I misspoke on this, 5100 ECO is a TLC product line. Still enterprise with extra overprovision though.

BobHoward
Feb 13, 2012


endlessmonotony posted:

Technically speaking you cannot delete information. The data is never truly unrecoverable.

For practical purposes, even a hammer or an industrial shredder (or a high-power blender) will not guarantee it because someone could have the budget and inclination to have a bunch of very specialized technicians reassemble the device with some very expensive laboratory tools. As much as it may look like a million grains of dust, ultimately those bits go back together in only one configuration and that configuration is practically discoverable, if hideously expensive and time-taking to find.

lol that you believe this

like, even if we discount the amazing amount of effort it would take to figure out how a microscopic jigsaw puzzle consisting of a million grains of dust fits back together, how do you think the lab techs are going to reassemble the pieces into a functional integrated circuit? The technology to do this does not exist, and frankly never will.

Also every bit cell which gets physically fractured is erased, because flash memory is about trapping charge on a tiny island of conductive material surrounded by an insulator; destroy that barrier and the charge escapes.

Also even if all this spooky magic tech to accomplish the impossible was real, you could defeat it quite easily. After you grind up your flash chips, just scatter the dust in the wind. Oh wait I guess you'll invoke more magic to find all the pieces!

quote:

You can even melt hard drives and the data is still technically recoverable, though that would take more time than anyone is able to wait.

You think that if you melt a HDD the data is technically recoverable? Holy loving poo poo you have no idea how anything works do you?

BobHoward
Feb 13, 2012


BangersInMyKnickers posted:

Is there any kind of penalty in sequential performance for using IOs larger than 128k? All the drives are rated at that size, and my assumption was that it plateaued from there because it became bottlenecked on the controller or interface. My application works natively in 256k IOs but is configurable and I want to make sure performance doesn't start falling off due to some manner of architectural constraints (having to do IO splitting/combining internally for instance) when handling an IO larger than what they're saying on the specs.

Honestly, just test it on the operating system and hardware combos you intend to use. I doubt you're going to find a formal spec anywhere.

There's not even much of a way to know whether the OS is splitting your requests into smaller chunks -- that is, unless it isn't and you actually can measure a difference at particular sizes.

Also, I have done some work on high performance file write code in a Linux-based data acquisition system and can tell you a couple key issues if that happens to be the OS you're using.

One is that Linux wants to cache everything written to disk and perform the actual write out lazily, which creates problems when data is coming in so fast that you consume all of the machine's free RAM before it actually begins writing anything. Low free RAM equals the VM system actually trying to page stuff out, which is Bad. I was able to solve this by using the sync_file_range() API to force immediate writeout of data, followed up by posix_fadvise(..., POSIX_FADV_DONTNEED) to tell the kernel to drop the cache for pages known to be written out. (If you call posix_fadvise() on cached ranges that are still waiting for writeout, Linux will ignore you.)

The second issue is that, at least on the hardware we used, a single core was only capable of driving about 1.7-1.8 GB/s of write I/O. I had to multithread things to max out our LSI RAID controller at about 6.5 GB/s write throughput. If you ever run into a completely weird bottleneck, that might be it.
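For reference, this is the shape of that writeout pattern, stripped way down (Linux only; error handling and the real data source omitted; chunk size and filename are arbitrary):

code:
/* stream_write.c -- sketch of high-rate file writing on Linux without letting
 * the page cache balloon: force writeback with sync_file_range(), then drop
 * already-written pages with posix_fadvise(DONTNEED).  Chunk size, file name,
 * and the dummy data are arbitrary; most error handling is omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (8u << 20)   /* 8 MiB per write, arbitrary */

int main(void)
{
    int fd = open("stream.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(CHUNK);
    memset(buf, 0xA5, CHUNK);             /* stand-in for incoming sensor data */

    off_t done = 0;                       /* bytes already handed to the kernel */
    for (int i = 0; i < 64; i++) {        /* 512 MiB total in this toy example */
        if (write(fd, buf, CHUNK) != (ssize_t)CHUNK) { perror("write"); break; }

        /* Kick off writeback for the chunk we just wrote... */
        sync_file_range(fd, done, CHUNK, SYNC_FILE_RANGE_WRITE);

        /* ...and once the previous chunk is fully on its way out, drop its
         * cached pages.  (posix_fadvise is a no-op on pages still waiting for
         * writeback, hence the ranges staggered one chunk behind.) */
        if (done >= (off_t)CHUNK) {
            sync_file_range(fd, done - CHUNK, CHUNK,
                            SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
                            SYNC_FILE_RANGE_WAIT_AFTER);
            posix_fadvise(fd, done - CHUNK, CHUNK, POSIX_FADV_DONTNEED);
        }
        done += CHUNK;
    }

    free(buf);
    close(fd);
    return 0;
}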

BobHoward
Feb 13, 2012

The difference is that Samsung Pro line drives are MLC and should be able to sustain that measured write performance indefinitely, while QLC drives like the Inland Pro will suffer a steep drop off once you fill the SLC-mode write buffer.

On the other hand it is exceedingly unlikely you care about sustaining high speed writes for minutes or hours, so enjoy!

BobHoward
Feb 13, 2012


Malcolm XML posted:

USB NVMe enclosures are $50 or so.

1 year retention is fine. Anything needing longer storage time is on a cloud host, so this is purely for time machine level backups

A 2TB 2.5” USB 3 HDD runs about $60 and is far better suited as a Time Machine-level backup device than a $200 QLC SSD plus a $50 enclosure.

BobHoward
Feb 13, 2012

Marvell sells several NVME controllers on the open market. I know there are others too, I just can't remember which companies.

Phison seems to have the most compelling package of silicon, reference board design, reference firmware, and price right now, which is why you see them all over the place: anyone can make a decent SSD if they just avoid the temptation of doing anything beyond changing the logo silkscreened on the PCB.

BobHoward
Feb 13, 2012


At work we’ve used a bunch of 5100 ECO and 5200 ECO. They’ve done quite well afaik, no failures despite very heavy use by consumer standards (we’re using them to record raw data streams from a high performance sensor chip; data rate is GB/s scale sustained for an hour or more).

Aside from that I wanted to point out a discrepancy in the seller’s website. Micron 5100/5200 is an enterprise-ish product tier with higher flash overprovisioning than consumer lines. Although it probably has as much NAND as a consumer 4.0 TB TLC drive, the actual capacity you will see is 3.840 TB. Micron lists it as 3.84 too (that’s why the model number includes “3T8”), so I’m not sure why OWC is listing it as a 4.0.

3840GB for $368 is still a hell of a deal, of course. I figure those must be old stock drives getting unloaded by some distributor. When we did a big buy about 1 year ago, we initially tried to buy 5100 to match what we’d bought before, but had to change the order to equivalent 5200 models after our reseller told us Micron would not let them place orders for 5100 series drives anymore.

BobHoward
Feb 13, 2012


coke posted:

and lol @ selling the 480GB msata ssd (341 on newegg) for $1450

It's not truly a $341 drive, by the way. It seems to be discontinued (for the general public, at least, Micron might have long term supply contracts to companies like RED) and the listing for $341 is some tiny company selling new-old-stock through newegg.

NOS equipment is often listed for way more than it's worth, and this is no exception. You can buy a 500GB Samsung 860 EVO mSATA for $90 on amazon right now, or 1TB for $160. As a 3D NAND drive it would be worlds better for use in an 8K video camera than that Micron drive: the 500GB 860 EVO is rated for 300 TBW compared to the Micron's 72 TBW.

BobHoward
Feb 13, 2012


Combat Pretzel posted:

Isn't an option because job, or because paranoia?

I don't think on-device encryption can fulfill the sequential transfer rates that sequential IO can deliver. If a high-end x86 CPU with hardware AES can't do it, surely not those ARM SoCs on SSDs.

Why would you say that? It’s not the ARM core doing the encryption, it’s a dedicated hardware block.

Also it may surprise you to know that in virtually all SSDs made the past several years, the on-device encryption is on all the time. The only difference between enabling TCG Opal and not is whether the SSD attempts to secure the main encryption key. If security is enabled, to unlock the drive the host must provide a password which the drive hashes and salts to derive the Key Encryption Key, or KEK. The KEK is then used to decrypt the Drive Encryption Key (DEK), the key used to encrypt and decrypt user data. When Opal is off, the drive just stores the DEK in clear text (no KEK) and auto-unlocks itself on power up or wake from sleep.

Encrypting all the time has real benefits, which is why they do it. One is fast secure erase: destroy all copies of the DEK and you’ve effectively destroyed all user data. Another is that the output of good crypto looks like random noise, and modern flash media is a sufficiently non ideal storage medium that this is desirable (less chance of long runs of 1s or 0s or other patterns that might be more error prone).
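Here's a toy sketch of that key hierarchy. The "kdf" and the XOR "wrap" below are deliberately not real cryptography (a junk hash standing in for the real KDF and AES); they only exist to show the structure: the DEK never changes, Opal only changes whether it's stored wrapped under a password-derived KEK, and secure erase is just destroying the DEK.

code:
/* key_hierarchy.c -- structural toy model of SSD DEK/KEK handling.  The "kdf"
 * and XOR "wrap" are NOT cryptography; they only illustrate the relationships. */
#include <stdio.h>
#include <stdint.h>

static uint64_t toy_kdf(const char *password, uint64_t salt)
{
    uint64_t h = 1469598103934665603ull ^ salt;   /* FNV-ish junk, not a real KDF */
    for (; *password; password++)
        h = (h ^ (uint8_t)*password) * 1099511628211ull;
    return h;
}

int main(void)
{
    uint64_t dek  = 0x0123456789abcdefull;  /* drive encryption key (toy, fixed) */
    uint64_t salt = 42;

    /* Opal off: DEK stored in the clear, drive unlocks itself at power-on. */
    uint64_t stored_plain = dek;

    /* Opal on: DEK stored wrapped by a KEK derived from the user's password. */
    uint64_t kek = toy_kdf("hunter2", salt);
    uint64_t stored_wrapped = dek ^ kek;          /* "wrap" = XOR, toy only */

    /* Unlock: re-derive the KEK from the supplied password and unwrap. */
    uint64_t good = stored_wrapped ^ toy_kdf("hunter2", salt);
    uint64_t bad  = stored_wrapped ^ toy_kdf("password1", salt);
    printf("right password:  %s\n", good == dek ? "DEK recovered" : "no DEK");
    printf("wrong password:  %s\n", bad  == dek ? "DEK recovered" : "no DEK");

    /* Secure erase: destroy every stored copy of the DEK; user data, still
     * encrypted under the old DEK, is now unreadable. */
    stored_plain = stored_wrapped = 0;
    printf("after secure erase, stored keys: %llu %llu\n",
           (unsigned long long)stored_plain, (unsigned long long)stored_wrapped);
    return 0;
}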

E:f,b

BobHoward
Feb 13, 2012


TorakFade posted:

I have two options right now:

- Intel P660 m.2 2TB NVMe (QLC) for about 205€
- Crucial MX500 2.5" 2TB SATA (TLC) for about 200€

both seem to be well suited to my use case (high capacity with "normal" SSD performance), is there any reason I should choose one over the other?

May I suggest an alternative which could net better performance, at the cost of some of your time spent tracking down the right choice to buy for your locale? Some Phison E12 controller NVMe SSDs are close to those prices. I just bought an E12 reference design 1.92 TB (brand: MyDigitalSSD BPX Pro) for $230 USD, which is apparently about the same as 205€.

The reason to go for one of these is that unlike the 660p, they’re typically (maybe always?) TLC. 3GB/s class read performance, similar write for as long as the SLC cache lasts. The BPX Pro (and most other brands of E12 reference designs I looked at) uses Toshiba 3D TLC NAND.

You do have to be willing to live with buying something that is likely to have less support than a major brand like Intel. One thing mitigating this is that most Phison SSD vendors seem to stick with the reference PCB design and firmware, meaning you can get firmware updaters direct from Phison’s website.

BobHoward
Feb 13, 2012


priznat posted:

Whose controller do they use or their own, I wonder.

Their careers page doesn't list any positions I'd associate with controller development, and they list Phison as a partner, so presumably Phison.

BobHoward
Feb 13, 2012


Potato Salad posted:

if I remember right, wd reds are bad bins throttled in firmware to lower speed

the gut explanation would be "they're selling bad drives to people marketed as affordable nas drives"

That doesn't make any sense to my gut at all. Was this throttling idea random internet speculation, or something backed up by more? Because what "binning" could they possibly be doing? HDDs aren't much like ASICs in that way. For example, say the main spindle bearing has extra friction so the drive runs hot because the motor has to do more work to spin the platters. That's not something a HDD mfr could "bin" down and still sell; it's scrap, because it will come back as a warranty claim - excess friction implies something destructive is going on.

BobHoward
Feb 13, 2012


peepsalot posted:

Maybe slightly OT, but I was recently in the market for a microSD card and I was shocked and amazed that you can now get 128GB on microSD for $20.

How do they cram all that NAND?

Die thinning and stacking. Even with 3D NAND, the active layer is extremely thin, so you can grind or etch away the backside of the wafer until the limits of mechanical strength, then glue a bunch of them on top of each other.

BobHoward
Feb 13, 2012

I would not pick ADATA over Micron (or its consumer brand, Crucial) unless there was a substantial price advantage and I didn’t care about reliability etc. Micron makes its own flash memory and that gives them a substantial advantage over smaller companies which must acquire flash from other suppliers (such as Micron).

BobHoward
Feb 13, 2012


Atomizer posted:

SSDs are quite reliable nowadays. I've literally never had one fail on me ever over the years, let alone a recent one like the SU800. I have no problem recommending the SU800, especially the 2 TB one.

What I'm saying is that when the price is the same (as it was in the post I was responding to), pick the fully integrated vendor like Micron/Crucial over a lower-tier company like ADATA.

I'm not saying you should never buy cheap SSDs. I have done that myself - a few months back I saved about $200 by choosing a 2TB MyDigitalSSD BPX Pro (lol at this name) over a Samsung 970. I have decent confidence in it, it's literally just the Phison E12 reference design and firmware with Toshiba flash. But if there hadn't been a price difference, the 970 would've been a no-brainer.

BobHoward
Feb 13, 2012


BIG HEADLINE posted:

NVMe excels at processing large files, like massive video files and database stuff. The first-gen NVMe drives oft times performed worse than their SATA counterparts at processing small files (such as the kind used in general computing tasks/a lot of games). Intel's Optane and Samsung's Z-NAND bridge the gap, but neither company seems in a hurry to make the technologies financially viable for consumer/enthusiast use.

Wat. Nothing you said about small-file performance makes any sense, NVMe is inherently better at that. Also database performance is not like video file performance, databases are typically very small accesses in random order.

NVME doesn't require new memory technology like Optane or Z-NAND to achieve good small file performance, either. One of the most important design objectives for NVMe was to dramatically reduce the CPU and PCIe overhead per I/O compared to SATA. It wasn't hard for the standards body to do this since SATA is a terrible protocol which suffers greatly from literal 1980s baggage, but they put in the extra effort and designed NVMe to have probably close to the minimum theoretical overhead per I/O. If all else is as equal as possible (flash memory type & quantity, whether the drives have DRAM, implementation quality, etc), NVMe kills SATA on small file performance.

quote:

The newer ones have parity with/slightly exceed the performance of SATA 3 drives,

How on earth did you get this idea? NVMe drives have been blowing away SATA3 drives for a long time. The only case I'm aware of where there's parity or possibly even role reversal is when you compare a QLC NVMe like the Intel 660p to a good MLC or TLC SATA -- but such results aren't a problem with NVMe's performance, they're because QLC flash is really slow.


BobHoward
Feb 13, 2012


https://www.anandtech.com/show/13761/the-samsung-970-evo-plus-ssd-review

I didn’t go over every part of the image but yours appears to match Anandtech’s review sample in PCB trace layout and markings. Both have a 15 digit serial number including the letter at the end.

The only thing I see on yours is some kind of smudge or scrape on the sticker at the top right. Perhaps they decided this was tampering?


BobHoward
Feb 13, 2012


Klyith posted:

I don't work for a drive manufacturer so I got no real evidence. :shrug: I don't think that samsung would lie on press releases, but they do say "over" 85 billion so it could be 91 billion cells.

It’s not lying, more like glossing over a shitload of technical details most people won’t understand and giving them the number they will.

This is an example; that's definitely a load-bearing “over”. NAND flash pages always have extra storage capacity that's not even overprovision, just more bits for storing error correcting codes. Everyday ECC DRAM is single error correct, double error detect, and needs 1.125x storage to do that (72/64). The BCH (or similar) ECC codes used in SSDs have to detect and correct more than 2 bit errors. They should gain some efficiency from the larger block size but I wouldn't be surprised if the overhead ends up being similar.


Fake edit, SUPER NERD JUNK BELOW: Looked up an ancient Micron planar MLC datasheet as an example (they want you to sign your soul away to see data for modern parts), here’s the details:

Page size x8: 4320 bytes (4096 + 224 bytes)
Block size: 256 pages (1024K + 56K bytes)
Plane size: 2 planes x 1024 blocks/plane
Device size: 16Gib: 2048 blocks, 32Gib: 4096 blocks

(Page is the minimum unit of write. Block is the minimum unit of erase. Planes are logical flash chips which cohabit the same die or package, and exist to provide parallelism, eg one plane can perform a block erase while another does a page read.)

The 224 bytes extra per page are what’s intended to be used for error correction. If you multiply things out, that means our nominally 16Gib part actually stores 16.875 Gib (binary gigabits) or ~18.12 Gb in decimal.

To return to the earlier topic, I think that before TLC it was standard to have powers-of-2 counts of blocks per plane and pages per block, for the same reasons why DRAM chips have power of 2 X and Y dimensions (treating the data bus width of the DRAM chip as a third Z dimension). I wouldn’t be terribly surprised if plane internals other than the page size are still powers of 2 today, it may just be the total number of planes per die that’s not a power of 2.
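Multiplying out the datasheet numbers above as a sanity check (only assumption: every page carries its full 224 spare bytes):

code:
/* nand_capacity.c -- multiply out the old Micron planar MLC datasheet figures
 * quoted above: (4096 + 224) bytes/page, 256 pages/block, 2048 blocks for the
 * nominally 16 Gib part. */
#include <stdio.h>

int main(void)
{
    const double user_bytes_per_page  = 4096;
    const double spare_bytes_per_page = 224;
    const double pages_per_block      = 256;
    const double blocks               = 2048;    /* 16 Gib device */

    double user_bits  = user_bytes_per_page * 8 * pages_per_block * blocks;
    double total_bits = (user_bytes_per_page + spare_bytes_per_page) * 8
                        * pages_per_block * blocks;

    printf("user capacity: %.3f Gib\n", user_bits / (1ull << 30));
    printf("total storage: %.3f Gib (%.2f Gb decimal)\n",
           total_bits / (1ull << 30), total_bits / 1e9);
    printf("spare/ECC overhead: %.2f%% (vs 12.5%% for 72/64 ECC DIMMs)\n",
           100.0 * spare_bytes_per_page / user_bytes_per_page);
    return 0;
}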
