|
LLCoolJD posted:I hope so! The motherboard is an MSI X99A SLI PLUS. Seems to be NVME, both from online compatibility checks and from peeping into the chassis and seeing the connection. No, what he means is that a drive that connects to a m.2 slot can use either the SATA interface or the PCIe interface. Those that do the latter are NVMe drives and are much faster than those that do the former. What drive did you buy? It probably won't be an issue. There aren't many m.2 SATA drives floating around. If you search for an m.2 SSD online, you'll get nothing but NVMe drives at the top of the list.
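As a rough sketch of why the PCIe route is so much faster, here are the interface ceilings (theoretical link maximums with encoding overhead, not real drive numbers; function names are just for this example):

```python
# Rough ceiling comparison between an M.2 SATA drive and an M.2 NVMe
# (PCIe Gen3 x4) drive. These are interface limits, not drive specs.

def sata3_mbps():
    # SATA III: 6 Gb/s line rate with 8b/10b encoding (80% efficiency)
    return 6e9 * 8 / 10 / 8 / 1e6  # ~600 MB/s

def pcie_mbps(gts_per_lane, lanes):
    # PCIe Gen3 and later use 128b/130b encoding
    return gts_per_lane * 1e9 * 128 / 130 / 8 * lanes / 1e6

print(f"SATA III ceiling: {sata3_mbps():.0f} MB/s")
print(f"NVMe Gen3 x4 ceiling: {pcie_mbps(8, 4):.0f} MB/s")
```

Roughly 600 MB/s vs ~3,900 MB/s, which is why an m.2 SATA drive is such a downgrade by comparison.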
|
# ? Nov 1, 2021 03:55 |
|
Oh, it's a Western Digital Black 1TB. https://www.westerndigital.com/products/internal-drives/wd-black-sn750-nvme-ssd#WDS100T3X0C
|
# ? Nov 1, 2021 04:04 |
|
$240 for a 2TB 970 Evo Plus seems like a good price. It's still a fine drive even with some component changes, right? Flight Simulator 2020 takes up all of my 512GB 😂 Also, why do storage manufacturers make them 128GB, 256GB, 512GB, etc. if they're actually decimal values?
|
# ? Nov 1, 2021 04:17 |
|
The 4TB Samsung 870 QVO was around $300 for a week at the end of July. I'm hoping for a similar deal during black friday. Is there any reason to avoid the 870 as a fast mass storage drive to store my ever-growing game backlog on? I might get an NVMe instead if a 4TB one drops to $350 or so, but that seems less likely. It would be sweet being able to ditch all my 2.5" and 3.5" storage though. edit: Or not, because my m.2 slots are filled up already and the only extra x4 PCIe slot on my B550 board is three slots down from the x16 slot, which is uncomfortably close to my chonker GPU. This is one area where I'm jealous of the upcoming Z690 boards. Those things often have four or five M.2 slots built in. Just look at this thing gently caress sata, just m.2 all the way down baby. Dr. Video Games 0031 fucked around with this message at 05:27 on Nov 1, 2021 |
# ? Nov 1, 2021 04:49 |
|
Charles posted:Also, why do storage manufacturers make them 128GB, 256GB, 512GB, etc. if they're actually decimal values? All SSDs are overprovisioned for performance reasons, so there's actually more physical flash than advertised, though you can't use it all at once. Powers of two are iconic computing numbers, so they work well enough for marketing when deciding how much space is going to be available to the end user.
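A back-of-envelope sketch of that split, assuming a hypothetical drive with 512 GiB of raw NAND (power-of-two dies) sold as 512 GB decimal; real overprovisioning ratios vary by model:

```python
# Power-of-two NAND minus decimal advertised capacity = built-in spare area.

raw_nand = 512 * 2**30        # bytes of physical flash (512 GiB)
advertised = 512 * 10**9      # bytes exposed to the user (512 GB)

spare = raw_nand - advertised
print(f"spare: {spare / 1e9:.1f} GB "
      f"({100 * spare / advertised:.1f}% overprovision)")
```

That's about 37.8 GB of spare area, or roughly 7.4% overprovisioning, "for free" just from the unit mismatch.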
|
# ? Nov 1, 2021 06:03 |
|
Also note that manufacturers use SI units (so giga = 10^9), while Windows prints the SI symbol but actually means the binary one (gibi = 2^30).
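A quick illustration of that mismatch for a nominal 1 TB drive:

```python
# A "1 TB" drive is 10**12 bytes; Windows divides by 2**30 but still
# labels the result "GB", which is why it reports ~931 GB.

advertised_bytes = 1 * 10**12
windows_gb = advertised_bytes / 2**30
print(f"Windows shows: {windows_gb:.0f} GB")
```

No space is missing; it's the same number of bytes counted in two different units.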
|
# ? Nov 1, 2021 10:34 |
|
LLCoolJD posted:Edit: My BIOS is old (March 2015), I'll flash it to a newer version before this thing arrives. Looks like others have had issues with old BIOSes not detecting the new drive. The other thing is that some of the first chipsets with NVMe support could use an NVMe drive but couldn't boot from it. Quick googling suggests X99 boards are mostly able to boot, though you have to be in UEFI mode, and some boards have compatibility issues with some drives. IIRC it's Z97 that is the real problem. So if you want to install Windows on your new drive, it may need some fiddling.
|
# ? Nov 1, 2021 13:52 |
|
Thanks for the warning. Mercifully, my existing Windows SSD is enough for me, and I intend to use the new drive for games.
|
# ? Nov 1, 2021 14:36 |
|
lol, time to "enjoy the rapid" This is some weird poo poo, they are claiming 128GB/s (which is Gen5 x16) when they only show x8 contacts (so max 64GB/s) but only a single x4 M.2 Gen5 slot (so max 32GB/s).. Where are the rest of the x8 connectors going? Why make it a full x16 connector, is it for the latch support to steady the heatsink/fan? God drat this whole thing is dumb as hell! Oh and no Gen5 M.2 SSDs exist right now anyway and probably won't for a while at least. Phison and Samsung won't have anything immediately available afaik when Alder Lake ships, and probably not til mid 2022 at the earliest, by which point there will almost certainly be motherboards with an onboard Gen5-capable M.2 socket.
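For reference, the marketing numbers work out if you count the raw Gen5 transfer rate in both directions of the full-duplex link at once; a quick sanity-check sketch (function name is just for this example):

```python
# PCIe Gen5 signals at 32 GT/s per lane. Marketing bandwidth figures
# usually take the raw rate (no encoding overhead) and double it for
# the two directions of the full-duplex link.

def gen5_raw_gbps_bidir(lanes):
    # 32 GT/s * lanes, 1 bit per transfer, /8 for bytes, x2 directions
    return 32 * lanes / 8 * 2  # GB/s

for lanes in (16, 8, 4):
    print(f"x{lanes}: {gen5_raw_gbps_bidir(lanes):.0f} GB/s")
```

That gives 128/64/32 GB/s for x16/x8/x4, matching the figures above; actual usable one-way payload bandwidth is a bit under half of each.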
|
# ? Nov 2, 2021 06:15 |
|
priznat posted:
So anyways, one bad gimmick deserves another. Klyith fucked around with this message at 14:54 on Nov 2, 2021 |
# ? Nov 2, 2021 13:59 |
|
Klyith posted:Alder Lake's Gen5 is a bad gimmick that Intel jumped the gun on specifically because AMD embarrassed them by supporting Gen4 before they were ready. Particularly the way that a bunch of the 690s are splitting the Gen5 into 2 16x-size slots that are wired 8x, because there aren't Gen5 muxes to dynamically switch 16x/2x8x like normal mobos. Sooooo someone with a Gen3 GPU -- for example, a 2080 Super or Ti which *is* affected by being reduced to an 8x link -- is stuck with a slot that's permanently in dual-GPU mode. What? Gen5 Muxes exist and are shipping now, and I'm not aware of any boards that only have 8x electrical connections on the primary slot. Gigabyte will even let you overclock your PCIe slots
|
# ? Nov 2, 2021 14:32 |
|
BurritoJustice posted:What? Gen5 Muxes exist and are shipping now, and I'm not aware of any boards that only have 8x electrical connections on the primary slot. Jesus I totally misread a thing on techpowerup about the Z690 lineup where they were talking about pcie and the writing made it sound like they were always 8x/8x on boards with 2 slots. So I assumed that the muxes didn't exist or were too expensive or something. (I still think doing pcie 5 right now is kinda a gimmick to make sure AMD isn't first, but it's not a bad gimmick in that case. More like how PCIe 3 was on boards 2 years before consumer cards came out to use it. OTOH better early than late like PCIe 4.)
|
# ? Nov 2, 2021 14:54 |
|
I flashed away my March 2015 BIOS last night to a newer version, clenched tightly until it was done, but now see that Amazon delayed the Western Digital Black SN750 delivery another week. So I cancelled that and I now have a 1 TB SK hynix Gold P31 supposedly arriving tonight. Seems like it's marginally better than that WD drive, too, although they both look like good drives. My Windows already boots faster, the BIOS update notes weren't BS'ing
|
# ? Nov 6, 2021 15:56 |
|
Considering how many errata various CPUs and microcontrollers accumulate over their lifetime, I'd rather have mature PCIe 4 infrastructure over lol-first PCIe 5.
|
# ? Nov 6, 2021 18:35 |
|
I seem to have lost a 1TB M.2 drive Kind of like those super tiny USB drives. Always lose those fuckers.
|
# ? Nov 9, 2021 20:24 |
|
idk if this is due to the anticipated memory supply glut, but SSD prices seem to be really starting to drop. ~$180 2TB NVMes show up every other day. Here's a 2100/1700 MB/s read/write Kingston for $145 (when in cart) https://www.adorama.com/kgsnvs2000g.html (no DRAM, though)
|
# ? Nov 12, 2021 15:10 |
|
Combat Pretzel posted:Considering how many errata various CPUs and microcontrollers accumulate over their lifetime, I'd rather have mature PCIe 4 infrastructure over lol-first PCIe 5. you're not wrong in a "would I buy this product today as a consumer" sense, but it actually makes complete sense in a product development sense for Intel. Intel got hosed extremely hard on PCIe 4.0 because they were so late to the game due to 10nm delays (the first server gen that supports it was Ice Lake-SP) that AMD basically became the reference platform that all the PCIe 4.0 devices were validated against, and Intel had to be in the position of being "the other one" whose "specific quirks needed to be validated" or whatever. Sapphire Rapids is basically Server Alder Lake and comes next year, so it's already been sampling for a while (google says Q4 2020, presumably to limited audiences/hyperscalers at that point). Server products generally have longer lead times than consumer ones, so they were likely validated about the same time; consumer is just a bit quicker to general market release. This is likely also advantageous as a final smoke test: now they have "release ready" PCIe 5.0 that OEMs can validate against, and if there's a big problem they can hop on fixing it on the server platform ASAP. But anyway, the point is that Intel wanted to jump the gun with PCIe 5 because they got screwed by 10nm delays pushing back their PCIe 4 release, they paid a big price for that, and they absolutely do not want a repeat of that. Obviously Zen4 comes next year, and Epyc probably comes first / samples out significantly ahead of the general consumer launch (I'd imagine if their Q4 number is accurate then they're probably sampling right now), but Intel is still around a year ahead of AMD on the timeline, and that's exactly what they wanted.
|
# ? Nov 12, 2021 20:08 |
|
Anyway, apropos of nothing: as a high-end user I really welcome a return to mobos with tons of PCIe muxes/switch chips. One of the things I've really whined about a lot is that, as a power user who likes to do a lot of things with my systems, modern boards kinda loving suck. Even ignoring the problems of air-cooled GPUs getting bigger every single generation and covering all your slots, let's look purely at the CPU configuration. You've got x16/x8x8/x8x4x4 from the CPU itself, you have 4 NVMe lanes, and you have the chipset. Even ignoring non-optimal slot utilization, you've got a GPU and an NVMe and then two "other" things that run at decent-ish speeds. Obviously if you want more stuff you have to lean on the chipset, but that's not really optimal either. For example, my Vive wireless needs a dedicated card that runs at 3.0x1, and the chipset lanes on my board are all 2.0x1. Does it work? Probably, but do I really want to find out if I don't have to? And what if I want to connect something that's x4? Also, my impression is that the chipset is substantially slower than a CPU-direct lane, and although I don't know for sure, my impression is that it's substantially slower than a dedicated mux as well. I've always heard "connect fast ethernet/Optane SSDs directly to the CPU" as a notional bit of advice even if the chipset is generally fast enough to support it, because the latency is higher. Meanwhile, I emailed Highpoint Tech and asked about their NVMe HBA cards, and the answer there was "it will perform at whatever the IOPS/latency of the underlying device is". I don't know how true either of those two bits of arcana really are, but notionally it's better to have things not running through the chipset; that is part of why Thunderbolt eGPU is worse than a direct-connect 3.0x4 GPU as well. All the stops and buffers add latency.
As a power user, I’d really like to have my GPU, my vive wireless adapter, my optane PCIe AIC ssd, a 10gbe SFP ethernet adapter, etc etc all in one system, in their optimal slots (which more or less means CPU lanes wherever possible, with enough lanes at enough speed to saturate them). I realize the real answer at this point is “buy a HEDT system” and yes, that’s the long term plan, but right now HEDT is an ugly set of compromises all of its own. Skylake-X/Cascade Lake-X sucked and Zen2 Threadripper was overpriced and still underperformed Coffee Lake, and Zen3 is MIA for over a year at this point with no sign of an imminent release (hopefully soon and hopefully with Vcache). Well, with PCIe 5, 16 lanes is actually a lot and you can split that out with muxes and that’s a very acceptable compromise. Take the x8x8 configuration and that can be muxed out to x8x8x8x8, or x8x4x4 could become x8x8x4x4x4x4, and so on. So for the cost of 2 or 3 muxes you have a “pseudo-HEDT” system with 4-6 reasonably fast slots. And with the improvements in chipset, that can be expanded further (dunno if I’d expect to see the “3-mux” configuration in practice). This niche used to be filled by the “supercarrier” style boards (I think that was an asus or asrock brand name?) and it’s kinda unfortunate it’s gone away, because it went away at the same time the HEDT market became overly expensive and compromised. Right now the most practical solution is really “have a gaming rig and then have an Epyc system with Asrock ROMED8 with the 7x PCIe 4.0x16 slots for everything else” and that kinda sucks.
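As a toy illustration of that lane budgeting, here's a hypothetical checker; the device names and lane widths are made up for the example, and it just tests whether a wishlist fits an x8/x4/x4 bifurcation plus the dedicated CPU M.2 link:

```python
# Hypothetical sketch: does a device wishlist fit the usual consumer
# CPU lane budget? Greedy widest-device-to-widest-slot matching.

cpu_slots = [8, 4, 4]   # x16 slot bifurcated to x8/x4/x4
nvme_link = 4           # dedicated CPU M.2 lanes

devices = {"GPU": 8, "Optane AIC": 4, "10GbE SFP+": 4, "NVMe boot": 4}

def fits(devices, slots, nvme):
    # The boot NVMe rides the dedicated link; everything else needs a slot
    if devices.get("NVMe boot", 0) > nvme:
        return False
    need = sorted((w for n, w in devices.items() if n != "NVMe boot"),
                  reverse=True)
    slots = sorted(slots, reverse=True)
    if len(need) > len(slots):
        return False
    return all(w <= s for w, s in zip(need, slots))

print(fits(devices, cpu_slots, nvme_link))                          # True
print(fits({**devices, "Vive wireless": 1}, cpu_slots, nvme_link))  # False
```

Three devices plus the boot drive fit; add one more card (even an x1) and you're out of CPU-attached slots and onto the chipset, which is exactly the complaint above.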
|
# ? Nov 12, 2021 20:30 |
|
It really sucked having to plan for validating a gen4 SSD while being told we were an Intel-platform-only lab, and the only other solution to some buggy A0 Intel board was some buggy PCIe Gen4 switch.
|
# ? Nov 12, 2021 20:43 |
|
WhyteRyce posted:It really sucked having to plan for validating a gen4 SSD while being told we were an Intel-platform-only lab, and the only other solution to some buggy A0 Intel board was some buggy PCIe Gen4 switch. Which switch was this, and what were the bugs? Also, for Gen4 AMD had a host of its own fun issues that would show up; it was nice once the Ice Lake platforms started showing up in the lab to compare against.
|
# ? Nov 12, 2021 20:56 |
|
So I got a raw read error rate SMART alert for my 1 TB Samsung SSD. I read some conflicting info online about how bad it is. Some people say it only matters for normal HDDs; others say my drive is going to poo poo the bed immediately. Which side is right? I had to disable SMART to get past the BIOS check
Chin Strap fucked around with this message at 11:47 on Nov 15, 2021 |
# ? Nov 15, 2021 11:40 |
|
Chin Strap posted:So I got a raw read error rate SMART alert for my 1 TB Samsung SSD. I read some conflicting info online about how bad it is. Some people say it only matters for normal HDDs; others say my drive is going to poo poo the bed immediately. Which side is right? I had to disable SMART to get past the BIOS check I believe the more important SMART items for SSDs are uncorrected read errors, and reassigned blocks / available reserve. I'm not sure that raw read errors (value 01 in the SMART list) is even particularly meaningful for SSDs -- 2 of my SSDs don't report it at all, and one repurposes it as "Critical Warning" according to CrystalDiskInfo. 1. You have backups, right? If not, get on that. Most drives don't have the courtesy to give warning. 2. Get CrystalDiskInfo and see what that thinks. You can also get Samsung's Magician software and see if it says anything different -- but whatever you do, don't allow Magician to "optimize" your system. 3. Personally, I'd shrug and move on even if diagnostics like CrystalDiskInfo say the drive isn't good, and just wait until the thing kicks the bucket. Stuff like the TR endurance experiment had drives that said "help, I'm about to die" then went on to 2x more lifespan. But I can get a replacement in 2 days, so I'm not seriously put out by a failure.
|
# ? Nov 15, 2021 13:43 |
|
Yeah, nothing I need to back up here. It's just Windows and game installs, basically. I'll get a bigger SSD like I've wanted anyway, install Windows fresh on that, and relegate this one to extra random install space. I tried Magician and it just said it was a "Critical Error" but didn't say any more. I'll try CrystalDiskInfo when I'm home again. Thanks!
|
# ? Nov 15, 2021 14:39 |
|
LOL, I was confused; it turned out it was actually my non-SSD media drive throwing the errors. So I'm trying to get it backed up ASAP, but it's copying over at a glacial pace. Oh well. If I lose it, whatever.
|
# ? Nov 16, 2021 11:45 |
|
Could someone try to explain in layman's terms what's going on here? A PCIe Gen3 drive (SN570) beating an expensive Gen4 (SN850) in load times, specifically. TweakTown review of the SN570
The other somewhat less common approach to reducing production costs is what we have here today in the Western Digital WD Blue SN570 DRAMless SSD. Going without onboard DRAM reduces costs considerably, as does the fact that the drive has only one flash package and a power-sipping 4-channel HMB (Host Memory Buffer) enabled controller. Eliminating the cost of onboard DRAM to offset TLC flash creates arguably a better overall value for the customer. As we alluded to earlier, the newly minted WD Blue SN570 is special in that it gives our first look at BiCS 5 flash. WD is tight-lipped and doesn't even mention the SN570 is arrayed with 112-layer BiCS 5. Well, we know it is because our gaming test results say it is. How else could this DRAMless value drive load game levels faster than the legendary WD Black SN850 Gen4 performance juggernaut? It has to be better flash; that's the only way it can happen. We will tell you right now that the WD Blue SN570 1TB is the first DRAMless SSD we've tested that we can call a legit gaming SSD. And it's not just good at gaming; it's outstanding at gaming. Amazing really. Rinkles fucked around with this message at 06:25 on Nov 20, 2021 |
# ? Nov 20, 2021 06:22 |
|
Raw transfer bandwidth is not important for game loading, so if western digital has some secret awesome new kind of nand flash, then it makes sense that a gen 3 drive could outperform a gen 4 one. The extra pcie bandwidth is unnecessary anyway.
|
# ? Nov 20, 2021 06:42 |
|
Rinkles posted:Could someone try to explain in layman's terms what's going on here? I wouldn't say "beats" here, at least not significantly; that's pretty much on par in general. Still, though, you're right: both are TLC, the SN850 almost certainly has a better controller, and it has DRAM, so the only thing that really leaves is better flash. Game load times and general Windows/application performance are extremely heavily dominated by latency. 4K Random Read QD1T1 is the test that usually measures that/manifests the difference, but the SN850 still wins there (98 vs 76 MB/s). Which is what you'd expect from a better controller and all (although of course at QD1T1 there's not a ton for the better controller to get its teeth into). I concur with TT, it's gotta be better flash, maybe better latency? But it's very odd that it doesn't show up in QD1T1 either.
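Since QD1 throughput is just block size divided by per-request latency, the MB/s figures from the review can be converted to average 4K read latency; a rough sketch using the numbers quoted above (function name is just for this example):

```python
# At queue depth 1, 4K random throughput is one block per round trip,
# so MB/s translates directly into microseconds per read.

def qd1_latency_us(mb_per_s, block=4096):
    return block / (mb_per_s * 1e6) * 1e6

for name, mbps in (("SN570", 76), ("SN850", 98)):
    print(f"{name}: {qd1_latency_us(mbps):.1f} us per 4K read")
```

That's roughly 54 µs vs 42 µs per read, a latency gap small enough that game load times (which also spend a lot of time on CPU-side decompression) can land within noise of each other.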
|
# ? Nov 20, 2021 06:47 |
|
Rinkles posted:Could someone try to explain in laymen's terms what's going on here? Too many digits of precision is what's going on there.
|
# ? Nov 20, 2021 13:38 |
|
This is the most bullshit chart ever. The fact that the time measurement difference is within 2 seconds is insane. Plus, some of the testing could, as someone stated, be influenced by what is happening outside the game in the OS. Did Windows just put a bunch of stuff into cache? Did it just clear the cache? Is Chrome creating its first 10 processes, or is it up to its normal process count of 50? As long as it's not spinning rust drives, then don't worry is all I'll say. EVIL Gibson fucked around with this message at 16:55 on Nov 22, 2021 |
# ? Nov 22, 2021 16:51 |
|
lol why do they bother making the sub 500gb model? I guess the margins are much better, but who buys them? OEMs?
|
# ? Nov 24, 2021 00:23 |
|
OEMs do so they can charge $200 more for the 500GB spec.
|
# ? Nov 24, 2021 00:32 |
|
Rinkles posted:I guess the margins are much better, but who buys them? OEMs? Let me tell you about the government procurement clerk who I almost strangled when I found out that she had ordered 50 brand new 17" 1366x768 monitors for $250/ea (because they're super-low volume parts now that NO ONE IN THEIR RIGHT MIND WANTS THEM) instead of 24" 1080p monitors for $150/ea because, and I quote, "we had the order form already filled out from when we ordered a batch 4 years ago so we just re-used it." Or let me tell you about the military IT organization which decided that it was a fantastic idea in 2019 to buy an entire battalion's worth of replacement laptops with 1.8" spinning HDDs in them instead of SSDs--despite the requirement for mandatory FDE on all laptops--because it saved them $5/laptop? Or let me tell you about the Leidos contract acquisitions woman who I had to pull all tech-purchasing power from because the Engineer that submitted the order just listed "must be 17" sized laptop" and she had selected one from Dell which had a 17" 1280x800 screen for GOD KNOWS WHAT REASON in 2020 because, again, it was $15 cheaper than the 1080p version. You would not believe the number of people who make purchasing decisions for massive enterprises based on the difference of a few dollars because they have absolutely no loving clue what any of the tech specs actually mean.
|
# ? Nov 24, 2021 00:34 |
|
DrDork posted:You would not believe the number of people who make purchasing decisions for massive enterprises based on the difference of a few dollars because they have absolutely no loving clue what any of the tech specs actually mean. They are hired and evaluated by finance, not the operating unit.
|
# ? Nov 24, 2021 00:37 |
|
If you’re a hyperscaler or high volume user looking for a small boot drive, dollars matter and why the gently caress pay for twice the storage you’re never going to need or use
|
# ? Nov 24, 2021 01:07 |
|
No hyperscaler is buying the 256 version of that because their requisition plans are driven by Engineering and everyone knows that more local buffer will be handy at some point.
|
# ? Nov 24, 2021 01:33 |
|
Why are you sweating over using your gen3 m.2 drive for a future maybe when you’ve got a dozen or more gen4 e1.s drives for your actual important work
|
# ? Nov 24, 2021 01:47 |
|
Best Buy has the 1TB M.2 SanDisk Ultra for $79
|
# ? Nov 24, 2021 02:34 |
|
Subjunctive posted:No hyperscaler is buying the 256 version of that because their requisition plans are driven by Engineering and everyone knows that more local buffer will be handy at some point. I don't think hyperscalers really buy anything under 8TB SSDs for their normal racks these days. Even "small" stuff like an AWS Outpost needs pretty dense storage these days. But Dell / HP / Random Chinese Brand trying to hit a price point? gently caress yeah they're gonna scrimp that $5 on you--that's what you get for trying to buy a pre-built under $1000!
|
# ? Nov 24, 2021 03:50 |
|
Quite a few server designs have a dedicated boot slot that is an m.2 hanging off the chipset or even a sata ssd. Putting a fast and dense drive or demanding applications on it doesn’t make a ton of sense and, yes, spec’d down boot drives are a thing because if I can shave $10 off the BOM for a hundred thousand units that’s a really good thing.
WhyteRyce fucked around with this message at 04:21 on Nov 24, 2021 |
# ? Nov 24, 2021 04:16 |
|
|
Interesting that, because NVMe drives are optimized to operate at a specific temperature, heatsinks might sometimes be counterproductive. Cooling the controller makes sense, but cooling the flash memory doesn't in a lot of cases. Though I'm not sure how much difference this would make to the lifetime of the SSD. https://www.youtube.com/watch?v=xH1EmzqK5Ek&t=132s [answer to the second part of the first question, starts around 2:10] Idk if any of this changed with Gen4 drives.
|
# ? Nov 24, 2021 09:05 |