|
DrDork posted:I mean, sure, but it's also actually slower than a 970 Pro in pretty much everything but heavily multi-threaded random reads, and costs 3x as much. No, there's no point to Optane for consumer workloads unless you're trying to use it to cover for the fact that you've only got like 8GB RAM or something. Seems like flash SSDs are only faster when they have ramp-up time or there's a large queue (which is unusual for standard use). In a standard work environment, the faster access times make for a meaningful difference in responsiveness, whereas sequential reads/writes are "fast enough" in both cases. "At very low queue depths, where most desktop and workstation work actually happens, the Optane SSD 905P is more than three times faster than the best flash-based NVMe SSDs. At QD2 the 905P is nearly at peak performance with a single CPU core workload and the flash-based drives have barely started to ramp up performance in relation to peak performance. Which only comes later on in the flash drives at queue depths well beyond what a normal workload can reach." https://www.tweaktown.com/articles/8775/intel-optane-ssd-905p-5tb-review-massive-3d-xpoint/index2.html Also, no degradation when the drive fills up, as Optane can write just as fast to non-empty cells. Giving the 480gb one a shot; it doesn't get faster with the 960gb and 1.5tb versions. At worst, it won't be worse than any other SSD. At best, my desktop will feel goooooood
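A rough way to see why access latency dominates at low queue depth is Little's law: throughput ≈ queue depth × block size / latency. A quick Python sketch, using illustrative ballpark latencies (not measured figures for any specific drive):

```python
# Little's law sketch: throughput = queue_depth * block_size / latency.
# The latency numbers below are illustrative ballparks, not measurements.
def qd_throughput_mb_s(queue_depth, latency_us, block_kb=4):
    ios_per_sec = queue_depth * 1_000_000 / latency_us
    return ios_per_sec * block_kb / 1024

optane = qd_throughput_mb_s(1, latency_us=10)   # ~10 us 4K read (assumed)
nand   = qd_throughput_mb_s(1, latency_us=80)   # ~80 us 4K read (assumed)
print(f"QD1 4K: ~{optane:.0f} MB/s vs ~{nand:.0f} MB/s")
```

At QD1 the only way to go faster is lower latency, which is exactly where 3D XPoint wins; flash only catches up once the queue is deep enough to keep many dies busy at once.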
|
# ? May 12, 2020 06:06 |
|
|
If you have any long, sustained write or mixed workloads, then Optane will beat the crap out of NAND drives once the latter's media-management tricks run out, not to mention random low-queue-depth workloads. But yes, those are either not consumer workloads or have a poor value proposition for most regular people
|
# ? May 12, 2020 06:12 |
|
Klyith posted:The cable incompatibility situation is a mess, but it's also one where a single standard might be harder than the "12v here, 5v here, ground here" you'd think at first glance. They're not always just plain dumb wires; different PSUs do have differences in their cables. Many have in-line capacitors on the wire down at the plug end, which I'm not sure should be swapped willy-nilly. And some super-high-end units actually have extra sense wires for monitoring voltage or something. Eh, if the capacitors are electrolytic they're going to care about polarity, which wouldn't be a problem for standardized cables. And their inclusion isn't going to do anything but reduce ripple a little. Extra sense wires would at least require a different plug on the PSU side. I don't think there's much of anything other than laziness preventing standardization.
|
# ? May 12, 2020 08:04 |
|
Why is PCIE 4 exciting considering no SSD saturates even PCIE 3? Is it just hype/future-looking or am I missing something?
|
# ? May 12, 2020 09:06 |
|
My luck with SSDs has been terrible this week... After the power supply debacle fried all of my SSDs in my gaming machine, the Intel SSD boot drive in my plex server/lab PC failed after a power outage Sunday night. I forgot I had one of those time bomb Intel 5xx series SSDs in that computer. Anyways, the disk gets detected as "Sandforce 20002BEAB" and does not function. I went ahead and reinstalled Windows on another SSD. After playing around with the dead drive a bit in a USB3 dock, I found I could get it to come back to life by turning the power on/off a few times in short succession. I wanted to copy my Plex data directory off it, as I really didn't want to set up Plex from scratch all over again (rescanning my library and resharing with everyone would take days). Launched robocopy and let it go to work. The drive died a couple of times during the copy, but I could get it going again by just cycling the power to it, and robocopy just continued where it left off. Managed to copy the 90GB worth of Plex data and a few other folders this way.
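Robocopy skips files that already exist at the destination, which is why re-running it after each dropout just picks up where it left off. A minimal Python sketch of that skip-if-already-copied idea (a size check only, much cruder than robocopy's actual timestamp/size comparison):

```python
import os
import shutil

def resumable_copy(src_dir, dst_dir):
    """Copy a tree, skipping files already present with a matching size,
    so re-running after a drive dropout continues where it left off."""
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out = os.path.join(dst_dir, rel)
        os.makedirs(out, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(out, name)
            if os.path.exists(d) and os.path.getsize(d) == os.path.getsize(s):
                continue  # already copied on an earlier pass
            shutil.copy2(s, d)
```

Each pass only touches what the previous pass didn't finish, so a flaky source drive just means more passes, not starting over.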
|
# ? May 12, 2020 14:21 |
|
stevewm posted:My luck with SSDs has been terrible this week... Yeah you just got the dreaded Sandforce controller problem from back in those days. Nice you got data off it though.
|
# ? May 12, 2020 14:59 |
|
Kane posted:Why is PCIE 4 exciting considering no SSD saturates even PCIE 3? Is it just hype/future-looking or am I missing something? You can fit multiple NVMe drives on a card and have them all go full speed. If you're not in a position to be using PCIe cards to attach NVMe drives, it's really not.
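For rough numbers: each PCIe 4.0 lane carries about double the usable bandwidth of a 3.0 lane (both use 128b/130b encoding), so an x4 link roughly doubles the ceiling per drive. A quick back-of-the-envelope in Python, using approximate per-lane figures:

```python
# Approximate usable bandwidth per PCIe lane after 128b/130b encoding
# overhead (rounded figures, ignoring protocol overhead above the link).
GBPS_PER_LANE = {3: 0.985, 4: 1.969}  # GB/s per lane

def link_bandwidth(gen, lanes=4):
    return GBPS_PER_LANE[gen] * lanes

print(f"Gen3 x4: ~{link_bandwidth(3):.1f} GB/s")
print(f"Gen4 x4: ~{link_bandwidth(4):.1f} GB/s")
```

So a single Gen4 x4 drive gets roughly the ceiling a Gen3 drive would need x8 for, which is also why bifurcated multi-drive cards benefit.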
|
# ? May 12, 2020 15:25 |
|
Kane posted:Why is PCIE 4 exciting considering no SSD saturates even PCIE 3? Is it just hype/future-looking or am I missing something? Well it's one faster, innit? It's not 3, goes to 4. No PCIe version other than the first has made any difference to consumer uses at introduction. I'm guessing it's the datacenter people who are pushing for more PCIe bandwidth more than anyone else. Geemer posted:I don't think there's much of anything other than laziness preventing standardization. And greed, don't forget that! Your existing modular cables only working with a new PSU from the same company encourages you to buy from them again. (edit: maybe; reusing cables even with the same company needs to be checked for compatibility before plugging in) So yeah, they could have but didn't. And this example didn't prevent them from doing the exact same thing with RGB crap -- everyone makes their own standard in the rush to fill a new market, then doubles down to make their own ecosystem. Klyith fucked around with this message at 16:50 on May 12, 2020 |
# ? May 12, 2020 15:34 |
|
Klyith posted:(At least for Corsair .... who keep things consistent Guess again... they don't. They have at least 3 different pinouts that I could find. One of the PSU cables I fried my SSDs with was of the "Type 4" variety.
|
# ? May 12, 2020 15:43 |
|
stevewm posted:Guess again... they don't. They have at least 3 different pinouts that I could find. One of the PSU cables I fried my SSDs with was of the "Type 4" variety. Type 3 and Type 4 are interchangeable, except for the 24-pin cable, and I'd guess that Corsair will be sticking to that pinout for the long term. I'm not quite sure what Type 1 is; it might be the 4-pin connector they launched their original PSUs with. I'm fairly certain they date way back anyways. Type 2 is, I believe, the thing now labeled "AX Gold only", which are from like a decade ago. None of them work in an EVGA power supply though. Corsair cables are mostly compatible with other Corsair PSUs. However, someone could easily still be using a 7-8 year old AX today who can't plug their modular cables into a new Corsair unit, so yeah, I'll edit that out of my post to be more conservative.
|
# ? May 12, 2020 16:50 |
|
Some Goon posted:You can fit multiple NVMe drives on a card and have them all go full speed. If you're not in a position to be using PCIe cards to attach NVMe drives its really not. It also helps future expansion propositions if Intel keeps being stingy with PCIe lanes. Coffee Lake still only has 16 lanes off the CPU itself, with everything else shared over the chipset via DMI 3. But, yeah, probably not gonna make a huge difference to consumers for a bit.
|
# ? May 12, 2020 18:00 |
|
Kane posted:Why is PCIE 4 exciting considering no SSD saturates even PCIE 3? Is it just hype/future-looking or am I missing something? Enterprise exists. For home use, none of it matters at all. A "fine" tier or better SATA SSD won't feel any different than a 970 Pro or PCIe 4.0 SSD.
|
# ? May 12, 2020 18:04 |
|
I think I saw a recent Linus video testing NVMe vs SATA drives loading games and the SATA drive won.
|
# ? May 12, 2020 18:07 |
|
Kane posted:Also, no degradation when the drive fills up as Optane can write just as fast on non-empty bits No degradation is, indeed, a pretty nice bonus if you're the type to fill drives nearly to capacity. But the thing TT kinda misses is that an average user workload isn't >30k IOPS or anything crazy like that, so the potential random/4k performance benefit of Optane will rarely be realized in actual use. Which is the real problem with Optane in the consumer space: most of its best qualities will just never get used in a meaningful way. I mean, enjoy playing with it--you're right that it's faster for random reads, and it's not so much worse at sequential workloads that it'll really matter too much. I mean, what, copying a movie around is gonna take a few extra seconds? Oh no. Still, for $500 you'd be better off with a 1TB 970 Evo and 64GB RAM, but you do you.
|
# ? May 12, 2020 18:11 |
|
redeyes posted:I think i saw a recent Linus video testing nvme vs sata drives loading games and the SATA drive won. Linus also tried to make a whole-home water loop involving his bathtub that ended extremely poorly because he never thought about half the details needed, and borked his entire forums database because he was running some crazy RAID1 solution or somesuch, IIRC. His videos are sometimes entertaining, but rarely should be considered authoritative in their actual content.
|
# ? May 12, 2020 18:12 |
|
DrDork posted:Linus also tried to make a whole-home water loop involving his bathtub that ended extremely poorly because he never thought about half the details needed, and borked his entire forums database because he was running some crazy RAID1 solution or somesuch, IIRC. It seemed he controlled the variables pretty well actually.
|
# ? May 12, 2020 18:14 |
|
redeyes posted:It seemed he controlled the variables pretty well actually. It's always possible he's learned from past mistakes and gotten better. Got a link for the video?
|
# ? May 12, 2020 18:15 |
|
DrDork posted:It's always possible he's learned from past mistakes and gotten better. Got a link for the video? Hopefully! here https://www.youtube.com/watch?v=4DKLA7w9eeA On the subject of whether NVMe drives are worth it: my original Intel 750 PCIe SSD 400GB is still rocking along at top speeds after 6-7 years, at about 90% wear level. I don't intend to replace it for the foreseeable future. Cost $380 bux. I consider it a 'good' investment.
|
# ? May 12, 2020 18:18 |
|
redeyes posted:I think i saw a recent Linus video testing nvme vs sata drives loading games and the SATA drive won. It was a perceptual test (which is what matters for the normal folks), but yes, they thought the SATA drive was the fastest, or maybe second place; it's been a while since I watched it.
|
# ? May 12, 2020 18:19 |
|
DrDork posted:It's always possible he's learned from past mistakes and gotten better. Got a link for the video? His staff seem pretty smart but I think his 'brand' already has the connotation of being haphazard so they probably deliberately sabotage their projects just to be on brand.
|
# ? May 12, 2020 18:28 |
|
Maybe so; after all, they're trying to run a business, and that means getting people to watch the videos in the first place. Blowing their own stuff up occasionally to keep those clicks a'coming may make more business sense than being boring but correct.
|
# ? May 12, 2020 18:33 |
|
What's the minimum SATA SSD you would put in a server to move a database off 15k SAS? Setup is just mirrored disks that get backed up nightly. 860 pro or could I get away with something like a pair of BX500s? Can't really throw in a PCIe SSD at the moment otherwise I'd get something more enterprisey, and SAS SSDs seem ludicrously overpriced. More than willing to be wrong on the latter if there's a good one in that space.
|
# ? May 12, 2020 22:26 |
|
Harik posted:What's the minimum SATA SSD you would put in a server to move a database off 15k SAS? Setup is just mirrored disks that get backed up nightly. 860 pro or could I get away with something like a pair of BX500s? If you have a significant amount of write activity you should do 860 pro at the least. If it's for something piddly the generic consumer drive would be fine. Probably not BX500s though, those are now QLC at 1TB plus size.
|
# ? May 12, 2020 22:36 |
|
ugh, definitely not then. Hate when companies make massive changes without renaming the product. What about SAS SSDs? The ones I see are tiny compared to consumer drives, 200gb but SLC.
|
# ? May 12, 2020 22:49 |
|
Harik posted:ugh, definitely not then. Hate when companies make massive changes without renaming the product. That again depends on what your write tolerance needs are. If it's getting hammered constantly and writing TBs of data a day, SLC-based drives make sense unless you're ok with replacing dead drives every so often. If it's mostly read activity, then it's probably not needed. Curious, though: how is it you've got the option of SAS SSDs but not PCIe ones?
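Endurance ratings make the tradeoff concrete: divide the drive's rated terabytes-written (TBW) by your daily write volume. A quick Python estimate; the TBW figures below are rough from-memory numbers for illustration and worth checking against the actual spec sheets:

```python
def years_of_life(tbw, gb_written_per_day):
    """Rough drive lifetime in years from its rated terabytes-written."""
    return tbw * 1000 / gb_written_per_day / 365

# Approximate ratings (verify on the datasheets before buying):
# 1TB 860 Pro ~1200 TBW, 1TB BX500 ~360 TBW.
for name, tbw in [("860 Pro 1TB", 1200), ("BX500 1TB", 360)]:
    print(f"{name}: ~{years_of_life(tbw, 100):.0f} years at 100 GB/day")
```

Even a modest database doing a few hundred GB/day shifts those numbers fast, which is why the write-heavy case points at MLC/SLC or proper enterprise drives.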
|
# ? May 12, 2020 22:52 |
|
Harik posted:What's the minimum SATA SSD you would put in a server to move a database off 15k SAS? Setup is just mirrored disks that get backed up nightly. 860 pro or could I get away with something like a pair of BX500s? Intel DC something or another
|
# ? May 12, 2020 23:07 |
|
DrDork posted:No degradation is, indeed, a pretty nice bonus if you're the type to fill drives nearly to capacity. But the thing TT kinda misses is that an average user workload isn't >30k IOPS or anything crazy like that, so the potential random/4k performance benefit of Optane will rarely be realized in actual use. Which is the real problem with Optane in the consumer space: most of its best qualities will just never get used in a meaningful way. Yeah, my main drive tends to fill up with Windows, a few games, and all the various caches. I'm getting a 480gb 905p + 32gb of mem (doesn't seem like there's anything I can do that will require more any time soon) and the latest 2tb Sabrent for standard storage. Woo! Thanks everyone for the answers re: consumer use of PCIE 4 and for everything else.
|
# ? May 12, 2020 23:52 |
|
Kane posted:I'm getting a 480gb 905p + 32gb of mem (doesn't seem like there's anything I can do that will require more any time soon) and the latest 2tb Sabrent for standard storage. Woo! Living the dream, there! I'd have bumped up to 64GB RAM for shits and giggles (aka a 32GB RAM drive cache), but even with "just" 32GB I think we can safely say that disk I/O will not be much of a limiting factor for you going forward.
|
# ? May 13, 2020 00:39 |
|
Klyith posted:No PCIe version other than the first has made any difference to consumer uses at introduction. I'm guessing it's the datacenter people who are pushing for more PCIe bandwidth more than anyone else. PCIE4 will also help laptops/portables. PCB real estate is at a huge premium in those, and being able to squeeze more out of fewer traces can make a noticeable improvement in performance and expandability. We are probably coming up on diminishing returns there, as PCIE5 will probably have tight EM requirements.
|
# ? May 13, 2020 01:00 |
|
DrDork posted:That again depends on your write tolerance needs are. If it's getting hammered constantly and writing TB of data a day, SLC-based drives make sense unless you're ok with replacing dead drives every so often. If it's mostly read activity, then it's probably not needed. That's easy, I've got open 2.5" SAS bays but not enough free PCIe slots to put in pairs for redundancy. So something like the Intel SSD D3 S4510 series?
|
# ? May 13, 2020 01:52 |
|
Harik posted:That's easy, I've got open 2.5" SAS bays but not enough free PCIe slots to put in pairs for redundancy. So something like the Intel SSD D3 S4510 series? Depending on what all you have available, you could get 2x NVMe PCIe adapter cards. I think the cheap ones are like $200 these days. Either way, yeah, if you're doing a lot of write-intensive workloads where disk wear would be a concern, something like the Intel DC line would be a good pick.
|
# ? May 13, 2020 03:06 |
|
Look on my works, ye Mighty, and despair!
|
# ? May 13, 2020 21:49 |
|
But....but....WHY?
|
# ? May 13, 2020 22:41 |
|
Put more chips on it
|
# ? May 13, 2020 22:41 |
|
not sure if this is the right thread for this, but Tim Sweeney talks about the PS5 storage system and its implications for the future PC market:quote:"We've been working super close with Sony for quite a long time on storage," he says. "The storage architecture on the PS5 is far ahead of anything you can buy on anything on PC for any amount of money right now. It's going to help drive future PCs. [The PC market is] going to see this thing ship and say, 'Oh wow, SSDs are going to need to catch up with this." I wonder what that's all about.
|
# ? May 13, 2020 23:19 |
|
eames posted:not sure if this is the right thread for this, but Tim Sweeney talks about the PS5 storage system and its implications on the future PC market: sounds like some bullshit to me, Sweeney won't even cop to making Simusex, so he's just a liar through and through.
|
# ? May 13, 2020 23:33 |
|
eames posted:I wonder what that's all about. It's about how Sony has baked a decompression engine into the SoC. Games are apparently going to be delivered as pre-compressed blobs. The SSD (which is fast on its own) just spits out compressed data onto the PCIe interface, then it gets decompressed going into RAM. The result of all that is a theoretical effective maximum read bandwidth of up to 9GB/s, which is indeed considerably faster than what you can get out of a normal consumer PC SSD. The part that's not mentioned is if and how it's actually going to make a big difference. The Xbox is limited to <5GB/s, so presumably most games are going to be built with that in mind. It might still run a little better on the PS5, but I'd expect a bunch of that bandwidth to not really be utilized in game-changing ways outside of PS5 exclusives. Regardless, using a fast SSD at all should make a huge difference in how devs can approach level design, loading, etc., that should massively benefit everyone, regardless of platform.
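The bandwidth-multiplication idea is just streaming decompression: ship compressed blocks over the link and expand them on the way into RAM, so effective read bandwidth is link bandwidth times the compression ratio. A toy Python sketch with zlib (the consoles use dedicated hardware engines, not zlib, and real game assets compress far less predictably than this repetitive stand-in):

```python
import zlib

# Toy version of the pipeline: move compressed bytes over the "link",
# expand them on arrival. Effective bandwidth = link bandwidth * ratio.
payload = b"game asset data " * 4096          # repetitive stand-in blob
compressed = zlib.compress(payload, level=6)

d = zlib.decompressobj()
restored = d.decompress(compressed) + d.flush()
assert restored == payload

ratio = len(payload) / len(compressed)
print(f"{len(compressed)} B on the wire -> {len(restored)} B in RAM "
      f"(~{ratio:.0f}x)")
```

The catch, as noted above, is that the drive, the link, and the decompressor all have to keep up, which is why the console does the expansion in dedicated silicon rather than on CPU cores.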
|
# ? May 13, 2020 23:42 |
|
Doing hardware compression at the drive level is pretty neat and is already a big thing for video storage. With the complexity of modern storage controllers, shoehorning in some extra compute power for compression/decompression isn't that much of a heavy lift.
|
# ? May 13, 2020 23:59 |
|
priznat posted:Doing hardware compression at the drive level is pretty neat and is already a big thing for video storage. With the complexity of the storage controllers just shoehorning some extra compute power for compression/decompression isn’t that much of a heavy lift. Doing it at the drive level isn't ideal though, since you'd need to shift uncompressed data over PCIe in that case. To match what the consoles are doing we'd need decompression built into the CPU so it can stream compressed data in over PCIe and dump it straight into RAM.
|
# ? May 14, 2020 00:11 |
|
|
repiv posted:Doing it at the drive level isn't ideal though, since you'd need to shift uncompressed data over PCIe in that case. To match what the consoles are doing we'd need decompression built into the CPU so it can stream compressed data in over PCIe and dump it straight into RAM. Well, you can do the ~5GB/s over a normal PCIe 4 link without too much effort, and such drives exist already. But yeah, to hit 9GB/s you'd need to build a SSD with a whole lot more channels that intentionally targeted that sort of bandwidth.
|
# ? May 14, 2020 00:16 |