Potato Salad
Oct 23, 2014

nobody cares


Not to anyone's knowledge, but it's sure as hell on my radar now too, next to "When is the next Intel DC card going to try to kill itself?" and "When is the next speculative execution issue requiring a fix that costs 10% of my cluster performance going to drop?", and I don't like that

Potato Salad fucked around with this message at 04:48 on Mar 1, 2023


repiv
Aug 13, 2009

vweeeeeeeeeeeeeeeeeeeeeeeeee

https://twitter.com/VideoCardz/status/1630928577565687810

https://www.youtube.com/watch?v=UsEGqhAPjQU&t=729s

totally worth it

Shumagorath
Jun 6, 2001
I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
A 4TB Crucial MX500 is $240 at the moment. Not the newest drive by far, but it'll do the job you need it to.

If you want closer to 8TB, look at the Micron 5300 Pro: https://www.cdw.com/product/micron-...&cm_ite=7076053

And the Samsung PM893 (not QLC): https://www.amazon.com/dp/B0B83T9KNB

BIG HEADLINE fucked around with this message at 04:15 on Mar 3, 2023

redeyes
Sep 14, 2002

by Fluffdaddy

Shumagorath posted:

I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying.

See, according to the other posters, you're fine; normal people don't need SSDs to maintain high performance.

WD Red is my vote.

redeyes fucked around with this message at 16:27 on Mar 3, 2023

orcane
Jun 13, 2012

Fun Shoe
There's a huge difference between a QLC drive and the DRAM-less modern PCIe 4.0 budget SSDs people were recommending. The disadvantages of QLC drives are well known and trying to fill such a drive immediately is neither a thing they're good at nor a thing "normal people" do all the time, so no, other posters didn't actually recommend that.

But you knew all that :thumbsup:
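
For anyone who wants to picture the failure mode: a QLC drive writes at full speed only while its pseudo-SLC cache has room, then drops to its native QLC rate. A toy model of that, with made-up numbers for illustration (not actual 870 QVO specs):

code:
# Toy model of a QLC drive's pseudo-SLC write cache.
# Numbers are illustrative guesses, NOT actual 870 QVO specs.
CACHE_GB = 75      # dynamic SLC cache available on an empty drive
FAST_MBPS = 530    # near-SATA3 speed while the cache has room
SLOW_MBPS = 160    # native QLC speed once the cache is full

def write_time_hours(total_gb: float) -> float:
    """Time to sequentially write total_gb to a fresh drive."""
    fast_gb = min(total_gb, CACHE_GB)
    slow_gb = max(total_gb - CACHE_GB, 0)
    seconds = fast_gb * 1024 / FAST_MBPS + slow_gb * 1024 / SLOW_MBPS
    return seconds / 3600

for gb in (50, 75, 500, 2000):
    print(f"{gb:5d} GB -> {write_time_hours(gb):5.2f} h")

Fill half of a fresh 4TB drive in one go and almost all of that time is spent at the slow rate, which is exactly the wall described above.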

Cygni
Nov 12, 2005

raring to post

redeyes posted:

See, according to the other posters, you're fine; normal people don't need SSDs to maintain high performance.

I dunno if doing half-drive writes on multiple drives out of the box is "normal people" stuff tbh.


Shumagorath posted:

I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying.

I think this graph might be the most useful for that use case:

[image: sustained write speed comparison graph]

Can deff see the QVO deficit there. I'm personally an MX500 lover and would recommend them.
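
If you'd rather measure it than read graphs, a rough probe is easy: write big chunks to the drive under test and watch per-chunk throughput fall off as the cache runs out. A sketch (the target path is a placeholder; it needs ~100 GiB free and will genuinely hammer the drive):

code:
# Rough sustained-write probe. Writes ~100 GiB in 1 GiB chunks and
# prints per-chunk throughput; the fsync keeps the OS page cache from
# flattering the numbers too much. TARGET is a placeholder path.
import os, time

TARGET = r"E:\write_probe.bin"       # drive under test (placeholder)
CHUNK = 1024 * 1024 * 1024           # 1 GiB per sample
TOTAL_CHUNKS = 100
buf = os.urandom(64 * 1024 * 1024)   # 64 MiB of incompressible data

with open(TARGET, "wb", buffering=0) as f:
    for i in range(TOTAL_CHUNKS):
        t0 = time.perf_counter()
        written = 0
        while written < CHUNK:
            f.write(buf)
            written += len(buf)
        os.fsync(f.fileno())
        dt = time.perf_counter() - t0
        print(f"chunk {i:3d}: {CHUNK / dt / 1e6:7.1f} MB/s")
os.remove(TARGET)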

Shumagorath
Jun 6, 2001
I went with WD Red because my patience with Samsung is extremely thin, and I couldn’t ask management to pay for the same brand after accepting those QVO lemons. Can’t knock my T7 though; that’s been solid.

Instant Grat
Jul 31, 2009

Just add
NERD RAAAAAAGE
Is the WD SN850X one of those "oh no there's a huge flaw, it'll eat its own rear end if you don't update the firmware" drives? Because I just bought one and stuck it in my PS5.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
No, it’s fine. I just installed one as a new system drive.

Instant Grat
Jul 31, 2009

Just add
NERD RAAAAAAGE
Rad, thanks

Potato Salad
Oct 23, 2014

nobody cares


Will it eat its own rear end?

strangehamster
Sep 21, 2010

dance the night away


Is that considered good or bad?!?!

Instant Grat
Jul 31, 2009

Just add
NERD RAAAAAAGE
certainly not something i want happening in my playstation 5 video games console

redeyes
Sep 14, 2002

by Fluffdaddy
WD Blacks are trusted in my neck of the woods.... UNTIL???!

Anime Schoolgirl
Nov 28, 2002

it'll take 3 months for these manchild adult's toys to get here, but has anyone had any experience with HMB drives like the SN570 on things as old as, say, launch-day X370/B350 boards? Wondering if I have to worry about a board being too old for HMB to work, or whether it's done at the OS level and I don't have to worry as a result.

WhyteRyce
Dec 30, 2001

All my sata and HDD drives have been removed

Welcome to the future

Instant Grat
Jul 31, 2009

Just add
NERD RAAAAAAGE

WhyteRyce posted:

All my sata and HDD drives have been removed

Welcome to the future

my future includes a 20tb spinner in a prebuilt HP """gamer""" shitbox. highly recommend

makere
Jan 14, 2012
Anyone have experience with NVMe drives and AMD RAID?
My future PC build plan includes 6x 2TB SN770 SSDs in RAID 0, with 4 on the motherboard and 2 on a PCIe add-on card, and I'm trying to figure out if I will run into issues with this.

Klyith
Aug 3, 2007

GBS Pledge Week

makere posted:

Anyone have experience with NVMe drives and AMD RAID?
My future PC build plan includes 6x 2TB SN770 SSDs in RAID 0, with 4 on the motherboard and 2 on a PCIe add-on card, and I'm trying to figure out if I will run into issues with this.

motherboard raid sucks and shouldn't be used on either AMD or Intel versus OS/filesystem methods
raid 0 sucks and shouldn't be used for a PC, especially not in a 6(!) drive configuration
there is no conceivable reason why someone who needs that type of bandwidth should be considering mobo raid


But if you want to do something immensely stupid, then yes, Ryzen supports 6 NVMe drives (docs for 300-500 series say up to 8 drives per array).

The main issue you might have is "2 on a PCIe addon card" -- what PCIe card? You probably need one that isn't dependent on the mobo supporting bifurcation.

makere
Jan 14, 2012

Klyith posted:

motherboard raid sucks and shouldn't be used on either AMD or Intel versus OS/filesystem methods
raid 0 sucks and shouldn't be used for a PC, especially not in a 6(!) drive configuration
there is no conceivable reason why someone who needs that type of bandwidth should be considering mobo raid


But if you want to do something immensely stupid, then yes, Ryzen supports 6 NVMe drives (docs for 300-500 series say up to 8 drives per array).

The main issue you might have is "2 on a PCIe addon card" -- what PCIe card? You probably need one that isn't dependent on the mobo supporting bifurcation.

I am thinking about the Asus ProArt X670E motherboard and the Asus Hyper M.2 Adapter or other similar bifurcation adapter.
I've already checked that I should be able to use 2 NVMes in the second PCIe slot (or 4 in the first slot) with the adapter.

My main goal is to get all the disks into a single volume (12TB minus overhead) and to be able to expand it in the future. I run weekly/daily backups so I'm not really worried about losing data, and Windows will be on a separate SATA drive.

I'm mostly worried about whether I'll even be able to mix the mobo SSDs and expansion-slot SSDs into the same array, and whether the AMD drivers will cause instability.

Klyith
Aug 3, 2007

GBS Pledge Week

makere posted:

I am thinking about the Asus ProArt X670E motherboard and the Asus Hyper M.2 Adapter or other similar bifurcation adapter.
I've already checked that I should be able to use 2 NVMes in the second PCIe slot (or 4 in the first slot) with the adapter.

My main goal is to get all the disks into a single volume (12TB minus overhead) and to be able to expand it in the future.

Expanding in the future is almost certainly a no. Or at least not without getting all data off, recreating the array with more drives, and then putting everything back.


Also the mobo only supports using the Hyper M.2 Adapter with all 4 drives in PCIe slot 1, with the unspoken addendum that slot 2 needs to be empty to do that -- slot 1 and 2 split to 2x8 if you put stuff in both. And slot 3 shares lanes with m.2 #3, so your GPU would be running at x2 speed lmao. So 6 drives is kinda your max.

As I said, you're doing something pretty stupid. If this was for a real purpose, this setup should be on a threadripper or epyc system, because those have plenty of PCIe lanes.


makere posted:

I'm mostly worried about whether I'll even be able to mix the mobo SSDs and expansion-slot SSDs into the same array, and whether the AMD drivers will cause instability.

Should be no difference between the mobo m.2 slots and the PCIe slots, they're all just PCIe. Drivers are fine.

Two of your m.2 slots are behind the 2nd chipset and thus will bottleneck across the PCIe x4 connection between the two chipset chiplets, for whatever that's worth.
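
The back-of-envelope version of that bottleneck, if anyone wants numbers (theoretical PCIe 4.0 figures; real-world throughput lands lower):

code:
# Rough PCIe bandwidth math for the proposed 6-drive array.
# Theoretical PCIe 4.0 figures; real-world throughput is lower.
GBPS_PER_LANE_GEN4 = 1.97                # ~GB/s per PCIe 4.0 lane

drive_link = 4 * GBPS_PER_LANE_GEN4      # each SN770 gets an x4 link
print(f"one drive's own link:   {drive_link:5.1f} GB/s")

# Two of the m.2 slots hang off the daisy-chained second chipset, and
# everything behind it shares a single x4 uplink.
chipset_uplink = 4 * GBPS_PER_LANE_GEN4
print(f"shared chipset uplink:  {chipset_uplink:5.1f} GB/s")
print(f"so those two drives get at most ~{chipset_uplink / 2:.1f} GB/s "
      "each when both are busy")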

Klyith fucked around with this message at 18:01 on Mar 10, 2023

makere
Jan 14, 2012

Klyith posted:

Expanding in the future is almost certainly a no. Or at least not without getting all data off, recreating the array with more drives, and then putting everything back.
Thanks, I've mainly dealt with RAID 1/5/6 before, so I didn't realise that one can't expand RAID 0 easily. This might actually make me go with JBOD or Windows Storage Spaces instead; I need to do some research on Storage Spaces first.

Klyith posted:

Also the mobo only supports using the Hyper M.2 Adapter with all 4 drives in PCIe slot 1, with the unspoken addendum that slot 2 needs to be empty to do that -- slot 1 and 2 split to 2x8 if you put stuff in both. And slot 3 shares lanes with m.2 #3, so your GPU would be running at x2 speed lmao. So 6 drives is kinda your max.
Right after posting I noticed as well that the first slot will limit itself to x8 if the second one is populated, but I could add a 7th drive to PCIe slot 3.

Klyith posted:

As I said, you're doing something pretty stupid. If this was for a real purpose, this setup should be on a threadripper or epyc system, because those have plenty of PCIe lanes.
Personally I don't really see the stupid part: I have 6x 2TB NVMe drives and a need for 12TB of storage for games and other not-that-critical stuff. This setup is to replace my current 6x3TB RAID 6 HDD setup.

Klyith posted:

Should be no difference between the mobo m.2 slots and the PCIe slots, they're all just PCIe. Drivers are fine.
Thanks, this is kinda what I'd assumed, but having no experience with the AMD platform is making me question myself.

Klyith
Aug 3, 2007

GBS Pledge Week

makere posted:

Personally I don't really see the stupid part: I have 6x 2TB NVMe drives and a need for 12TB of storage for games and other not-that-critical stuff. This setup is to replace my current 6x3TB RAID 6 HDD setup.

Mostly using raid 0 (almost always bad) and mobo raid (bad compared to alternatives).

I kinda assumed that you wanted raid 0 for "performance", which 9/10 times is why someone wants to use it and a classic stupid enthusiast idea. If that was not your reasoning it's still a mistake, but not quite so stupid. You should definitely look at storage spaces instead if you want to put drives together.



But also, if you need 12TB of storage for games and non-critical stuff, you probably have better choices than 6 2TB drives which require a weird & constraining setup. For example, the Crucial P3 4TB is $220.

If I really wanted a bigass $800 all-solid setup like this, I would probably choose:
1x 2TB SN770 to host your OS ($120)
3x 4TB P3 as bulk & game storage ($660)

Final price is the same (you can ditch the hyper m.2), you get 14TB instead of 12, and a much simpler & more portable configuration. Downside is those P3 drives are QLC, so dumping all your data onto them the first time can take a while. In a 3-drive storage spaces config they'll be writing at ~1/2 GB per second, so several hours to fill. (Though if sending data over the network, for anything slower than 10GbE the network will be the limit.)

And if you need to expand in the future, then you can do the Hyper M.2 and add 2 more drives.
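
The back-of-envelope math on the fill time, with round numbers assumed:

code:
# Sanity check on the fill-time estimate (round numbers assumed).
data_tb = 12
qlc_steady_gbs = 0.5        # ~1/2 GB/s across three P3s, caches full
networks = {"10GbE": 1.25, "2.5GbE": 0.3125}   # line rates in GB/s

print(f"drive-limited fill: {data_tb * 1000 / qlc_steady_gbs / 3600:.1f} h")

for name, net_gbs in networks.items():
    rate = min(net_gbs, qlc_steady_gbs)
    limiter = "network" if net_gbs < qlc_steady_gbs else "drives"
    hours = data_tb * 1000 / rate / 3600
    print(f"over {name}: {limiter}-limited at {rate:.2f} GB/s -> {hours:.1f} h")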

makere
Jan 14, 2012

Klyith posted:

If I really wanted a bigass $800 all-solid setup like this, I would probably choose:
1x 2TB SN770 to host your OS ($120)
3x 4TB P3 as bulk & game storage ($660)

Thanks for the suggestion, but the drives are $0 as I already have them for various reasons. If I were buying new I wouldn't do this at all, or I'd go for 4-8TB drives.

Yaoi Gagarin
Feb 20, 2014

The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that.
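
To put rough numbers on the blast radius (the per-drive failure rate here is an assumed illustration, not a real AFR):

code:
# RAID 0 vs JBOD blast radius for the 6x2TB setup being discussed.
# p_fail is an assumed illustrative annual failure rate, not real data.
n_drives = 6
drive_tb = 2
p_fail = 0.015

p_any = 1 - (1 - p_fail) ** n_drives
print(f"chance at least one drive fails in a year: {p_any:.1%}")

# RAID 0: any single failure takes the whole array with it.
print(f"RAID 0 restore after one failure: {n_drives * drive_tb} TB")
# JBOD/pool: you re-copy roughly one drive's worth of data.
print(f"JBOD restore after one failure:   {drive_tb} TB")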

makere
Jan 14, 2012

VostokProgram posted:

The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that.

Some hours over a 10Gbit/s Ethernet link; it's not business-critical data so I can take that downtime.

I might try out RAID 0 at first just to see what the performance is like, then fall back to Storage Spaces if there are issues.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
Just out of curiosity, is doing 2x10gb (identical) with Windows Storage Spaces reasonable just because I want one drive letter, or is it a bad idea for some reason?

CopperHound
Feb 14, 2012

You can also use NTFS directory mount points.

Cygni
Nov 12, 2005

raring to post

i was thinkin JBOD/Storage Spaces probably makes more sense than dealing with AMD's raid 0.

I use Storage Spaces on both my ITX htpc/NAS seedbox and my gaming computer, and it's been good. I have used various heterogeneous and homogeneous drive configs over the years, and it even handled a drive failure in a mirrored pool with grace.

Currently have 2x12tb and 2x2tb spinners in it in a mirrored pool on the NAS and 2x2tb MX500s on my gaming computer in a simple array for games. If your needs are more home gamer than pro, I think they work pretty well.

Klyith
Aug 3, 2007

GBS Pledge Week

Dogen posted:

Just out of curiosity is doing 2x10gb (identical) using windows storage spaces reasonable just because I want one drive letter or is it a bad idea for some reason?

The Storage Space configurations are simple, mirror, and parity. 'Simple' has no redundancy and writes data across drives in a RAID 0-type way, i.e. if one drive dies, you should expect to lose most of everything.

It's still a vastly better way to do it than mobo raid, because if your mobo dies you can put those drives in a different PC and it'll work fine.
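
A toy of why that is, if the striping isn't obvious (Storage Spaces reportedly allocates in 256MB slabs; the strict round-robin here is a simplification):

code:
# Toy model of a "simple" (striped, no-redundancy) two-drive pool:
# allocation slabs alternate across drives, so any large file touches
# both drives and dies with either one. Slab size and strict
# round-robin are simplifying assumptions.
SLAB_MB = 256
DRIVES = 2

def drives_touched(file_mb: int) -> set[int]:
    n_slabs = -(-file_mb // SLAB_MB)      # ceiling division
    return {slab % DRIVES for slab in range(n_slabs)}

for size_mb in (100, 256, 1024, 50_000):
    touched = drives_touched(size_mb)
    can_survive = len(touched) < DRIVES   # might miss the dead drive
    print(f"{size_mb:6d} MB file touches drives {sorted(touched)}; "
          f"can survive one drive dying: {can_survive}")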

CopperHound posted:

You can also use NTFS directory mount points.

This is what I always used to do. Simple, manual, minimal.

It does have one downside -- an NTFS volume mounted into a directory sometimes works poorly with the recycle bin. But I pretty much use only shift-del to delete stuff, so it never bothered me.
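
If anyone tries mount points and wants to sanity-check that a folder really is one and not just an ordinary directory, Python can tell you (the path is a made-up example):

code:
# Check whether a folder is a volume mount point rather than a plain
# directory. The path is a made-up example.
import os

print(os.path.ismount(r"C:\Storage\Bulk"))   # True if a volume is mounted there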

CopperHound
Feb 14, 2012

Klyith posted:

It does have one downside -- an NTFS volume mounted into a directory sometimes works poorly with the recycle bin. But I pretty much use only shift-del to delete stuff, so it never bothered me.
Does it seriously actually move the file to the root directory recycle bin? That is hilarious and totally on brand for Microsoft.

Yaoi Gagarin
Feb 20, 2014

Does Storage Spaces do block striping across the drives? I thought it was a file-level thing

redeyes
Sep 14, 2002

by Fluffdaddy

VostokProgram posted:

Does Storage Spaces do block striping across the drives? I thought it was a file-level thing

Yeah, it's block level, not file level. And DON'T USE IT FOR THE LOVE OF GOD. It's loving trash; it lacks basic features.

makere
Jan 14, 2012

redeyes posted:

Yeah, it's block level, not file level. And DON'T USE IT FOR THE LOVE OF GOD. It's loving trash; it lacks basic features.

So AMD RAID is trash, Storage Spaces is trash.

What do we use on Windows without an expensive dedicated RAID controller?

Cygni
Nov 12, 2005

raring to post

redeyes posted:

Yeah, it's block level, not file level. And DON'T USE IT FOR THE LOVE OF GOD. It's loving trash; it lacks basic features.

Why? What features are missing for a home user? Honest question, I've been using it for years.

redeyes
Sep 14, 2002

by Fluffdaddy

Cygni posted:

Why? What features are missing for a home user? Honest question, I've been using it for years.

The issue becomes: what happens when it goes wrong? Have you had a failure?

Sydney Bottocks
Oct 15, 2004
Probation
Can't post for 19 days!

VostokProgram posted:

The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that.

Not familiar with Windows Storage Spaces, but I can confirm that it does work under StableBit DrivePool. I had two spinning disks, a 4 TB WD and a 3 TB Seagate, that I'd used for a good long while and pooled together into a roughly 7 TB "drive" just for video game installs. The other day the 3 TB drive finally took a poo poo. I managed to resurrect it just long enough to get the game install data off of it and onto the 4 TB drive. That was actually pretty simple, since it was just a matter of going into the hidden folder DrivePool puts on the drive (that's where the "pool" resides, so any files you move in there show up under the pooled drive letter) and cutting and pasting the game folders over to the other drive -- not counting the nervous moments when I was afraid the dying drive would finally give up the ghost and kill the file transfer.
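
For anyone who ends up doing the same rescue: going file-by-file and skipping anything unreadable beats one big transfer that dies halfway, and copying rather than cutting means a mid-move death doesn't eat files. A minimal sketch; the paths are examples, and DrivePool's hidden folder is actually named PoolPart followed by a long ID:

code:
# Best-effort salvage copy off a dying drive: per-file, skipping
# failures, so one unreadable file doesn't kill the whole transfer.
# Paths are examples only.
import os
import shutil
from pathlib import Path

SRC = Path(r"F:\PoolPart.example")   # hidden pool folder on dying drive
DST = Path(r"D:\rescued")
failed = []

for root, dirs, files in os.walk(SRC):   # os.walk skips unreadable dirs
    for name in files:
        src_file = Path(root) / name
        dst_file = DST / src_file.relative_to(SRC)
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        try:
            shutil.copy2(src_file, dst_file)
        except OSError as exc:
            failed.append((src_file, exc))

print(f"{len(failed)} files failed to copy")
for path, exc in failed:
    print(f"  {path}: {exc}")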

Cygni
Nov 12, 2005

raring to post

redeyes posted:

The issue becomes: what happens when it goes wrong? Have you had a failure?

Yeah, I had a failure in my two-way mirrored pool. Got a Windows warning message that I had lost redundancy. I swapped it for another drive (thankfully the failure was just an old 1TB 2.5in laptop spinner I had salvaged, which had horrific stats even when new), added the new drive to the pool, then deleted the old drive out of it and let it fix itself. Same pool is still running fine years later. I admittedly might be skipping some steps I did, I can't really remember.

I've also disconnected/reconnected drives from the pool multiple times without issues, added and removed drives from the pool permanently, added USB drives/NVMe, etc., and haven't had any problems.

On the "simple" array in my gaming computer, I haven't had a failure yet... but I am fully aware that a failure in that pool will mean losing everything since it's striped, but that's why it's the games install drive.


redeyes
Sep 14, 2002

by Fluffdaddy

Cygni posted:

Yeah, I had a failure in my two-way mirrored pool. Got a Windows warning message that I had lost redundancy. I swapped it for another drive (thankfully the failure was just an old 1TB 2.5in laptop spinner I had salvaged, which had horrific stats even when new), added the new drive to the pool, then deleted the old drive out of it and let it fix itself. Same pool is still running fine years later. I admittedly might be skipping some steps I did, I can't really remember.

I've also disconnected/reconnected drives from the pool multiple times without issues, added and removed drives from the pool permanently, added USB drives/NVMe, etc., and haven't had any problems.

On the "simple" array in my gaming computer, I haven't had a failure yet... but I am fully aware that a failure in that pool will mean losing everything since it's striped, but that's why it's the games install drive.

This is great info then. Last time I tried this crap was when Win 10 came out; I simulated a drive failure by just disconnecting a drive from the pool, and it wouldn't let me either remove the failed drive or add a new one. If they at least got that working, that's the bare minimum. I still use StableBit DrivePool now, however. I prefer file-level logical RAID for potato home stuff.
