|
Not to anyone's knowledge, but it's sure as hell on my radar now too, next to "When is the next Intel DC card going to try to kill itself" and "When is the next speculative execution issue requiring a fix costing 10% of my cluster performance going to drop" and I don't like that
Potato Salad fucked around with this message at 04:48 on Mar 1, 2023 |
# ? Mar 1, 2023 04:46 |
|
|
vweeeeeeeeeeeeeeeeeeeeeeeeee https://twitter.com/VideoCardz/status/1630928577565687810 https://www.youtube.com/watch?v=UsEGqhAPjQU&t=729s totally worth it
|
# ? Mar 1, 2023 15:11 |
|
I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying.
|
# ? Mar 3, 2023 03:09 |
|
A 4TB Crucial MX500 is $240 at the moment. Not the newest drive by far, but it'll do the job you need done. If you want closer to 8TB, look at the Micron 5300 Pro: https://www.cdw.com/product/micron-...&cm_ite=7076053 And the Samsung PM893 (not QLC): https://www.amazon.com/dp/B0B83T9KNB BIG HEADLINE fucked around with this message at 04:15 on Mar 3, 2023 |
# ? Mar 3, 2023 04:00 |
|
Shumagorath posted:I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying. See, according to the other posters, you are fine, normal people don't need SSDs to maintain high performance. WD Red is my vote. redeyes fucked around with this message at 16:27 on Mar 3, 2023 |
# ? Mar 3, 2023 16:10 |
|
There's a huge difference between a QLC drive and the DRAM-less modern PCIe 4.0 budget SSDs people were recommending. The disadvantages of QLC drives are well known and trying to fill such a drive immediately is neither a thing they're good at nor a thing "normal people" do all the time, so no, other posters didn't actually recommend that. But you knew all that
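The slowdown pattern described above (full speed for the first ~75GB, then slower than a spinning disk) is the SLC write cache filling up. A back-of-envelope sketch of the effect; all the numbers are illustrative assumptions, not measured 870 QVO specs:

```python
# Rough model of an SLC-cached QLC drive's sustained write behavior.
# All figures are illustrative assumptions, not measured 870 QVO specs.

def fill_time_seconds(total_gb, cache_gb, cache_speed_mbs, folded_speed_mbs):
    """Time to write total_gb: the cached portion goes at full speed,
    the remainder at the slower post-cache (QLC folding) speed."""
    cached = min(total_gb, cache_gb)
    remainder = total_gb - cached
    return cached * 1000 / cache_speed_mbs + remainder * 1000 / folded_speed_mbs

# Assumed: ~75 GB cache (the point where the drive slowed down above),
# ~530 MB/s inside the cache (SATA3 limit), ~160 MB/s once folding to QLC.
hours = fill_time_seconds(2000, 75, 530, 160) / 3600
print(f"~{hours:.1f} h to half-fill a 4TB drive")  # vs ~1 h at full SATA3 speed
```

Half of a 4TB drive is 2TB; at sustained SATA3 speed that's about an hour, so anything much past that is the cache deficit being described.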
|
# ? Mar 3, 2023 18:58 |
|
redeyes posted:See, according to the other posters, you are fine, normal people don't need SSDs to maintain high performance. I dunno if doing half drive writes of multiple drives out of the box is "normal people" stuff tbh. Shumagorath posted:I found out the hard way that Samsung 870 QVOs are very, very bad at what I need them to do. Who makes a good 4-8TB 2.5" SSD that will write ~50% of a fresh drive at full SATA3 speeds? The Samsung immediately ate poo poo at 75GB and stayed significantly slower than the spinning disk it was copying from. I'm leaning toward WD Blue or Red as my boss is paying. I think this graph might be the most useful for that use case: Can deff see the QVO deficit there. I'm personally a MX500 lover and would recommend them.
|
# ? Mar 3, 2023 22:18 |
|
I went with WD Red because my patience with Samsung is extremely thin, and I couldn’t ask management to pay for the same brand after accepting those QVO lemons. Can’t knock my T7 though; that’s been solid.
|
# ? Mar 4, 2023 06:48 |
|
Is the WD SN850X one of those "oh no there's a huge flaw, it'll eat its own rear end if you don't update the firmware" drives? Because I just bought one and stuck it in my PS5.
|
# ? Mar 7, 2023 17:39 |
|
No, it’s fine. I just installed one as a new system drive.
|
# ? Mar 7, 2023 17:43 |
|
Rad, thanks
|
# ? Mar 7, 2023 17:45 |
|
Will it eat its own rear end?
|
# ? Mar 8, 2023 01:18 |
|
Is that considered good or bad?!?!
|
# ? Mar 8, 2023 03:23 |
|
certainly not something i want happening in my playstation 5 video games console
|
# ? Mar 8, 2023 12:47 |
|
WD blacks are trusted in my neck of the woods.... UNTIL???!
|
# ? Mar 9, 2023 15:02 |
|
it'll take 3 months for these
|
# ? Mar 10, 2023 05:22 |
|
All my sata and HDD drives have been removed. Welcome to the future
|
# ? Mar 10, 2023 07:53 |
|
WhyteRyce posted:All my sata and HDD drives have been removed my future includes a 20tb spinner in a prebuilt HP """gamer""" shitbox. highly recommend
|
# ? Mar 10, 2023 09:27 |
|
Does anyone have experience with NVMe drives and AMD Raid? My future PC build plan includes 6x 2TB SN770 SSDs in raid0, with 4 on the motherboard and 2 on a PCIe add-on card, and I'm trying to figure out if I will run into issues with this.
|
# ? Mar 10, 2023 13:57 |
|
makere posted:Anyone has experience with NVMe drives and AMD Raid? Motherboard raid sucks and shouldn't be used on either AMD or intel versus OS/filesystem methods. Raid 0 sucks and shouldn't be used for a PC, and especially not in a 6(!) drive configuration; there is no conceivable reason why someone who needs that type of bandwidth should be considering mobo raid. But if you want to do something immensely stupid, then yes, ryzens support 6 NVMe drives (docs for 300-500 say up to 8 drives per array). The main issue you might have is "2 on a PCIe addon card" -- what PCIe card? You probably need one that isn't dependent on the mobo supporting bifurcation.
|
# ? Mar 10, 2023 15:16 |
|
Klyith posted:motherboard raid sucks and shouldn't be used on either AMD or intel versus OS/filesystem methods I am thinking about the Asus ProArt X670E motherboard and the Asus Hyper M.2 Adapter or another similar bifurcation adapter. I've already checked that I should be able to use 2 NVMes in the second PCIe slot (or 4 in the first slot) with the adapter. My main goal is to get all the drives into a single volume (12TB minus overhead), and to be able to expand it in the future. I run weekly/daily backups and I'm not really worried about losing data, and Windows will be on a separate SATA drive. I'm mostly worried about whether I'll even be able to mix the mobo SSDs and expansion slot SSDs into the same array, and whether the AMD drivers will cause instability.
|
# ? Mar 10, 2023 15:55 |
|
makere posted:I am thinking about the Asus ProArt X670E motherboard and the Asus Hyper M.2 Adapter or other similar bifurcation adapter. Expanding in the future is almost certainly a no. Or at least not without getting all data off, recreating the array with more drives, and then putting everything back. Also the mobo only supports using the Hyper M.2 Adapter with all 4 drives in PCIe slot 1, with the unspoken addendum that slot 2 needs to be empty to do that -- slot 1 and 2 split to 2x8 if you put stuff in both. And slot 3 shares lanes with m.2 #3, so your GPU would be running at x2 speed lmao. So 6 drives is kinda your max. As I said, you're doing something pretty stupid. If this was for a real purpose, this setup should be on a threadripper or epyc system, because those have plenty of PCIe lanes. makere posted:I'm mostly worried that if I will even be able to mix the mobo SSDs and expansion slot SSDs into same array, and if the AMD drivers will cause instability. Should be no difference between the mobo m.2 slots and the PCIe slots, they're all just PCIe. Drivers are fine. Two of your m.2 slots are behind the 2nd chipset and thus will bottleneck across the PCIe x4 connection between the two chipset chiplets, for whatever that's worth. Klyith fucked around with this message at 18:01 on Mar 10, 2023 |
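Klyith's chipset caveat can be roughed out numerically. A quick sketch; the figures (PCIe 4.0 line rate with 128b/130b encoding, an x4 link between the chipset chiplets, ~5 GB/s sequential per SN770) are assumptions for illustration, not board measurements:

```python
# Rough bottleneck check for two NVMe drives behind the second chipset.
# Assumed: PCIe 4.0 at 16 GT/s with 128b/130b encoding, x4 uplink,
# ~5 GB/s sequential per SN770. Illustrative figures, not measured.

lane_gbs = 16e9 * (128 / 130) / 8 / 1e9  # usable GB/s per PCIe 4.0 lane
uplink = 4 * lane_gbs                    # x4 link between chipset chiplets
demand = 2 * 5.0                         # two SN770s writing flat out

print(f"uplink ~{uplink:.1f} GB/s vs demand ~{demand:.1f} GB/s")
```

So two fast Gen4 drives sharing that uplink top out around ~7.9 GB/s combined instead of ~10 GB/s, before counting any other chipset traffic (SATA, USB, networking) on the same link.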
# ? Mar 10, 2023 17:01 |
|
Klyith posted:Expanding in the future is almost certainly a no. Or at least not without getting all data off, recreating the array with more drives, and then putting everything back. Klyith posted:Also the mobo only supports using the Hyper M.2 Adapter with all 4 drives in PCIe slot 1, with the unspoken addendum that slot 2 needs to be empty to do that -- slot 1 and 2 split to 2x8 if you put stuff in both. And slot 3 shares lanes with m.2 #3, so your GPU would be running at x2 speed lmao. So 6 drives is kinda your max. Klyith posted:As I said, you're doing something pretty stupid. If this was for a real purpose, this setup should be on a threadripper or epyc system, because those have plenty of PCIe lanes. Personally I don't really see the stupid part, I have 6x 2TB NVMe drives and a need for 12TB storage for games and other not that critical stuff, this setup is to replace my current 6x3TB Raid6 HDD setup. Klyith posted:Should be no difference between the mobo m.2 slots and the PCIe slots, they're all just PCIe. Drivers are fine.
|
# ? Mar 10, 2023 17:22 |
|
makere posted:Personally I don't really see the stupid part, I have 6x 2TB NVMe drives and a need for 12TB storage for games and other not that critical stuff, this setup is to replace my current 6x3TB Raid6 HDD setup. Mostly using raid 0 (almost always bad) and mobo raid (bad compared to alternatives). I kinda assumed that you wanted raid 0 for "performance", which 9/10 times is why someone wants to use it and a classic stupid enthusiast idea. If that was not your reasoning it's still a mistake, but not quite so stupid. You should definitely look at storage spaces instead if you want to put drives together. But also, if you need 12TB of storage for games and non-critical stuff, you probably have better choices than 6 2TB drives which require a weird & constraining setup. For example, the Crucial P3 4TB is $220. If I really wanted a bigass $800 all-solid setup like this, I would probably choose:
1x 2TB SN770 to host your OS ($120)
3x 4TB P3 as bulk & game storage ($660)
Final price is the same (you can ditch the hyper m.2), you get 14TB instead of 12, and a much simpler & more portable configuration. Downside is those P3 drives are QLC, so dumping all your data onto them the first time can take a while. In a 3-drive storage spaces config they'll be writing at ~1/2 GB per second, so several hours to fill. (Though if sending data over the network, for anything slower than 10GbE the network will be the limit.) And if you need to expand in the future, then you can do the Hyper M.2 and add 2 more drives.
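A quick sanity check of the numbers in that suggestion, using the prices as quoted in the post and the post's own ~0.5 GB/s estimate for the 3-drive QLC pool:

```python
# Check the suggested build's totals (prices as quoted in the post).
drives = [("SN770 2TB (OS)", 2, 120)] + [("P3 4TB", 4, 220)] * 3
capacity_tb = sum(tb for _, tb, _ in drives)
cost = sum(price for _, _, price in drives)
print(capacity_tb, cost)  # 14 TB for $780

# Time to land 12 TB of data at the post's ~0.5 GB/s QLC estimate:
hours = 12e12 / 0.5e9 / 3600
print(f"~{hours:.1f} h")
```

That comes out to 14TB for $780 and roughly 6-7 hours for the full initial dump, which lines up with the "several hours to fill" estimate.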
|
# ? Mar 10, 2023 18:15 |
|
Klyith posted:If I really wanted a bigass $800 all-solid setup like this, I would probably choose: Thanks for the suggestion, but the drives are $0 as I already have them for various reasons; if I were buying new I wouldn't do this at all, or would go for 4-8TB drives.
|
# ? Mar 10, 2023 18:22 |
|
The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that.
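The difference is easy to put numbers on. An illustrative sketch for the 6x2TB setup being discussed (the 10GbE figure ignores protocol overhead):

```python
# Data-at-risk after one drive failure: striped (raid0-style) loses the
# whole array; JBOD/pooled (file-level) loses only that drive's files.
n_drives, tb_each = 6, 2
total_tb = n_drives * tb_each            # 12 TB array

restore_striped = total_tb               # everything comes back over the wire
restore_jbod = tb_each                   # worst case: the dead drive was full

gbe10 = 1.25e9                           # 10GbE ~= 1.25 GB/s, ignoring overhead
for name, tb in [("striped", restore_striped), ("jbod", restore_jbod)]:
    print(f"{name}: restore ~{tb} TB, ~{tb * 1e12 / gbe10 / 3600:.1f} h over 10GbE")
```

Even on a fast link the striped restore is ~6x longer, and that's the best case where the backup source can actually saturate 10GbE.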
|
# ? Mar 10, 2023 18:39 |
|
VostokProgram posted:The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that. Some hours over a 10Gbit/s ethernet link; it's not business-critical data so I can take that downtime. I might try out raid0 just to see what the performance is like at first, then fall back to storage spaces if there are issues.
|
# ? Mar 10, 2023 19:30 |
|
Just out of curiosity is doing 2x10gb (identical) using windows storage spaces reasonable just because I want one drive letter or is it a bad idea for some reason?
|
# ? Mar 10, 2023 19:35 |
|
You can also use ntfs directory mount points.
|
# ? Mar 10, 2023 20:00 |
|
i was thinkin JBOD/Storage Spaces probably makes more sense than dealing with AMD's raid 0. I use Storage Spaces on both my ITX htpc/NAS seedbox and my gaming computer and it's been good. Have used different heterogeneous and homogeneous drive configs over the years, and it has even handled a drive failure in a mirrored pool with grace. Currently have 2x12tb and 2x2tb spinners in a mirrored pool on the NAS and 2x2tb MX500s on my gaming computer in a simple array for games. If your needs are more home gamer than pro, I think they work pretty well.
|
# ? Mar 10, 2023 20:02 |
|
Dogen posted:Just out of curiosity is doing 2x10gb (identical) using windows storage spaces reasonable just because I want one drive letter or is it a bad idea for some reason? The Storage Space configurations are simple, mirror, and parity. 'Simple' has no redundancy and writes data across drives in a raid0 type way. IE if one drive dies, you should expect to lose most of everything. It's still a vastly better way to do it than mobo raid, because if your mobo dies you can put those drives in a different PC and it'll work fine. CopperHound posted:You can also use ntfs directory mount points. This is what I always used to do. Simple, manual, minimal. It does have one downside -- a ntfs volume mounted into a directory sometimes works poorly with the recycle bin. But I pretty much use only shift-del to delete stuff so it never bothered me.
|
# ? Mar 10, 2023 20:21 |
|
Klyith posted:It does have one downside -- a ntfs volume mounted into a directory sometimes works poorly with the recycle bin. But I pretty much use only shift-del to delete stuff so it never bothered me.
|
# ? Mar 10, 2023 20:54 |
|
Does storage spaces do block striping across the drives? I thought it was a file level thing
|
# ? Mar 10, 2023 21:48 |
|
VostokProgram posted:Does storage spaces do block striping across the drives? I thought it was a file level thing Yeah its block level, not file level. And DONT USE IT FOR THE LOVE OF GOD. Its loving trash, lacks basic features.
|
# ? Mar 10, 2023 21:53 |
|
redeyes posted:Yeah its block level, not file level. And DONT USE IT FOR THE LOVE OF GOD. Its loving trash, lacks basic features. So AMD raid is trash, storage spaces is trash. What do we use on Windows without an expensive dedicated RAID controller?
|
# ? Mar 10, 2023 22:00 |
|
redeyes posted:Yeah its block level, not file level. And DONT USE IT FOR THE LOVE OF GOD. Its loving trash, lacks basic features. Why? What features are missing for a home user? Honest question, ive been using it for years.
|
# ? Mar 10, 2023 22:21 |
|
Cygni posted:Why? What features are missing for a home user? Honest question, ive been using it for years. The issue becomes: what happens when it goes wrong? Have you had a failure?
|
# ? Mar 11, 2023 01:33 |
|
VostokProgram posted:The other reason not to use raid0 - yeah you have backups, but if one drive fails do you really want to sit there and copy all 12 TB of data back to the array?? If you really want no redundancy then it's better to use something like a jbod so you only need to recover one drive's data. I think storage spaces or stablebit drivepool can do that. Not familiar with Windows Storage Spaces, but I can confirm that it does work under Stablebit Drivepool. I had two spinning disks, a 4 TB WD and a 3 TB Seagate that I've used for a good long while that I pooled together to have a roughly 7 TB "drive" just for video game installs. The other day the 3 TB drive finally took a poo poo. I managed to resurrect it just long enough to get the game install data off of it and onto the 4 TB drive, which was actually pretty simple since it was just a matter of going into the hidden folder Drivepool puts on the drive (that's where the "pool" resides, so any files you move into there will show up as being under the new pooled drive letter), then cutting and pasting whatever game folders to the other drive (not counting the nervous moments when I was afraid the dying drive was going to finally give up the ghost and kill the file transfer).
|
# ? Mar 11, 2023 01:36 |
|
redeyes posted:The issue becomes what happens when it goes wrong? Have you hade a failure? Yeah, i had a failure in my two-way mirrored pool. Got a Windows warning message that i had lost redundancy. I swapped it for another drive (thankfully the failure was just an old 1tb 2.5in laptop spinner I had salvaged which had horrific stats even when new), added the new drive to the pool, and deleted the old drive out of it and let it fix itself. Same pool is still running fine years later. I admittedly might be skipping some steps I did, I can't really remember. I've also disconnected/reconnected drives from the pool multiple times without issues, added and removed drives from the pool permanently, added USB drives/NVMe, etc and haven't had any probs. On the "simple" array in my gaming computer, I haven't had a failure yet... but I am fully aware that a failure in that pool will mean losing everything since it's striped, but that's why it's the games install drive.
|
# ? Mar 11, 2023 01:53 |
|
|
Cygni posted:Yeah, i had a failure in my two-way mirrored pool. Got a Windows warning message that i had lost redundancy. I swapped it for another drive (thankfully the failure was just an old 1tb 2.5in laptop spinner I had salvaged which had horrific stats even when new), added the new drive to the pool, and deleted the old drive out of it and let it fix itself. Same pool is still running fine years later. I admittedly might be skipping some steps I did, I can't really remember. This is great info then. Last time I tried this crap was when Win 10 came out and I simulated a drive failure by just disconnecting a drive from the pool, and it wouldn't let me either remove the failed drive or add a new one. If they at least got that working, that's the bare minimum. I still use Stablebit Drivepool now, however. I prefer file-level logical raid for potato home stuff.
|
# ? Mar 11, 2023 01:57 |