With SAS expanders and any of the venerable LSI controllers that this thread is so fond of, you can hang up to 1024 drives off a single SFF-8087 or SFF-8088 port, whereas the largest number of drives you can hang off a SATA port is 15, using a SATA Port Multiplier (and there's no guarantee you'll even be able to use SATA PM, since it's an optional part of the spec that not every vendor implements). Naturally, all devices share bandwidth, so it's dubious whether it's smart to hang that many disks off one port. But given that spinning rust tops out at around 160MB/s in the real world (with streaming I/O), while even SATA3 is capable of ~550MB/s per lane, SAS2 ~600MB/s, and SAS3 ~1200MB/s, there's some wiggle room. One of the biggest advantages of SAS, though, is SCSI Enclosure Services (SES). It lets you get drive and enclosure temperatures and fan rotation speeds, and more importantly it lets you toggle whether disks are faulted, so that you can make the LED on the SAS enclosure light up to indicate which disk it is you want to replace. This can even be done automatically with ZFS and zfsd on FreeBSD.
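Back-of-envelope, the bandwidth-sharing math works out roughly like this. The per-lane numbers below are approximate real-world throughputs (not spec line rates), and an SFF-8087/8088 "wide port" bundles 4 lanes:

```shell
# How many streaming spinning disks does it take to saturate one 4-lane
# SAS wide port? Rough real-world numbers, not spec line rates.
hdd=160     # MB/s: one spinning disk, sequential I/O
lanes=4     # SFF-8087/8088 carries 4 lanes

for entry in SAS2:600 SAS3:1200; do
  link=${entry%%:*}          # e.g. "SAS2"
  per_lane=${entry##*:}      # e.g. 600 MB/s per lane
  port=$((per_lane * lanes))
  echo "$link wide port: ~$port MB/s, ~$((port / hdd)) streaming disks to saturate"
done
```

So it takes on the order of 15 disks streaming flat out to saturate a single SAS2 wide port, and about double that for SAS3, which is why hanging dozens of drives off an expander is less crazy than it sounds for bulk storage.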
|
|
# ? Apr 9, 2021 01:27 |
|
Buff Hardback posted: As it is, a SAS card uses breakout cables: there are 2 physical ports on the card, and each physical port connects to a cable that goes from one SFF-8087 to either 4 SATA connectors or (in my opinion the more versatile cable) 4 SAS SFF-8482s. The expander card (with a total of 8 SFF-8087 ports on it) is connected with two SFF-8087 to SFF-8087 cables, and then acts as basically a switch to allow way more SAS/SATA drives to be connected.

Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address; it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher-level protocol that works on the concept of “commands”, there is a place to slot in concepts like “multiple drives per channel” (or the fancy stuff previously referenced).

Sir Bobert Fishbone posted: I have a 3600 in that ASRock motherboard and was never able to get any kind of ECC RAM to post in it. I think I bought and returned 3 sets, both on the supported list and not, before giving up.

That’s really weird, and kind of a bummer since that one was high on my shortlist. I don’t want to second-guess you there, but maybe it was just a problem with one specific BIOS or something? Seems really, really weird.
|
# ? Apr 9, 2021 05:19 |
Speaking of SCSI, I just remembered this:

xtal posted: Someone let me know if there is a better place to ask this.

The issue you'll probably find is that your motherboard's vendor didn't want to pay the cost of implementing UAS, so you're likely going to have to invest in a daughterboard with a USB controller on it that explicitly supports it.
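If you're on Linux and want to check whether a given USB disk actually got UAS or fell back to plain usb-storage, the driver binding shows up in sysfs. The paths below are standard kernel sysfs locations; what you see depends entirely on your hardware:

```shell
# List USB devices currently bound to the uas (USB Attached SCSI) driver.
# If nothing is listed, your bridges are on old-school usb-storage (or BOT).
found=0
for dev in /sys/bus/usb/drivers/uas/*:*; do
  if [ -e "$dev" ]; then
    echo "uas bound: $dev"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no devices currently bound to uas"
fi
```

`lsusb -t` will show the same thing (`Driver=uas` vs `Driver=usb-storage`) if you prefer a tree view.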
|
|
# ? Apr 9, 2021 12:36 |
|
Paul MaudDib posted: Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address; it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher-level protocol that works on the concept of “commands”, there is a place to slot in concepts like “multiple drives per channel” (or the fancy stuff previously referenced).

In the research I did before just throwing my hands up and giving up on ECC, it sounds like it wasn't necessarily the mobo that was the issue, but the fact that the 3600 doesn't support ECC. You may have better luck with a different CPU that does. It's also definitely possible that it was something else I just didn't have the patience to diagnose.
|
# ? Apr 9, 2021 13:23 |
|
Paul MaudDib posted: Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address; it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher-level protocol that works on the concept of “commands”, there is a place to slot in concepts like “multiple drives per channel” (or the fancy stuff previously referenced).

My friend has it working with 128 GB of ECC RAM on a 3000-series consumer CPU.
|
# ? Apr 9, 2021 15:03 |
Kivi posted: My friend has it working with 128 GB of ECC RAM on a 3000-series consumer CPU.

I would love to know if it does anything other than POST with ECC, because I have severe doubts that it implements proper error handling with NMIs, and if it isn't implemented properly, it's damned hard to tell from the outside whether it is.
|
|
# ? Apr 9, 2021 16:12 |
|
I added another drive to my array in Unraid. Is there any benefit to balancing out the current data so it is more evenly spread out or does it not matter that much if everything new gets saved there?
|
# ? Apr 9, 2021 17:36 |
|
Teabag Dome Scandal posted:I added another drive to my array in Unraid. Is there any benefit to balancing out the current data so it is more evenly spread out or does it not matter that much if everything new gets saved there? I don't think so unless you have split levels manually configured. That setting could try to place new files on drives regardless of their available space.
|
# ? Apr 9, 2021 17:46 |
|
I prefer having the fewest drives spun up, so I fill each drive one by one with the 'fill up' share setting, but in the end it doesn't really matter too much.
|
# ? Apr 10, 2021 00:15 |
|
Speaking of - I've read a lot of mixed opinions on keeping drives spun up vs. letting them sleep. Is there a general time guideline for how often a drive is accessed that makes it better to leave spun up? Some people have said with modern drives the spin up doesn't wear that much so old arguments about spin ups drastically reducing life of a drive are outdated. Is there a consensus here about that?
|
# ? Apr 10, 2021 04:58 |
|
The general advice is that spinups are not good for a drive, so you should minimize them as much as possible. I'd say one spinup per day for 5 years is an incredibly safe level, 10 per day is a slightly risky level, and 100 per day is dangerous. Shift those down one tier at 10 years, so 1 per day is slightly risky at 10 years, and so on.

Since for the most part you can't know how long it's going to be until the next access (obviously it's trivial to know how long it's been since the last access, but you would have to have some kind of a schedule to spin them down in advance when you are likely to be idle), the advice is just to not spin them down ever. It's maybe 3W per drive to keep them spinning idle; it's just not worth caring about. In terms of drive life, it's better for the drives to just spin always. Paul MaudDib fucked around with this message at 05:19 on Apr 10, 2021 |
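Those tiers can be sketched as a toy function. To be clear, the boundaries are the rule-of-thumb guesses from this post, not anything from a datasheet:

```shell
# spinup_risk SPINUPS_PER_DAY [TARGET_YEARS]
# Encodes the rough tiers above: <=1/day safe, <=10/day slightly risky,
# more is dangerous; aiming for 10 years shifts everything one tier down.
spinup_risk() {
  per_day=$1
  years=${2:-5}
  if [ "$years" -ge 10 ]; then
    per_day=$((per_day * 10))   # one tier riskier at 10-year horizons
  fi
  if [ "$per_day" -le 1 ]; then
    echo "very safe"
  elif [ "$per_day" -le 10 ]; then
    echo "slightly risky"
  else
    echo "dangerous"
  fi
}

spinup_risk 1       # once a day for 5 years
spinup_risk 1 10    # same rate, but hoping for 10 years
```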
# ? Apr 10, 2021 05:08 |
|
When I had my server up, I set my spindown time to like 10 hours or something. It would come up in the evening for TV and stuff, and then spin down during the day when I was at work.
|
# ? Apr 10, 2021 05:59 |
|
Buff Hardback posted:When I had my server up, I set my spindown time to like 10 hours or something. It would come up in the evening for TV and stuff, and then spin down during the day when I was at work. Yeah, same. Long enough that they'll keep spinning in normal everyday use, but still spin down if I'm gone for half a day or more.
|
# ? Apr 10, 2021 11:11 |
|
Who is outside my house yelling spin down? I will never spin down!
|
# ? Apr 10, 2021 17:08 |
|
WD Support: Do you need Standard RMA or Advanced?
ME: Advanced
WD Support: Are you getting any error?
ME: Yes, bad sectors
ME: Bad sector was found on disk[8].
ME: An I/O error occurred to Drive 6
WD Support: Do you need advanced or standard RMA
ME: Advanced
WD Support: Is it okay if we go for Standard RMA?
ME: No, the drives are in RAID so I can't send them back without putting a new one in right away
WD Support: I understand, give me a minute please.
WD Support: As of now, the replacement drives we provide for these drives, "WD Gold", are not in inventory. Could you check back with us within a few days as the inventory gets replenished frequently.
ME: You can't just backorder one for me?
WD Support: Let me check. I'm sorry for keeping you on hold, I'm on the call with the concerned department along with the chat.
WD Support: Thank you for waiting. We will need to wait for the drive to be in our inventory as they arrive in batches. Also, we'll provide you WD Gold of 4 TB in capacity, would that be okay?
ME: When should I check back?
WD Support: In 3-4 business days.
|
# ? Apr 12, 2021 18:30 |
|
Hope you've been using RAID with 2-drive redundancy...
|
# ? Apr 12, 2021 19:11 |
|
FWIW I had the same problem with Seagate a few weeks ago when this thread helped me with some HDD errors. They couldn’t do an advance replacement but could take my failed drive now. How nice for them. Lucky I ran RAID-Z2!
|
# ? Apr 12, 2021 19:15 |
|
DrDork posted: Hope you've been using RAID with 2-drive redundancy...

It's either a 10 or a 6; it's just a backup target. The drive 'fully recovered', but I don't really want to take any chances and want some new drives.
|
# ? Apr 12, 2021 19:47 |
|
I've got my shiny (old) 2500k UnRaid server finally up and running, but I have a RAM question: It's currently got 8GB of 1600MHz DDR3 RAM in it (2x4GB). I have a second set of 2x4GB at 1600 but with different timings - is it worth throwing it into the system? Will the timings/different brand be anything but a net positive? No gaming/barely any VMs, just Plex and local storage on it.
|
# ? Apr 13, 2021 01:38 |
|
BaronVanAwesome posted: I've got my shiny (old) 2500k UnRaid server finally up and running, but I have a RAM question:

All of the RAM will run at the slowest timings available. I'm not sure there's a lot of advantage to 16GB over 8GB on a Plex server; I kind of doubt it. Maybe keep an eye on RAM usage and see if it's paging heavily right now without the extra RAM.
|
# ? Apr 13, 2021 10:33 |
|
Rexxed posted: All of the RAM will run at the slowest timings available. I'm not sure there's a lot of advantage to 16GB over 8GB on a Plex server; I kind of doubt it. Maybe keep an eye on RAM usage and see if it's paging heavily right now without the extra RAM.

I already have the extra sticks, so there's no cost issue with using them or not; I just didn't want to slow things down for no reason or something. I'll try a big file transfer and a Plex transcode at the same time and double-check, thank you!
|
# ? Apr 13, 2021 12:40 |
|
I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the SMART long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time; when I run it again, it says the test was aborted by the operator at 90%. It's done that twice now. I'm going ahead and running badblocks on it while I ponder what to do. There are no SMART errors yet; it just seems to get stuck during the test. I don't remember having an issue with my original set of 6, though I did shuck those before doing the tests, and this time I'm trying to do it over the USB controller.
|
# ? Apr 13, 2021 14:06 |
|
RMA it. Bad blocks isn't the only way a HD can fail.
|
# ? Apr 13, 2021 15:02 |
|
BaronVanAwesome posted:I already have the extra sticks so there's no cost issue with using them or not, I just didn't want to slow things down for no reason or something. More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.
|
# ? Apr 13, 2021 15:19 |
|
It's worth looking at your use case, doing some benchmarks or just looking at the memory usage of your system. If you aren't at 100% RAM then obviously an upgrade isn't going to do anything. But with the filesystem cache in mind, you probably are using up all your memory, and want more. So you can think of it less as "slightly slowing down 8GB of RAM" and more as "drastically speeding up 8GB of disk."
|
# ? Apr 13, 2021 15:25 |
|
Rescue Toaster posted: I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the smart long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time, and then when I run it again it says the test was aborted by operator at 90%. And has done that twice now.

I have found the Elements USB controllers have issues consistently passing SMART commands. In my case, as long as the short SMART test passes and shows the right number of hours and a destructive badblocks run works, then it's been good enough for me.
|
# ? Apr 13, 2021 15:51 |
|
Less Fat Luke posted:I have found the Elements USB controllers have issues consistently passing SMART commands. In my case as long the the short SMART test passes and shows the right amount of hours and a destructive badblocks works then it's been good enough for me. Yeah I know USB is hit and miss for smart. I think if it passes badblocks OK I'll go ahead and carefully shuck it and try the long SMART test on SATA. These are easy to take apart without breaking the case, one of my original six died during badblocks and I just re-packed it carefully in the same enclosure and RMA'd it no problem.
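One thing worth trying before shucking: many USB-SATA bridges won't pass SMART commands unless smartctl is told to use SAT (SCSI/ATA Translation) passthrough explicitly. A hedged sketch, with /dev/sdX as a placeholder for the Elements drive:

```shell
# Try SAT passthrough explicitly instead of letting smartctl autodetect
# the bridge. Harmless if it still fails; substitute your actual device.
DEV="${DEV:-/dev/sdX}"
if command -v smartctl >/dev/null 2>&1; then
  smartctl -d sat --all "$DEV" || true     # full SMART dump through the bridge
  smartctl -d sat -t long "$DEV" || true   # start the long self-test the same way
else
  echo "smartctl not installed; nothing to do"
fi
```

No guarantee it helps with every Elements enclosure, but it costs nothing to try before cracking the case open.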
|
# ? Apr 13, 2021 16:40 |
|
DrDork posted:More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again. Agreed. You'd be very hard pressed to measure the difference in RAM timings on two different 1600 configs in anything but a synthetic test designed to exploit that. I'd run the 16GB over 8GB even if it cut the RAM clock to 1333.
|
# ? Apr 13, 2021 16:42 |
|
DrDork posted: More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

xtal posted: It's worth looking at your use case, doing some benchmarks or just looking at the memory usage of your system. If you aren't at 100% RAM then obviously an upgrade isn't going to do anything. But with the filesystem cache in mind, you probably are using up all your memory, and want more. So you can think of it less as "slightly slowing down 8GB of RAM" and more as "drastically speeding up 8GB of disk."

IOwnCalculus posted: Agreed. You'd be very hard pressed to measure the difference in RAM timings on two different 1600 configs in anything but a synthetic test designed to exploit that.

Thank you everyone for RAM help! I'm having a lot of fun tinkering with the system and getting stuff set up. I've only toyed with a raspberry pi before for anything Linux related, so I super appreciate the tips.
|
# ? Apr 13, 2021 21:27 |
|
While messing with my Plex system (just a Windows 10 PC running a Plex server), I seem to have turned my media drive into a simple volume (freaked out a little when the color of the drive in Disk Management turned from blue to olive). It was originally a 4TB HDD that I had split into 2x2TB partitions, and now it's 1TB + 3TB. Is this going to be an issue, or am I fine just leaving it as it is and not worrying about trying to revert this change? What is the difference/use of a simple volume vs. a basic volume anyway?
|
# ? Apr 15, 2021 04:30 |
|
Why even partition it assuming its a secondary disk drive? Just all your poo poo in a folder called 'poo poo' is the gold standard.
|
# ? Apr 15, 2021 13:12 |
|
It's a public pc in my apartment. One drive was for movies/shows everyone in the house uses and the other was for roms/emulators that I didn't want others deleting or moving around. Either way I'm a dummy.
ughhhh fucked around with this message at 14:17 on Apr 15, 2021 |
# ? Apr 15, 2021 13:22 |
|
BurgerQuest posted: Why even partition it assuming its a secondary disk drive? Just all your poo poo in a folder called 'poo poo' is the gold standard.

Mine is "cleanup-today's date", which gets nested inside the next cleanup folder recursively, until I hit OS limits on file path length
|
# ? Apr 15, 2021 20:42 |
|
DrDork posted: More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

Also, with 'extra' RAM you have the option of carving some of it out for Plex to use as transcoding scratch rather than doing it on your actual drives. Based on how heavily your server gets used, this could be pretty beneficial or not really noticeable.

Rescue Toaster posted: I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the smart long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time, and then when I run it again it says the test was aborted by operator at 90%. And has done that twice now.

When I was testing all of the Elements I shucked, they kept failing out during SMART tests as well. It turns out running a SMART test sometimes doesn't count as 'using' the drive, so it would think it was inactive and go into powersave mode. I had to add a little script that would call the SMART status info every 60 seconds or so to keep it awake. Not sure if that's what you're seeing here, but it's something to keep in mind when you're testing USB externals prior to shucking.
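A minimal sketch of that keep-awake script, assuming smartmontools is installed and /dev/sdX stands in for your drive (both DEV and INTERVAL are placeholders to adjust):

```shell
# Poll the drive with a cheap SMART query every minute so the USB bridge
# never decides the disk is idle while a long self-test is running.
DEV="${DEV:-/dev/sdX}"
INTERVAL="${INTERVAL:-60}"

test_in_progress() {
  # smartctl -c prints self-test status; grep succeeds while one is running
  smartctl -c "$DEV" 2>/dev/null | grep -q 'in progress'
}

keep_awake() {
  while test_in_progress; do
    # -n never: don't let smartctl itself skip the drive in standby;
    # reading the identity info is enough of a "touch" to keep it awake
    smartctl -n never -i "$DEV" >/dev/null 2>&1
    sleep "$INTERVAL"
  done
}
```

Kick off the long test first, then call `keep_awake`; it returns on its own once the self-test finishes (or aborts).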
|
# ? Apr 16, 2021 00:05 |
|
The price of an Optane drive makes more sense to me as a scratch drive than buying more RAM, given that they start at 15 drive-writes-per-day endurance and a 32GB Optane drive is like $40 on eBay. If we're talking high-density compute systems, sure, we're going to go to something like 1TB of RAM first before thinking of solid state, but this isn't the HPC / hyperscaler thread exactly. And seriously, an Optane drive of like 480GB is way cheaper than 400+GB of DDR4 and can be plopped onto most motherboards.
|
# ? Apr 16, 2021 02:02 |
|
necrobobsledder posted: The price of an Optane drive makes more sense to me as a scratch drive than buying more RAM, given that they start at 15 drive-writes-per-day endurance and a 32GB Optane drive is like $40 on eBay.

Putting your write-ahead log or similar on an Optane would be pretty nice. For most applications you're probably talking about the 280GB or 480GB models, of course, not the little 32GB ones or even the 90GB ones.
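For the ZFS crowd, the write-ahead-log use case maps onto a separate intent log (SLOG) device. A hypothetical sketch, where the pool name "tank" and device /dev/nvme0n1 are placeholders for your setup:

```shell
# Dedicate a small Optane device to the pool's separate intent log (SLOG),
# so synchronous writes land on the Optane before being flushed to the
# spinning disks. Guarded so it only runs if a pool named "tank" exists.
if command -v zpool >/dev/null 2>&1 && zpool list tank >/dev/null 2>&1; then
  zpool add tank log /dev/nvme0n1   # add the Optane as a log vdev
  zpool status tank                 # a "logs" section should now be listed
else
  echo "no zpool/tank here; commands shown for illustration only"
fi
```

Note a SLOG only helps synchronous writes (NFS, databases, VMs); bulk async media copies won't touch it.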
|
# ? Apr 16, 2021 02:33 |
|
I'm being relocated to a remote area for an indeterminate period of time where I'm going to have lots of free time on my hands. I have a lot of movies, so I'm looking to rip my collection to one central location. My current plan is to get a NAS like a QNAP TS-453D, install the correct software, and just plug directly into a TV, and it just works forever, hopefully.

Reliability is key for me here. If I need to troubleshoot, internet will be spotty and not guaranteed, and I certainly shouldn't expect to be able to get my hands on physical replacement parts in a timely manner. My understanding is that this NAS should be more reliable, generally speaking, than something I would build myself. While it's feature overkill for my needs, I don't want to go the absolute cheapest route and buy something that would have more budget-level internal components.

I was looking at Synology as well, which may have a slightly better reputation than QNAP overall, but there's no HDMI out on their NAS units. Getting a Shield TV Pro solves this issue easily, but introduces another component, and therefore more complexity and potential points of failure. I may get it anyway, since I am a remuxing amateur and have no idea if I could run into codec support or compatibility issues playing movies directly from the NAS, and the Android TV software may be significantly easier to manage than the QNAP software.

Any tips or glaring holes in my line of thinking here before I continue on with this?

Edit: Also, is RAID 5 or 6 good for my specific situation? Seems like it could increase longevity. A Bag of Milk fucked around with this message at 03:11 on Apr 17, 2021 |
# ? Apr 16, 2021 20:32 |
|
Two days and no email with my RMA information from WD.....ugh
|
# ? Apr 17, 2021 01:15 |
Bob Morales posted:Two days and no email with my RMA information from WD.....ugh
|
|
# ? Apr 17, 2021 09:00 |
|
Optane chat: it was forward-looking, but it's finally starting to mature. Hopefully Intel forces the issue, which it seems like they're going to from a platform perspective, and can bring the OEMs/ODMs along with them, which will force prices down. It's just about the only differentiator they have in light of Threadripper being a thing, so we'll see... e: jesus christ. Haswell isn't platform enabled, is it, as I recall--just Broadwell? https://www.ebay.com/itm/512GB-Inte...XYAAOSwHJZfZBgW Crunchy Black fucked around with this message at 11:18 on Apr 17, 2021 |
# ? Apr 17, 2021 11:14 |