BlankSystemDaemon
Mar 13, 2009



With SAS expanders and any of the venerable LSI controllers that this thread is so fond of, you can hang up to 1024 drives off a single SFF-8087 or SFF-8088 port, whereas the most you can hang off a SATA port is 15 drives using a SATA Port Multiplier (and there's no guarantee you'll even be able to use SATA PM, since it's an optional part of the spec that not every vendor implements).
Naturally, all devices share bandwidth, so it's dubious whether it's smart to hang that many disks off one port, but given that spinning rust tops out at around 160MBps in the real world (with streaming I/O) while SATA3 is capable of ~550MBps, SAS2 (6Gbit/s) of ~600MBps per lane, and SAS3 (12Gbit/s) of ~1200MBps per lane, there's some wiggle-room.
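
A rough sanity check on that wiggle-room, sketched in Python; the per-lane figures and the ~160MBps streaming number are the rough values from the paragraph above, not measurements:

# Back-of-the-envelope: how many streaming HDDs saturate a 4-lane SAS wide port?
# All figures are the rough numbers from the post above, not measurements.
HDD_STREAMING_MBPS = 160                      # real-world sequential throughput
LANE_MBPS = {"SAS2 (6Gbit/s)": 600, "SAS3 (12Gbit/s)": 1200}
LANES_PER_WIDE_PORT = 4                       # one SFF-8087/8088 carries four lanes

for name, per_lane in LANE_MBPS.items():
    port_mbps = per_lane * LANES_PER_WIDE_PORT
    drives = port_mbps / HDD_STREAMING_MBPS
    print(f"{name}: ~{port_mbps} MB/s per wide port, "
          f"saturated by roughly {drives:.0f} drives streaming flat out")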

One of the biggest advantages of SAS, though, is SCSI Enclosure Services.
It lets you read drive and enclosure temperatures and fan rotation speeds, and, more importantly, it lets you toggle a disk's fault state so that the LED on the SAS enclosure lights up to indicate which disk you want to replace.
This can even be done automatically with ZFS and zfsd on FreeBSD.
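
For the manual version of that LED trick on FreeBSD, sesutil(8) can toggle the fault indicator through SES; here's a minimal sketch wrapped in Python, where "da5" is just a placeholder device name:

# Minimal sketch: light the fault LED for a disk bay via FreeBSD's sesutil(8).
# "da5" is a placeholder device name; zfsd can do the same thing automatically.
import subprocess

def set_fault_led(disk: str, on: bool) -> None:
    state = "on" if on else "off"
    # sesutil speaks SCSI Enclosure Services to the expander/backplane
    subprocess.run(["sesutil", "fault", disk, state], check=True)

if __name__ == "__main__":
    set_fault_led("da5", True)    # blink the bay so you pull the right drive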

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Buff Hardback posted:

As it is, a SAS card uses breakout cables (there are 2 physical ports on the card, and each physical port connects to a cable that goes from one SFF-8087 to either 4 SATA connectors or, in my opinion the more versatile option, 4 SAS SFF-8482s). The expander card (with a total of 8 SFF-8087 ports on it) is connected with two SFF-8087 to SFF-8087 cables, and then acts as basically a switch to allow way more SAS/SATA drives to be connected.

Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address, it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher level protocol that works on the concept of “commands” there is a place to slot in concepts like “multiple drives per channel”. (Or the fancy stuff previously referenced)
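
To make the "commands, not just block addresses" point concrete, you can fire a raw SCSI INQUIRY at a drive with sg_inq from sg3_utils and get back vendor/product/revision data; a small sketch, with the device path as a placeholder:

# Sketch: send a SCSI INQUIRY to a drive using sg_inq from sg3_utils.
# /dev/sda is a placeholder; works for SAS drives (and SATA via SAT translation).
import subprocess

result = subprocess.run(["sg_inq", "/dev/sda"], capture_output=True, text=True)
print(result.stdout)    # vendor, product, revision, peripheral device type, etc.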

Sir Bobert Fishbone posted:

I have a 3600 in that ASRock motherboard and was never able to get any kind of ECC RAM to POST in it. I think I bought and returned 3 sets, some on the supported list and some not, before giving up.

That’s really weird, and kind of a bummer since that one was high on my shortlist.

I don’t want to second-guess you there but maybe it was just a problem with one specific bios or something? Seems really really weird.

BlankSystemDaemon
Mar 13, 2009



Speaking of SCSI, I just remembered this:

xtal posted:

Someone let me know if there is a better place to ask this.

I want a 3 or 4 bay enclosure for 3.5" drives that uses USB 3.0 and supports UAS. I've bought a couple that are labeled as supporting UAS but actually don't (or don't on Linux.) So I'd like to hear from someone who's found one they know works.
I bought an ICY BOX IB-123CL-U3, and it works great.

The issue you'll probably find is that your motherboard's vendor didn't want to pay the cost of implementing UAS, so you're likely going to have to invest in a daughterboard with a USB controller that explicitly supports it.
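
If you want to confirm whether an enclosure actually landed on the uas driver rather than plain usb-storage, lsusb -t on Linux lists the bound driver per device; a quick sketch that just filters that output:

# Quick check: which USB storage devices ended up on uas vs plain usb-storage.
# Relies on lsusb -t (usbutils) printing "Driver=uas" or "Driver=usb-storage".
import subprocess

tree = subprocess.run(["lsusb", "-t"], capture_output=True, text=True).stdout
for line in tree.splitlines():
    if "Driver=uas" in line:
        print("UAS-attached:", line.strip())
    elif "Driver=usb-storage" in line:
        print("BOT-only (usb-storage):", line.strip())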

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Paul MaudDib posted:

Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address, it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher level protocol that works on the concept of “commands” there is a place to slot in concepts like “multiple drives per channel”. (Or the fancy stuff previously referenced)


That’s really weird, and kind of a bummer since that one was high on my shortlist.

I don’t want to second-guess you there but maybe it was just a problem with one specific bios or something? Seems really really weird.

In the research I did before just throwing my hands up and giving up on ECC, it sounded like it wasn't necessarily the mobo that was the issue, but the fact that the 3600 doesn't support ECC. You may have better luck with a different CPU that does. It's also definitely possible that it was something else I just didn't have the patience to diagnose.

Kivi
Aug 1, 2006
I care

Paul MaudDib posted:

Yup, and all this is possible because SAS isn’t just fancy SATA, where you’re just reading or writing from an address, it’s actually SCSI being sent over the same physical connector as SATA. Since SCSI is a higher level protocol that works on the concept of “commands” there is a place to slot in concepts like “multiple drives per channel”. (Or the fancy stuff previously referenced)


That’s really weird, and kind of a bummer since that one was high on my shortlist.

I don’t want to second-guess you there but maybe it was just a problem with one specific bios or something? Seems really really weird.

My friend has it working with 128 GB of ECC RAM with 3-series consumer CPU.

BlankSystemDaemon
Mar 13, 2009



Kivi posted:

My friend has it working with 128 GB of ECC RAM with 3-series consumer CPU.
For various definitions of working, sure.

I would love to know if it does anything other than POST with ECC, because I have severe doubts that it implements proper error handling with NMIs, and it's damned hard to verify whether that's actually implemented properly.
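
One crude way (on Linux, at least) to see whether ECC is wired up beyond POST is to check whether an EDAC memory-controller driver registered and is exposing corrected/uncorrected error counters; a hedged sketch reading sysfs:

# Rough check (Linux): did an EDAC driver bind to the memory controller, and is
# it counting corrected (ce) / uncorrected (ue) errors? No mc0 node usually
# means there's no ECC reporting path at all, whatever the BIOS claims.
from pathlib import Path

mc = Path("/sys/devices/system/edac/mc/mc0")
if not mc.exists():
    print("No EDAC memory controller registered - ECC reporting likely inactive")
else:
    ce = (mc / "ce_count").read_text().strip()
    ue = (mc / "ue_count").read_text().strip()
    print(f"EDAC active: corrected errors={ce}, uncorrected errors={ue}")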

Teabag Dome Scandal
Mar 19, 2002


I added another drive to my array in Unraid. Is there any benefit to balancing out the current data so it is more evenly spread out or does it not matter that much if everything new gets saved there?

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Teabag Dome Scandal posted:

I added another drive to my array in Unraid. Is there any benefit to balancing out the current data so it is more evenly spread out or does it not matter that much if everything new gets saved there?

I don't think so unless you have split levels manually configured. That setting could try to place new files on drives regardless of their available space.

Minty Swagger
Sep 8, 2005

Ribbit Ribbit Real Good
I prefer having the fewest drives spun up, so I fill each drive one by one with the 'fill up' share setting, but in the end it doesn't really matter too much.

Tuxedo Gin
May 21, 2003

Classy.

Speaking of - I've read a lot of mixed opinions on keeping drives spun up vs. letting them sleep. Is there a general guideline for how often a drive needs to be accessed before it's better to just leave it spun up? Some people have said that with modern drives a spin-up doesn't cause much wear, so the old arguments about spin-ups drastically reducing the life of a drive are outdated.

Is there a consensus here about that?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
the general advice is that spinups are not good for a drive and so you should minimize them as much as possible. I'd say maybe one spinup per day for 5 years is an incredibly safe level and maybe 10 per day is a slightly risky level and 100 per day is dangerous. Shift those down one tier at 10 years, so 1 per day is slightly risky at 10 years and so on.

since you can't know how long it's going to be until the next access for the most part (obviously it's trivial to know how long it's been "since the last access", but you would have to have some kind of a schedule to spin them down in advance when you are likely to be idle), the advice is just to not spin them down ever. It's maybe 3W per drive to keep them spinning idle; it's just not worth caring about. In terms of drive life it's better for the drives to just spin always.

Paul MaudDib fucked around with this message at 05:19 on Apr 10, 2021
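
A quick cost check on the "not worth caring about" point; a rough sketch in Python, where the 3W figure comes from the post above and the drive count and electricity price are assumptions:

# What does leaving drives spinning actually cost? All inputs are assumptions.
IDLE_WATTS_PER_DRIVE = 3.0      # rough idle-spinning draw from the post above
DRIVES = 8                      # assumed array size
PRICE_PER_KWH = 0.15            # assumed electricity price in USD

kwh_per_year = IDLE_WATTS_PER_DRIVE * DRIVES * 24 * 365 / 1000
cost = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year:.0f} kWh/year, about ${cost:.2f}/year "
      f"to keep {DRIVES} drives spinning idle")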

Raymond T. Racing
Jun 11, 2019

When I had my server up, I set my spindown time to like 10 hours or something. It would come up in the evening for TV and stuff, and then spin down during the day when I was at work.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Buff Hardback posted:

When I had my server up, I set my spindown time to like 10 hours or something. It would come up in the evening for TV and stuff, and then spin down during the day when I was at work.

Yeah, same. Long enough that they'll keep spinning in normal everyday use, but still spin down if I'm gone for half a day or more.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Who is outside my house yelling spin down? I will never spin down!

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

WD Support: Do you need Standard RMA or Advanced?
ME: Advanced
WD Support: Are you getting any error?
ME: Yes bad sectors
ME: Bad sector was found on disk[8].
ME: An I/O error occurred to Drive 6
WD Support: Do you need advanced or standard RMA
ME: Advanced
WD Support: Is it okay if we go for Standard RMA?
ME: No, the drives are in RAID so I can't send them back without putting a new one in right away
WD Support: I understand, give me a minute please.
WD Support: As of now, the replacement drives we provide for these drives, "WD Gold" are not in inventory. Could you check back with us within a few days as the inventory gets replenished frequently.
ME: You can't just backorder one for me?
WD Support: Let me check. I'm sorry for keeping you on hold, I'm on the call with concerned department along with the chat.
WD Support: Thank you for waiting. We will need to wait for the drive to be in our inventory as they arrive in batches. Also, we'll provide you WD Gold of 4 TB in capacity, would that be okay?
ME: When should I check back?
WD Support: In 3-4 business days.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Hope you've been using RAID with 2-drive redundancy...

Hed
Mar 31, 2004

Fun Shoe
FWIW I had the same problem with Seagate a few weeks ago when this thread helped me with some HDD errors. They couldn't do an advance replacement but could take my failed drive now. How nice for them.

Lucky I ran RAID-Z2!

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

DrDork posted:

Hope you've been using RAID with 2-drive redundancy...

It's either a 10 or a 6, it's just a backup target.

The drive 'fully recovered', but I don't really want to take any chances and want some new drives.

BaronVanAwesome
Sep 11, 2001

I will never learn the secrets of "Increased fake female boar sp..."

Never say never, buddy.
Now you know.
Now we all know.
I've got my shiny (old) 2500k UnRaid server finally up and running, but I have a RAM question:

It's currently got 8GB of 1600MHz DDR3 RAM in it (2x4). I have a second set of 2x4 RAM at 1600 but with different timings - is it worth throwing it into the system? Will the timings/different brand be anything but a net positive?

No gaming/barely any VMs, just plex and local storage on it.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

BaronVanAwesome posted:

I've got my shiny (old) 2500k UnRaid server finally up and running, but I have a RAM question:

It's currently got 8GB of 1600MHz DDR3 RAM in it (2x4). I have a second set of 2x4 RAM at 1600 but with different timings - is it worth throwing it into the system? Will the timings/different brand be anything but a net positive?

No gaming/barely any VMs, just plex and local storage on it.

All of the RAM will run at the slowest timings available. I'm not sure there's much advantage to 16GB over 8GB on a Plex server; I kind of doubt it. Maybe keep an eye on RAM usage and see if it's paging heavily right now without the extra RAM.

BaronVanAwesome
Sep 11, 2001

I will never learn the secrets of "Increased fake female boar sp..."

Never say never, buddy.
Now you know.
Now we all know.

Rexxed posted:

All of the RAM will run at the slowest timings available. I'm not sure there's much advantage to 16GB over 8GB on a Plex server; I kind of doubt it. Maybe keep an eye on RAM usage and see if it's paging heavily right now without the extra RAM.

I already have the extra sticks so there's no cost issue with using them or not, I just didn't want to slow things down for no reason or something.

I'll try a big file transfer and a Plex transcode at the same time and double check, thank you!

Rescue Toaster
Mar 13, 2003
I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the smart long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time, and then when I run it again it says the test was aborted by operator at 90%. And has done that twice now.

I'm going ahead and running badblocks on it while I ponder what to do. There's no SMART errors yet, just seems to get stuck during the test. I don't remember having an issue on my original set of 6, though I did shuck those before doing the tests, and this time I'm trying to do it on the USB controller.

redeyes
Sep 14, 2002

by Fluffdaddy
RMA it. Bad blocks isn't the only way a HD can fail.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BaronVanAwesome posted:

I already have the extra sticks so there's no cost issue with using them or not, I just didn't want to slow things down for no reason or something.

I'll try a big file transfer and a Plex transcode at the same time and double check, thank you!

More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

xtal
Jan 9, 2011

by Fluffdaddy
It's worth looking at your use case, doing some benchmarks or just looking at the memory usage of your system. If you aren't at 100% RAM then obviously an upgrade isn't going to do anything. But with the filesystem cache in mind, you probably are using up all your memory, and want more. So you can think of it less as "slightly slowing down 8GB of RAM" and more as "drastically speeding up 8GB of disk."
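
To see how much of that "used" memory is really just page cache doing the disk speed-up described above, /proc/meminfo spells it out; a small sketch (Linux):

# How much RAM is currently acting as filesystem cache? (Linux /proc/meminfo)
fields = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":")
        fields[key] = int(value.split()[0])    # values are reported in kB

total_gb = fields["MemTotal"] / 1024 / 1024
cached_gb = fields["Cached"] / 1024 / 1024
print(f"{cached_gb:.1f} GiB of {total_gb:.1f} GiB is page cache - "
      f"extra RAM mostly ends up here")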

Less Fat Luke
May 23, 2003

Exciting Lemon

Rescue Toaster posted:

I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the smart long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time, and then when I run it again it says the test was aborted by operator at 90%. And has done that twice now.

I'm going ahead and running badblocks on it while I ponder what to do. There's no SMART errors yet, just seems to get stuck during the test. I don't remember having an issue on my original set of 6, though I did shuck those before doing the tests, and this time I'm trying to do it on the USB controller.

I have found the Elements USB controllers have issues consistently passing SMART commands. In my case, as long as the short SMART test passes and shows the right amount of hours and a destructive badblocks works, then it's been good enough for me.
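
That pre-shuck routine (check the hours, then a destructive badblocks pass) is easy to script; a rough sketch where /dev/sdX is a placeholder, and note that badblocks -w overwrites the whole drive:

# Sketch of the pre-shuck checks described above. /dev/sdX is a placeholder.
# WARNING: badblocks -w is destructive and overwrites the entire drive.
import subprocess

DEV = "/dev/sdX"

smart = subprocess.run(["smartctl", "-A", DEV], capture_output=True, text=True).stdout
for line in smart.splitlines():
    if "Power_On_Hours" in line:
        print(line.strip())     # sanity-check the reported hours before shucking

# Four-pattern write/read test; -s shows progress, -v is verbose.
subprocess.run(["badblocks", "-wsv", DEV], check=True)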

Rescue Toaster
Mar 13, 2003

Less Fat Luke posted:

I have found the Elements USB controllers have issues consistently passing SMART commands. In my case, as long as the short SMART test passes and shows the right amount of hours and a destructive badblocks works, then it's been good enough for me.

Yeah I know USB is hit and miss for smart. I think if it passes badblocks OK I'll go ahead and carefully shuck it and try the long SMART test on SATA. These are easy to take apart without breaking the case, one of my original six died during badblocks and I just re-packed it carefully in the same enclosure and RMA'd it no problem.

IOwnCalculus
Apr 2, 2003





DrDork posted:

More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

Agreed. You'd be very hard pressed to measure the difference in RAM timings on two different 1600 configs in anything but a synthetic test designed to exploit that.

I'd run the 16GB over 8GB even if it cut the RAM clock to 1333.

BaronVanAwesome
Sep 11, 2001

I will never learn the secrets of "Increased fake female boar sp..."

Never say never, buddy.
Now you know.
Now we all know.

DrDork posted:

More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

xtal posted:

It's worth looking at your use case, doing some benchmarks or just looking at the memory usage of your system. If you aren't at 100% RAM then obviously an upgrade isn't going to do anything. But with the filesystem cache in mind, you probably are using up all your memory, and want more. So you can think of it less as "slightly slowing down 8GB of RAM" and more as "drastically speeding up 8GB of disk."

IOwnCalculus posted:

Agreed. You'd be very hard pressed to measure the difference in RAM timings on two different 1600 configs in anything but a synthetic test designed to exploit that.

I'd run the 16GB over 8GB even if it cut the RAM clock to 1333.

Thank you everyone for RAM help! I'm having a lot of fun tinkering with the system and getting stuff set up. I've only toyed with a raspberry pi before for anything Linux related, so I super appreciate the tips.

ughhhh
Oct 17, 2012

While messing with my Plex system (just a Windows 10 PC running a Plex server), I seem to have turned my media drive into a simple volume (freaked out a little when the color of the drive in Disk Management turned from blue to olive). It was originally a 4TB HDD that I had split into 2x2TB partitions, and it's now 1TB + 3TB. Is this going to be an issue, or am I fine just leaving it as it is and not worrying about trying to revert the change?

What is the difference/use of simple volume vs basic volume anyways?

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
Why even partition it, assuming it's a secondary disk drive? Just all your poo poo in a folder called 'poo poo' is the gold standard.

ughhhh
Oct 17, 2012

It's a public pc in my apartment. One drive was for movies/shows everyone in the house uses and the other was for roms/emulators that I didn't want others deleting or moving around. Either way I'm a dummy.

ughhhh fucked around with this message at 14:17 on Apr 15, 2021

Hadlock
Nov 9, 2004

BurgerQuest posted:

Why even partition it, assuming it's a secondary disk drive? Just all your poo poo in a folder called 'poo poo' is the gold standard.

Mine is "cleanup-today's date", which gets nested inside the next cleanup folder recursively, until I hit the OS limit on file path length.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

DrDork posted:

More RAM is always more better. The performance loss you might observe because the faster sticks are slowing down to match the slower ones should be pretty negligible given they're both 1600. If you've got the sticks laying around, shove 'em in and never think about them again.

Also, with 'extra' RAM you have the option of carving some of it out for Plex to use as transcoding scratch rather than doing it on your actual drives. Depending on how heavily your server gets used, this could be pretty beneficial or not really noticeable.


Rescue Toaster posted:

I picked up an extra WD Elements 8 to shuck for a cold spare, and it's really acting weird with the smart long test. It starts, I let it sit for the 12 hours, and then when I run smartctl --all to check it, smartctl hangs and times out the first time, and then when I run it again it says the test was aborted by operator at 90%. And has done that twice now.

I'm going ahead and running badblocks on it while I ponder what to do. There's no SMART errors yet, just seems to get stuck during the test. I don't remember having an issue on my original set of 6, though I did shuck those before doing the tests, and this time I'm trying to do it on the USB controller.

When I was testing all of the Elements I shucked, they kept failing out during SMART tests as well. Turns out running a SMART test sometimes doesn't count as 'using' the drive, so it would think it was inactive and go into powersave mode. I had to add a little script that would call the SMART status info every 60 seconds or so to keep it awake. Not sure if that's what you're seeing here but something to keep in mind when you're testing USB externals prior to shucking.
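
A keep-awake script like the one described above only needs a few lines; a sketch where the device path is a placeholder and the 60-second interval is arbitrary:

# Keep a USB external from idling into powersave during a long SMART self-test
# by polling its SMART attributes every minute. /dev/sdX is a placeholder.
import subprocess
import time

DEV = "/dev/sdX"
POLL_SECONDS = 60

while True:
    # Reading the attribute table counts as activity on most USB bridges.
    subprocess.run(["smartctl", "-A", DEV],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    time.sleep(POLL_SECONDS)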

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The price for an Optane drive makes more sense to me as a scratch drive than buying more RAM, given that they start at 15 drive writes per day of endurance and a 32GB Optane drive is like $40 on eBay. If we're talking high-density compute systems, sure, we're going to go to something like 1 TB of RAM first before thinking of solid state, but this isn't the HPC / hyperscaler thread exactly. And seriously, an Optane drive of like 480 GB is way cheaper than 400+ GB of DDR4 and can be plopped onto most motherboards.
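
For scale, the endurance arithmetic behind that 15-DWPD class, with the two capacities as assumed examples:

# Endurance back-of-the-envelope: drive writes per day -> data written per day.
DWPD = 15                        # endurance class cited above
for capacity_gb in (32, 480):    # example capacities from the post
    tb_per_day = DWPD * capacity_gb / 1000
    print(f"{capacity_gb} GB at {DWPD} DWPD is about {tb_per_day:.1f} TB written per day")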

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

necrobobsledder posted:

The price for an Optane drive makes more sense to me as a scratch drive than buying more RAM, given that they start at 15 drive writes per day of endurance and a 32GB Optane drive is like $40 on eBay. If we're talking high-density compute systems, sure, we're going to go to something like 1 TB of RAM first before thinking of solid state, but this isn't the HPC / hyperscaler thread exactly. And seriously, an Optane drive of like 480 GB is way cheaper than 400+ GB of DDR4 and can be plopped onto most motherboards.

Putting your write-ahead log or similar on an Optane would be pretty nice. For most applications you’re probably talking about the 280GB or 480 GB models of course, not the little 32GB ones or even the 90GB ones.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
I'm being relocated to a remote area for an indeterminate period of time where I'm going to have lots of free time on my hands. I have a lot of movies, so I'm looking to rip my collection to one central location. My current plan is to get a NAS like a QNAP TS-453D, install the correct software, and just plug directly into a TV, and hopefully it just works forever.

Reliability is key for me here. If I need to troubleshoot, internet will be spotty and not guaranteed, and I certainly shouldn't expect to be able to get my hands on physical replacement parts in a timely manner. My understanding is that this NAS should be more reliable, generally speaking, than something I would build myself. While it's feature overkill for my needs, I don't want to go the absolute cheapest route and buy something that would have more budget-level internal components.

I was looking at Synology as well, which may have a slightly better reputation than QNAP overall, but there's no HDMI out on their NAS. Getting a Shield TV Pro solves this issue easily, but introduces another component and therefore more complexity and potential points of failure. I may get it anyway, since I am a remuxing amateur and have no idea if I could run into codec support or compatibility issues playing movies directly from the NAS, and the Android TV software may be significantly easier to manage than the QNAP software.

Any tips or glaring holes in my line of thinking here before I continue on with this?

Edit: Also, is RAID 5 or 6 good for my specific situation? Seems like it could increase longevity.

A Bag of Milk fucked around with this message at 03:11 on Apr 17, 2021
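
On the RAID 5 vs 6 question, the capacity trade-off for a 4-bay box is simple arithmetic; a tiny sketch with an assumed drive size:

# Usable capacity of a 4-bay array: RAID 5 gives one drive to parity, RAID 6 two.
DRIVE_TB = 8     # assumed drive size
BAYS = 4

for name, parity_drives in (("RAID 5", 1), ("RAID 6", 2)):
    usable_tb = (BAYS - parity_drives) * DRIVE_TB
    print(f"{name}: {usable_tb} TB usable, survives {parity_drives} drive failure(s)")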

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Two days and no email with my RMA information from WD.....ugh

BlankSystemDaemon
Mar 13, 2009



Bob Morales posted:

Two days and no email with my RMA information from WD.....ugh
I just had an 8TB drive RMA'd through the retailer, and it took a good two weeks.

Crunchy Black
Oct 24, 2017

by Athanatos
Optane chat: it was forward-looking, but it's finally starting to mature. Hopefully Intel forces the issue, which it seems like they're going to from a platform perspective, and can bring the OEMs/ODMs along with them, which will force prices down. It's just about the only differentiator they have in light of Threadripper being a thing, so we'll see...

e: jesus christ. Haswell isn't platform-enabled, is it? As I recall it's just Broadwell?
https://www.ebay.com/itm/512GB-Inte...XYAAOSwHJZfZBgW

Crunchy Black fucked around with this message at 11:18 on Apr 17, 2021
