GreenBuckanneer
Sep 15, 2007

H110Hawk posted:

Yeah it's raid0 that makes my eyes twitch. You would get only downsides and 0 upside. Jbod is at least the same odds of a single disk dying with 1/4 the impact. I would do shr1 and call it a day. You're shucking disks right?

I've done that before, it seems like it's a little cheaper, at $170 for an external (5400/5900rpm) vs $218 7200rpm for 10TB. Bigger drives than that seem too much per gigabyte I think. Voids the warranty, but when they're this cheap does it really matter for watching movies lol
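
For reference, the per-terabyte math on those two prices works out as below. A quick sketch only; the dollar figures are just the ones quoted above, and "bare internal" is my label for the non-shucked option.

```python
# Quick $/TB comparison of the two 10 TB options quoted above.
options = {
    "shucked external (5400/5900rpm)": 170.0,
    "bare internal 7200rpm":           218.0,
}
capacity_tb = 10
for name, price in options.items():
    per_tb = price / capacity_tb
    per_gb = price / (capacity_tb * 1000)
    print(f"{name}: ${per_tb:.2f}/TB (${per_gb:.3f}/GB)")
```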

H110Hawk
Dec 28, 2006

GreenBuckanneer posted:

I've done that before, it seems like it's a little cheaper, at $170 for an external (5400/5900rpm) vs $218 7200rpm for 10TB. Bigger drives than that seem too much per gigabyte I think. Voids the warranty, but when they're this cheap does it really matter for watching movies lol

Definitely shuck. 7200rpm isn't buying you anything here. Also if you're going to do it without any resiliency I wouldn't bother spending a premium on a Synology. If you do SHR1 then sure.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



GreenBuckanneer posted:

Well I mean the viable alternative is RAID 10, but then that's half the storage space. I'd probably go for RAID 5 because it's unlikely for more than 1 drive to fail at a time; although I have seen entire arrays go down at once, it's uncommon enough in RAID 5 that businesses have sued the supplier over it. For a person like me it's fine, I'd be backing up anything anyways

JBOD sounds fine, actually

A good RAID array is not a backup; it's just a setup to reduce the risk of failure and increase read/write speeds.

You still need at least one off-site backup. I'd personally have an on-site backup as well, as that allows a much faster data recovery in the event of a catastrophic failure of the array (and protects you if a loss of quick internet access would otherwise prevent recovery), but your budget will determine feasibility.

IOwnCalculus
Apr 2, 2003





RAID isn't backup, sure. But RAID0 is guaranteeing you'll be restoring from backup on a regular basis.
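
To put rough numbers on the earlier point that JBOD has the same odds of a drive dying but a quarter of the impact, here's a back-of-the-envelope sketch. The 5% annualized failure rate is an assumed figure for illustration, not a measurement.

```python
# Back-of-the-envelope comparison of 4x 10 TB drives as RAID0 vs JBOD.
# The 5% annualized failure rate (AFR) per drive is an assumed figure.
afr = 0.05
drives = 4
capacity_tb = 10

# The chance that at least one of the four drives dies in a year is the
# same either way:
p_any = 1 - (1 - afr) ** drives
print(f"P(at least one drive fails in a year): {p_any:.1%}")

# What you lose per failure differs:
print(f"RAID0: the whole {drives * capacity_tb} TB array (restore everything)")
print(f"JBOD:  roughly {capacity_tb} TB (whatever lived on the dead drive)")
```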

GreenBuckanneer
Sep 15, 2007

H110Hawk posted:

Definitely shuck. 7200rpm isn't buying you anything here. Also if you're going to do it without any resiliency I wouldn't bother spending a premium on a Synology. If you do SHR1 then sure.

Is there a cheaper four disk bay variant? I feel like I'm solidly in the camp of wanting four in JBOD or similar mode

Nitrousoxide posted:

A good RAID array is not a backup; it's just a setup to reduce the risk of failure and increase read/write speeds.

I know this; I used to do data backup escalations as my main job. I just never set anything up like that at home. Like I said, anything important would just be backed up.

I looked around and I don't see a "premium" by going with Synology, unless you're seeing something cheaper?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
It's a "premium" over something like just shoving 4 external drives onto a USB hub at the back of your computer, using a 4-bay expander, hanging them off a Pi 4, or something like that. If all you're doing is JBOD and aren't looking for anything else that you can do with the Synology (Plex, SHR, etc.) then you're basically paying several hundred dollars extra for a nice case.

In terms of actual NAS boxes though, they're not particularly overpriced.

movax
Aug 30, 2008

Goddammit. Stumbled onto this today: my X11SSL-CF has a C232, which stops me from IGP pass-thru of the Kaby Lake (just upgraded to an eBay'd Xeon) for HW transcoding, and I don't want to stand up another box just to do Plex — the whole point of virtualizing the drat thing was to run everything in one box. Vaguely considering getting some little NUC-like thing, making that the dedicated Plex / media-serving box, and mounting the NAS over Ethernet, but I already have this perfectly good box w/ 64 GB ECC (not that it's needed for Plex) that was supposed to "do it all". :argh:

I constrained myself to mATX (because of case, Node 804) and then chose the X11 boards that had a SAS3008 on-board, to save PCIe slots. (Thinking out loud a bit here, sorry).

Decoding the Supermicro X11 family and constraining my available parts (now an E3-1285 v6, 64 GB RAM, T520-CR 2x 10 Gb SFP NIC), the -CTF almost gets it done, but a 2260 NVMe drive is odd sized (I can't use any of my 2280s I have lying around) and I don't need the twin X550 copper 10 Gb NIC — my switches have SFP+ cages and copper -> fiber converters seem expensive. Seems like X11SSM-F, X11SSH-F, or X11SSH-LN4F will do the trick for me now with only a mobo swap involved, and the primary M vs. H diff being whether I get a PCIe 3.0 x2 M.2 slot, or a x4 PCIe slot. Since I decided on the boot drive being a single NVMe drive, I figure I'll just get the X11SSH-F to maximize the bandwidth to that drive versus needing 4 LAN jacks.

So, guess I have to actually get a HBA now and chuck it into the x8 slot. Case is still limited to 8 3.5" drives, but if I got a SAS 3216 or 3224 based adapter... could I place my random SSDs on that HBA in addition to my spinny drives, instead of passing through the Intel PCH SATA ports? If I get case mod'y with this, I bet I could fit even more physical drives and hook them up. A x8 Gen 3 link is 64 Gb/s which 8 spinny drives sure as poo poo won't hit, and neither will SATA SSDs. Guess SAS SSDs might though.
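
A quick sanity check of that bandwidth math. The per-device throughput figures below are rough assumptions, not measurements.

```python
# Rough check of the x8 Gen3 vs. drive throughput math above.
lanes = 8
gen3_raw_gbps = lanes * 8.0                      # 8 GT/s per lane, raw
usable_gbps = gen3_raw_gbps * (128 / 130)        # 128b/130b encoding overhead
print(f"x8 Gen3: {gen3_raw_gbps:.0f} Gb/s raw, ~{usable_gbps / 8:.1f} GB/s usable")

aggregates_mb_s = {
    "8x 7200rpm HDD (~250 MB/s each)":   8 * 250,
    "8x SATA SSD (~550 MB/s each)":      8 * 550,
    "8x 12G SAS SSD (~1100 MB/s each)":  8 * 1100,
}
for name, mb_s in aggregates_mb_s.items():
    print(f"{name}: ~{mb_s / 1000:.1f} GB/s aggregate")
```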

Think there's a resale market for the X11SSL-CF? At this point I have that, a spare Xeon (E3-1230 v5) and just need some RAM and then I have a second box's guts just staring me in the face.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

movax posted:

I don't need the twin X550 copper 10 Gb NIC — my switches have SFP+ cages and copper -> fiber converters seem expensive.

These have dropped a shitload in price over the last few years.

https://www.fs.com/products/66612.html

movax
Aug 30, 2008

Moey posted:

These have dropped a shitload in price over the last few years.

https://www.fs.com/products/66612.html

Oh hey, that's not bad at all. Since I have to run new cable(s) in my house anyways though, I'd rather just skip dealing with a bad kink / punch-down on Cat6A and go straight to fiber connectivity. But, on the below...

movax posted:

Goddammit. Stumbled onto this today: my X11SSL-CF has a C232, which stops me from IGP pass-thru of the Kaby Lake (just upgraded to an eBay'd Xeon) for HW transcoding, and I don't want to stand up another box just to do Plex — the whole point of virtualizing the drat thing was to run everything in one box. Vaguely considering getting some little NUC-like thing, making that the dedicated Plex / media-serving box, and mounting the NAS over Ethernet, but I already have this perfectly good box w/ 64 GB ECC (not that it's needed for Plex) that was supposed to "do it all". :argh:

I constrained myself to mATX (because of case, Node 804) and then chose the X11 boards that had a SAS3008 on-board, to save PCIe slots. (Thinking out loud a bit here, sorry).

Decoding the Supermicro X11 family and constraining my available parts (now an E3-1285 v6, 64 GB RAM, T520-CR 2x 10 Gb SFP NIC), the -CTF almost gets it done, but a 2260 NVMe drive is odd sized (I can't use any of my 2280s I have lying around) and I don't need the twin X550 copper 10 Gb NIC — my switches have SFP+ cages and copper -> fiber converters seem expensive. Seems like X11SSM-F, X11SSH-F, or X11SSH-LN4F will do the trick for me now with only a mobo swap involved, and the primary M vs. H diff being whether I get a PCIe 3.0 x2 M.2 slot, or a x4 PCIe slot. Since I decided on the boot drive being a single NVMe drive, I figure I'll just get the X11SSH-F to maximize the bandwidth to that drive versus needing 4 LAN jacks.

So, guess I have to actually get a HBA now and chuck it into the x8 slot. Case is still limited to 8 3.5" drives, but if I got a SAS 3216 or 3224 based adapter... could I place my random SSDs on that HBA in addition to my spinny drives, instead of passing through the Intel PCH SATA ports? If I get case mod'y with this, I bet I could fit even more physical drives and hook them up. A x8 Gen 3 link is 64 Gb/s which 8 spinny drives sure as poo poo won't hit, and neither will SATA SSDs. Guess SAS SSDs might though.

Think there's a resale market for the X11SSL-CF? At this point I have that, a spare Xeon (E3-1230 v5) and just need some RAM and then I have a second box's guts just staring me in the face.

Hah, so funny story, seems like forum posts indicate Supermicro "hosed up" at least one C236 board that should have supported video. The SSH-F does call out "Intel VHD" which seems to be indicative, but man, that's mildly infuriating. I'm going to hold off on getting a mobo until I find someone confirming it, and then also, I was a dumb-rear end searching Supermicro's page and the X11 series has been extended to support the latest processors, so I sense some eBay flipping in my future.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Yeah, I wound up with the harsh realization that given my past trends I will wind up wanting to have 24+ drive bays available and while I might normally have a uATX case or whatever housing 8+ drives a SAS expander would be inevitable for me. I had a different Supermicro board that hosted Plex, pfSense, a bunch of containers, and even macOS because I was running them all through ESXi. Worked fine for me honestly until a weird bootrom on an LSI card overwrote parts of my BIOS so I couldn't get back into it again and see wtf is going on unless I paid $250+ for a 7+ year old motherboard to get a copy of the original EEprom which was insane and obstinate. My current plan is to use my current 3900X + X570 setup (it has IPMI as a workstation board) for a server when my current old NAS kicks the bucket sometime in the next 2 years and if forced to setup earlier use one of the NUCs I have with an eGPU to use the LSI HBA I have to hook up my 24 bay Supermicro SAS expander that consumes like 90w without any drives running at all.

On the other hand I could just buy your gear for fun and try to fit it into my SAS expander which is this in a JBOD configuration.

movax
Aug 30, 2008

necrobobsledder posted:

Yeah, I wound up with the harsh realization that given my past trends I will wind up wanting to have 24+ drive bays available and while I might normally have a uATX case or whatever housing 8+ drives a SAS expander would be inevitable for me. I had a different Supermicro board that hosted Plex, pfSense, a bunch of containers, and even macOS because I was running them all through ESXi. Worked fine for me honestly until a weird bootrom on an LSI card overwrote parts of my BIOS so I couldn't get back into it again and see wtf is going on unless I paid $250+ for a 7+ year old motherboard to get a copy of the original EEprom which was insane and obstinate. My current plan is to use my current 3900X + X570 setup (it has IPMI as a workstation board) for a server when my current old NAS kicks the bucket sometime in the next 2 years and if forced to setup earlier use one of the NUCs I have with an eGPU to use the LSI HBA I have to hook up my 24 bay Supermicro SAS expander that consumes like 90w without any drives running at all.

On the other hand I could just buy your gear for fun and try to fit it into my SAS expander which is this in a JBOD configuration.

Heh, I'm coming from a 20-bay Norco and now I just want something small that I can tuck in a corner of my office and that is more or less silent. I figured this array would be plenty, and if I do need to expand, I am willing to do another 8-drive pool, but at that point, who knows what disk sizes will be. My 18-drive array of 2 TB disks, if it's still even recoverable, is easily swallowed by my new 8x 16 TB drives.

Ended up getting sidetracked for a few hours today getting up to speed on Coffee Lake / Comet Lake... looks like I can just keep all my stuff as-is and deal with not having QuickSync, and then see what trickles out of Intel re: Ice Lake Xeons or anything from Rocket Lake. With AMD / EPYC, and looking for mATX, I'd have to burn a PCIe slot on a GPU to have any kind of HW video decode, whereas with Intel, as long as the SKU has an iGPU (and I don't get a mobo that doesn't support it), I've got that handled for whatever comes up.

The interim option would be scouting out an ASRock LGA 1200 board and tossing a Xeon W on it (more RAM!), or spending ~$250 on the X11SSH-F and getting QuickSync and everything wrapped up here.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



movax posted:

Goddammit. Stumbled onto this today: my X11SSL-CF has a C232, which stops me from IGP pass-thru of the Kaby Lake (just upgraded to an eBay'd Xeon) for HW transcoding, and I don't want to stand up another box just to do Plex — the whole point of virtualizing the drat thing was to run everything in one box. Vaguely considering getting some little NUC-like thing, making that the dedicated Plex / media-serving box, and mounting the NAS over Ethernet, but I already have this perfectly good box w/ 64 GB ECC (not that it's needed for Plex) that was supposed to "do it all". :argh:

I constrained myself to mATX (because of case, Node 804) and then chose the X11 boards that had a SAS3008 on-board, to save PCIe slots. (Thinking out loud a bit here, sorry).

Decoding the Supermicro X11 family and constraining my available parts (now an E3-1285 v6, 64 GB RAM, T520-CR 2x 10 Gb SFP NIC), the -CTF almost gets it done, but a 2260 NVMe drive is odd sized (I can't use any of my 2280s I have lying around) and I don't need the twin X550 copper 10 Gb NIC — my switches have SFP+ cages and copper -> fiber converters seem expensive. Seems like X11SSM-F, X11SSH-F, or X11SSH-LN4F will do the trick for me now with only a mobo swap involved, and the primary M vs. H diff being whether I get a PCIe 3.0 x2 M.2 slot, or a x4 PCIe slot. Since I decided on the boot drive being a single NVMe drive, I figure I'll just get the X11SSH-F to maximize the bandwidth to that drive versus needing 4 LAN jacks.

So, guess I have to actually get a HBA now and chuck it into the x8 slot. Case is still limited to 8 3.5" drives, but if I got a SAS 3216 or 3224 based adapter... could I place my random SSDs on that HBA in addition to my spinny drives, instead of passing through the Intel PCH SATA ports? If I get case mod'y with this, I bet I could fit even more physical drives and hook them up. A x8 Gen 3 link is 64 Gb/s which 8 spinny drives sure as poo poo won't hit, and neither will SATA SSDs. Guess SAS SSDs might though.

Think there's a resale market for the X11SSL-CF? At this point I have that, a spare Xeon (E3-1230 v5) and just need some RAM and then I have a second box's guts just staring me in the face.

Would you mind putting together a pcpartpicker list for what you made? A mATX formfactor is pretty much exactly what I'm looking at, and even if I don't get exactly what you made, it would be nice to have a good starting point.

GreenBuckanneer
Sep 15, 2007

DrDork posted:

It's a "premium" over something like just shoving 4 external drives onto a USB hub at the back of your computer, using a 4-bay expander, hanging them off a Pi 4, or something like that. If all you're doing is JBOD and aren't looking for anything else that you can do with the Synology (Plex, SHR, etc.) then you're basically paying several hundred dollars extra for a nice case.

In terms of actual NAS boxes though, they're not particularly overpriced.

The purpose would really be having easy-to-remove drives, so when one fails I can just pop another one in without losing much of anything (and if I did, it wouldn't be the end of the world), to have whatever the device is offload the encoding (if needed), and to have a web interface and be grabbable on the network. I suppose I could just build a mini-ATX box/server but :effort:

movax
Aug 30, 2008

Nitrousoxide posted:

Would you mind putting together a pcpartpicker list for what you made? A mATX formfactor is pretty much exactly what I'm looking at, and even if I don't get exactly what you made, it would be nice to have a good starting point.

It would take me a bit to do a PCPartpicker list and I'd never get to it but I can certainly type out the key parts here; some of the specifics are way back in my e-mails from 2017 I think. Currently:

  • CPU: E3-1285 v6 (initially, was E3-1230 v5)
  • Cooler: Noctua UH-U12S
  • Mobo: Supermicro X11SSL-CF
  • RAM: Crucial 4x16 GB ECC DDR4 2400 (CT16G4WFD824A)
  • Case: Fractal Node 804
  • Fans: Noctua NF-S12A / NF-F12 (this was before they made the newer model, the A12, which IIRC blends pressure/airflow perfectly)
  • Drives: Started off w/ 8x 8 TB WD Reds, now 8x Seagate Exos X16 16 TB
  • Drives: 1x 970 EVO 1 TB (boot drive), 1x 6.4 TB PM1735a (scratch / acceleration)
  • Network: Chelsio T520
  • PSU: Corsair RM650X

Now I'm looking at all the ASRock Rack offerings and drooling over rebuilding it, or just fixing my QuickSync problem and calling this a done deal.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

GreenBuckanneer posted:

I suppose I could just build a miniatx box/server but :effort:

Yeah, that's totally fair. If you want redundancy of any sort, SHR-1 is a good option. Otherwise just JBOD it and call it a day.

H110Hawk
Dec 28, 2006

GreenBuckanneer posted:

I suppose I could just build a miniatx box/server but :effort:

As long as it's a decision where you have all the available information, spend your money as you see fit. :v: I love my Synology.

BlankSystemDaemon
Mar 13, 2009



movax posted:

Goddammit. Stumbled onto this today: my X11SSL-CF has a C232, which stops me from IGP pass-thru of the Kaby Lake (just upgraded to an eBay'd Xeon) for HW transcoding, and I don't want to stand up another box just to do Plex — the whole point of virtualizing the drat thing was to run everything in one box. Vaguely considering getting some little NUC-like thing, making that the dedicated Plex / media-serving box, and mounting the NAS over Ethernet, but I already have this perfectly good box w/ 64 GB ECC (not that it's needed for Plex) that was supposed to "do it all". :argh:
Are you talking about passing a single GPU through to a single host (aka GVT-d, which basically just extends VT-d) or passing through the iGPU to multiple guests with GVT-g (effectively SR-IOV for graphics cards)?

BlankSystemDaemon fucked around with this message at 13:13 on Jan 12, 2021

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

GreenBuckanneer posted:

The purpose would really be having easy-to-remove drives, so when one fails I can just pop another one in without losing much of anything (and if I did, it wouldn't be the end of the world), to have whatever the device is offload the encoding (if needed), and to have a web interface and be grabbable on the network. I suppose I could just build a mini-ATX box/server but :effort:

It does sound like in the end you want some type of redundant RAID: 1/5/6. With RAID0 you will lose everything and have to go through the hassle of a restore. With JBOD you lose a random quarter of your files, with some fragmented files possibly only partially (assuming the filesystem doesn't get corrupted and you lose everything anyway). At the least you would need checksums of all your files to figure out what exactly you have lost, and then do a piecemeal restore, which is even more of a hassle than a full restore.
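
On that "you'd need checksums of everything to know what you lost" point, here's a minimal sketch of building and re-checking a manifest. The paths and the manifest filename are placeholders.

```python
# Minimal checksum manifest: build it once, then re-run the check after a
# failure to see exactly which files are missing or corrupt.
import hashlib
import os

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root, manifest="manifest.txt"):
    with open(manifest, "w") as out:
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                out.write(f"{sha256(path)}  {path}\n")

def check_manifest(manifest="manifest.txt"):
    with open(manifest) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if not os.path.exists(path):
                print("MISSING", path)
            elif sha256(path) != digest:
                print("CORRUPT", path)

# build_manifest("/mnt/media")   # before the failure
# check_manifest()               # afterwards, to get the piecemeal restore list
```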

Synology or QNAP are the minimal effort solution to achieve your needs, with not too much of an extra expense. An old desktop with FreeNAS/Unraid is the minor expense solution. RaspPi4 is the minimal expense, lower performance solution, but at least you don't have to shuck the drives.

Saukkis fucked around with this message at 16:11 on Jan 12, 2021

CopperHound
Feb 14, 2012

Saukkis posted:

With JBOD you lose a random quarter of your files, with some fragmented files possibly only partially (assuming the filesystem doesn't get corrupted and you lose everything anyway). At the least you would need checksums of all your files to figure out what exactly you have lost, and then do a piecemeal restore, which is even more of a hassle than a full restore.... Unraid is the minor expense solution.
:science: The Unraid implementation of JBOD does not split files across drives. You can also set directory-depth splitting rules for each share. For example, I can make sure individual seasons or entire shows don't get split across drives.
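
To illustrate the split-level idea, here's a toy model only (not Unraid's actual allocator; the share layout, disk names, and sizes are made up): files whose paths share the first few components all land on the same disk, so a season's episodes stay together.

```python
# Toy illustration of a directory-depth "split level" rule, NOT Unraid's
# actual allocator: files whose paths share the first `split_level`
# components all land on the same disk.
disks = {"disk1": 0, "disk2": 0, "disk3": 0}   # GB used (made-up numbers)
assignments = {}                               # split key -> chosen disk

def pick_disk(path, split_level=2):
    key = "/".join(path.split("/")[:split_level])     # e.g. "Show/Season 01"
    if key not in assignments:
        assignments[key] = min(disks, key=disks.get)  # emptiest disk wins
    return assignments[key]

for path, size_gb in [("Show/Season 01/e01.mkv", 4),
                      ("Show/Season 01/e02.mkv", 4),
                      ("Show/Season 02/e01.mkv", 4)]:
    disk = pick_disk(path)
    disks[disk] += size_gb
    print(path, "->", disk)
```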


I remember reading here about a JBOD implementation that let you granularly select duplication levels for any file. Was that the Storage Spaces thing that has random data loss when Windows is patched?

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Saukkis posted:

Synology or QNAP are the minimal effort solution to achieve your needs, with not too much of an extra expense. An old desktop with FreeNAS/Unraid is the minor expense solution. RaspPi4 is the minimal expense, lower performance solution, but at least you don't have to shuck the drives.

Also recommend checking out Open Media Vault in the same vein as FreeNAS/UnRAID.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Are the Rosewill rackmount ATX cases pretty much the go-to for shoving desktop parts in a rack, or is there a better option out there? I'd be looking for a 4U because IDGAF about density, just about having space to put a decent cooler on the CPU.

movax
Aug 30, 2008

BlankSystemDaemon posted:

Are you talking about passing a single GPU through to a single host (aka GVT-d, which basically just extends VT-d) or passing through the iGPU to multiple guests with GVT-g (effectively SR-IOV for graphics cards)?

The former — just passing through the iGPU for Plex to use the QuickSync engine / HW H.264/265/etc encoding cores.

Gonna Send It
Jul 8, 2010

movax posted:

The former — just passing through the iGPU for Plex to use the QuickSync engine / HW H.264/265/etc encoding cores.

Your two cheapest options appear to be getting a P400 Quadro in your x16 slot or getting something like the HP 290-p0043w as an external plex server for transcoding.

Do you need IPMI? All of the Supermicro workstation (not server) boards should be able to pass through the iGPU since they support using it as a display output.

movax
Aug 30, 2008

Gonna Send It posted:

Your two cheapest options appear to be getting a P400 Quadro in your x16 slot or getting something like the HP 290-p0043w as an external plex server for transcoding.

Do you need IPMI? All of the Supermicro workstation (not server) boards should be able to pass through the iGPU since they support using it as a display output.

I like having the IPMI, yeah. My specific issue is that the C232 PCH I have just simply doesn't support any iGPU usage whatsoever, and on some Supermicro boards w/ C236, like the X11SSM-F, the iGPU is still non-functional because they don't have iGPU voltage regulators. Switching to a X11SSH-F would enable the iGPU and I'd just have to dig up someone's SAS3008 PCIe card to hook my drives up to.

I think, since we're so close to the Milan, Rocket Lake, and Ice Lake launches, I'll keep an eye on what comes out and then make a decision on potentially just swapping out the mobo / platform entirely — happy to wait a month or two to rebuild this and make it another 5-10 year machine like my 2600K desktop. CPU power will be high enough that transcoding is probably fine on it (not even sure I need it, but versatility!), but the HW encoders in the Intel iGPU or NVENC are so fast that it's worth using them. That pushes me towards Intel more so than AMD for the iGPU, but the current platforms are "meh" on PCIe lanes to me, and having tons of PCIe 4.0 lanes would let me shove more NVMe drives in this NAS and have a pretty loving fast all-flash pool + spinny pool in a mATX box.

... I should have just bought a QNAP but this is fun!

e: this is probably the "best" choice today, but I feel like Supermicro delivers better SW (BIOS) quality and stability. Their X12s don't give me the PCIe slots and config I want, though. The AMD boards can deliver 4 x16 slots in mATX without blinking, but current-gen Intel platforms just don't have the lanes. This board is also very interesting, but one of the PCIe slots would immediately get eaten by a 1660 / some Turing-based, NVENC gen-7-capable GPU.

movax fucked around with this message at 19:39 on Jan 12, 2021

Gonna Send It
Jul 8, 2010

movax posted:

I like having the IPMI, yeah. My specific issue is that the C232 PCH I have just simply doesn't support any iGPU usage whatsoever, and on some Supermicro boards w/ C236, like the X11SSM-F, the iGPU is still non-functional because they don't have iGPU voltage regulators. Switching to a X11SSH-F would enable the iGPU and I'd just have to dig up someone's SAS3008 PCIe card to hook my drives up to.

Yeah, this is the same issue I found when I was looking for an iGPU + IPMI mobo. I ended up on an X9SRL-F (LGA-2011) with maxxxx PCIE lanes, a 9207-8i, a mellanox dual SFP+ card, my PCIE to NVME adapter, and a GTX1070 w/ patched driver for unlimited streams. This leaves me with 1 x8 slot still open for the future and another x4 slot hiding under the GPU I can use with a riser cable.


This thread + the X11SCA-F thread has some info on possible solutions as well, but it was too duct-tape and bailing wire for me.
https://forums.unraid.net/topic/88024-intel-socket-1151-motherboards-with-ipmi-and-support-for-igpu/

Gonna Send It fucked around with this message at 20:13 on Jan 12, 2021

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

CopperHound posted:

I remember reading here about a JBOD implementation that let you granularly select duplication levels for any file. Was that the Storage Spaces thing that has random data loss when Windows is patched?

You may be thinking of Stablebit DrivePool.

cage-free egghead
Mar 8, 2004
So I've got a variety of drives that I use for various storage but I'm not sure how to tie it all together. I've got an HP server with Unraid that uses 6x1tb 2.5" drives but only has USB 2 ports and no 3.5" bays. I've got a few 3.5" drives I'd like to use on that Unraid server for additional storage but not sure if USB 2 would work well, if at all with it. I do have the 3.5" drives on another small PC right now but not quite sure how I could pair them up? Any ideas?

Gonna Send It
Jul 8, 2010

cage-free egghead posted:

So I've got a variety of drives that I use for various storage but I'm not sure how to tie it all together. I've got an HP server with Unraid that uses 6x1tb 2.5" drives but only has USB 2 ports and no 3.5" bays. I've got a few 3.5" drives I'd like to use on that Unraid server for additional storage but not sure if USB 2 would work well, if at all with it. I do have the 3.5" drives on another small PC right now but not quite sure how I could pair them up? Any ideas?

Which HP server? Do you have any 5.25" bays?

cage-free egghead
Mar 8, 2004
DL380 G7, negative on the 5.25 bays. I did look into changing out the current setup for a 3.5" one but those are $300+ and I paid $90 for this thing and have a dozen 2.5" drives to use. They're just all 1tb and I have a few 4tb 3.5"s collecting dust atm.

BlankSystemDaemon
Mar 13, 2009



I just got a used HP DL380p G8 with 2x Xeon E5-2667v2 processors and 208GB memory, and 8 DIMM sockets which aren't occupied.
Despite the fact that it's considerably faster (by 900-1000 MHz, depending on turbo boost) and has twice as many cores as the 2x E5620s in my X3560, it's a LOT quieter.

movax
Aug 30, 2008

BlankSystemDaemon posted:

I just got a used HP DL380p G8 with 2x Xeon E5-2667v2 processors and 208GB memory, and 8 DIMM sockets which aren't occupied.
Despite the fact that it's considerably faster (by 900-1000 MHz, depending on turbo boost) and has twice as many cores as the 2x E5620s in my X3560, it's a LOT quieter.

That is... lots of cores and lots of memory! That's a 2U model?

I've been rabbit holing and reading up on stuff and I think I want this guy, if it wasn't vaporware: https://www.asrockrack.com/general/productdetail.asp?Model=EPYC3451D4U-2L2T2O8R#Specifications. The Node 804 has me in this mATX constraint and I feel like a TRX40-esque platform would solve all my problems. I love the idea of having a platform with x16 lanes that can go x4/x4/x4/x4 for a quartet of used NVMe drives, at least x8 to a SAS3008 for my spinny drives, another x8 for a big HHHL drive, x4 or x8 to my T520 (10 Gb NIC), spare M.2s on top of that, and then having a x8/x16 link to run some NV GPU for my NVENC needs. Those guys probably saw a market for dense infra that combines spinny drives + fast NVMe array + cache drives for the array all in one chassis, and are probably selling every unit they make to some cloud firm somewhere.

A Quadro T2000 can do everything that's needed, but I guess MXM modules are out of style now ; getting one of those and sticking it somewhere with some cabling to a x2/x4 random PCIe lane from the motherboard would have been really slick and a lot lower power than a full-size Turing card.

That little guy basically nails it all with everything that's on the motherboard. I guess if it's anything like the D-1500s when they first announced, it won't be out for a long rear end time. It's got a similar board that accepts a socketed processor... if they release a Zen 3 update to that (lol) that'd be amazing.

movax fucked around with this message at 08:07 on Jan 13, 2021

BlankSystemDaemon
Mar 13, 2009



movax posted:

That is... lots of cores and lots of memory! That's a 2U model?

I've been rabbit holing and reading up on stuff and I think I want this guy, if it wasn't vaporware: https://www.asrockrack.com/general/productdetail.asp?Model=EPYC3451D4U-2L2T2O8R#Specifications. The Node 804 has me in this mATX constraint and I feel like a TRX40-esque platform would solve all my problems. I love the idea of having a platform with x16 lanes that can go x4/x4/x4/x4 for a quartet of used NVMe drives, at least x8 to a SAS3008 for my spinny drives, another x8 for a big HHHL drive, x4 or x8 to my T520 (10 Gb NIC), spare M.2s on top of that, and then having a x8/x16 link to run some NV GPU for my NVENC needs. Those guys probably saw a market for dense infra that combines spinny drives + fast NVMe array + cache drives for the array all in one chassis, and are probably selling every unit they make to some cloud firm somewhere.

A Quadro T2000 can do everything that's needed, but I guess MXM modules are out of style now ; getting one of those and sticking it somewhere with some cabling to a x2/x4 random PCIe lane from the motherboard would have been really slick and a lot lower power than a full-size Turing card.

That little guy basically nails it all with everything that's on the motherboard. I guess if it's anything like the D-1500s when they first announced, it won't be out for a long rear end time. It's got a similar board that accepts a socketed processor... if they release a Zen 3 update to that (lol) that'd be amazing.
Yeah, it's the 2U model, the IBM X3560 is 2U too - so it's ultimately just down to IBM not having any real concept of the fans needing to go below 100%? :v:

I was looking at that board just the other day, and the only problem I can see with it is that it doesn't have a single PCI-express x1 slot that can be used for an audio card - but it makes sense, because there are no PCI-e 4.0 switches, not even from PLX/Broadcom (which

The only real problem with it that you've already sort-of hinted at, is that it's Zen (not Zen+ or Zen2, let alone Zen3) - so the CPU has a lot of errata which AMD categorizes as WONTFIX in the pdf of the revision guide.
Issues 1021, 1024, 1048, and 1049 can and will all affect real-world workloads - up to and including the fact that most OSes work around them in ways that pessimize performance. Three of them are supposedly rare (though even at only ~6 million lines of code, FreeBSD had to have a couple of patches for them - I can't imagine how many patches Linux would need with 15+ million lines of code, or how many Windows would need with 100+ million lines of code), but 1048 is incredibly common for basically any kind of CPU-intensive workload.

BlankSystemDaemon fucked around with this message at 11:46 on Jan 13, 2021

Gonna Send It
Jul 8, 2010

cage-free egghead posted:

DL380 G7, negative on the 5.25 bays. I did look into changing out the current setup for a 3.5" one but those are $300+ and I paid $90 for this thing and have a dozen 2.5" drives to use. They're just all 1tb and I have a few 4tb 3.5"s collecting dust atm.

The cheapest thing I came up with for something like this is to get a Node 304, a cheap(ish) ATX PSU, and an external HBA card in IT mode (9207-8e, $30-40) on ebay and turn the 304 into a DAS.

You could also just network share them from the other PC and use them as your download point for whatever you're doing.

BlankSystemDaemon
Mar 13, 2009



cage-free egghead posted:

So I've got a variety of drives that I use for various storage but I'm not sure how to tie it all together. I've got an HP server with Unraid that uses 6x1tb 2.5" drives but only has USB 2 ports and no 3.5" bays. I've got a few 3.5" drives I'd like to use on that Unraid server for additional storage but not sure if USB 2 would work well, if at all with it. I do have the 3.5" drives on another small PC right now but not quite sure how I could pair them up? Any ideas?
Since you've already got a rack server, I'm assuming you're not against more rack equipment?

You should check the option part list or maintenance & service guide; either one or the other will usually show you things like the HPE LFF drive cage with the option part number 496075-001, that can fit 6x 3.5" drives.
If you go that route, you'll need the maintenance & service manual in pdf format.

The other option is getting a SAS disk shelf and a SAS HBA with external ports like the LSI 9280-8E, along with some SFF-8088 cables for connecting them.

EDIT: I added some links to eBay to make it easy, but look around on local alternatives to eBay if you don't need to buy it right now, as you might be able to save a bit.

BlankSystemDaemon fucked around with this message at 17:11 on Jan 13, 2021

movax
Aug 30, 2008

BlankSystemDaemon posted:

Yeah, it's the 2U model, the IBM X3560 is 2U too - so it's ultimately just down to IBM not having any real concept of the fans needing to go below 100%? :v:

I was looking at that board just the other day, and the only problem I can see with it is that it doesn't have a single PCI-express x1 slot that can be used for an audio card - but it makes sense, because there are no PCI-e 4.0 switches, not even from PLX/Broadcom (which

The only real problem with it that you've already sort-of hinted at, is that it's Zen (not Zen+ or Zen2, let alone Zen3) - so the CPU has a lot of errata which AMD categorizes as WONTFIX in the pdf of the revision guide.
Issues 1021, 1024, 1048, and 1049 can and will all affect real-world workloads - up to and including the fact that most OSes work around them in ways that pessimize performance. Three of them are supposedly rare (though even at only ~6 million lines of code, FreeBSD had to have a couple of patches for them - I can't imagine how many patches Linux would need with 15+ million lines of code, or how many Windows would need with 100+ million lines of code), but 1048 is incredibly common for basically any kind of CPU-intensive workload.

Wow, 1048 is kind of ridiculous. Assuming those have been since fixed on anything after OG Zen.

I was unfamiliar with OCuLink until looking at these boards — seems like an interesting connector spec. I take it the state-of-the-art now is getting more similar to FPGAs where high-speed SerDes are routed to those connectors and then on-die, the PCS / protocol can change between SAS, PCIe, and I forget the third one? U.3, I think? Those little connectors would be perfect to route to some used / surplus Kioxia CM6/CD6 or PM1733/1735 when they start to filter out onto eBay more. Cheaper even if starting with used PCIe 3.0 SSDs. The Node 304 can hold 4x 2.5" SSDs with zero mods, so putting in a mirror of stripes seems like it would be a ridiculously fast array for a small SAN. How does RAID-Z/parity work with solid state drives? Feels like the write patterns on the parity drive / resultant wear is very different than the data drives themselves.

BlankSystemDaemon
Mar 13, 2009



movax posted:

Wow, 1048 is kind of ridiculous. Assuming those have been since fixed on anything after OG Zen.

I was unfamiliar with OCuLink until looking at these boards — seems like an interesting connector spec. I take it the state-of-the-art now is getting more similar to FPGAs where high-speed SerDes are routed to those connectors and then on-die, the PCS / protocol can change between SAS, PCIe, and I forget the third one? U.3, I think? Those little connectors would be perfect to route to some used / surplus Kioxia CM6/CD6 or PM1733/1735 when they start to filter out onto eBay more. Cheaper even if starting with used PCIe 3.0 SSDs. The Node 304 can hold 4x 2.5" SSDs with zero mods, so putting in a mirror of stripes seems like it would be a ridiculously fast array for a small SAN. How does RAID-Z/parity work with solid state drives? Feels like the write patterns on the parity drive / resultant wear is very different than the data drives themselves.
Yeah, 1048 ran a lot of developers a bit afoul.

OCuLink occupies a weird space; it's sort of become the de facto ThunderBolt 3/4 but for internal use only, except there's nothing about ThunderBolt that implies it has to go outside the case, and the connectors are smaller so they would take up less area on motherboards - although admittedly that only matters for mini-ITX.
The other use of OCuLink is for connecting disks, but MiniSAS HD exists too, and is made for that.

Were you thinking of U.2? Because that's mostly just for connecting disks.

Please remember that RAIDz is distributed parity, and it depends on the level - RAIDz1 is a simple XOR like most other RAID implementations, but modern ZFS uses vectorized Galois finite-field matrix calculations. Either way, the parity blocks are distributed across the entire vdev, so the writing pattern is a lot less obvious.
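
As a concrete illustration of the single-parity XOR idea, here's a miniature sketch (concept only; this is nothing like ZFS's actual on-disk layout or its Galois-field math for the higher parity levels):

```python
# Single-parity XOR in miniature: parity = d0 ^ d1 ^ d2, and any one
# missing block is the XOR of the survivors plus the parity.
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in one stripe
parity = xor_blocks(data)

# Lose the middle block, rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```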

As to wear-level in the face of ZFS parity, whether RAIDz or ZFS mirroring/ditto blocks for that matter, that entirely depends on the implementation as it's a question of support in ZFS as well as the device driver. Up until very recently (mid-2020, I think, unless you wanted to run software classified as not-ready-for-production), OpenZFS didn't really support it, and it was only ZFS on FreeBSD that did TRIM.

Basically, OpenZFS 2.0 should be all set with TRIM and exists on FreeBSD 13.0 and 12.2 (but FreeBSD was already set, so :shrug:), and presumably Linux distributions will have moved onto OpenZFS 2.0 if they support it at all (I know that's a contentious issue).

BlankSystemDaemon fucked around with this message at 19:42 on Jan 13, 2021

ILikeVoltron
May 17, 2003

I <3 spyderbyte!

BlankSystemDaemon posted:

As to wear-level in the face of ZFS parity, whether RAIDz or ZFS mirroring/ditto blocks for that matter, that entirely depends on the implementation as it's a question of support in ZFS as well as the device driver. Up until very recently (mid-2020, I think, unless you wanted to run software classified as not-ready-for-production), OpenZFS didn't really support it, and it was only ZFS on FreeBSD that did TRIM.

Basically, OpenZFS 2.0 should be all set with TRIM and exists on FreeBSD 13.0 and 12.2 (but FreeBSD was already set, so :shrug:), and presumably Linux distributions will have moved onto OpenZFS 2.0 if they support it at all (I know that's a contentious issue).

TRIM has been in ZFS for a while, long before 2.0; something along the lines of 0.8.2 IIRC included it. So if you're running Ubuntu 20.04, TRIM should be enabled by default, and if not, it's as simple as `zpool set autotrim=on <poolname>`

Also, when setting up multiple vdevs, ZFS will do some quick math to see whether each vdev is evenly used or not (as in, whether free space is the same across vdevs), and it will attempt to allocate space on the vdev with more free space first. You can stop this by adding all the vdevs in at the start (and at the same size) rather than layering them in. Or you can remove the content from the device and re-write it; liberal use of zfs send/receive can accomplish this with a simple script. Please note it's not perfect: even if all vdevs have the exact same free space, you might notice one vdev getting slightly more traffic. This is normal (and annoying) but is a feature, not a bug.
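
A toy model of that free-space bias (not OpenZFS's actual allocator, and the sizes are made up): new writes land on whichever vdev currently has the most free space, so a freshly added vdev soaks up most of the new data at first.

```python
# Toy model of free-space-biased allocation across vdevs (not actual
# OpenZFS code). New writes go to the vdev with the most free space.
vdevs = {
    "vdev0": {"size_gb": 10_000, "used_gb": 8_000},   # original, nearly full
    "vdev1": {"size_gb": 10_000, "used_gb": 0},       # freshly added
}

def write_blocks(n_gb):
    for _ in range(n_gb):
        target = max(vdevs, key=lambda v: vdevs[v]["size_gb"] - vdevs[v]["used_gb"])
        vdevs[target]["used_gb"] += 1

write_blocks(4_000)
for name, v in vdevs.items():
    print(f"{name}: {v['used_gb']}/{v['size_gb']} GB used")
```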

movax
Aug 30, 2008

BlankSystemDaemon posted:

Yeah, 1048 ran a lot of developers a bit afoul.

OCuLink occupies a weird space; it's sort of become the de facto ThunderBolt 3/4 but for internal use only, except there's nothing about ThunderBolt that implies it has to go outside the case, and the connectors are smaller so they would take up less area on motherboards - although admittedly that only matters for mini-ITX.
The other use of OCuLink is for connecting disks, but MiniSAS HD exists too, and is made for that.

Were you thinking of U.2? Because that's mostly just for connecting disks.

Please remember that RAIDz is distributed parity, and it depends on the level - RAIDz1 is a simple XOR like most other RAID implementations, but modern ZFS uses vectorized Galois finite-field matrix calculations. Either way, the parity blocks are distributed across the entire vdev, so the writing pattern is a lot less obvious.

As to wear-level in the face of ZFS parity, whether RAIDz or ZFS mirroring/ditto blocks for that matter, that entirely depends on the implementation as it's a question of support in ZFS as well as the device driver. Up until very recently (mid-2020, I think, unless you wanted to run software classified as not-ready-for-production), OpenZFS didn't really support it, and it was only ZFS on FreeBSD that did TRIM.

Basically, OpenZFS 2.0 should be all set with TRIM and exists on FreeBSD 13.0 and 12.2 (but FreeBSD was already set, so :shrug:), and presumably Linux distributions will have moved onto OpenZFS 2.0 if they support it at all (I know that's a contentious issue).

I fully plan on doing TrueNAS / BSD-based ZFS, so no worries there — I've never used ZFS on anything besides Solaris (lol) and FreeBSD via FreeNAS.

I'm familiar with U.2, but I've seen mention of U.3 which basically makes it, if I understand correctly, one connector, one type of cabling (I guess they picked a nominal impedance and stuck with it) and the controllers / PHYs on either end have to pass onto logic that can handle SAS, SATA or NVMe. Seems like it's rolling out with some of the newer PCIe 4.0 based NVMe drives like the Kioxias I mentioned earlier.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
OCuLink is probably not going to be around that long in the grand scheme, and personally I’m glad. They’re definitely smaller than the minisas connectors and can carry more sidebands, but they don’t have as good signal integrity and the on board connectors are pathetically fragile. Vendors have put down the surface mount versions that don’t have large pads or through holes and they get torn off the board even if the user is somewhat careful. Worse still is when the insertion causes the solder to crack on a couple links but the connector still looks fine, that’s a fun debug. Thankfully most have wised up and they are more sturdy on the board.

The retention is annoying too. It is the type that will not always disengage when someone is pulling it out, leading to people stressing them more trying to get them out (the secret is to push the connector in a little, then press the retention so the "fangs" don't get stuck in the connector).

The new hotness for gen5 and going forward will probably be mcio.

Also the “optical” part of oculink is a terrible lie, there are no optical cables for it at all. :mad:

Also there are oculink to u.2/u.3 cables (sff-8639) and they work great.

movax
Aug 30, 2008

priznat posted:

OCuLink is probably not going to be around that long in the grand scheme, and personally I’m glad. They’re definitely smaller than the minisas connectors and can carry more sidebands, but they don’t have as good signal integrity and the on board connectors are pathetically fragile. Vendors have put down the surface mount versions that don’t have large pads or through holes and they get torn off the board even if the user is somewhat careful. Worse still is when the insertion causes the solder to crack on a couple links but the connector still looks fine, that’s a fun debug. Thankfully most have wised up and they are more sturdy on the board.

The retention is annoying too. It is the type that will not always disengage if someone is pulling it out, leading to people stressing them more trying to get them out (secret is push the connector in a little, press the retention so the “fangs” don’t get stuck in the connector).

The new hotness for gen5 and going forward will probably be mcio.

Also the “optical” part of oculink is a terrible lie, there are no optical cables for it at all. :mad:

Also there are oculink to u.2/u.3 cables (sff-8639) and they work great.

Yeah — I saw the name and was like "... but where are the optics??". Guess they didn't learn anything from ThunderBolt's foray into that. Good heads up on the connector fragility — don't think these see a lot of cycles but if I do end up with a board with those connectors, most seem to be vertically oriented so maybe I can add some epoxy or similar to help out.

Got any links to recommended cables / adapters that are good from a SI POV?
