IOwnCalculus
Apr 2, 2003





Thermopyle posted:

I've got 24 drives smushed into my current case.

I need more space for drives and more sata ports and I don't like to spend money.

I only have some PCI Express 2 x1 slots left on my current motherboard.

My first thought is to buy another case and hack something together with some long-ish SATA cables and some SATA cards to go in those open slots I have.

That's starting to take up a lot of physical space though.

However, it seems like maybe that's my only choice without finding some expensive rack-mount case with room for more drives with new mobo/cpu/ram.

Any other suggestions?

I've done the long-rear end-SATA-cables (even eSATA-to-SATA at one point) and it's a mess. Would strongly recommend some sort of HBA and external array. Keep the case you've got unless you hate it, cases that support >24 drives are big money so you'll want to add an enclosure to it instead.

What's your current HBA setup in the slots that are occupied? Perhaps a higher-port-density HBA in one of those slots handling both internal and external SAS connections would work, like a 9201-16i and internal/external SAS cable adapters.


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

IOwnCalculus posted:

I've done the long-rear end-SATA-cables (even eSATA-to-SATA at one point) and it's a mess. Would strongly recommend some sort of HBA and external array. Keep the case you've got unless you hate it, cases that support >24 drives are big money so you'll want to add an enclosure to it instead.

What's your current HBA setup in the slots that are occupied? Perhaps a higher-port-density HBA in one of those slots handling both internal and external SAS connections would work, like a 9201-16i and internal/external SAS cable adapters.

Good idea.

I'm using two 9240-8i cards, but they don't have external SAS connections.

I see a 9201-16i (internal ports) and a 9201-16e (external ports)...nothing with a mixture of internal and external. So, maybe get one 9201-16i, move my current drives to that, and then get one 9201-16e to use with new drives in an external enclosure?

Anyone have recommendations for an external enclosure that'd work with this setup? I'm not familiar with how the external cabling works.

H110Hawk
Dec 28, 2006

Thermopyle posted:

Good idea.

I'm using two 9240-8i cards, but they don't have external SAS connections.

I see a 9201-16i (internal ports) and a 9201-16e (external ports)...nothing with a mixture of internal and external. So, maybe get one 9201-16i, move my current drives to that, and then get one 9201-16e to use with new drives in an external enclosure?

Anyone have recommendations for an external enclosure that'd work with this setup? I'm not familiar with how the external cabling works.

Google 8i8e. They make it.

CopperHound
Feb 14, 2012

Paul MaudDib posted:

Some tips on this build: you will need an 8-pin EPS power (CPU aux power) extension cable and a 24-pin ATX extension cable to make it work. They're cheap on ebay but you'll have to deal with the slowboat from china, so get it on order. I've emailed U-NAS support to mention this and ask them to put it on the page but they said "it depends on the PSU" which is total BS.

The case arrived with their PSU preinstalled. It came with a 24-pin ATX & 4-pin EPS cable that are almost the perfect length to route along the front of the case. Too bad there are no cable tie-down points there. The backplane was also wired up with two different-length molex splitters.

Also, the screws holding the case together suck.

Eletriarnation posted:

but you could get Ivy Bridge pretty cheap at this point; an X9SCM with a Xeon E3-1220 v2 will go for under $100. Conveniently, ECC DDR3 is also getting very inexpensive by now. Is there a reason you want a new platform other than reliability concerns?

I did end up getting this board and trying to get everything set up, but I'm running into one hangup: I don't actually have a monitor that accepts an analog signal and of course the IPMI password is not the default :v:

I also have to figure out my riser cable situation... I ended up with 16x riser cables and 8x slots. I guess I'll try cutting one of these cables instead of the slot itself.

IOwnCalculus
Apr 2, 2003





The 8i8e is rare and $texas. Adapters that can take internal SAS and provide you with an external SAS port are much cheaper. Like this: https://rover.ebay.com/rover/0/0/0?mpre=https%3A%2F%2Fwww.ebay.com%2Fulk%2Fitm%2F133052054969

I run my entire (admittedly not yet full) NetApp DS4243 on a single four lane SAS connection.

IOwnCalculus fucked around with this message at 04:49 on Aug 15, 2019

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

The 8i8e is rare and $texas.

Oh there you go. I bought piles of them back in... oh god, 2008. :stare: But I was ordering them for work.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Thermopyle posted:

Good idea.

I'm using two 9240-8i cards, but they don't have external SAS connections.

I see a 9201-16i (internal ports) and a 9201-16e (external ports)...nothing with a mixture of internal and external. So, maybe get one 9201-16i, move my current drives to that, and then get one 9201-16e to use with new drives in an external enclosure?

Anyone have recommendations for an external enclosure that'd work with this setup? I'm not familiar with how the external cabling works.

Empty MD1220's are going used for $200 on ebay, I think that would fit the bill.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

CopperHound posted:

I did end up getting this board and trying to get everything set up, but I'm running into one hangup: I don't actually have a monitor that accepts an analog signal and of course the IPMI password is not the default :v:

Sorry to hear that, it's a bit of a bind since by the time you're getting a VGA->HDMI converter you might as well buy a VGA monitor. If you have a spare slot-powered GPU, try using that to get up to the point that you can run headless. Might not need to be slot-powered if your server's PSU has a PCIe power connector.

If you don't have any spare GPUs but there are any e-recyclers near you that sell working gear to the public or a thrift store that stocks electronics, you can probably get an old fullscreen VGA monitor for cheap. Depending on where you work, they might be throwing them out there too.

Eletriarnation fucked around with this message at 14:41 on Aug 15, 2019

CopperHound
Feb 14, 2012

Okay, after a night of sleep I realized I was typing admin/admin instead of ADMIN/ADMIN for IPMI and I am logged in :kiddo:
Now I just need to deal with running Java in an age where computers try not to let you do horribly insecure things.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BangersInMyKnickers posted:

Empty MD1220's are going used for $200 on ebay, I think that would fit the bill.

The 1200/1220 don't require Dell-branded "enterprise" drives, correct? That is just when chaining them to a 32XX/i, managed by the main controller?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Moey posted:

The 1200/1220 don't require Dell-branded "enterprise" drives, correct? That is just when chaining them to a 32XX/i, managed by the main controller?

That's my understanding, yes. They'll complain a bit and it might not present all the firmware fields because Dell flashes some custom stuff on there, but they're just sata/sas devices at the end of the day

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Really wish I had a deeper "rack" at home; I have a shallower network cabinet, so it really limits what I can mount.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

BangersInMyKnickers posted:

Empty MD1220's are going used for $200 on ebay, I think that would fit the bill.

Unfortunately, the cheapest I see is like 500 bucks. Except one in Australia for $195.76 + $209.27 shipping to me in the USA!

Ika
Dec 30, 2004
Pure insanity

Just something interesting: I RMAed a 6TB RED drive, and the replacement is a white label NASWare drive.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Thermopyle posted:

Unfortunately, the cheapest I see is like 500 bucks. Except one in Australia for $195.76 + $209.27 shipping to me in the USA!

https://www.ebay.com/itm/Dell-Powervault-MD1220-1x-SAS-6gb-s-Controller-2x-PSU-No-Drive-Trays/382527750246

It's single controller but that should be fine for this purpose

Moey posted:

Really wish I had a deeper "rack" at home; I have a shallower network cabinet, so it really limits what I can mount.

MD1220's are surprisingly shallow compared to servers FYI. 19" deep, just an inch deeper than it is wide. A storage chode.

IOwnCalculus
Apr 2, 2003





Xyratex is the OEM for those Dells, as well as the Netapp 4243 / 4246, and a bunch of other similar devices. The rear view is a dead giveaway - room for two or four super long power supplies with a square profile, two or four rectangular slots in the middle for "controllers" (which seem to really just be SAS expanders in most cases).

The worst case scenario with any of them is typically with the Netapp stuff, in that the Netapp IOM uses a QSFP connector for SAS instead of a SAS connector. You can either get custom eSAS-QSFP cables, or you can swap the IOM out for a generic replacement that has eSAS. I went the latter route.

Something like this. Unless you get a screaming deal, you want one with the caddies included since they cost too much to buy separately. Also, protip: if you switch the IOM3/IOM6 out for a Xyratex / Compellent controller, you need to provide AC power to all of the power supplies for it to work at all.

IOwnCalculus fucked around with this message at 21:01 on Aug 15, 2019

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BangersInMyKnickers posted:

MD1220's are surprisingly shallow compared to servers FYI. 19" deep, just an inch deeper than it is wide. A storage chode.

Yeah, I run a handful of MD32XXi/MD36XXi at work; my lousy rack at home has a max usable depth of like 12" from the front mounting rails to the back of the enclosure.

It is wedged into a utility closet type dealy, couldn't really go bigger.

Crunchy Black
Oct 24, 2017

by Athanatos

IOwnCalculus posted:

lots of cool OEM info for MD1200s

Any idea of a part number for the rails? The dumbass I bought mine from on ebay shipped me two left-handers. He said he'd send me the other but that never happened. Though I did get a screaming deal, 350 shipped with a full complement of 1TB drives, only one has died thus far.

IOwnCalculus
Apr 2, 2003





No idea on rails, my Netapp 4243 didn't come with them either. I have it on some generic rackmount shelves at the bottom of a shared rack.

I will say that, even compared to any other rackmount device I've lifted, these fuckers are heavy before you even get the drives in. Would strongly recommend racking the chassis empty and putting everything in - PSUs, controllers, drives - only after that.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Good info, thanks IOC.

So, this is a stupid question, but I've forgotten everything I learned about SAS when I was first setting it up in my home server.

How's it work with these drive cabinets? I seem to recall something about expanders letting you do some sort of nonsense that runs a bunch of drives off of a single 8088/8087 port. At least more than the current four I get with a fanout cable.

So, I can just run a single eSAS cable from one of my controllers to one of these Xyratex boxes or what?

IOwnCalculus
Apr 2, 2003





Yep - I have my DS4243 hanging on a single 8088 cable. The "controller" module seems to just be a SAS expander similar to the popular HP cards, just in a custom form factor.

lsscsi output:



Everything with a 6 at the start of the SAS address is in the DS4243. 0-5 are the base SATA ports on my server's chipset, 7 is a small add-in controller built into the box.

In theory that SAS connection could become a bottleneck, so I wouldn't hang a bunch of SSDs on it, but in practice this seems to be fine.
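
If you want to script that same sanity check - which drives are hanging off the shelf versus the onboard ports - a rough sketch like the one below works. It assumes `lsscsi -t` is installed and prints a `sas:0x...` transport field for SAS-attached devices; the grouping-by-first-digit trick is just mirroring the eyeball method above, nothing official.
code:
#!/usr/bin/env python3
# Rough sketch: group block devices by the leading digit of their SAS address,
# mirroring the "everything starting with 6 is in the shelf" eyeball check.
# Assumes `lsscsi -t` is available and prints a sas:0x... field per device.
import re
import subprocess
from collections import defaultdict

out = subprocess.run(["lsscsi", "-t"], capture_output=True, text=True, check=True).stdout

groups = defaultdict(list)
for line in out.splitlines():
    if not line.strip():
        continue
    dev = line.split()[-1]                      # last column is the /dev node
    m = re.search(r"sas:0x([0-9a-f]+)", line)
    key = m.group(1)[0] if m else "non-sas"     # first digit of the SAS address
    groups[key].append(dev)

for key, devs in sorted(groups.items()):
    print(f"SAS address prefix {key}: {len(devs)} device(s): {', '.join(devs)}")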

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

IOwnCalculus posted:

Yep - I have my DS4243 hanging on a single 8088 cable. The "controller" module seems to just be a SAS expander similar to the popular HP cards, just in a custom form factor.

lsscsi output:



Everything with a 6 at the start of the SAS address is in the DS4243. 0-5 are the base SATA ports on my server's chipset, 7 is a small add-in controller built into the box.

In theory that SAS connection could become a bottleneck, so I wouldn't hang a bunch of SSDs on it, but in practice this seems to be fine.

Thanks. I'm going to buy the poo poo out of one of these bitches.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Thermopyle posted:

Good info, thanks IOC.

So, this is a stupid question, but I've forgotten everything I learned about SAS when I was first setting it up in my home server.

How's it work with these drive cabinets? I seem to recall something about expanders letting you do some sort of nonsense that runs a bunch of drives off of a single 8088/8087 port. At least more than the current four I get with a fanout cable.

So, I can just run a single eSAS cable from one of my controllers to one of these Xyratex boxes or what?

MiniSAS and HD-MiniSAS are essentially 4 links bonded into the same physical cable. They all hit the controller/expander and then fan out to the drives. So your maximum transfer speed is going to be dictated by that (I'm pretty sure the 12x0 series are 6gig SAS, so 24 Gb/s aggregate, or 48 with a dual-controller/dual-cable setup).

The md12x0 series also has a cool feature where you can run it in split mode, where one controller addresses bays 0-11 and the second 12-23. It can be good if you're trying to do drive expansion for 2 servers and aren't planning on populating a lot of bays. They took it away with the 14x0 series, unfortunately.

BangersInMyKnickers fucked around with this message at 17:12 on Aug 16, 2019
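
Back-of-the-envelope numbers for that, sketched out below; the 6 Gb/s per lane and 8b/10b encoding figures are the usual SAS 2.0 assumptions, not anything specific to the MD12x0.
code:
# Back-of-the-envelope: usable bandwidth of a quad-lane SAS 2.0 cable.
# Assumes 6 Gb/s signalling per lane and 8b/10b line coding (SAS 2.0).
LANES_PER_CABLE = 4        # MiniSAS / HD-MiniSAS = 4 bonded links
LINE_RATE_GBPS = 6.0       # per-lane signalling rate
ENCODING_EFF = 8 / 10      # 8b/10b coding overhead

per_lane_gbs = LINE_RATE_GBPS * ENCODING_EFF / 8      # GB/s of payload per lane
cable_gbs = per_lane_gbs * LANES_PER_CABLE

print(f"Single cable: ~{cable_gbs:.1f} GB/s usable")        # ~2.4 GB/s
print(f"Dual cable:   ~{cable_gbs * 2:.1f} GB/s usable")    # ~4.8 GB/s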

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

BangersInMyKnickers posted:

MiniSAS and HD-MiniSAS are essentially 4 links bonded into the same physical cable. They all hit the controller/expander and then fan out to the drives. So your maximum transfer speed is going to be dictated by that (I'm pretty sure the 12x0 series are 6gig SAS, so 24 Gb/s aggregate, or 48 with a dual-controller/dual-cable setup).

The md12x0 series also has a cool feature where you can run it in split mode, where one controller addresses bays 0-11 and the second 12-23. It can be good if you're trying to do drive expansion for 2 servers and aren't planning on populating a lot of bays. They took it away with the 14x0 series, unfortunately.

So, it's not 1 link per drive?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Thermopyle posted:

So, it's not 1 link per drive?

No. Even with internal hot-plug bays on servers, there are always one or two quad-link HDSAS bundles feeding the storage backplane. It generally won't matter for even a fully populated 15k drive pool so long as you have a dual-controller/dual-link setup; 64gbps gets you around 8GB/s, or 333MB/s per disk on a 24-bay form factor, and that's faster than any modern spinning disk can do even on sequential 1MB reads. It becomes a problem when you are using flash drives, however. It's pretty common to see a single flash drive pushing in excess of 1GB/s with the new SAS12 interfaces, so saturation can become a problem quickly for a flash array with a large sequential workload.

e: U.2 drives are a little different since you're feeding PCIe lanes up to the front, but even then I am fairly sure it's some manner of riser/PCIe switch setup where drives are ultimately sharing lanes
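
Roughly how that shakes out per bay, using the 64 Gb/s dual-link figure from the post above; the ~200 MB/s spinning-disk and ~1 GB/s flash sequential numbers are assumed round figures, not measurements.
code:
# Rough per-bay share of a shared SAS uplink on a 24-bay shelf.
# The 64 Gb/s dual-link figure is taken from the post above; the per-drive
# sequential rates are assumed round numbers, not measurements.
BAYS = 24
UPLINK_GBPS = 64                          # dual-controller / dual-link aggregate
uplink_mbs = UPLINK_GBPS / 8 * 1000       # ~8000 MB/s total
per_bay_mbs = uplink_mbs / BAYS           # ~333 MB/s per bay

HDD_SEQ_MBS = 200      # assumed sequential throughput of one spinning disk
SSD_SEQ_MBS = 1000     # assumed sequential throughput of one SAS3 flash drive

print(f"Per-bay share of the uplink: ~{per_bay_mbs:.0f} MB/s")
print(f"24 spinning disks saturate it: {BAYS * HDD_SEQ_MBS > uplink_mbs}")   # False
print(f"24 flash drives saturate it:   {BAYS * SSD_SEQ_MBS > uplink_mbs}")   # True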

BlankSystemDaemon
Mar 13, 2009




BangersInMyKnickers posted:

No. Even with internal hot-plug bays on servers, there are always one or two quad-link HDSAS bundles feeding the storage backplane. It generally won't matter for even a fully populated 15k drive pool so long as you have a dual-controller/dual-link setup; 64gbps gets you around 8GB/s, or 333MB/s per disk on a 24-bay form factor, and that's faster than any modern spinning disk can do even on sequential 1MB reads. It becomes a problem when you are using flash drives, however. It's pretty common to see a single flash drive pushing in excess of 1GB/s with the new SAS12 interfaces, so saturation can become a problem quickly for a flash array with a large sequential workload.

e: U.2 drives are a little different since you're feeding PCIe lanes up to the front, but even then I am fairly sure it's some manner of riser/PCIe switch setup where drives are ultimately sharing lanes

Well, there's still 8b/10b overhead on any PCIe link so it's not quite that much - but I believe modern HBAs are doing 64b/66b?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

D. Ebdrup posted:

Well, there's still 8b/10b overhead on any PCIe link so it's not quite that much - but I believe modern HBAs are doing 64b/66b?

PCIe2 was 8/10, 3 is 64/66 so overhead should be under 5% for modern gear

Crunchy Black
Oct 24, 2017

by Athanatos
When I think about the trace routing necessary just for PCIe3, let alone 4, then I put Oculink into that equation, I nearly get sick.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

BangersInMyKnickers posted:

PCIe2 was 8/10, 3 is 64/66 so overhead should be under 5% for modern gear

There’s also packet layer overhead: TLP headers, and DLLP packets used for acknowledgments, flow control, etc. So for example the nominal half duplex throughput of gen3x8 pcie ought to be about 7.76 GB/s after 64b/66b line coding, but I’ve tested gen3x8 LSI raid controllers and they seem to max out at about 6.5 GB/s. (I was doing raid 0 across 16 sata3 ssds on a 16 port LSI, one disk per port, so the bottleneck wasn’t on that side.)

One factor is that even though Intel created pcie and provided for max packet sizes as high as 4KB, in practice even their server CPUs never seem to support more than 256 byte TLPs. Consumer segment CPUs are even worse at 128B. Everything on the bus must negotiate down to the least common denominator max packet size, so if you have an Intel cpu you’re stuck with relatively small TLPs, and consequently are losing a relatively high fraction of the channel bandwidth to packet headers.
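
To make the arithmetic explicit, here's a quick sketch; the ~24 bytes of per-TLP overhead is an assumed round figure covering framing, header and LCRC plus a share of DLLP traffic, not a number pulled from the spec.
code:
# Rough PCIe gen3 x8 throughput after line coding and per-packet overhead.
# The 24-byte per-TLP overhead is an assumed figure (framing + header + LCRC
# plus some DLLP share), not something taken from the PCIe spec itself.
LANES = 8
GEN3_GTPS = 8.0              # 8 GT/s per lane
LINE_CODE_EFF = 64 / 66      # 64b/66b encoding on gen3

raw_gbs = LANES * GEN3_GTPS * LINE_CODE_EFF / 8       # ~7.76 GB/s before packets

TLP_OVERHEAD_BYTES = 24
for payload in (128, 256, 512, 4096):
    eff = payload / (payload + TLP_OVERHEAD_BYTES)
    print(f"MaxPayload {payload:4d}B -> ~{raw_gbs * eff:.2f} GB/s ({eff:.0%} packet efficiency)")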

Steakandchips
Apr 30, 2009

Can someone recommend some good home surveillance cameras that can write to my synology nas via Surveillance Station?

I'd like:

1. Night vision (it will be pointed out of our living room window that looks onto the entrance to our house).
2. PoE ethernet is preferable, but wifi and mains powered is fine too.
3. Not Chinese, i.e. not full of vulnerabilities and spyware, i.e. not Hikvision.
4. Works with Synology NASes.
5. Available in the UK.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I can at least give you options for number 4:

https://www.synology.com/en-uk/compatibility/camera

Honestly, with cameras it feels like there are 100,000 types of the loving things.

Rooted Vegetable
Jun 1, 2002
This thread has extensive camera discussion: https://forums.somethingawful.com/showthread.php?threadid=3635963

Sir Bobert Fishbone
Jan 16, 2006

Beebort
I have an amcrest that seems to tick all your boxes. Just set it up again last night with my Syno 215j

Steakandchips
Apr 30, 2009

Thanks guys, posted in the home automation and security thread.

Will check out Amcrest too.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

Steakandchips posted:

Thanks guys, posted in the home automation and security thread.

Will check out Amcrest too.

I'll respond in the other thread, but you've got a bunch of conflicting requirements.

Crunchy Black
Oct 24, 2017

by Athanatos

BobHoward posted:

One factor is that even though Intel created pcie and provided for max packet sizes as high as 4KB, in practice even their server CPUs never seem to support more than 256 byte TLPs. Consumer segment CPUs are even worse at 128B. Everything on the bus must negotiate down to the least common denominator max packet size, so if you have an Intel cpu you’re stuck with relatively small TLPs, and consequently are losing a relatively high fraction of the channel bandwidth to packet headers.

This is really interesting. Source?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Crunchy Black posted:

This is really interesting. Source?

There's not so much a definitive guide on what devices have what MPS (max payload size); it's more just how devices seem to come out. The actual acceptable values (up to 4k) are in the PCIe Spec.

Interestingly, the Intel DMA engine (crystal beach) will only do 64-byte TLPs, so it sucks even worse despite supposedly being for high-throughput DMA transfers.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

BobHoward posted:

There’s also packet layer overhead: TLP headers, and DLLP packets used for acknowledgments, flow control, etc. So for example the nominal half duplex throughput of gen3x8 pcie ought to be about 7.76 GB/s after 64b/66b line coding, but I’ve tested gen3x8 LSI raid controllers and they seem to max out at about 6.5 GB/s. (I was doing raid 0 across 16 sata3 ssds on a 16 port LSI, one disk per port, so the bottleneck wasn’t on that side.)

One factor is that even though Intel created pcie and provided for max packet sizes as high as 4KB, in practice even their server CPUs never seem to support more than 256 byte TLPs. Consumer segment CPUs are even worse at 128B. Everything on the bus must negotiate down to the least common denominator max packet size, so if you have an Intel cpu you’re stuck with relatively small TLPs, and consequently are losing a relatively high fraction of the channel bandwidth to packet headers.

I've consistently gotten 7.5GB/s off PERC 740/840's when they're really hitting on the cache right. This is on Epyc1 so maybe it doesn't have the same overhead issues as the Xeons.

BangersInMyKnickers fucked around with this message at 16:04 on Aug 19, 2019

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

BangersInMyKnickers posted:

I've consistently gotten 7.5GB/s off PERC 740/840's when they're really hitting on the cache right. This is on Epyc1 so maybe it doesn't have the same overhead issues as the Xeons.

If you're running Linux on them can you pastebin the output of "sudo lspci -vvv" on one? Would be interesting to see.


BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

BobHoward posted:

If you're running Linux on them can you pastebin the output of "sudo lspci -vvv" on one? Would be interesting to see.

Here you go:

https://pastebin.com/iJqN34Gi

Looks like it might be doing 512 byte TLPs instead of 256?
code:
        DevCap: MaxPayload 1024 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
        DevCtl: Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+
            RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
            MaxPayload 512 bytes, MaxReadReq 512 bytes
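
If anyone else wants to pull just the negotiated values out without scrolling a whole pastebin, a sketch like this works, assuming `lspci -vvv` output shaped like the above (DevCap's MaxPayload is what the device could do; the DevCtl block's MaxPayload/MaxReadReq line is what actually got negotiated; needs root to see the capability blocks on most distros).
code:
#!/usr/bin/env python3
# Sketch: print the negotiated MaxPayload for each PCIe device.
# Assumes `lspci -vvv` output shaped like the pastebin above, where the
# negotiated value appears on a "MaxPayload ... MaxReadReq ..." line.
import re
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line.split()[0]         # bus:dev.fn of the current device
    elif "MaxPayload" in line and "MaxReadReq" in line:
        mps = re.search(r"MaxPayload (\d+) bytes", line).group(1)
        print(f"{device}: negotiated MaxPayload {mps} bytes")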
