|
Thermopyle posted:I've got 24 drives smushed into my current case. I've done the long-rear end-SATA-cables (even eSATA-to-SATA at one point) and it's a mess. Would strongly recommend some sort of HBA and external array. Keep the case you've got unless you hate it, cases that support >24 drives are big money so you'll want to add an enclosure to it instead. What's your current HBA setup in the slots that are occupied? Perhaps a higher-port-density HBA in one of those slots handling both internal and external SAS connections would work, like a 9201-16i and internal/external SAS cable adapters.
|
# ? Aug 15, 2019 00:58 |
|
IOwnCalculus posted:I've done the long-rear end-SATA-cables (even eSATA-to-SATA at one point) and it's a mess. Would strongly recommend some sort of HBA and external array. Keep the case you've got unless you hate it, cases that support >24 drives are big money so you'll want to add an enclosure to it instead. Good idea. I'm using two 9240-8i cards, but they don't have external SAS connections. I see a 9201-16i (internal ports) and a 9201-16e (external ports)...nothing with a mixture of internal and external. So, maybe get one 9201-16i, move my current drives to that, and then get one 9201-16e to use with new drives in an external enclosure? Anyone have recommendations for an external enclosure that'd work with this setup? I'm not familiar with how the external cabling works.
|
# ? Aug 15, 2019 01:10 |
|
Thermopyle posted:Good idea. Google 8i8e. They make it.
|
# ? Aug 15, 2019 03:09 |
|
Paul MaudDib posted:Some tips on this build: you will need an 8-pin EPS power (CPU aux power) extension cable and a 24-pin ATX extension cable to make it work. They're cheap on ebay but you'll have to deal with the slowboat from china, so get it on order. I've emailed U-NAS support to mention this and ask them to put it on the page but they said "it depends on the PSU" which is total BS. Also, the screws holding the case together suck. Eletriarnation posted:but you could get Ivy Bridge pretty cheap at this point; an X9SCM with a Xeon E3-1220 v2 will go for under $100. Conveniently, ECC DDR3 is also getting very inexpensive by now. Is there a reason you want a new platform other than reliability concerns? I also have to figure out my riser cable situation... I ended up with 16x riser cables and 8x slots. I guess I'll try cutting one of these cables instead of the slot itself.
|
# ? Aug 15, 2019 04:46 |
|
The 8i8e is rare and $texas. Adapters that can take internal SAS and provide you with an external SAS port are much cheaper. Like this: https://rover.ebay.com/rover/0/0/0?mpre=https%3A%2F%2Fwww.ebay.com%2Fulk%2Fitm%2F133052054969 I run my entire (admittedly not yet full) NetApp DS4243 on a single four lane SAS connection. IOwnCalculus fucked around with this message at 04:49 on Aug 15, 2019 |
# ? Aug 15, 2019 04:46 |
|
IOwnCalculus posted:The 8i8e is rare and $texas. Oh there you go. I bought piles of them back in... Oh god, 2008. But I was ordering them for work.
|
# ? Aug 15, 2019 05:01 |
|
Thermopyle posted:Good idea. Empty MD1220's are going used for $200 on ebay, I think that would fit the bill.
|
# ? Aug 15, 2019 14:34 |
|
CopperHound posted:I did end up getting this board and trying to get everything set up, but I'm running into one hangup: I don't actually have a monitor that accepts an analog signal and of course the IPMI password is not the default Sorry to hear that, it's a bit of a bind since at the point you are getting a VGA->HDMI converter you might as well buy a VGA monitor. If you have a spare slot-powered GPU, try using that to get up to the point that you can run headless. Might not need to be slot-powered if your server's PSU has a PCIe power connector. If you don't have any spare GPUs but there are any e-recyclers near you that sell working gear to the public or a thrift store that stocks electronics, you can probably get an old fullscreen VGA monitor for cheap. Depending on where you work, they might be throwing them out there too. Eletriarnation fucked around with this message at 14:41 on Aug 15, 2019 |
# ? Aug 15, 2019 14:37 |
|
Okay, after a night of sleep I realized I was typing admin/admin instead of ADMIN/ADMIN for IPMI, and I am logged in. Now I just need to deal with running Java in an age where computers try not to let you do horribly insecure things.
|
# ? Aug 15, 2019 16:40 |
|
BangersInMyKnickers posted:Empty MD1220's are going used for $200 on ebay, I think that would fit the bill. The 1200/1220 don't require Dell branded "enterprise" drives correct? That is just when chaining them to a 32XX/i, managed by the main controller?
|
# ? Aug 15, 2019 18:05 |
|
Moey posted:The 1200/1220 don't require Dell branded "enterprise" drives correct? That is just when chaining them to a 32XX/i, managed by the main controller? That's my understanding, yes. They'll complain a bit and it might not present all the firmware fields because Dell flashes some custom stuff on there, but they're just SATA/SAS devices at the end of the day.
|
# ? Aug 15, 2019 18:39 |
|
Really wish I had a deeper "rack" at home. I have a shallower network cabinet, so it really limits what I can mount.
|
# ? Aug 15, 2019 19:46 |
|
BangersInMyKnickers posted:Empty MD1220's are going used for $200 on ebay, I think that would fit the bill. Unfortunately, the cheapest I see is like 500 bucks. Except one in Australia for $195.76 + $209.27 shipping to me in the USA!
|
# ? Aug 15, 2019 19:52 |
|
Just something interesting: I RMAed a 6TB RED drive, and the replacement is a white label NASWare drive.
|
# ? Aug 15, 2019 20:02 |
|
Thermopyle posted:Unfortunately, the cheapest i see is like 500 bucks. Except one in Australia for $195.76 + $209.27 shipping to me in the USA! https://www.ebay.com/itm/Dell-Powervault-MD1220-1x-SAS-6gb-s-Controller-2x-PSU-No-Drive-Trays/382527750246 It's single controller but that should be fine for this purpose Moey posted:Really wish I had a deeper "rack" at home, have a shallow-er network cabinet, so it really limits what I can mount. MD1220's are surprisingly shallow compared to servers FYI. 19" deep, just an inch deeper than it is wide. A storage chode.
|
# ? Aug 15, 2019 20:36 |
|
Xyratex is the OEM for those Dells, as well as the Netapp 4243 / 4246, and a bunch of other similar devices. The rear view is a dead giveaway - room for two or four super long power supplies with a square profile, two or four rectangular slots in the middle for "controllers" (which seem to really just be SAS expanders in most cases). The worst case scenario with any of them is typically with the Netapp stuff, in that the Netapp IOM uses a QSFP connector for SAS instead of a SAS connector. You can either get custom eSAS-QSFP cables, or you can swap the IOM out for a generic replacement that has eSAS. I went the latter route. Something like this. Unless you get a screaming deal you want one with the caddies included since they cost too much to buy. Also protip if you switch the IOM3/IOM6 out for a Xyratex / Compellent controller, you need to provide AC power to all of the power supplies for it to work at all. IOwnCalculus fucked around with this message at 21:01 on Aug 15, 2019 |
# ? Aug 15, 2019 20:53 |
|
BangersInMyKnickers posted:MD1220's are surprisingly shallow compared to servers FYI. 19" deep, just an inch deeper than it is wide. A storage chode. Yeah, I run a handful of MD32XXi/MD36XXi at work, my lousy rack at home has like a max usable space from the front mounting to the back of the enclosure of 12". It is wedged into a utility closet type dealy, couldn't really go bigger.
|
# ? Aug 15, 2019 21:10 |
|
IOwnCalculus posted:lots of cool OEM info for MD1200s Do those come with rails, or did you have to source some?
|
# ? Aug 15, 2019 22:54 |
|
No idea on rails, my Netapp 4243 didn't come with them either. I have it on some generic rackmount shelves at the bottom of a shared rack. I will say that, even compared to any other rackmount device I've lifted, these fuckers are heavy before you even get the drives in. Would strongly recommend loading the chassis empty and putting everything in - PSUs, controllers, drives - only after that.
|
# ? Aug 15, 2019 23:01 |
|
Good info, thanks IOC. So, this is a stupid question, but I've forgotten everything I learned about SAS when I was first setting it up in my home server. How's it work with these drive cabinets? I seem to recall something about expanders letting you do some sort of nonsense that runs a bunch of drives off of a single 8088/8087 port. At least more than the current four I get with a fanout cable. So, I can just run a single eSAS cable from one of my controllers to one of these Xyratex boxes or what?
|
# ? Aug 15, 2019 23:53 |
|
Yep - I have my DS4243 hanging on a single 8088 cable. The "controller" module seems to just be a SAS expander similar to the popular HP cards, just in a custom form factor. lsscsi output: Everything with a 6 at the start of the SAS address is in the DS4243. 0-5 are the base SATA ports on my server's chipset, 7 is a small add-in controller built into the box. In theory that SAS connection could become a bottleneck, so I wouldn't hang a bunch of SSDs on it, but in practice this seems to be fine.
|
# ? Aug 16, 2019 00:12 |
|
IOwnCalculus posted:Yep - I have my DS4243 hanging on a single 8088 cable. The "controller" module seems to just be a SAS expander similar to the popular HP cards, just in a custom form factor. Thanks. I'm going to buy the poo poo out of one of these bitches.
|
# ? Aug 16, 2019 00:52 |
|
Thermopyle posted:Good info, thanks IOC. MiniSAS and HD-MiniSAS are essentially 4 links bonded into the same physical cable. They all hit the controller/expander and then fan out to the drives. So your maximum transfer speed is going to be dictated by that (I'm pretty sure the 12x0 series are 6gig SAS, so 24 aggregate, or 48 with a dual controller/dual cable setup). The md12x0 series also has a cool feature where you can run it in split mode, where one controller addresses bays 0-11 and the second 12-23. It can be good if you're trying to do drive expansion for 2 servers and aren't planning on populating a lot of bays. They took it away with the 14x0 series, unfortunately. BangersInMyKnickers fucked around with this message at 17:12 on Aug 16, 2019 |
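The link math above can be sanity-checked in a couple of lines of Python. This is just a sketch: it assumes 6 Gbit/s SAS2 links and ignores encoding and protocol overhead.

```python
# Aggregate bandwidth of a MiniSAS (SFF-8088/8087) cable: four bonded links.
SAS2_GBPS_PER_LINK = 6   # SAS2 line rate per link, Gbit/s
LINKS_PER_CABLE = 4      # MiniSAS bundles four links into one connector

single_cable = SAS2_GBPS_PER_LINK * LINKS_PER_CABLE  # one controller/cable
dual_cable = 2 * single_cable                        # dual controller/dual cable

print(single_cable, dual_cable)  # 24 48
```

That matches the 24/48 aggregate figures in the post.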
# ? Aug 16, 2019 17:10 |
|
BangersInMyKnickers posted:MiniSAS and HD-MiniSAS are essentially 4 links bonded in to the same physical cable. They all hit the controller/expander and then fan out to the drives. So your maximum transfer speed is going to be dictated by that (I'm pretty sure the 12x0 series are 6gig SAS so 24 aggregate, or 48 with a dual controller/dual cable setup). So, it's not 1 link per drive?
|
# ? Aug 16, 2019 18:30 |
|
Thermopyle posted:So, it's not 1 link per drive? No. Even with internal hot-plug bays on servers, there are always one or two quad-link HD-SAS bundles feeding the storage backplane. It generally won't matter for even a fully populated 15k drive pool so long as you have a dual-controller/dual-link setup; 64gbps gets you around 8GB/s, or 333MB/s per disk on a 24-bay form factor, and that's faster than any modern spinning disk can do even on sequential 1MB reads. It becomes a problem when you are using flash drives, however. It's pretty common to see a single flash drive pushing in excess of 1GB/s with the new SAS12 interfaces, so saturation can become a problem quickly for a flash array with a large sequential workload. e: U.2 drives are a little different since you're feeding PCIe lanes up to the front, but even then I am fairly sure it's some manner of riser/PCIe switch setup where drives are ultimately sharing lanes
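Taking the numbers in that post at face value (64 Gbit/s of uplink shared across a 24-bay shelf), the per-disk share works out like this. Rough sketch only, ignoring protocol overhead:

```python
# Per-disk share of the shelf uplink, using the post's 64 Gbit/s figure.
uplink_gbps = 64
bays = 24

aggregate_gbytes = uplink_gbps / 8                # 8.0 GB/s for the whole shelf
per_disk_mbytes = aggregate_gbytes * 1000 / bays  # ~333 MB/s per disk

print(aggregate_gbytes, round(per_disk_mbytes))  # 8.0 333
```

Comfortably above what a spinning disk can sustain, and comfortably below a single fast SSD.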
|
# ? Aug 16, 2019 18:52 |
BangersInMyKnickers posted:No, even internal hot-plug bays on servers are there is always one or two quad-link HDSAS bundles feeding the storage backplane. It generally won't matter for even a fully populated 15k drive pool so long as you have a dual-controller/dual-link setup; 64gbps gets you around 8GB/s or 333MB/s per disk on a 24bay form factor and that's faster than any modern spinning disk can do even on sequential 1MB reads. It becomes a problem when you are using flash drives, however. It's pretty common to see a single flash drive pushing in excess of 1GB/s with the new SAS12 interfaces, so saturation can become a problem quickly for a flash array with a large sequential workload. Well, there's still 8b/10b overhead on any PCI link so it's not quite that much - but I believe modern HBAs are doing 64b/66b?
|
|
# ? Aug 16, 2019 21:24 |
|
D. Ebdrup posted:Well, there's still 8b/10b overhead on any PCI link so it's not quite that much - but I believe modern HBAs are doing 64b/66b? PCIe 2 was 8b/10b; PCIe 3 is 128b/130b (64b/66b is the SAS3 scheme), so overhead should be under 5% for modern gear
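A quick sketch of the encoding schemes being compared (128b/130b is what PCIe gen3 uses; 64b/66b is the SAS3/10GbE scheme):

```python
# Fraction of raw line rate lost to each line-coding scheme.
overhead_8b10b = 1 - 8 / 10        # PCIe gen1/gen2, SAS2: 20%
overhead_64b66b = 1 - 64 / 66      # SAS3, 10GbE: ~3.0%
overhead_128b130b = 1 - 128 / 130  # PCIe gen3: ~1.5%

print(round(overhead_8b10b, 3), round(overhead_64b66b, 3), round(overhead_128b130b, 3))
# 0.2 0.03 0.015
```

Going from 8b/10b to the wider schemes is where most of that "under 5%" claim comes from.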
|
# ? Aug 16, 2019 21:57 |
|
When I think about the trace routing necessary just for PCIe3, let alone 4, then I put Oculink into that equation, I nearly get sick.
|
# ? Aug 16, 2019 22:43 |
|
BangersInMyKnickers posted:PCIe 2 was 8b/10b; PCIe 3 is 128b/130b, so overhead should be under 5% for modern gear There’s also packet layer overhead: TLP headers, and DLLP packets used for acknowledgments, flow control, etc. So for example the nominal half duplex throughput of gen3 x8 PCIe ought to be about 7.88 GB/s after 128b/130b line coding, but I’ve tested gen3 x8 LSI raid controllers and they seem to max out at about 6.5 GB/s. (I was doing raid 0 across 16 sata3 ssds on a 16 port LSI, one disk per port, so the bottleneck wasn’t on that side.) One factor is that even though Intel created PCIe and provided for max packet sizes as high as 4KB, in practice even their server CPUs never seem to support more than 256 byte TLPs. Consumer segment CPUs are even worse at 128B. Everything on the bus must negotiate down to the least common denominator max packet size, so if you have an Intel cpu you’re stuck with relatively small TLPs, and consequently are losing a relatively high fraction of the channel bandwidth to packet headers.
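A rough sketch of how max payload size eats into that. Both numbers here are assumptions for illustration: ~24 bytes of per-TLP overhead (a 3DW header plus framing, sequence number, and LCRC; DLLP traffic ignored) and ~7.9 GB/s post-encoding for gen3 x8.

```python
# Payload efficiency as a function of Max Payload Size (MPS).
def tlp_efficiency(max_payload: int, overhead_bytes: int = 24) -> float:
    """Fraction of link bytes that are actual payload."""
    return max_payload / (max_payload + overhead_bytes)

link_gbytes = 7.9  # gen3 x8 after line coding, roughly
for mps in (128, 256, 512):
    eff = tlp_efficiency(mps)
    print(mps, round(eff, 3), round(link_gbytes * eff, 2))
```

With 128-byte TLPs that lands in the ballpark of the ~6.5 GB/s measured above once DLLPs and other losses are counted, which is the point: small TLPs burn a meaningful slice of the channel on headers.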
|
# ? Aug 16, 2019 22:45 |
|
Can someone recommend some good home surveillance cameras that can write to my Synology NAS via Surveillance Station? I'd like: 1. Night vision (it will be pointed out of our living room window that looks onto the entrance to our house). 2. PoE Ethernet is preferable, but wifi and mains powered is fine too. 3. Not Chinese, i.e. not full of vulnerabilities and spyware, i.e. not Hikvision. 4. Works with Synology NASes. 5. Available in the UK.
|
# ? Aug 17, 2019 16:10 |
|
I can at least give you options for number 4: https://www.synology.com/en-uk/compatibility/camera Honestly, with cameras it feels like there are 100,000 types of the loving things.
|
# ? Aug 17, 2019 17:56 |
|
This thread has extensive camera discussion: https://forums.somethingawful.com/showthread.php?threadid=3635963
|
# ? Aug 17, 2019 17:57 |
|
I have an amcrest that seems to tick all your boxes. Just set it up again last night with my Syno 215j
|
# ? Aug 17, 2019 17:58 |
|
Thanks guys, posted in the home automation and security thread. Will check out Amcrest too.
|
# ? Aug 17, 2019 19:11 |
|
Steakandchips posted:Thanks guys, posted in the home automation and security thread. I'll respond in the other thread, but you've got a bunch of conflicting requirements.
|
# ? Aug 17, 2019 20:41 |
|
BobHoward posted:One factor is that even though Intel created pcie and provided for max packet sizes as high as 4KB, in practice even their server CPUs never seem to support more than 256 byte TLPs. Consumer segment CPUs are even worse at 128B. Everything on the bus must negotiate down to the least common denominator max packet size, so if you have an Intel cpu you’re stuck with relatively small TLPs, and consequently are losing a relatively high fraction of the channel bandwidth to packet headers. This is really interesting. Source?
|
# ? Aug 18, 2019 21:30 |
|
Crunchy Black posted:This is really interesting. Source? There's not so much a definitive guide on what devices have what MPS (max payload size); it's more just how devices seem to come out. The actual acceptable values (up to 4k) are in the PCIe spec. Interestingly, the Intel DMA engine (Crystal Beach) will only do 64-byte TLPs, so it sucks even worse despite supposedly being for high-throughput DMA transfers.
|
# ? Aug 18, 2019 22:03 |
|
BobHoward posted:There’s also packet layer overhead: TLP headers, and DLLP packets used for acknowledgments, flow control, etc. So for example the nominal half duplex throughput of gen3x8 pcie ought to be about 7.76 GB/s after 64b/66b line coding, but I’ve tested gen3x8 LSI raid controllers and they seem to max out at about 6.5 GB/s. (I was doing raid 0 across 16 sata3 ssds on a 16 port LSI, one disk per port, so the bottleneck wasn’t on that side.) I've consistently gotten 7.5GB/s off PERC 740/840's when they're really hitting on the cache right. This is on Epyc1 so maybe it doesn't have the same overhead issues as the Xeons. BangersInMyKnickers fucked around with this message at 16:04 on Aug 19, 2019 |
# ? Aug 19, 2019 16:01 |
|
BangersInMyKnickers posted:I've consistently gotten 7.5GB/s off PERC 740/840's when they're really hitting on the cache right. This is on Epyc1 so maybe it doesn't have the same overhead issues as the Xeons. If you're running Linux on them can you pastebin the output of "sudo lspci -vvv" on one? Would be interesting to see.
|
# ? Aug 20, 2019 09:05 |
|
BobHoward posted:If you're running Linux on them can you pastebin the output of "sudo lspci -vvv" on one? Would be interesting to see. Here you go: https://pastebin.com/iJqN34Gi Looks like it might be doing 512 byte TLPs instead of 256? code:
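For anyone else checking theirs: the relevant lines in `lspci -vvv` are `DevCap` (the max payload the device supports) and `DevCtl` (what was actually negotiated). A quick sketch for pulling them out of the text; the sample here is made up rather than copied from the pastebin, though real output follows the same shape.

```python
import re

# Hypothetical lspci -vvv fragment (not the real pastebin output).
sample = """\
        DevCap: MaxPayload 4096 bytes, PhantFunc 0
        DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+
                MaxPayload 512 bytes, MaxReadReq 512 bytes
"""

# Supported size comes from the DevCap line, negotiated from DevCtl.
supported = int(re.search(r"DevCap:.*?MaxPayload (\d+)", sample).group(1))
negotiated = int(re.search(r"MaxPayload (\d+)", sample.split("DevCtl:", 1)[1]).group(1))
print(supported, negotiated)  # 4096 512
```

A 512-byte negotiated MPS would fit the better-than-Xeon numbers on Epyc, since the efficiency hit from TLP headers shrinks as the payload grows.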
|
# ? Aug 20, 2019 15:10 |