priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
RIP OcuLink you sucked and then you were gone


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

priznat posted:

RIP OcuLink you sucked and then you were gone

those Rome boards can turn one of their slots into a pair of OcuLinks. Obviously they aimed for U.2 connectivity instead, but there's still some OcuLink.

what's the use-case for that?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Paul MaudDib posted:

those Rome boards can turn one of their slots into a pair of OcuLinks. Obviously they aimed for U.2 connectivity instead, but there's still some OcuLink.

what's the use-case for that?

Oh! I missed that. I was rolling my eyes at OcuLink a bit because of the other board with all the slimline connectors, and it looks like Gen5 will be MCIO. We use OcuLink at work and it seems like it will be an extremely short-lived connector. It's annoying though; it has a really fiddly retention mechanism that can sometimes be a real pain to disengage.

The 2x OcuLink is probably for OcuLink-to-U.2 cables, although it's weird to see a mix of those and miniSAS HD on the same board; seems like a strange mishmash. Having 2x x4 OcuLink, 2x x4 M.2, and potentially 2x x4 miniSAS HD just raises more questions though, since that's more than a x16!

The weirdest option I've seen so far is the Gigabyte Rome board that has an extra slot you activate by running 4 slimline cables from it to the onboard connectors. Weird! https://www.gigabyte.com/ca/Server-Motherboard/MZ32-AR0-rev-10#ov I guess it makes sense if you want the option of cabling up to a drive bay or an additional slot; it just seemed like an odd thing to find on a production server board.

Hughlander
May 11, 2005

Paul MaudDib posted:

Asrock has listed three new AMD server boards that were apparently teased in November (I missed this). All have IPMI and 2x 10Gbase-T PHYs with server-style configuration (no HDMI/audio, etc).

X570D4I-2T - 1x PCIe 4.0 x16 (or x4/x4/x4/x4 bifurcated), 4 ECC SODIMM slots (rare, but they do exist), and no chipset fans (real heatsinks!). No M.2s; as much as Asrock's department of hold-my-beer did good here, a perfect 5/7 would have squeezed an M.2 4.0 x4 onto the back. Solid, but not their best hold-my-beer.

The 470 version of this board has two M.2 slots and also 4 ECC DIMM slots. https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications I'm running it with 128GB of RAM and two 1TB M.2s for my new Docker host, linked by 10G to my NAS.

EDIT: just noticed they say the 570 only supports 64GB, so if you need all the RAM, go with the 470.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

HP pricing on Epyc stuff is insane, you can get a 7402P in a complete supported 1U server with 64GB of RAM for $1700.

Henrik Zetterberg
Dec 7, 2007

Hmm, so I've been running a 6-disk (5 + parity) Unraid array. I've been replacing drives with larger ones as I go, only because I have just 6 SATA ports and 6 drive bays. I have no idea why I never thought of buying a USB enclosure and running additional drives. My server is running on an old Supermicro X7SPA-HF, which only has USB 2.0. Would it be worth it to get a USB enclosure for additional drives and expand my Unraid, or should I just gut the server and update the hardware? Besides the drives, everything in it is probably a good decade old. The purpose of the server is simply storage for my Plex server (my desktop PC).

IOwnCalculus
Apr 2, 2003





USB2 is a big bottleneck. If you must do USB, add in a USB3 controller and use that, or perhaps something using SAS.
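
For a rough sense of the gap (the throughput figures below are ballpark assumptions, not benchmarks):

[code]
# rough interface throughput vs. a single modern HDD (assumed, ballpark figures)
usb2_mb_s = 35    # USB 2.0 signals at 480 Mbit/s, but real bulk transfers land around 30-40 MB/s
usb3_mb_s = 400   # USB 3.0 (5 Gbit/s) typically delivers ~400 MB/s
sas6_mb_s = 550   # a SATA/SAS 6 Gbit/s lane gives ~550 MB/s usable
hdd_mb_s  = 180   # a modern 3.5" drive sustains roughly 150-250 MB/s sequential

print(f"USB 2.0 caps one drive at ~{usb2_mb_s / hdd_mb_s:.0%} of its sequential speed")
print(f"USB 3.0 leaves ~{usb3_mb_s / hdd_mb_s:.1f} drives' worth of sequential headroom")
[/code]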

Henrik Zetterberg
Dec 7, 2007

Yeah, that's kind of what I figured. It didn't even occur to me to add a USB card. But it would be nice to rebuild it all so I can run the Plex Docker container or whatever and not depend on my desktop PC for transcodes and serving media.

KKKLIP ART
Sep 3, 2004

Kinda running into a strange issue that I can't quite figure out: SMBing into my FreeNAS box from my Macs. On Windows, I can access the share by putting in the username and password I set, easy peasy. But when I try to connect using the same user and password from my Macs, I just get "connection failed." In my previous config I was using the same internal usernames and passwords to access it, so it has worked like this before; I just can't quite get the right permission combo to make it work again. Any ideas?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Hughlander posted:

The 470 version of this board has two M.2 slots and also 4 ECC DIMM slots. https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications I'm running it with 128GB of RAM and two 1TB M.2s for my new Docker host, linked by 10G to my NAS.

EDIT: just noticed they say the 570 only supports 64GB, so if you need all the RAM, go with the 470.

The X570 is an ITX board, so you probably won't want to interchange them. Still a compelling board (and now fits into some super tiny ITX chassis), but they couldn't quite get the M.2s onto an ITX form factor.

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast
I hit 80% and it was That Time To Expand!

This DS1817+ has 2x ports for 5-bay-each additional units over eSATA. So, here goes nothing on expansion unit number one!



Idk what I'm gonna do when I fill this whole thing up with a 2nd unit, but that's years away, luckily.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Twerk from Home posted:

HP pricing on Epyc stuff is insane, you can get a 7402P in a complete supported 1U server with 64GB of RAM for $1700.

With the core count AMD is pushing on Epyc CPUs too, it's gonna almost make dual sockets unnecessary.

BlankSystemDaemon
Mar 13, 2009



CommieGIR posted:

With the core count AMD is pushing on Epyc CPUs too, it's gonna almost make dual sockets unnecessary.
There's absolutely still going to be a demand for cores, because some workloads scale linearly with more cores, and the more cores you can fit into the same amount of space, the better power, cooling, and compute efficiency you get.
Netflix is currently doing 200Gbps per FreeBSD machine with AMD MP boards and quad-50G connectors (for a total of ~100Tbps at peak), and I know they aren't planning to stop there.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

D. Ebdrup posted:

There's absolutely still going to be a demand for cores, because some workloads scale linearly with more cores, and the more cores you can fit into the same amount of space, the better power, cooling, and compute efficiency you get.
Netflix is currently doing 200Gbps per FreeBSD machine with AMD MP boards and quad-50G connectors (for a total of ~100Tbps at peak), and I know they aren't planning to stop there.

It sounds like you read the same slide deck I did: https://2019.eurobsdcon.org/slides/NUMA%20Optimizations%20in%20the%20FreeBSD%20Network%20Stack%20-%20Drew%20Gallatin.pdf

Their most recently shared 200Gbit serving stack needs ~100GB/s of memory bandwidth and ~64 PCIe lanes. AMD meets both of those requirements in a single socket, so they're still using single-socket AMD with 4x 100Gbit NICs or 2-socket Intel with 2x 100Gbit NICs.

AMD's NUMA situation was so bad with first-gen Epyc that they had to bind a ton of network resources to a NUMA zone, and do the magic mentioned in the deck to minimize cross-NUMA bus load (QPI for Intel, Infinity Fabric for AMD). As I read it, I kept wondering whether the optimizations would be necessary at all for 2nd-gen Epyc, and whether, if they'd had early access to Rome and its single I/O die, they could have just tossed 2 NICs in a box and not had to modify the BSD kernel the way Netflix is so fond of doing.

Napkin math says that memory bandwidth is likely to be their overall limiter for the near future. Disks and NICs are faster than they need, but they're already using 100GB/s of memory bandwidth, and Intel only has ~180GB/s across 2 sockets while AMD has ~150GB/s tops on a single socket. Now that Intel pricing is getting better, could lowest-end quad-socket Intel be an option?
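
Roughly how that works out (the per-byte memory-touch count below is an assumption about how the deck tallies it, not exact accounting):

[code]
# sketch of the 200Gbit serving math (the 4-touch count is an assumption, not Netflix's exact accounting)
serve_gbit   = 200
payload_gb_s = serve_gbit / 8           # 25 GB/s of payload actually leaving the box

# assume each served byte crosses the memory bus ~4 times:
# NVMe DMA in, read for TLS encryption, write of the ciphertext, NIC DMA out
touches     = 4
mem_bw_gb_s = payload_gb_s * touches    # ~100 GB/s, the figure above

print(f"~{mem_bw_gb_s:.0f} GB/s of memory bandwidth to serve {serve_gbit} Gbit/s")
print("vs ~150 GB/s on one EPYC socket, ~180 GB/s across two Intel sockets")
[/code]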

Edit: Yeah, if they really want more than 200Gbit with right-now technology, it looks like the cleanest way forward is finding a way to get a 4-socket Intel board in, moving up to the slightly more expensive 4S-compatible Xeon 5218 from the 4216s they're using, and stuffing in 4 NICs, which we know works because they already did it for AMD.

Twerk from Home fucked around with this message at 15:00 on Jan 27, 2020

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

It sounds like you read the same slide deck I did: https://2019.eurobsdcon.org/slides/NUMA%20Optimizations%20in%20the%20FreeBSD%20Network%20Stack%20-%20Drew%20Gallatin.pdf

Their most recently shared 200Gbit serving stack needs ~100GB/s of memory bandwidth and ~64 PCIe lanes. AMD meets both of those requirements in a single socket, so they're still using single-socket AMD with 4x 100Gbit NICs or 2-socket Intel with 2x 100Gbit NICs.

AMD's NUMA situation was so bad with first-gen Epyc that they had to bind a ton of network resources to a NUMA zone, and do the magic mentioned in the deck to minimize cross-NUMA bus load (QPI for Intel, Infinity Fabric for AMD). As I read it, I kept wondering whether the optimizations would be necessary at all for 2nd-gen Epyc, and whether, if they'd had early access to Rome and its single I/O die, they could have just tossed 2 NICs in a box and not had to modify the BSD kernel the way Netflix is so fond of doing.

Napkin math says that memory bandwidth is likely to be their overall limiter for the near future. Disks and NICs are faster than they need, but they're already using 100GB/s of memory bandwidth, and Intel only has ~180GB/s across 2 sockets while AMD has ~150GB/s tops on a single socket. Now that Intel pricing is getting better, could lowest-end quad-socket Intel be an option?

Edit: Yeah, if they really want more than 200Gbit with right-now technology, it looks like the cleanest way forward is finding a way to get a 4-socket Intel board in, moving up to the slightly more expensive 4S-compatible Xeon 5218 from the 4216s they're using, and stuffing in 4 NICs, which we know works because they already did it for AMD.
Well, I watched EuroBSDCon when they were doing live-streaming from Lillehammer, but yes.
First-gen EPYC was, as I understand it (because of the NUMA layering with up to 3 layers), essentially a multi-socket system on one package, not too dissimilar to Intel's Xeon 2nd Gen Scalable processors.
As to second-gen EPYC, it's a completely different layout, with four sets of processor cores, each with its own layout of cache and so forth, talking over Infinity Fabric, which reduces the number of NUMA domains to two.

What's interesting is that Netflix upstreams all of the changes to FreeBSD that they can, so you and I could theoretically serve traffic at that rate too. Just about the only thing they can't upstream (which they talk about during the presentation) is NDA'd drivers, and from looking at the Chelsio and Mellanox work going into FreeBSD HEAD, I can tell you there are plenty of drivers, so that's not going to be a worry.

Net[s]craft[/s]flix confirms your napkin math about memory bandwidth in another talk, so it's gonna be interesting to see where they go.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

D. Ebdrup posted:

Net[s]craft[/s]flix confirms your napkin math about memory bandwidth in another talk, so it's gonna be interesting to see where they go.

Nice. Got any links to decks or writeups from that?

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

Nice. Got any links to decks or writeups from that?
Yup, there was another talk where it's covered.
By the time you read this, the ID for the video should've been fixed to point to the correct one, since I just submitted the fix and it should be committed soon.
I'd barely pressed post on this reply before it was fixed.

Boner Wad
Nov 16, 2003
I have an R620 and am thinking of getting an MD1200 for DAS and doing the whole deal where you run ESX off a memory card, keep all the disks in the MD1200, expose them to FreeNAS, and then expose the FreeNAS storage back to ESX for host storage. I keep hearing stories about the MD1200 being loud; are those correct? Should I look at a different option that might be quieter?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I would assume an MD1200 would be just as loud as an R620.

I have a ton of MD3XX0i units and don't think they are louder than any other rackmount stuff. Mind you, I am not keeping this gear at home, so YMMV.

BlankSystemDaemon
Mar 13, 2009



What about a sufficiently sized cupboard, which absorbs some noise, with egg cartons made of recycled paper to break up standing waves and absorb some more? Although you may wanna hook up some form of cooling to it.

Crunchy Black
Oct 24, 2017

by Athanatos
Commie is probably going to be your most experienced commentator on your noise question (because he is insane).

But the MD1200 is the loudest thing in my 25U rack *by far*, and I'm working on an Arduino relay to fire it up 15 minutes before a snapshot/replication of my main array once a week and then shut it down. I will never actually finish this project.
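
A minimal sketch of what that job could look like, assuming the main array is ZFS and the Arduino flips the relay when it sees "ON"/"OFF" over serial (the port name, serial protocol, and dataset names are all made up):

[code]
# hypothetical weekly job: power the shelf up, snapshot + replicate, power it back down
import subprocess
import time
from datetime import datetime

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical serial device for the Arduino relay controller
SRC  = "tank/main"      # hypothetical source dataset on the main array
DST  = "shelf/backup"   # hypothetical destination pool on the MD1200

with serial.Serial(PORT, 9600, timeout=2) as relay:
    relay.write(b"ON\n")            # close the relay, spin the shelf up
    time.sleep(15 * 60)             # give the disks 15 minutes to come online

    snap = f"{SRC}@weekly-{datetime.now():%Y%m%d}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # naive full send for illustration; a real job would send incrementals
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
    send.wait()

    relay.write(b"OFF\n")           # power the shelf back down
[/code]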

Sniep
Mar 28, 2004

All I needed was that fatty blunt...



King of Breakfast
Dumb question: on a Synology + Synology expansion unit, using SHR and Btrfs, when you move a shared folder from one volume to another, how long does it take for the first volume to show the reclaimed space?

It's been hours and the folder migration is complete, moved data in the right place and accessible, but the old volume shows the same utilization as before. Thoughts?

Edit: I logged in via terminal and did a du and found some dumb @deleted_subvol/ poo poo still holding the migrated data, and a storage pool scrub cleaned it up. Welp.

Sniep fucked around with this message at 05:27 on Jan 28, 2020

H110Hawk
Dec 28, 2006

Boner Wad posted:

I have an R620 and am thinking of getting an MD1200 for DAS and doing the whole deal where you run ESX off a memory card, keep all the disks in the MD1200, expose them to FreeNAS, and then expose the FreeNAS storage back to ESX for host storage. I keep hearing stories about the MD1200 being loud; are those correct? Should I look at a different option that might be quieter?

Rack mount gear comes with a suggestion for hearing protection for long term exposure.

Hadlock
Nov 9, 2004

H110Hawk posted:

Rack mount gear comes with a suggestion for hearing protection for long term exposure.

Strong agree

I had a 4U rack mount with 4x reasonable-esque 120mm fans and it had to live in its own room under the stairs.

MD1200 looks like a 2U = 40mm fans = hairdryer mode 24/7 :suicide:

You need to be a single male living alone in your early 20s to deal with a 4U-and-under form factor in a residential space. Do not recommend.

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
what's 70 - 60 decibels between friends and tech nerds :v:

H110Hawk
Dec 28, 2006

Axe-man posted:

what's 70 - 60 decibels between friends and tech nerds :v:

WHAT? :suicide:

Boner Wad
Nov 16, 2003

Hadlock posted:

Strong agree

I had a 4U rack mount with 4x reasonable-esque 120mm fans and it had to live in its own room under the stairs.

MD1200 looks like a 2U = 40mm fans = hairdryer mode 24/7 :suicide:

You need to be a single male living alone in your early 20s to deal with a 4U-and-under form factor in a residential space. Do not recommend.

I'm on the opposite end of the age and family spectrum so I need something less...loud. I don't think the MD1200 is the way to go...

I think I have a few options...
1. Buy an R520 and run FreeNAS (or ESX+FreeNAS) or buy an R720 and run ESX+FreeNAS and connect via 10G cards and a DAC.
2. Just put some drives in the R620 and don't buy anything else.

Any other better options? I'm not sure if I'll get decent performance out of FreeNAS storage exposed over 10G.

Hadlock
Nov 9, 2004

Is it worth looking at putting your virtual machines in the cloud? You can run a VPN to the cloud and it's like they're on your local network.

I played the local VM lab game for several years; I do not miss it.

BlankSystemDaemon
Mar 13, 2009



:yaybutt: <- is not the place to put things without a flared base.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Packrats and Sex Thread Unite! - stay safe!

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.
- Take out the front fans entirely
- Reverse the interior fan wall and replace its fans with quieter ones
- Replace the back 80mm fans
- Use desktop-style CPU coolers instead of typical low-profile server heatsinks
- Don't run the fans at full speed

I haven't tried this firsthand, but it's supposedly extremely quiet if not silent. You can't transplant a Dell server mobo into it; their Anniversary 2.0 build guide has various (mostly Supermicro) boards that should all work. https://forums.serverbuilds.net/t/guide-anniversary-2-0-snafu-server-needs-a-friggin-upgrade/1075

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

necrobobsledder posted:

Packrats and Sex Thread Unite! - stay safe!

Make sure your cables are sheathed properly and only plugged into approved ports.

Enos Cabell
Nov 3, 2004


THF13 posted:

Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.
- Take out the front fans entirely
- Reverse the interior fan wall and replace its fans with quieter ones
- Replace the back 80mm fans
- Use desktop-style CPU coolers instead of typical low-profile server heatsinks
- Don't run the fans at full speed

I haven't tried this firsthand, but it's supposedly extremely quiet if not silent. You can't transplant a Dell server mobo into it; their Anniversary 2.0 build guide has various (mostly Supermicro) boards that should all work. https://forums.serverbuilds.net/t/guide-anniversary-2-0-snafu-server-needs-a-friggin-upgrade/1075

Super happy with the serverbuilds, uhh, server build that I did a few years back. Might have been you that posted it ITT so thanks!

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
Who is rolling dirty with 10 gbps using cat6e :c00l:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Axe-man posted:

Who is rolling dirty with 10 gbps using cat6e :c00l:

Not me, I went fiber! Which has honestly been super easy from a cabling perspective, and not at all so easy from a getting the hosts to actually see the loving SFP+s perspective.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I also went fiber, mostly because my breakouts are QSFP 40Gb.

Raymond T. Racing
Jun 11, 2019

THF13 posted:

Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.
- Take out the front fans entirely
- Reverse the interior fan wall and replace its fans with quieter ones
- Replace the back 80mm fans
- Use desktop-style CPU coolers instead of typical low-profile server heatsinks
- Don't run the fans at full speed

I haven't tried this firsthand, but it's supposedly extremely quiet if not silent. You can't transplant a Dell server mobo into it; their Anniversary 2.0 build guide has various (mostly Supermicro) boards that should all work. https://forums.serverbuilds.net/t/guide-anniversary-2-0-snafu-server-needs-a-friggin-upgrade/1075

Problem with Anniversary 2.0 (as someone who was a former SB shill but ended up getting banned for speaking out against JDM's pricky behavior) is that all of the reasonably priced Supermicro 2011 boards available now are narrow ILM, so there are pretty much no desktop coolers that fit. Anni2 is basically only Supermicro boards, so you can either get a server-specced narrow ILM air cooler that's noisy as all hell, get a much more expensive narrow ILM air cooler running a Noctua fan, or just say gently caress it and get a couple of Asetek AIOs and the narrow ILM bracket they sell separately.

edit: to clarify the above position, JDM makes a decent build, but he's kind of a prick in literally every other situation, plus I wasn't a fan of the massive conflict of interest in creating a guide and then using eBay affiliate links.

Raymond T. Racing fucked around with this message at 06:24 on Jan 29, 2020

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I am not running any VMs on IP-SANs or anything, so 10Gb is enough to move around my data. Guess I gotta turn in my Kool Kids Card. :smith:

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Axe-man posted:

I am not running any VMs on IP-SANs or anything, so 10Gb is enough to move around my data. Guess I gotta turn in my Kool Kids Card. :smith:

Nope, you are a cool kid. 10Gb probably places you among the top 5% of people, with DC-level connections in your home.

My 40Gb is split into 4x 10Gb. You're in the club, man.


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Of course, getting 10Gb is just the start. Then you realize that, sure, you can hit 700+MB/s on iperf, but you can't actually sustain that for long when moving real data. So then you look into throwing more RAM in there for a bigger write cache. And then you think that maybe upgrading your storage back end so it can serve data at more than 300MB/s would be cool. And then, and then...
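
Rough numbers on why the bigger write cache only postpones the problem (all figures here are made-up illustrations):

[code]
# how long a RAM write cache hides a slow backend (illustrative, assumed figures)
ingest_mb_s  = 700    # what 10GbE and iperf suggest you can push
backend_mb_s = 300    # what the pool can actually absorb
cache_gb     = 16     # RAM set aside as write cache

fill_rate_mb_s = ingest_mb_s - backend_mb_s        # cache fills at 400 MB/s
seconds        = cache_gb * 1024 / fill_rate_mb_s  # ~41 s until it's full

print(f"the cache absorbs the burst for ~{seconds:.0f} s, then writes fall back to {backend_mb_s} MB/s")
[/code]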
