|
RIP OcuLink you sucked and then you were gone
|
# ? Jan 25, 2020 07:24 |
|
|
priznat posted: RIP OcuLink you sucked and then you were gone

Those Rome boards can turn one of their slots into a pair of OcuLinks. Obviously they aimed for U.2 connectivity instead, but there's still some OcuLink. What's the use case for that?
|
# ? Jan 25, 2020 07:40 |
|
Paul MaudDib posted: Those Rome boards can turn one of their slots into a pair of OcuLinks. Obviously they aimed for U.2 connectivity instead, but there's still some OcuLink.

Oh! I missed that. I was rolling my eyes at OcuLink a bit because of the other board with all the Slimline, and it looks like Gen5 will be MCIO. We use OcuLink at work and it seems like it will be an extremely short-lived connector. It's annoying, too: it has a really fiddly retention mechanism that can be a real pain to disengage.

The 2x OcuLink is probably for OcuLink-to-U.2 cables, although it's weird to see a mix of those and miniSAS HD on the same board - a strange mishmash. Having 2x x4 OcuLink, 2x x4 M.2, and potentially 2x x4 miniSAS HD just raises more questions: that's more than an x16!

The weirdest option I've seen so far is the Gigabyte Rome board with an extra slot that you activate by running four Slimline cables to the onboard connectors. Weird! https://www.gigabyte.com/ca/Server-Motherboard/MZ32-AR0-rev-10#ov I guess it makes sense if you want the option of cabling up to a drive bay or an additional slot; it's just an odd thing to see on a production server, I thought.
|
# ? Jan 25, 2020 07:50 |
|
Paul MaudDib posted: Asrock has listed three new AMD server boards, that were apparently teased in November (I missed this). All have IPMI and 2x10G-baseT PHYs with server-style configuration (no HDMI/audio, etc).

The X470 version of this board has two M.2 slots and four ECC DIMM slots: https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications I'm running it with 128GB of RAM and two 1TB M.2s for my new Docker host, linked by 10G to my NAS.

EDIT: just noticed they say the X570 version only supports 64GB, so if you need all the RAM, go with the X470.
|
# ? Jan 25, 2020 18:43 |
|
Paul MaudDib posted: Also, FYI, HP is running their branded Epyc processor kits for about 30% under MSRP of the processor alone. 7402P kit is 24C for $1030, 7302P is 16C for $607

HP pricing on Epyc stuff is insane: you can get a 7402P in a complete supported 1U server with 64GB of RAM for $1700.
|
# ? Jan 25, 2020 19:07 |
|
Hmm, so I've been running a 6-disk (5 + parity) Unraid array. I've been replacing drives as I get larger ones, only because I have just 6 SATA ports and 6 drive bays. I have no idea why I never thought of buying a USB enclosure and running additional drives. My server is an old Supermicro X7SPA-HF, which only has USB 2.0. Would it be worth it to get a USB enclosure for additional drives and expand my Unraid, or should I just gut the server and update the hardware? Besides the drives, everything in it is probably a good decade old. The purpose of the server is simply storage for my Plex server (my desktop PC).
|
# ? Jan 25, 2020 20:27 |
|
USB2 is a big bottleneck. If you must do USB, add in a USB3 controller and use that, or perhaps something using SAS.
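To put rough numbers on that bottleneck, here's the napkin math in Python. The throughput figures are illustrative assumptions (typical real-world USB 2.0 vs. a modern HDD over USB 3.0), not benchmarks of any particular enclosure:

```python
# Napkin math: how long a full sequential read (e.g. a parity check of a
# hypothetical 8TB drive) takes at different bus speeds.
def hours_to_read(drive_tb: float, mb_per_s: float) -> float:
    """Hours to sequentially read drive_tb terabytes at mb_per_s MB/s."""
    total_mb = drive_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
    return total_mb / mb_per_s / 3600

usb2 = hours_to_read(8, 35)    # USB 2.0 tops out around ~35 MB/s in practice
usb3 = hours_to_read(8, 150)   # a modern HDD can sustain ~150 MB/s over USB 3.0
print(f"USB 2.0: {usb2:.0f} h, USB 3.0: {usb3:.0f} h")
```

Roughly 63 hours vs. 15 hours for one pass over a single 8TB drive, which is why USB 2.0 and parity arrays don't mix.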
|
# ? Jan 25, 2020 20:58 |
|
Yeah, that's kind of what I figured. It didn't even occur to me to add a USB card. But it would be nice to rebuild it all so I can run the Plex docker or whatever and not depend on my desktop PC for transcodes and serving media.
|
# ? Jan 25, 2020 21:12 |
|
Kinda running into a strange issue that I can't quite figure out: SMB'ing into my FreeNAS box from my Macs. On Windows, I can access the share by putting in the username and password I set, easy peasy. But when I try to connect with the same user and password from my Macs, I just get "connection failed". My previous config used the same internal usernames and passwords to access it, so it has worked like this before; I just can't quite get the right permission combo to make it work again. Any ideas?
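One quick way to split that problem before fighting permissions: check whether the Mac can reach the SMB service at all. A plain TCP connect to port 445 separates "network/firewall problem" from "auth/permissions problem". Minimal sketch - the address `192.0.2.10` is a placeholder, not anything from the post; substitute your box's IP or hostname:

```python
# Is the SMB port even reachable from this machine?
import socket

def smb_port_reachable(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # covers refused, timeout, and DNS failure
        return False

print(smb_port_reachable("192.0.2.10", timeout=1.0))
```

If this returns True but Finder still says "connection failed", the problem is almost certainly auth/permissions on the FreeNAS side rather than the network.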
|
# ? Jan 26, 2020 01:23 |
|
Hughlander posted: The X470 version of this board has two M.2 slots and four ECC DIMM slots. https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications I'm running it with 128GB of RAM and two 1TB M.2s for my new Docker host, linked by 10G to my NAS.

The X570 is an ITX board, so you probably won't want to interchange them. It's still a compelling board (and now fits into some super-tiny ITX chassis), but they couldn't quite get the M.2s onto an ITX form factor.
|
# ? Jan 26, 2020 02:58 |
|
I hit 80% and it was That Time To Expand! This DS1817+ has two ports for 5-bay expansion units over eSATA. So, here goes nothing on expansion unit number one! Idk what I'm gonna do when I fill this whole thing up with a second unit, but that's years away, luckily.
|
# ? Jan 26, 2020 04:50 |
|
Twerk from Home posted: HP pricing on Epyc stuff is insane, you can get a 7402P in a complete supported 1U server with 64GB of RAM for $1700.

With the core count AMD is pushing on Epyc CPUs, too, it's gonna make dual sockets almost unnecessary.
|
# ? Jan 27, 2020 05:59 |
CommieGIR posted: With the core count AMD is pushing on Epyc CPUs, too, it's gonna make dual sockets almost unnecessary.

Netflix is currently doing 200Gbps per FreeBSD machine with AMD boards and quad-50G connectors (for a total of ~100Tbps at peak), and I know they aren't planning to stop there.
|
|
# ? Jan 27, 2020 14:06 |
|
D. Ebdrup posted:There's absolutely still going to be a demand for cores, because some workloads scale linearly with more cores, and the more cores you can fit into the same amount of space, the better power-/cooling-/compute-efficiency you get. It sounds like you read the same slide deck I did: https://2019.eurobsdcon.org/slides/NUMA%20Optimizations%20in%20the%20FreeBSD%20Network%20Stack%20-%20Drew%20Gallatin.pdf Their most recently shared 200Gigabit serving stack needs ~100GB/s memory bandwidth, and ~64 PCIe lanes. AMD meets both of those requirements in a single socket, so they're still using single-socket AMD with 4 100GBit NICs or 2-socket Intel with 2 100GBit NICs. AMD's NUMA situation was so bad with first-gen Epyc that they had to make a ton of network resources bound to a NUMA zone, and do the magic mentioned in the article to minimize NUMA bus load (QPI for Intel, Infinity Fabric for AMD). As I read this article, I kept wondering if the optimizations wouldn't be necessary at all for 2nd-gen Epyc, and if they got early access to Rome and its single I/O die they could have just tossed 2 NICs in a box and not had to modify the BSD kernel as Netflix is so fond of doing. Napkin math says that memory bandwidth is likely to be their overall limiter for the near future. Disk and NICs are faster than they need, but they are already using 100GB/s memory bandwidth and Intel only has ~180GB/s on 2 sockets, AMD has ~150GB/s tops on a single socket. Now that Intel pricing is getting better, lowest-end quad socket Intel could be an option? Edit: Yeah if they really want more than 200Gbit with right-now technology, looks like the cleanest way forward is finding a way to get a 4-socket Intel board in, moving up to the slightly more expensive 4S-compatible Xeon 5218 from the 4216s they're using, and stuffing in 4 NICs, which we know works because they already did it for AMD. Twerk from Home fucked around with this message at 15:00 on Jan 27, 2020 |
# ? Jan 27, 2020 14:54 |
Twerk from Home posted: It sounds like you read the same slide deck I did: https://2019.eurobsdcon.org/slides/NUMA%20Optimizations%20in%20the%20FreeBSD%20Network%20Stack%20-%20Drew%20Gallatin.pdf

First-gen EPYC was, as I understand it, essentially an SMP system on one package because of the NUMA layering (up to 3 layers) - not too dissimilar to Intel's 2nd Gen Xeon Scalable processors. Second-gen EPYC is a completely different layout: four sets of processor cores, each with its own cache hierarchy and so forth, talking over Infinity Fabric, which reduces the number of NUMA domains to two.

What's interesting is that Netflix upstreams all of the changes to FreeBSD that they can, so you and I could theoretically serve traffic at that rate too. Just about the only thing they can't upstream (which they talk about during the presentation) is NDA'd drivers - and from looking at the Chelsio and Mellanox work going into FreeBSD HEAD, I can tell you there are plenty of drivers, so that's not going to be a worry.

Net[s]craft[/s]flix confirms your napkin math about memory bandwidth in another talk, so it's gonna be interesting to see where they go.
|
|
# ? Jan 27, 2020 15:34 |
|
D. Ebdrup posted: Net[s]craft[/s]flix confirms your napkin math about memory bandwidth in another talk, so it's gonna be interesting to see where they go.

Nice. Got any links to decks or writeups from that?
|
# ? Jan 27, 2020 15:58 |
Twerk from Home posted: Nice. Got any links to decks or writeups from that?

By the time you read this, the ID for the video should've been fixed to point to the correct one, since I just submitted the fix and it should be committed soon. I'd barely pressed post on this reply before it was fixed.
|
|
# ? Jan 27, 2020 16:46 |
|
I have an R620 and I'm thinking of getting an MD1200 for DAS and doing the whole deal: run ESX off a memory card, keep all the disks in the MD1200, expose them to FreeNAS, and then expose the FreeNAS storage back to ESX for host storage. I keep hearing stories about the MD1200 being loud; are those correct? Should I look at a different option that might be quieter?
|
# ? Jan 27, 2020 21:38 |
|
I would assume an MD1200 would be just as loud as an R620. I have a ton of MD3xx0i units and don't think they're louder than any other rackmount stuff. Mind you, I am not keeping this gear at home, so YMMV.
|
# ? Jan 27, 2020 21:50 |
What about a sufficiently sized cupboard, which absorbs some noise, with egg cartons made of recycled paper to break up standing waves and absorb some more? Although you may wanna hook up some form of cooling to it.
|
|
# ? Jan 28, 2020 00:56 |
|
Commie is probably going to be your most experienced commentator on the noise question (because he is insane). But the MD1200 is the loudest thing in my 25U rack *by far*, and I'm working on an Arduino relay to fire it up 15 minutes before a weekly snapshot/replication of my main array, then shut it down. I will never actually finish this project.
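For what it's worth, the scheduling half of that project is just date math. A sketch of it - the actual relay toggle (Arduino over serial, or whatever) is left out, and the weekly snapshot time used below is hypothetical:

```python
# When should the shelf power up? lead_minutes before the weekly snapshot.
from datetime import datetime, timedelta

def next_power_on(now: datetime, snap_weekday: int, snap_hour: int,
                  lead_minutes: int = 15) -> datetime:
    """Next power-on moment: lead_minutes before the weekly snapshot.

    snap_weekday: 0=Monday .. 6=Sunday; snap_hour: 0-23 local time.
    """
    snap = now.replace(hour=snap_hour, minute=0, second=0, microsecond=0)
    days_ahead = (snap_weekday - now.weekday()) % 7
    snap += timedelta(days=days_ahead)
    power_on = snap - timedelta(minutes=lead_minutes)
    if power_on <= now:            # already past this week's slot
        power_on += timedelta(days=7)
    return power_on

# e.g. snapshots Sunday at 03:00 -> power on Sunday 02:45
print(next_power_on(datetime.now(), snap_weekday=6, snap_hour=3))
```

Pair this with a sleep-until loop (or just a cron entry) on whatever box drives the relay, and the MD1200 only screams for the duration of the replication.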
|
# ? Jan 28, 2020 01:23 |
|
Dumb question: on a Synology + Synology expansion unit, using SHR and Btrfs, when you move a shared folder from one volume to another, how long does it take for the first volume to show the reclaimed space? It's been hours and the folder migration is complete - the data is moved to the right place and accessible - but the old volume shows the same utilization as before. Thoughts?

Edit: I logged in via terminal and did a du and found some dumb @deleted_subvol/ poo poo still holding the migrated data, and a storage pool scrub cleaned it up. Welp.

Sniep fucked around with this message at 05:27 on Jan 28, 2020 |
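That kind of mismatch is easy to sanity-check before reaching for du: compare what the filesystem reports as used against what the visible files actually add up to. If they disagree by a lot, something hidden (like that @deleted_subvol) is holding space. A rough stdlib sketch, with a placeholder path rather than a real Synology volume:

```python
# Compare filesystem-reported usage vs. the sum of visible file sizes.
import os
import shutil

def tree_size_bytes(path: str) -> int:
    """Sum of sizes of all regular files under path (symlinks skipped)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

usage = shutil.disk_usage("/")        # placeholder; use e.g. "/volume1" on a Synology
visible = tree_size_bytes("/tmp")     # placeholder subtree to sum
print(f"reported used: {usage.used}, visible under /tmp: {visible}")
```

On a Btrfs volume the two numbers will never match exactly (metadata, snapshots, compression), but a multi-hundred-gig gap right after a migration points at leftover subvolumes.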
# ? Jan 28, 2020 01:37 |
|
Boner Wad posted: I have an R620 and thinking of getting a MD1200 for DAS and doing the whole run ESX on a memory card and keep all the disks in the MD1200, expose them to FreeNAS and then expose the FreeNAS stuff back to ESX for host storage deal. I keep hearing stories about the MD1200 being loud, are those correct? Should I look at a different option that might be quieter?

Rack mount gear comes with a suggestion for hearing protection for long-term exposure.
|
# ? Jan 28, 2020 05:45 |
|
H110Hawk posted: Rack mount gear comes with a suggestion for hearing protection for long-term exposure.

Strong agree. I had a 4U rackmount with 4x reasonable-ish 120mm fans, and it had to live in its own room under the stairs. The MD1200 looks like it's 2U = 40mm fans = hairdryer mode 24/7. You need to be a single male living alone in your early 20s to deal with a 4U-and-under form factor in a residential space. Do not recommend.
|
# ? Jan 28, 2020 06:44 |
|
what's 70 vs 60 decibels between friends and tech nerds
|
# ? Jan 28, 2020 07:08 |
|
Axe-man posted: what's 70 vs 60 decibels between friends and tech nerds

WHAT?
|
# ? Jan 28, 2020 07:16 |
|
Hadlock posted: Strong agree

I'm on the opposite end of the age and family spectrum, so I need something less... loud. I don't think the MD1200 is the way to go. I think I have a few options:

1. Buy an R520 and run FreeNAS (or ESX+FreeNAS), or buy an R720 and run ESX+FreeNAS, and connect via 10G cards and a DAC.
2. Just put some drives in the R620 and don't buy anything else.

Any other better options? I'm not sure if I'll get decent performance out of FreeNAS exposed over 10G.
|
# ? Jan 28, 2020 07:44 |
|
Is it worth looking at putting your virtual machines in the cloud? You can run a VPN to the cloud and it's like they're on your local network. I played the local VM lab game for several years; I do not miss it.
|
# ? Jan 28, 2020 08:09 |
<- is not the place to put things without a flared base.
|
|
# ? Jan 28, 2020 12:07 |
|
Packrats and Sex Thread Unite! - stay safe!
|
# ? Jan 28, 2020 21:20 |
|
Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.

- Take out the front fans entirely
- Reverse the interior fan wall, replacing its fans with quieter ones
- Replace the back 80mm fans
- Use desktop-style CPU coolers instead of typical low-profile server heatsinks
- Don't run the fans at full speed

I haven't tried this first hand, but it's supposedly extremely quiet if not silent. You can't transplant a Dell server mobo into it; their Anniversary Build 2 guide has various (mostly Supermicro) boards that should all work. https://forums.serverbuilds.net/t/guide-anniversary-2-0-snafu-server-needs-a-friggin-upgrade/1075
|
# ? Jan 28, 2020 21:31 |
|
necrobobsledder posted: Packrats and Sex Thread Unite! - stay safe!

Make sure your cables are sheathed properly and only plugged into approved ports.
|
# ? Jan 28, 2020 22:42 |
|
THF13 posted: Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.

Super happy with the serverbuilds, uhh, server build that I did a few years back. Might have been you that posted it ITT, so thanks!
|
# ? Jan 28, 2020 22:51 |
|
Who is rolling dirty with 10 gbps using cat6e
|
# ? Jan 29, 2020 01:34 |
|
Axe-man posted: Who is rolling dirty with 10 gbps using cat6e

Not me, I went fiber! Which has honestly been super easy from a cabling perspective, and not at all easy from a getting-the-hosts-to-actually-see-the-loving-SFP+s perspective.
|
# ? Jan 29, 2020 03:38 |
|
I also went fiber, mostly because my breakouts are 40Gb QSFP+.
|
# ? Jan 29, 2020 03:56 |
|
THF13 posted: Shilling for serverbuilds.net-based builds for like the 5th time in this thread: they have a few builds that use a 4U Rosewill RSV-L4500 with a few modifications to make it actually quiet.

The problem with Anniversary 2 (as someone who was a former SB shill but ended up getting banned for speaking out against JDM's prickly behavior) is that all of the reasonably priced Supermicro LGA 2011 boards available now are narrow ILM, so there are pretty much no desktop coolers that fit. Anni2 is basically only Supermicro boards, so you can either get a server-specced narrow-ILM air cooler that's noisy as all hell, get a much more expensive narrow-ILM air cooler running a Noctua fan, or just say gently caress it and get a couple of Asetek AIOs and the narrow-ILM bracket they sell separately.

edit: to clarify the above position, JDM makes a decent build, but he's kind of a prick in literally every other situation, plus I wasn't a fan of the massive conflict of interest in creating a guide and then using eBay affiliate links.

Raymond T. Racing fucked around with this message at 06:24 on Jan 29, 2020 |
# ? Jan 29, 2020 05:53 |
|
I am not running any VMs on IP SANs or anything, so 10Gb is enough to move around my data. Guess I gotta turn in my Kool Kids Card.
|
# ? Jan 29, 2020 07:52 |
|
Axe-man posted: I am not running any VMs on IP SANs or anything, so 10Gb is enough to move around my data. Guess I gotta turn in my Kool Kids Card.

Nope, you are a cool kid. 10Gb probably places you among the top 5% of people with DC-level connections in their home. My 40Gb is split into 4x 10Gb. You're in the club, man.
|
# ? Jan 29, 2020 16:41 |
|
|
Of course, getting 10Gb is just the start. Then you realize that, sure, you can hit 700+MB/s on iperf, but you can't actually sustain that for long when moving real data. So then you look into throwing more RAM in there for a bigger write cache. And then you think that maybe upgrading your storage back end so it can serve data at more than 300MB/s would be cool. And then, and then...
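The "and then, and then" has numbers behind it: with a fast link feeding a slow backend, a RAM write cache only buys you so many seconds at full speed. A sketch with illustrative figures (the cache size and rates are assumptions, not from the post):

```python
# How long can sustained writes run at link speed before the cache fills?
def seconds_until_cache_full(cache_gb: float, in_mb_s: float, out_mb_s: float) -> float:
    """Seconds of full-speed writes before a RAM cache of cache_gb GB fills."""
    fill_rate = in_mb_s - out_mb_s          # MB/s accumulating in the cache
    if fill_rate <= 0:
        return float("inf")                 # backend keeps up indefinitely
    return cache_gb * 1000 / fill_rate

# ~700 MB/s in over 10Gb, ~300 MB/s out to the array, 16GB of cache:
print(seconds_until_cache_full(16, 700, 300))   # 40 seconds of glory
```

After that, throughput drops to whatever the backend can sustain - hence the upgrade spiral.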
|
# ? Jan 29, 2020 17:29 |