SlowBloke
Aug 14, 2017

Paul MaudDib posted:

If I want to do adapter bonding on a board, what is the minimum buy-in from the network side? Managed switch that's aware of bonding? What's my cheapest buy-in there for 8 ports?

You need to find a switch that supports LACP (IEEE 802.3ad); most cheap consumer brands call port aggregation "LAG". Most smart switches from TP-Link or Netgear will have it. If you want to go cheap there is the TP-Link SG108E v3 (the v1/v2 can only be configured via a chintzy Windows app, so avoid those) or the Netgear GS108Tv2.
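
One thing worth spelling out (rough sketch below - the hash is illustrative, not any vendor's exact algorithm): LACP balances per flow, so four bonded 1G ports give you up to 4Gbps aggregate across clients, but any single client conversation still tops out at one port's speed.

code:
# Illustrative sketch of an 802.3ad-style transmit hash policy: each flow is
# pinned to one member link, so per-flow throughput stays at one port's speed.
import zlib

def pick_member_link(src_mac, dst_mac, src_ip, dst_ip, n_links=4):
    """Hash a flow tuple onto one of n_links bonded 1 GbE ports (layer2+3 style)."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return zlib.crc32(key) % n_links

# Different clients usually land on different member ports, but each single
# flow stays on one 1 GbE link.
print(pick_member_link("aa:bb:cc:00:00:01", "ff:ee:dd:00:00:01", "192.168.1.10", "192.168.1.2"))
print(pick_member_link("aa:bb:cc:00:00:02", "ff:ee:dd:00:00:01", "192.168.1.11", "192.168.1.2"))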

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
First define the use case then we'll be able to steer you in the right direction. What problem are you trying to solve?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

You need to find a switch that supports LACP (IEEE 802.3ad); most cheap consumer brands call port aggregation "LAG". Most smart switches from TP-Link or Netgear will have it. If you want to go cheap there is the TP-Link SG108E v3 (the v1/v2 can only be configured via a chintzy Windows app, so avoid those) or the Netgear GS108Tv2.

Perfect answer, thank you.

H2SO4 posted:

First define the use case then we'll be able to steer you in the right direction. What problem are you trying to solve?

Taking a board with quad Gig-E, plugging it into a switch, and serving a bunch of clients at Gig-E speeds at the same time over the same switch.

I guess I'm open to 16-port too if that's priced right.

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

I guess I'm open to 16-port too if that's priced right.

TP-Link makes a killer 10G SOHO switch, the T1700G-28TQ (http://www.tp-link.com/us/products/details/cat-40_T1700G-28TQ.html): 24 1G ports and 4 10G ports. If you can scrounge some cash and want to future-proof, that's the one I'd suggest. Here in Europe that model is always on rebate at 230-250€; dunno about US prices.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

TP-Link makes a killer 10G SOHO switch, the T1700G-28TQ (http://www.tp-link.com/us/products/details/cat-40_T1700G-28TQ.html): 24 1G ports and 4 10G ports. If you can scrounge some cash and want to future-proof, that's the one I'd suggest. Here in Europe that model is always on rebate at 230-250€; dunno about US prices.

That's Extremely My poo poo if you can get it down to like $200. I'm looking at the QNAP card that can do 2x NVMe and 10 GbE for my fileserver.

$350 is a bit steep for the switch though. I'd like to keep this whole endeavor down to like $500 all-told.

(sorry, Infiniband has babied me here, where I can get adapters for $40 and switches for $100 - I realize that when the cheapest 10GbE switches are $200 I'm probably not going to make that happen)

Paul MaudDib fucked around with this message at 09:54 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

That's Extremely My poo poo if you can get it down to like $200. I'm looking at the QNAP card that can do 2x NVMe and 10 GbE for my fileserver.

$350 is a bit steep for the switch though. I'd like to keep this whole endeavor down to like $500 all-told.

(sorry, Infiniband has babied me here)

Unless you have a QNAP NAS, the QM2 is actually detrimental. You have a card that talks to your computer at PCIe 2.0 x4 while hosting a 10GBase-T controller (which needs three lanes, give or take) and either two PCIe x4 SSDs or two SATA-600 SSDs. Bandwidth is overcommitted in both cases (slightly on the SATA model, heavily on the PCIe model). I'd suggest a dedicated 10G NIC (QLogic/Broadcom, or Mellanox if you want to cheap out) if you whitebox your NAS.
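
Back-of-the-envelope on that overcommit (my own rough numbers - assuming the card's uplink really is PCIe 2.0 x4 and typical effective rates, not datasheet maxima):

code:
# Rough bandwidth budget for a QM2-style card behind a PCIe 2.0 x4 uplink.
uplink_gbps = 4 * 4.0  # PCIe 2.0: 5 GT/s per lane, 8b/10b -> ~4 Gbit/s usable, x4 lanes

pcie_model = {"10GBase-T NIC": 10.0, "NVMe SSD #1 (x4)": 25.0, "NVMe SSD #2 (x4)": 25.0}
sata_model = {"10GBase-T NIC": 10.0, "SATA-600 SSD #1": 4.8, "SATA-600 SSD #2": 4.8}

for name, devices in [("PCIe model", pcie_model), ("SATA model", sata_model)]:
    demand = sum(devices.values())
    print(f"{name}: worst-case demand ~{demand:.0f} Gbit/s vs uplink ~{uplink_gbps:.0f} Gbit/s"
          f" -> overcommit {demand / uplink_gbps:.1f}x")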

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

Unless you have a QNAP NAS, the QM2 is actually detrimental. You have a card that talks to your computer at PCIe 2.0 x4 while hosting a 10GBase-T controller (which needs three lanes, give or take) and either two PCIe x4 SSDs or two SATA-600 SSDs. Bandwidth is overcommitted in both cases (slightly on the SATA model, heavily on the PCIe model). I'd suggest a dedicated 10G NIC (QLogic/Broadcom, or Mellanox if you want to cheap out) if you whitebox your NAS.

What about the QNAP NAS makes the QM2 less detrimental then?

I fully realize the bottleneck problems and I have pointed them out to the NAS thread... but I'm hoping that these issues won't manifest with a dedicated NAS server (even with a homelab spec). This would be a dedicated ZFS NAS server with an i3-7100 spec (I already own the processor).

If it comes down to it I am willing to disable the 10GbE on the QM2 and run a separate adapter (so the QM2 is just acting as a PEX switch for the NVMe). My fallback for now is putting my Mellanox IB QDR 2.0x8 (40/32 gbps) adapter into the other slot (should run at full speed on the PCH), and there's zero real-world chance that I can actually utilize that on a sustained basis. I am not willing to commit to the idle power consumption that would be necessary for more than 3.0x16 lanes.
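
(If the 40/32 looks odd: QDR signals at 40 Gbps per 4x port, but 8b/10b encoding leaves 32 Gbps of actual data - which is also, numerically, what a PCIe 2.0 x8 link moves. Quick arithmetic, my numbers:)

code:
# IB QDR 4x vs. a PCIe 2.0 x8 link; both use 8b/10b encoding.
qdr_signaling = 4 * 10.0            # 4 lanes x 10 Gbit/s signalling = 40 Gbit/s
qdr_data = qdr_signaling * 8 / 10   # 8b/10b overhead -> 32 Gbit/s of payload

pcie2_x8 = 8 * 5.0 * 8 / 10         # 8 lanes x 5 GT/s, 8b/10b -> 32 Gbit/s

print(f"IB QDR 4x data rate  : {qdr_data:.0f} Gbit/s")
print(f"PCIe 2.0 x8 bandwidth: {pcie2_x8:.0f} Gbit/s")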

Paul MaudDib fucked around with this message at 10:08 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

What about the QNAP NAS makes the QM2 less detrimental then?

On a QNAP NAS, the QM2 is one of the few ways to add 10G to a gig-only NAS while keeping the warranty, and a basic QNAP 10GBase-T NIC costs little less than a QM2 SATA+10G, so that's the main reason I'd buy one.

If you don't have a QNAP NAS, there is a promotion from Aquantia that will drop prices for their 10GBase-T NICs as low as $50-69 for a new part, while you can source used 10G SFP+ cards for as little as $30 (ConnectX-2), so the QM2's price is not competitive.

If you build your own NAS, you can easily spec the board to have at least two PCIe x8 slots for a dedicated NIC and a twin M.2 adapter. Chances are the NIC + M.2 adapter combination is cheaper than the QM2, and faster to boot.

Paul MaudDib posted:

I fully realize the bottleneck problems and I have pointed them out to the NAS thread... but I'm hoping that these issues won't manifest with a dedicated NAS server (even with a homelab spec). This would be a dedicated ZFS NAS server with an i3-7100 spec (I already own the processor).

If it comes down to it I am willing to disable the 10GbE on the QM2 and run a separate adapter (so the QM2 is just acting as a PEX switch for the NVMe). My fallback for now is putting my Mellanox IB QDR 2.0x8 (40/32 gbps) adapter into the other slot (should run at full speed on the PCH), and there's zero real-world chance that I can actually utilize that on a sustained basis. I am not willing to commit to the idle power consumption that would be necessary for more than 3.0x16 lanes.

A Mellanox QDR card would burn as much power as a 10GBase-T NIC and the two M.2 drives combined, even when running in two different slots. Unless you have a very specific reason for that uplink type, I would advise against it.

A PEX switch adds latency, so I would use an M.2 adapter only if it doesn't have any PEX chips on it (most cheap cards do not; they are only lane adapters with a little signal boosting).

Are you sure you're not overspeccing? Your CPU will bottleneck before requiring more IOPS than a SATA-600 SSD cache drive set can deliver.

SlowBloke fucked around with this message at 10:35 on Nov 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Yeah, so, my NAS is already specced. I'm getting this board, and I can toss on up to two 3.0x8 adapters. I figure one will be a PLX switch (i.e. the QM2) that hosts NVMe and the other will be Ethernet. There's the PCH as a wildcard, but in terms of overall performance I'll be watching the HDDs too.

SlowBloke posted:

Are you sure you're not overspeccing? Your CPU will bottleneck before requiring more IOPS than a SATA-600 SSD cache drive set can deliver.

Oh I'm definitely overspeccing. I just want to wring whatever I can out of my build here (as a dedicated ZFS NAS build). I want to provide maximum throughput to my workstation/application server/etc.

Paul MaudDib fucked around with this message at 10:40 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

Yeah, so, my NAS is already specced. I'm getting this board, and I can toss on up to two 3.0x8 adapters. I figure one will be a PLX switch (i.e. the QM2) that hosts NVMe and the other will be Ethernet. There's the PCH as a wildcard, but in terms of overall performance I'll be watching the HDDs too.

Hmm, have you already bought the motherboard? There are better Supermicro boards for your scenario, like the X11SSH-TF (https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-TF.cfm), which includes the 10G NIC and an M.2 slot on the motherboard so your wasted-PCIe-slot power issues are mitigated, or the X11SSH-CTF (https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-CTF.cfm), which has all of the above plus 8x SAS/SATA ports for extra storage (on top of the 8x SATA on the PCH).

SlowBloke fucked around with this message at 10:52 on Nov 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

Hmm, have you already bought the motherboard? There are better Supermicro boards for your scenario, like the X11SSH-TF (https://www.supermicro.com/products/motherboard/Xeon/C236_C232/X11SSH-TF.cfm), which includes the 10G NIC and an M.2 slot on the motherboard so your power issues are mitigated

Yes. But it's a fairly linear tradeoff either way - I need 8 SATA ports, and the 10 GbE port onboard costs me ~6 PCIe lanes, more or less, so I'm back in the same situation I started in.

1x Fast Networking (whatever that is), 2x NVMe 3.0x4, and 8x SATA ports. Every other build I've considered is a merry-go-round on those capabilities. I was originally looking at X99 but I don't want to spend the idle power.

The QM2 adapter is actually one of the only capability gains that I've managed so far.

Paul MaudDib fucked around with this message at 10:52 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

Yes. But it's a fairly linear tradeoff either way - I need 8 SATA ports, and the 10 GbE port onboard costs me ~6 PCIe lanes, more or less, so I'm back in the same situation I started in.

1x Fast Networking (whatever that is), 2x NVMe 3.0x4, and 8x SATA ports. Every other build I've considered is a merry-go-round on those capabilities. I was originally looking at X99 but I don't want to spend the idle power.

Is the X11SSH-CTF outside your budget? It has pretty much everything you're asking for on the motherboard (one M.2 slot instead of two is the only missing item), so no extra cards are needed.

Understood. I am perplexed by the choice of motherboard, though; did you buy the CPU before the mobo? For a similar project I would have picked the server Atom (C3xxx) line, since the mobo+CPU combo is better value that way. An A2SDi-H-TP4F (https://www.supermicro.com/products/motherboard/atom/A2SDi-H-TP4F.cfm) costs less than your combo and has fuckloads more power than a 7-series i3.

SlowBloke fucked around with this message at 10:59 on Nov 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

Is the X11SSH-CTF outside your budget? It has pretty much everything you're asking for on the motherboard (one M.2 slot instead of two is the only missing item), so no extra cards are needed.

But surely the two NVMe drives are hung off the PCH... which cuts them down to half speed, like the QM2...

From my perspective, SATA is a good thing to park on the PCH at commodity-hardware prices. Ideally it would be dedicated; otherwise I need another Fast Built-In there - potentially Fast Networking (if the PCH can run at 2.0x8 speeds and multiplex onto 3.0x4 throughput), or an NVMe drive, I guess. Then on the other two fast slots (3.0x8 each) it's a PLX switch for the NVMe drives, or Fast Networking. The fourth NVMe drive, in this scenario, would go off the CPU PEG lanes and eat up 3.0x8 lanes.

The QM2 is potentially a good force-multiplier in this situation (even if I would be bottlenecked when fully utilizing it).
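
Roughly, the budget I'm juggling looks like this (a sketch with my assumed lane counts - the exact split depends on the board):

code:
# Hypothetical lane budget for this build: 16 CPU (PEG) lanes split x8/x8,
# everything else hung off the PCH behind its ~x4-equivalent DMI uplink.
cpu_lanes = 16
peg_slots = {
    "PLX/QM2 card hosting 2x NVMe": 8,
    "Fast Networking (10GbE or IB)": 8,
}
pch_devices = ["8x SATA for the HDD pool", "optional extra NVMe x4"]

assert sum(peg_slots.values()) <= cpu_lanes
for slot, lanes in peg_slots.items():
    print(f"CPU x{lanes}: {slot}")
print("PCH (shared DMI):", ", ".join(pch_devices))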

SlowBloke posted:

Understood. I am perplexed by the choice of motherboard, though; did you buy the CPU before the mobo? For a similar project I would have picked the server Atom (C3xxx) line, since the mobo+CPU combo is better value that way.

Nope, I considered that but I don't want to be limited to 2.0x4 lanes expansion. I want at least 3.0x16+PCH lanes for this build.

The C2750d4i is only one of many points on this merry-go-round of capabilities - and it means downgrading from an arbitrary LGA1151 chip (potentially a G4560/7100, or a high-end 4C8T Xeon) to a C2750, and giving up my upgrade options.

Paul MaudDib fucked around with this message at 11:12 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:


Nope, I considered that but I don't want to be limited to 2.0x4 lanes expansion. I want at least 3.0x16+PCH lanes for this build.

Bottlenecked by what? If everything you need and then some is already on the motherboard, then by the time you need to deploy the extra PCIe kit you will find yourself with an old CPU/mobo (more power draw for equivalent performance), and the PCIe expansion cards may cost more than a full rebuild. For instance, if you have a 10G motherboard but have neither 10G clients nor switches, there is no point worrying about network expansion; by the time you have both saturated, a new mobo+CPU will be quicker/cheaper to deploy.

SlowBloke fucked around with this message at 11:08 on Nov 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

Bottlenecked by what? If everything you need and then some is already on the motherboard, then by the time you need to deploy the extra PCIe kit you will find yourself with an old CPU/mobo (more power draw for equivalent performance), and the PCIe expansion cards may cost more than a full rebuild.

See my edit, I'm a power-user/home-lab user but I'm mostly driven by what I can pick up cheap. If there's a big potential capability (going from a 2C4T to a 4C8T) then by all means. But I'm also all about plugging in the $40 IB QDR adapter I bought from eBay to get fast networking, and I don't mind throwing it out for 10 GbE for $100/adapter either for a handful of servers.

I'm really just looking to maintain my expandability as far as possible (in both compute and fast storage) given the LGA1151 thermal envelope (for a dedicated NAS machine, with a separate application server).

quote:

For instance, if you have a 10G motherboard but have neither 10G clients nor switches, there is no point worrying about network expansion; by the time you have both saturated, a new mobo+CPU will be quicker/cheaper to deploy.

I'm looking to service multiple clients at 1 GbE speeds. So I need a faster trunk to the server.

Also, crossover cables are not expensive, and a crossover QSFP cable is no different from any other... just plug your adapters in directly.

Paul MaudDib fucked around with this message at 11:14 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

See my edit, I'm a power-user/home-lab user but I'm mostly driven by what I can pick up cheap. If there's a big potential capability (going from a 2C4T to a 4C8T) then by all means. But I'm also all about plugging in the $40 IB QDR adapter I bought from eBay to get fast networking, and I don't mind throwing it out for 10 GbE for $100/adapter either for a handful of servers.

I'm really just looking to maintain my expandability as far as possible given the LGA1151 thermal envelope (for a dedicated NAS machine, no application servers).


I'm looking to service multiple clients at 1 GbE speeds. So I need a faster trunk to the server.

Also, crossover cables are not expensive, and a crossover QSFP cable is no different from any other...

There is a price/performance limit to ghetto-rigging your NAS/whitebox; if you go all ship-of-Theseus you will end up with a subpar configuration in the long run. Yes, QDR may be cheap when you run point-to-point, but once you scale past two nodes the switching kit will make you hate it. If a QDR uplink is a hard requirement for you, what kind of homelab payload are we talking about? I have several terabytes of AFA storage at work and I'm still having trouble saturating our 2x 10G filer uplinks (on an LACP channel).

SlowBloke fucked around with this message at 11:26 on Nov 22, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

There is a price/performance limit to ghetto-rigging your NAS/whitebox; if you go all ship-of-Theseus you will end up with a subpar configuration in the long run. Yes, QDR may be cheap when you run point-to-point, but once you scale past two nodes the switching kit will make you hate it. If a QDR uplink is a hard requirement for you, what kind of homelab payload are we talking about? I have several terabytes of AFA storage at work and I'm still having trouble saturating our 2x LACP filer uplinks.

Which of these tradeoffs does my decision actually force right now?

I've made this decision with the intention of forcing as few of them as possible: SATA on the PCH, then NVMe and Fast Networking on my two PCIe slots, while leaving myself the ability to expand core count.

edit: Also I bought a 36-port IB QDR switch (Voltaire 4036) for $125 a while ago, so that's not an issue. It may never get used, I don't care, if 10 GbE is the right answer...

Paul MaudDib fucked around with this message at 11:32 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

Which of these tradeoffs does my decision actually force right now?

I've made this decision with the intention of forcing as few of them as possible: SATA on the PCH, then NVMe and Fast Networking on my two PCIe slots, while leaving myself the ability to expand core count.

edit: Also I bought a 36-port QDR switch (Voltaire 4036) for $125 so that's not an issue. It may never get used, I don't care, if 10 GbE is the right answer...

As I said, the issue is in the long run, not now. A ghetto-rig costs you time picking and choosing parts from eBay, babysitting the config after every change, and - mostly - power usage and noise. If you don't mind spending the time and power and having a wind tunnel in your basement, more power to you :)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

As I said, the issue is in the long run, not now. A ghetto-rig costs you time picking and choosing parts from eBay, babysitting the config after every change, and - mostly - power usage and noise. If you don't mind spending the time and power and having a wind tunnel in your basement, more power to you :)

I asked for network hardware advice, not PC hardware advice. I assure you that my build is in no way ghetto-rigged - rather, it's as close to the diametric opposite as I can design. I am planning for the long term - in as many dimensions as possible :)

I don't know where I might want to go - but PCIe lanes are as good as cash, in build design terms.

An i3-7100 isn't a wind-tunnel either. That's what the NH-L9i is for. Golly, my IRL 35W TDP... so hot!

Paul MaudDib fucked around with this message at 11:44 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

An i3-7100 isn't a wind-tunnel either. That's what the NH-L9i is for. Golly, my IRL 35W TDP... so hot!

Any PCIe card pushing enough data to need a >=8-lane link will either have a fan on it or will require decent airflow to cool down; that's what the wasted heat/noise in my post referred to. Ending the derail.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SlowBloke posted:

Any PCIe card pushing enough data to need a >=8-lane link will either have a fan on it or will require decent airflow to cool down; that's where the wasted heat/noise comes from. Ending the derail.

That's fair - I hadn't considered the effect of fast networking and NVMe drives on total drive temps during peak load. But that's why I wanted to put a low-power processor inside the NAS case (and have a separate application server doing the work). And I'm going to put good fans on this build - 2x 120mm. Anything I should be looking at besides Noctuas?

Paul MaudDib fucked around with this message at 12:05 on Nov 22, 2017

SlowBloke
Aug 14, 2017

Paul MaudDib posted:

That's fair - I hadn't considered total drive temps during peak load. But I'm going to put good fans on this build - 2x 120mm. Anything I should be looking at besides Noctuas?

I found Revoltec to be a good compromise of noise and airflow, but I don't know if there are sellers on the other side of the Atlantic.

BigDumper
Feb 15, 2008

I’m looking to upgrade my old Linksys router while they are on sale for Black Friday. My Linksys is an older one from 2007-09 that I got for free and it’s kind of weak for my 850 square foot apartment. I’m trying to determine what would be the best bang for buck replacement, right now I’m between these two:

Netgear AC1200 Dual Band gigabit R6230 - $69.99
TP Link Archer C1200 - $59.99

I’m leaning towards TP Link because it’s the recommended router mentioned in the OP for my price range but I’m more familiar with the NetGear brand. We have 100 mb/s download speed from Comcast and both look to be able to handle that and then a lot more.

Our internet use is mostly online gaming and video streaming, often on multiple devices at the same time. Our current router can handle this for the most part, but buffering is noticeable at times because of the bandwidth limit causing a bottleneck.

ickna
May 19, 2004

BigDumper posted:

I’m looking to upgrade my old Linksys router while they are on sale for Black Friday. My Linksys is an older one from 2007-09 that I got for free and it’s kind of weak for my 850 square foot apartment. I’m trying to determine what would be the best bang for buck replacement, right now I’m between these two:

Netgear AC1200 Dual Band gigabit R6230 - $69.99
TP Link Archer C1200 - $59.99

I’m leaning towards TP Link because it’s the recommended router mentioned in the OP for my price range but I’m more familiar with the NetGear brand. We have 100 mb/s download speed from Comcast and both look to be able to handle that and then a lot more.

Our internet use is mostly online gaming and video streaming, often on multiple devices at the same time. Our current router can handle this for the most part, but buffering is noticeable at times because of the bandwidth limit causing a bottleneck.

I'd go with the TP-Link. Mine has been bulletproof going on a year now, and it provides just as much configurability as any of the Netgear or Linksys stuff I've ever owned. In an 850 sq ft apartment, you should be able to get fantastic throughput from pretty much anywhere.

Bob Socko
Feb 20, 2001

Any good Mesh WiFi deals on Black Friday? I’m leaning toward Best Buy’s 3-pack of Google WiFis for $249, but I haven’t bought into any ecosystem yet and am happy to look at other options.

If it matters, I’ll be buying four of whatever because I have a big split-level home, and have already run lines through the walls to support my four Airport Extremes.

Bob Socko fucked around with this message at 18:51 on Nov 22, 2017

BigDumper
Feb 15, 2008

ickna posted:

I'd go with the TP-Link. Mine has been bulletproof going on a year now, and it provides just as much configurability as any of the Netgear or Linksys stuff I've ever owned. In an 850 sq ft apartment, you should be able to get fantastic throughput from pretty much anywhere.

This is the kind of feedback I was hoping to get, thank you!

Encrypted
Feb 25, 2016

Paul MaudDib posted:

Which of these tradeoffs does my decision actually force right now?

Keep us updated. I've been eyeing 10Gbps networking for home use for a while, but it seems there is still quite some way to go before you get:

- Components in the computer cheap and fast enough to feed that 10Gbps reliably
- A NIC that's cheap and can reliably hit 10Gbps
- A 10Gbps switch that's cheap and fanless for home use
- A router that can route at 10Gbps

There were a few new NICs/switches released a few months ago, but besides that everything else is still kinda expensive right now. And there seems to be no point upgrading it one piece at a time.

SlowBloke
Aug 14, 2017

Encrypted posted:

Keep us updated. I've been eyeing 10Gbps networking for home use for a while, but it seems there is still quite some way to go before you get:

- Components in the computer cheap and fast enough to feed that 10Gbps reliably
- A NIC that's cheap and can reliably hit 10Gbps
- A 10Gbps switch that's cheap and fanless for home use
- A router that can route at 10Gbps

There were a few new NICs/switches released a few months ago, but besides that everything else is still kinda expensive right now. And there seems to be no point upgrading it one piece at a time.

You can get a used Mellanox ConnectX-2 SFP+ NIC for $30, a 10G switch from MikroTik for less than $200, and a handful of SFP+ direct-attach cables for $20-30 each. It's not free, but it's not incredibly expensive either. 10G routers are kinda pointless for home use right now: you don't need to route at 10G, you need to switch at 10G. Unless you have a 10G fiber uplink, in which case you can afford a Cisco/Juniper with ease :D

Tiny Man Thinking Big
Apr 24, 2005

Somehow I imagined this experience would be more rewarding.

Can anyone tell me the difference between the Archer C9 and the Archer C1900? It looks like the C1900 is slightly more powerful and black but it is also cheaper than the C9 which doesn’t make sense to my simple brain

ickna
May 19, 2004

Tiny Man Thinking Big posted:

Can anyone tell me the difference between the Archer C9 and the Archer C1900? It looks like the C1900 is slightly more powerful and black but it is also cheaper than the C9 which doesn’t make sense to my simple brain

Feature-wise, there isn't really a difference other than the color of the plastic housing and the 900 mW amplifiers listed on the C1900. Their spec sheets on TP-Link's website are identical. You can't tell the difference in output power between them from the spec sheet, though, because each just lists compliance with the FCC maximum of <30 dBm.
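
(If you want to sanity-check those numbers - rough conversion only, since the listed amp power isn't necessarily what reaches the antenna - 30 dBm works out to exactly 1000 mW, so a 900 mW amp sits just under the cap:)

code:
import math

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

print(f"900 mW = {mw_to_dbm(900):.1f} dBm")   # ~29.5 dBm
print(f"30 dBm  = {dbm_to_mw(30):.0f} mW")    # exactly 1000 mW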

That being said, the C1900 I bought for my parents' house works great: ~3200 sq ft two-level house, router on the top floor at the front of the house, and 3/3 WiFi bars anywhere on the property. They previously had a B/G/N Apple AirPort, then later a Linksys E1200, in that same position; both would drop to 2/3 bars on the main floor and 1/3 bars by the pool.

Actuarial Fables
Jul 29, 2014

Taco Defender
The C1900 is sold in the US (and I think Canada?) only. The more powerful wireless transmission is compliant with US regulations, but not those in the EU.

e. As to why it's cheaper, it could be that it's just not as popular as the c5/7/9 line.

Actuarial Fables fucked around with this message at 19:08 on Nov 23, 2017

Lork
Oct 15, 2007
Sticks to clorf
The QoS feature of the Edgerouters is supposed to figure out which traffic needs to be prioritized automatically, right? I dabbled in Tomato's QoS on my current router, which requires you to manually classify everything by the port it uses, and ended up concluding that it wasn't worth the trouble.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Lork posted:

The QoS feature of the Edgerouters is supposed to figure out which traffic needs to be prioritized automatically, right?

Yes.

Photex
Apr 6, 2009

Anyone try out Ubiquiti's new software UNMS? Looks like it's a wrapper for their Edge and Unifi lines, probably will be replacing the unifi controller at some point on the roadmap.

https://unms.com/

CrazyLittle
Sep 11, 2001

Clapping Larry

Photex posted:

Anyone try out Ubiquiti's new software UNMS? Looks like it's a wrapper for their Edge and Unifi lines, probably will be replacing the unifi controller at some point on the roadmap.

https://unms.com/

Different target market. UniFi is a closed ecosystem for firewall, switch, and WiFi management. UNMS is targeted at multi-site management of EdgeRouter and EdgeSwitch devices. UNMS works great in the absence of any other central config-management system for EdgeMAX/Vyatta devices, but it still has a lot of work that needs to be done. Right now it exposes barely any of the EdgeRouter's config tree.

bsaber
Jul 27, 2007

SlowBloke posted:

Most recent Asus routers support L2TP VPN, and so does the EdgeRouter. Check the router model at your parents' house.

So while I was over, I found out that they replaced their router with an Amplifi Mesh Router. And there doesn't seem to be any VPN support that I can see from the app. Any other suggestions?

SlowBloke
Aug 14, 2017

bsaber posted:

So while I was over, I found out that they replaced their router with an Amplifi Mesh Router. And there doesn't seem to be any VPN support that I can see from the app. Any other suggestions?

https://amplifi.com/teleport/

VPN endpoint kit for amplifi.

SlowBloke
Aug 14, 2017

Photex posted:

Anyone try out Ubiquiti's new software UNMS? Looks like it's a wrapper for their Edge and Unifi lines, probably will be replacing the unifi controller at some point on the roadmap.

https://unms.com/

It's going to take a long time to add UniFi integration to UNMS (it's at the bottom of their known schedule), plus I get the feeling UNMS is going to talk to the UniFi controller rather than replace it. UNMS's main target is ISPs, to provide remote client access/management (to unfuck end users' ISP-provided modems/routers), rather than SOHO/SMB like UniFi. On top of that, UNMS is not a one-click deploy like the UniFi controller, so I personally hope they delay the one-size-fits-all replacement as long as possible.

SlowBloke fucked around with this message at 22:48 on Nov 24, 2017

bsaber
Jul 27, 2007

SlowBloke posted:

https://amplifi.com/teleport/

VPN endpoint kit for amplifi.

Oh, that's cool... but I can't get it yet. It's preorder-only, but something I'll keep an eye on. Thanks again, SlowBloke!

FCKGW
May 21, 2006

Archer C7 is $55 today
https://www.amazon.com/gp/product/B00BUSDVBQ/

AC1200 is $40 as well.

FCKGW fucked around with this message at 00:41 on Nov 25, 2017
