|
priznat posted: Check out all these balllllzz

Delid it you coward!
|
# ? Apr 9, 2019 03:59 |
|
|
|
3D print a delidding tool and hit it with a hammer
|
# ? Apr 9, 2019 05:45 |
|
You’d have to hit it with your car or something. I wonder what the BGA pitch is on those, and if the layout people all die of heart attacks when attempting to escape the high-speed signals.
|
# ? Apr 9, 2019 06:04 |
|
priznat posted: Check out all these balllllzz

Jesus H. Christ. It's like Intel wants to lose more marketshare to AMD.
|
# ? Apr 9, 2019 17:32 |
|
spasticColon posted: Jesus H. Christ. It's like Intel wants to lose more marketshare to AMD.

Hey, at least they're doing it in entertaining and creative ways
|
# ? Apr 9, 2019 18:31 |
|
priznat posted: You’d have to hit it with your car or something.

They'll have to update this guide, then: https://www.youtube.com/watch?v=zx641tAZFH0
|
# ? Apr 9, 2019 18:43 |
|
spasticColon posted: Jesus H. Christ. It's like Intel wants to lose more marketshare to AMD.

Intel is fine. They can provide a whole stack to the enterprise or cloud provider, which AMD cannot.
|
# ? Apr 9, 2019 19:54 |
|
lol do not use intel networking
|
# ? Apr 9, 2019 20:03 |
|
Do you like NICs that randomly start rebroadcasting traffic to each other while in S3 for no reason, effectively looping your network when two or more systems on a segment start doing it at the same time? Do you like wildly unstable Linux drivers that will drop link EVERY loving TEN SECONDS, which they refuse to fix and instead just wallpaper over by reinitializing the driver as quickly as possible? WELL HAVE I GOT A PRODUCT FOR YOU. https://www.youtube.com/watch?v=zoNtXVMOutE
|
# ? Apr 9, 2019 20:08 |
|
BangersInMyKnickers posted: lol do not use intel networking

Around six years ago, I found an issue that almost got Intel CNA cards removed from VMware's ESX hardware compatibility list; the threat from VMware engineering is what finally got them to pay attention to it.
|
# ? Apr 9, 2019 21:16 |
|
Dell had to eat a full swap in our VMware cluster from X710s to whatever QLogic was selling because of how poo poo they were. We spent over six months fighting with them, trying every conceivable firmware and driver combo we could think of, and they were still horrible. Incredibly fast, could do 10gig line rate on 64-byte frames, but the firmware and drivers were atrocious. The QLogics could only hit 4-5gig on standard 1500-byte frames and would only hit 10gig on jumbos, but at least they could hold a drat link.
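
For scale, line rate at 64-byte frames is a brutal packet rate. Quick back-of-the-envelope, counting the preamble/SFD and inter-frame gap every frame occupies on the wire:

```python
# Packets per second at 10GbE line rate with minimum-size (64-byte) frames.
# On the wire each frame also costs 8 bytes of preamble/SFD and a 12-byte
# inter-frame gap, so it occupies 84 bytes total.
frame_on_wire_bits = (64 + 8 + 12) * 8   # 672 bits per frame
pps = 10e9 / frame_on_wire_bits
print(f"{pps / 1e6:.2f} Mpps")           # ~14.88 million packets per second
```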
|
# ? Apr 9, 2019 21:47 |
|
Would you rather use an Intel NIC or a Realtek one, though?
|
# ? Apr 9, 2019 23:37 |
|
Killer NICs my dude
|
# ? Apr 9, 2019 23:39 |
|
Nvidia NICs (rip Mellanox)
|
# ? Apr 9, 2019 23:43 |
|
Cygni posted: Nvidia NICs

I wonder if they'll keep the Mellanox brand; it isn't that established/well known, but on the other hand, seeing Nvidia NICs is too weird.
|
# ? Apr 9, 2019 23:45 |
|
What... Nvidia NICs have AIDS.
|
# ? Apr 10, 2019 01:10 |
|
Wild EEPROM posted: Would you rather use an Intel NIC or a Realtek one, though?

I would honestly prefer the Realteks, because they don't cause broadcast storms in the middle of the night on systems in S3.
|
# ? Apr 10, 2019 13:59 |
|
When did the Intel NICs all go to poo poo? I remember making very conscious choices in my home hardware to make sure I had all Intel NICs. Granted, they're all 82xxx-series GigE NICs, not 10GbE.
|
# ? Apr 10, 2019 22:59 |
|
Their consumer NICs are much better than the competition's. Their server products are apparently a different story, and they've given up a ton of marketshare to Mellanox.
|
# ? Apr 10, 2019 23:12 |
|
Intel's server products have some seeeeriously dodgy Linux drivers that tend to mangle every single offload feature possible.
Kazinsal fucked around with this message at 23:15 on Apr 10, 2019
# ? Apr 10, 2019 23:12 |
|
Mellanox dual 100GbE NICs with x16 Gen4 are really sweet for NVMe over Fabrics/RDMA
|
# ? Apr 10, 2019 23:14 |
|
RIP Mellanox.
|
# ? Apr 10, 2019 23:16 |
|
movax posted: When did the Intel NICs all go to poo poo? I remember making very conscious choices in my home hardware to make sure I had all Intel NICs. Granted, they're all 82xxx-series GigE NICs, not 10GbE.

We've been having issues with the embedded Intel NICs on our OptiPlex fleet for at least the last five years that required firmware updates to correct.
|
# ? Apr 11, 2019 02:54 |
|
BangersInMyKnickers posted: We've been having issues with the embedded Intel NICs on our OptiPlex fleet for at least the last five years that required firmware updates to correct.

Beginning to think these issues have an r of nearly 1, as subsequent generations of Intel desktop platforms introduced even more ME / out-of-band management functions...

So, who's left for consumer NICs that doesn't completely suck? Realtek? Does Broadcom even play in that space anymore? Aquantia on high-end platforms? Intel has the advantage that virtually every single PCH they ship has a MAC for "free," and all the OEM has to do is buy the conveniently high-priced Intel PHY (which, at least a few generations ago, ate one of the PCIe lanes from the PCH doing probably pseudo-SGMII or something) to offer an Ethernet solution.

I used the 82574L frequently in SBC / motherboard designs and bought quite a few of the PCIe x1 cards it came on for various home servers around 2009-2010 or so (gently caress, 10 years ago was 90nm for NICs, and 1Gbit is still plenty for networking in consumer tech?!), and they were pretty bulletproof. I want to say the 82579 is the P/N for the PHY that accompanied the PCH around Ibex Peak and Cougar Point?

movax fucked around with this message at 04:57 on Apr 11, 2019
# ? Apr 11, 2019 04:52 |
For plain old GigE, the Intel i2xx series is still being produced and pops up here and there on some motherboards. You could always slap an old X540-T2 in there for 10GigE and be set for the next decade, if recent onboard stuff is really as bad as it sounds. They're (relatively) cheap, and I've never had an issue with them, despite my concerns about the suspiciously low prices that pop up on eBay. Although I just use mine to connect to my internet router, and use the second port as a direct crossover to my wife's PC, since I don't think my router even supports 10GigE. She's been sending a ton of backup data to the big hard drive in my computer, while also using ICS, without any complaints for years now.
|
|
# ? Apr 11, 2019 08:07 |
|
movax posted: So, who's left for consumer NICs that doesn't completely suck?

I suppose it depends on what you mean by "consumer." In most venues, "consumer" means 1GbE on Windows, for which Intel's NICs are still quite good. I've also never had an issue with them under BSD. 10GbE and above is a different story, but that's also generally outside "consumer" spec.
|
# ? Apr 11, 2019 14:33 |
|
I was planning on using the built-in Aquantia 10G NIC on my board as my primary NIC. Good/bad idea?
|
# ? Apr 12, 2019 01:04 |
|
BIG HEADLINE posted: I was planning on using the built-in Aquantia 10G NIC on my board as my primary NIC. Good/bad idea?

Already doing this on my Supermicro Z390. I used the default Intel NIC to get the Aquantia driver from the mobo vendor's site, then swapped ports, and I've had no issues for more than a month now. The Comcast connection where I live maxes out at 450-ish Mbit/s, so I can't judge multi-gig performance, though.
|
# ? Apr 12, 2019 02:22 |
|
priznat posted: Mellanox dual 100GbE NICs with x16 Gen4 are really sweet for NVMe over Fabrics/RDMA

All that Ethernet overhead? Gross.
|
# ? Apr 12, 2019 05:40 |
|
PCjr sidecar posted: All that Ethernet overhead? Gross.

True, we need a proper Gen4 PCIe-over-optical solution for distance. Some of the Mellanox cards do InfiniBand too, although I've never used it, so it's mysterious to me.
|
# ? Apr 12, 2019 06:05 |
|
Would one of you mind breaking that down? Ethernet overhead, Gen4?
|
# ? Apr 12, 2019 06:44 |
|
When you're connecting really fast systems, Ethernet becomes a limitation instead of just a protocol. Gen4 PCIe allows roughly 32 GB per second across an x16 slot. A single 100Gbit link is already enough to max out a Gen3 x8 connection, and a dual-port 100GbE card can saturate Gen3 x16.
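
The back-of-the-envelope, assuming the published PCIe line rates and 128b/130b encoding (real-world throughput lands a bit lower once TLP and protocol overhead are counted):

```python
# Usable PCIe bandwidth vs. raw Ethernet throughput, back-of-the-envelope.
def pcie_gb_s(gt_per_s: float, lanes: int) -> float:
    """GB/s for a PCIe link: line rate x lanes x 128b/130b encoding, in bytes."""
    return gt_per_s * lanes * (128 / 130) / 8

gen3_x8  = pcie_gb_s(8.0, 8)     # ~7.9 GB/s
gen3_x16 = pcie_gb_s(8.0, 16)    # ~15.8 GB/s
gen4_x16 = pcie_gb_s(16.0, 16)   # ~31.5 GB/s
port_100g = 100 / 8              # 12.5 GB/s of raw payload per 100GbE port

print(f"Gen3 x8:  {gen3_x8:5.1f} GB/s -- one 100GbE port ({port_100g} GB/s) already exceeds it")
print(f"Gen3 x16: {gen3_x16:5.1f} GB/s -- two ports ({2 * port_100g} GB/s) swamp it")
print(f"Gen4 x16: {gen4_x16:5.1f} GB/s -- dual 100GbE fits with headroom")
```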
|
# ? Apr 12, 2019 07:40 |
|
|
PCjr sidecar posted: All that Ethernet overhead? Gross.

We've got super-jumbo 16k and 64k frames now; it's not too bad. Even my five-year-old Force10 gear would support up to 12k.
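
Here's roughly how little framing overhead is left to squeeze out anyway; a quick sketch counting only Ethernet framing, not the IP/TCP headers on top:

```python
# Wire efficiency per MTU. Fixed per-frame overhead: 8-byte preamble/SFD,
# 14-byte Ethernet header, 4-byte FCS, 12-byte inter-frame gap = 38 bytes.
OVERHEAD = 8 + 14 + 4 + 12

for mtu in (1500, 9000, 12000, 16000, 64000):
    efficiency = mtu / (mtu + OVERHEAD)
    print(f"MTU {mtu:>6}: {efficiency:.2%} of line rate is payload")
```

1500 gets you ~97.5% of line rate and 9000 gets ~99.6%; everything past jumbo is fighting over fractions of a percent.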
|
# ? Apr 12, 2019 14:59 |
|
priznat posted: True, we need a proper Gen4 PCIe-over-optical solution for distance. Some of the Mellanox cards do InfiniBand too, although I've never used it, so it's mysterious to me.

IB is weird but good at what it does. I haven't had a chance to play with the 200G stuff yet. Most of the Gen4 fabrics are using the Ethernet PHY.
|
# ? Apr 12, 2019 18:06 |
|
BangersInMyKnickers posted: We've got super-jumbo 16k and 64k frames now; it's not too bad. Even my five-year-old Force10 gear would support up to 12k.

Packet size doesn't necessarily help with small-message latency, though.
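
Rough numbers on why; the software/switch figures below are typical ballparks, not measurements:

```python
# Serialization delay of a small message vs. typical per-message costs.
msg_bytes = 64
for gbps in (10, 100):
    wire_ns = msg_bytes * 8 / gbps   # Gb/s == bits per nanosecond
    print(f"{gbps:>3} GbE: {wire_ns:6.2f} ns to put {msg_bytes} bytes on the wire")

# Compare against roughly 1,000-10,000 ns for a kernel TCP pass through
# NIC, driver, and switch, or roughly 1,000-2,000 ns for RDMA. Frame size
# doesn't touch any of that.
```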
|
# ? Apr 12, 2019 18:10 |
|
PCjr sidecar posted: Packet size doesn't necessarily help with small-message latency, though.

You're talking about microseconds of overhead; it doesn't matter in the vast majority of use cases, and you should be using Optane if it does.
|
# ? Apr 12, 2019 18:12 |
|
BangersInMyKnickers posted: You're talking about microseconds of overhead; it doesn't matter in the vast majority of use cases, and you should be using Optane if it does.

Yeah, 1 µs to 10 µs is still an order of magnitude. Not everything is storage, and for what is, lol at Optane's baby capacity.
|
# ? Apr 12, 2019 18:34 |
|
PCjr sidecar posted: Yeah, 1 µs to 10 µs is still an order of magnitude. Not everything is storage, and for what is, lol at Optane's baby capacity.

Sounds like someone is too poor for a 1.5TB 4800x
|
# ? Apr 12, 2019 21:35 |
|
|
|
WhyteRyce posted: Sounds like someone is too poor for a 1.5TB 4800x

All the way out in PCIe land? Pass, I don't have all day.
|
# ? Apr 13, 2019 19:50 |