|
Wait, I thought Intel NICs were the preferred option compared to Realtek?
|
# ? Jan 23, 2023 21:24 |
|
they absolutely were in the gigabit era, but their consumer 2.5gb chipset has been and continues to be a disaster. realtek had teething problems with 2.5gb as well, but the complaints about that seem to have dried up after 2021
|
# ? Jan 23, 2023 21:30 |
|
the issues with intel's chipset have flown under the radar to a degree because they mostly crop up when running it in 2.5gb mode, which is still relatively niche. apparently it's also more prone to crapping out when connected directly to a router; putting a dumb switch in between tends to make it behave for some reason
|
# ? Jan 23, 2023 21:36 |
|
repiv posted:they absolutely were in the gigabit era, but their consumer 2.5gb chipset has been and continues to be a disaster In hunting down which adapter to use for 2.5Gb on an Apple silicon mac, the RTL8156B seems to be the preferred option but still has a bunch of complaints about it. 82574 was the last great Intel controller IMO.
|
# ? Jan 23, 2023 22:23 |
|
My 10GBe intel dual nic card seems to work just fine. I didn't dare to try my mobo's integrated 2.5G Intel solution.
|
# ? Jan 23, 2023 22:35 |
|
Ihmemies posted:My 10GBe intel dual nic card seems to work just fine. I didn't dare to try my mobo's integrated 2.5G Intel solution. Yeah, I went with a dual-port X540 simply to have 10Gbit before the inevitable cavalry charge.
|
# ? Jan 23, 2023 22:45 |
The use-case for 2.5Gbps seems to be if you've got cat5 (not cat5e, which can do 10G up to 25-30m, longer if you used S/STP - which has been the recommendation for many years) installed in walls, but somehow didn't ensure that you could pull new cable. Also known as making it a problem for your future self or someone else. If anyone wants to know why Realtek were and are a complete disaster, I'd recommend this, this, and the comments in this and this. And best of all, Realtek still regularly pulls the kind of poo poo that no other vendor dares, up to and including shipping driver patches to work around PCB trace issues without documenting it in their technical specifications, issuing product change notifications, or mentioning it in any of their (absolutely terribly written) opensource drivers, which are designed to be as obfuscated as possible while technically still qualifying as opensource.
|
|
# ? Jan 23, 2023 22:49 |
|
2.5g exists because hyperscalers wanted to do 4 2.5g lanes off a 10g switch port in 2012
|
# ? Jan 23, 2023 22:56 |
|
2.5 is on your motherboard and makes it easier to drag things back and forth to your NAS. Unfortunately consumer and early enterprise multigig switches are finicky and every one seems to have some weird compatibility issue. Newer stuff does seem to be better in my limited testing, but I’m not running this stuff on mission critical hardware.
|
# ? Jan 23, 2023 23:02 |
|
Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G?
|
# ? Jan 23, 2023 23:04 |
|
Don't 2.5GBASE-T and 5GBASE-T exist so that you can run WiFi 6/6E APs with existing wiring and with formal support for PoE++?
|
# ? Jan 23, 2023 23:06 |
|
I just run OC48 links for everything.
|
# ? Jan 23, 2023 23:26 |
|
Twerk from Home posted:Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G? an intel 2.5g chipset costs $2.87 now - they're so cheap that even budget boards have abandoned gigabit, despite the fact that almost no consumers actually need multi-gig networking. 10g chipsets are still expensive enough to make a major dent in the BOM of a motherboard. repiv fucked around with this message at 23:42 on Jan 23, 2023 |
# ? Jan 23, 2023 23:31 |
|
Twerk from Home posted:Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G? A good consumer cost comparison is checking out mikrotik's switch offerings. They don't really charge a premium. There are other brands and some good used switch options too. I caved and bought some $10-$20 mellanox sfp+ nics (well, branded by hp/hpe as infiniband, but they flash to normal) on ebay and then bought a cheapish switch with 4 sfp+ 10gb ports and some 1gbe rj45 ports. A bunch of people in this sub did this 5+ years earlier than I did, but there are some good <=20w new "small business" or "home" switch offerings now in this space that are passively cooled. Khorne fucked around with this message at 00:04 on Jan 24, 2023 |
# ? Jan 23, 2023 23:37 |
repiv posted:an intel 2.5g chipset costs $2.87 and a 10g chipset still costs >$100 Anyway, the used market is absolutely flooded with SFP+ daughterboards.
|
|
# ? Jan 23, 2023 23:38 |
|
in a well actually posted:2.5g exists because hyperscalers wanted to do 4 2.5g lanes off a 10g switch port in 2012 I thought it was because the 2.5G SERDES links were bonded to make the 10G link, same as how you take 4 10G links now to make a 40G, or 4 25s to make a 100G?
|
# ? Jan 23, 2023 23:38 |
|
Methylethylaldehyde posted:I thought it was because the 2.5G SERDES links were bonded to make the 10G link, same as how you take 4 10G links now to make a 40G, or 4 25s to make a 100G? Even copper 1GbE is four 250M links, one per twisted pair. They use some scheme I don't remember the details of to enable simultaneous transmission on each pair from each end of the link without having to do CSMA/CD backoff. e: that said, I don't know whether 2.5G copper ethernet is 1 pair worth of 10G copper ethernet, or something else
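For what it's worth, 2.5GBASE-T isn't one pair's worth of 10G - 802.3bz runs the 10GBASE-T PHY on all four pairs at a quarter of the symbol rate. A quick back-of-the-envelope sketch of the lane math (the per-pair bits-per-symbol figures are effective data rates with coding overhead folded in, so treat them as approximations):

```python
# Back-of-the-envelope BASE-T lane math. Figures are from the 802.3 specs,
# simplified: per-pair "bits per symbol" are effective data bits after
# line coding, so LDPC/framing overhead is already folded in.

PAIRS = 4  # every copper BASE-T variant signals on all four twisted pairs

def total_rate_mbps(symbol_rate_mbaud: float, data_bits_per_symbol: float) -> float:
    """Aggregate data rate in Mb/s across all four pairs."""
    return PAIRS * symbol_rate_mbaud * data_bits_per_symbol

# 1000BASE-T: 125 MBaud PAM-5, 2 data bits per symbol per pair,
# full duplex on every pair via echo cancellation (no CSMA/CD backoff)
assert total_rate_mbps(125, 2) == 1000  # four 250 Mb/s lanes

# 2.5G/5GBASE-T (802.3bz) are the 10GBASE-T PHY run slower on all four
# pairs, NOT one pair's worth of 10GBASE-T:
#   10GBASE-T 800 MBaud/pair; 5GBASE-T 400 (1/2 rate); 2.5GBASE-T 200 (1/4)
for speed_mbps, mbaud in [(10000, 800), (5000, 400), (2500, 200)]:
    assert total_rate_mbps(mbaud, 3.125) == speed_mbps  # ~3.125 data bits/symbol/pair
```

The scheme you couldn't remember is the hybrid/echo-cancellation trick: each end transmits on all pairs simultaneously and subtracts its own transmitted signal from what it sees on the wire.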
|
# ? Jan 24, 2023 01:00 |
|
I've had an Intel Xeon (2650v3) on my desk sitting idle and there's a pillar of TIM emerging from the tiny hole on the heat spreader. Is this why they have this hole? Some sort of valve to exhaust heat/pressure/excess TIM?
|
# ? Jan 24, 2023 07:01 |
|
I upgraded to 2.5Gbe and I'm very happy with it. Large transfers to/from my NAS are much less annoying now than before. 2.5G switches are also much less expensive than 10Gbe.
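The speedup is easy to put numbers on. A rough sketch - the 94% efficiency figure is just an assumed ballpark for TCP/IP and Ethernet framing overhead, and real NAS transfers are often disk- or protocol-limited before the wire is:

```python
# Rough transfer-time math for a NAS copy. Assumes the link is the
# bottleneck; EFFICIENCY is a ballpark for TCP/IP + Ethernet framing
# overhead (SMB/NFS and slow disks will eat more in practice).

EFFICIENCY = 0.94

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb (decimal GB) over a link_gbps link."""
    bits = size_gb * 8e9
    return bits / (link_gbps * 1e9 * EFFICIENCY)

# A 100 GB copy to the NAS:
print(round(transfer_seconds(100, 1.0)))   # gigabit: ~851 s, about 14 minutes
print(round(transfer_seconds(100, 2.5)))   # 2.5GbE:  ~340 s, under 6 minutes
```

Since both links share the same overhead assumption, the improvement is exactly the 2.5x link-speed ratio.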
|
# ? Jan 24, 2023 07:02 |
|
The Intel NIC thing can also be an issue negotiating with other hardware, like an ONT. There's a long-standing problem with the I225-V and the recent Fios ONTs - something with the IPv6 checksumming. Turning off offloading fixes it for those, but no such luck with the I226. I just gave up fighting with my I226-V on my z790 Tomahawk and bought a cheap x1-slot Realtek 2.5GbE card.
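For anyone who wants to try the offload workaround before swapping cards, a sketch of the usual ethtool incantation. The interface name eth0 is a placeholder, and which checksum feature your driver actually exposes varies; some drivers (igc included, iirc) only offer the combined generic knob:

```shell
# show current offload state (eth0 is a placeholder for your interface)
ethtool -k eth0 | grep checksum

# try disabling IPv6 TX checksum offload specifically...
sudo ethtool -K eth0 tx-checksum-ipv6 off

# ...or, on drivers that only expose the combined feature:
sudo ethtool -K eth0 tx-checksum-ip-generic off
```

These settings don't survive a reboot, so if the workaround helps, persist it via NetworkManager, systemd-networkd, or a udev rule.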
|
# ? Jan 24, 2023 07:20 |
|
Boat Stuck posted:I upgraded to 2.5Gbe and I'm very happy with it. Large transfers to/from my NAS are much less annoying now than before. 2.5G switches are also much less expensive than 10Gbe. Yeah, I went 2.5 once I found a passive switch that was $100. I tried looking for used 10G equipment on eBay like was suggested here, but it was still more expensive and usually SFP-based and/or had an active fan. This was a few years ago though. I could look again, but 2.5 is more than enough for the connection between my main desktop and the Plex/Blue Iris server. WhyteRyce fucked around with this message at 07:39 on Jan 24, 2023 |
# ? Jan 24, 2023 07:35 |
|
Stock talk but lol Intel stock
|
# ? Jan 27, 2023 02:02 |
|
Kivi posted:I've had a Intel Xeon (2650v3) on my desk sitting idle and there's a pillar of TIM emerging from the tiny hole on the heat spreader. Is this why they have this hole? Some sort of valve to exhaust heat/pressure/excess TIM? it's actually a hole to relieve pressure during the soldering process, so yeah, pretty much. the server dies were soldered and it was a decently large heatspreader so they had to give it a vent. but there's no problem with it being full of thermal paste in general. I haven't heard of it doing a "pillar of TIM" and have not observed that on my X99 5820Ks or 1660v3 or 2697v3s so far. It's probably fine, but perhaps it's a sign of pumping, like from thermal change, but that still might not really be a problem, just an oddity of your CPU? if it dies, who cares, though. 2699v3s are under $50 now, my 2697v3s have lost over half their value in the last 3 or 4 months, from $50 all the way down to like $20 or less lol. Waiting for 2697v4s to come down since that's the best-in-socket for my boards, but those are still $225 a pop which nah I'll keep watching. Also remember that 1650v3, 1660v3, and 1680v3 are basically 5930K and 5960X with xeon feature bits turned on - they are multiplier unlocked and can use high RAM clocks too (probably need X99), but also can take RDIMMs for large capacity (again, X99 WS boards are p. neat, they will do ECC and multiplier unlock lol). And they're like $50 or less for the 1650v3 last time I checked, probably the 1660v3 is down there too. And that was why Intel decided no more Xeons on consumer chipsets after Haswell/Haswell-E lol. Cutting off that flood of cheap upgrades as the server market dumps. 
2011-3 is really a solid platform, the socket is still small enough to do 24-DIMM dual-socket builds in a normal-ish consumer form factor (EE-ATX or similar) and it's actually really got a lot of features turned on with X99 WS boards (but single-socket only, of course) with RDIMM support and (with v3s and X99) all-core-turbo unlock. Not a high clocker on memory but hey, 32GB RDIMMs are $40 a pop and it's got a ton of IO and a bunch of cheap server chips now. It's a real fun homelab/tinkerer platform imo. Paul MaudDib fucked around with this message at 02:22 on Jan 27, 2023 |
# ? Jan 27, 2023 02:03 |
|
WhyteRyce posted:Stock talk but lol Intel stock Dat earnings report
|
# ? Jan 27, 2023 02:08 |
|
The only intel product I want to buy is optane and of course they are killing it.
|
# ? Jan 27, 2023 02:19 |
|
Perplx posted:The only intel product I want to buy is optane and of course they are killing it. that's too hard, can we interest you in NICs that don't work
|
# ? Jan 27, 2023 02:21 |
|
wow revenue down 30% yoy
|
# ? Jan 27, 2023 02:36 |
|
WhyteRyce posted:Stock talk but lol Intel stock intel is so hosed in the long term, they're a huge-rear end company with a lotta billz that gotta get paid in order to have a path back to competitiveness. A company like that doesn't go down instantly but the trajectory is so bad for them and there are so many things they'd have to execute well on to even hit optimistic (but still not great) projections, and they're obviously not executing well on literally anything. It's obviously important to be vertically integrated and be able to build the whole system (software and cpu and accelerators, plus interconnect and advanced packaging) to be competitive in HPC or advanced computing say 10 years from now, that's why AMD bought Xilinx and NVIDIA tried to buy ARM and why everyone from tesla to google to amazon is building their own uarchs for neural accelerators/etc. But just operating the cpu division and the gpu division and Altera and the fabs is going to be a massive drain on operating funds, and there were rumors about GPU being cut, or cut back to enterprise too, although I hope they don't because I think it'll be real tough without GPU and other accelerators. And this assumes that all those groups actually execute well. Intel is genuinely in deep poo poo simply because they have so much going out just to keep the lights on, and their revenue is terrible and their products are behind and not getting much better. They're genuinely in the reverse position of AMD all those years ago, and coming into a recession just the same. And I'm not even sure how much selling the fabs would help, even under normal circumstances. With the leading edge slowing a lot, it's ironically a chance for Samsung and Intel to regain some ground if it turns out that everyone's stalled at like 2nm for a while (due to economics of leading-edge development or profound technical problems).
And they rely a lot on churning out a shitload of embedded chips and chipsets and network chips (lol, lmao) etc. But now the fab is gonna be expensive to run during a huge recession and also probably nobody is going to buy it nor is there really a route to financial viability spinning it off GloFo style most likely, especially during a huge recession. They will never be allowed to go under, they're way too strategically important to let die, so they'll be fine in the long term, but like, they actually are hosed in the short term because it's not gonna stop going down, and hosed in the medium term simply because it's gonna take 5+ years to really turn things around absolute minimum and it's gonna take a ton of money in the meantime, and there's not really any sign that anything is going well over there. datacenter running at 0% margin last quarter was the smoke coming out of the building and now it's completely ablaze, they're tilting heavily into overall loss and ain't like the market is improving or intel is catching AMD on literally anything anytime soon they're so hosed, not really a stock trader but it's crazy it's not more than 10%, their long-term prognosis is just so awful, it seems like a crazy place to park your money just because the prognosis is so relentlessly negative. but maybe that's what people said about boeing, or it's already priced in (with an expectation of bailouts if it became necessary), etc edit: Paul MaudDib fucked around with this message at 04:49 on Jan 27, 2023 |
# ? Jan 27, 2023 02:58 |
|
priznat posted:Dat earnings report I don't think they're technically earnings if you lost money.
|
# ? Jan 27, 2023 03:43 |
|
Twerk from Home posted:I don't think they're technically earnings if you lost money. lol, lack thereof Also, Paul, I heard the rumours about the GPU division being on the block too, from industry-connected folks, but who knows. It would fit with their pattern of killing off non-core businesses before they even have a chance to be successful though (Optane etc)
|
# ? Jan 27, 2023 04:10 |
|
Paul MaudDib posted:intel is so hosed in the long term, If you want a premise for a Clancy like thriller there is one scenario where Intel isn’t hosed: an invasion of Taiwan.
|
# ? Jan 27, 2023 04:17 |
|
priznat posted:lol, lack thereof the accelerator/gpu group is burning money like crazy, I can 100% understand the guess/instinct/desire (depending on actual credibility) to kill the GPU division, or kill the consumer GPU division. Graphics/Accelerator group is losing 441M on a revenue of 247M, so they're operating at almost a -200% margin in the previous 3 months. converting that portion of headquarters into an incinerator and dumping dollar bills in with forklifts and dumptrucks would be cheaper. perversely maybe that's a sign of their commitment too though. we'll know pretty quick I guess. you don't run -200% margin on a $250m group for too long if you're not serious. the first-gen intel Arc GPUs are a dumpster fire in terms of silicon usage, Intel is using a 3070-sized piece of silicon, and it's also TSMC 6nm instead of dirt-cheap low-density samsung 8nm, to compete with a 3060, at 3050 pricing. they clearly are paying an insane transistor/area penalty for wave-8 design in terms of things like scheduler overhead and cache tagging and memory controller/SM scheduler complexity etc etc. tbh it seems like maybe that's a design meant to flower in later gens with higher node density where logic becomes comparatively much cheaper (and cache becomes comparatively less effective as a strategy compared to 6nm/7nm-family) because wow that really makes very little sense at 6nm. But it is very interesting in a GPGPU sense as an argument for reducing divergence. Yeah divergence sucks, but if you can allow Volta-style per-thread-instruction-ptr (so warps only sync at a warp-fence, either implicit via warp-collective call or explicitly via fence intrinsic) and you're only diverging and syncing groups of 8, that's easier - smaller groups and fewer threads waiting/diverging at a given time. And you have this fancy facility for throwing off promise/future operations into a queue that gets realigned opportunistically based on what's actually in the async op queue.
It's a very very compute-driven approach, this is like GCN times a million. This is a very serious look at the "divergence sucks, how do we fix this" and coming up with at least a novel argument. Smaller groups to handle a little more sparseness and async promise/future queues to handle really sparse/divergent things, with rebatching/realignment whenever possible, and just build a bigger machine that does smaller warps to try and carve out higher efficiency. Maybe it's an attempt to skate to where the puck is going to be, and design for where you're going to be in 2 nodes at 2.5-3x logic density and 1.1x cache density rather than being great at this node with 1.0x logic and 1.0x cache. When I hear him say "the average task latency [number of RT BVH intersection levels] is X and it gets longer with wider warps and holds everything up until it returns" I hear that as being a more general analysis that says they think utilization, memory coherency, etc with narrower warps is better when measured as total divergence vs latency, and they think they can keep warp fences far enough apart to make narrower warps work and be worth the incremental scheduling overhead etc. if you can code the bottom of your loop efficiently without too many warp fences (relative to warp size), and just run the sparse code async so the sparse stuff happens efficiently, and realign your random accesses opportunistically based on what’s in-flight in the memory controller, it works. With much higher logic density, that might end up being better than it plays right now, I don't see a reason wave-8 is compelling on 6nm otherwise, the logic overhead of bigger wave-8 partitions has to be insane right now. And they're lighting money on fire writing the drivers. Obviously. But tbh they have to do a lot of that anyway to make a go of it with a premium integrated laptop/desktop graphics platform. 
It all goes together - to me this only makes sense if you do the whole thing, enterprise GPGPU (and OneAPI), discrete gaming, and integrated gaming. Otherwise they might as well license RDNA or Adreno and move on, because that's enough graphics for microsoft word. But it'd be a strategic mistake too, I don't think they can be taken seriously without the enterprise stack and the consumer and enthusiast stuff all is interlocked enough that you might as well do those too if you're going to do enterprise. But enterprise-and-license-adreno strategy and killing consumer graphics entirely is also a valid answer too, I guess. But tbh $600m is less than half of what Intel is losing right now, even if they killed the whole GPU division they'd still have problems and the long-term strategic position would also weaken. "all other" is very obviously just everything they'd rather not pay, like non-base-salary employee compensation, and they are having trouble enough retaining talent, imagine going to help put out that dumpster fire let alone you have a lovely low salary (because intel has been bottom feeding forever) and then you lose your bonus or whatever. gently caress it I'll go make 50% more at AMD or triple my salary at Apple. But the "other" category is explicitly designed to make you go "spend less on candles", it's deliberately all cost centers and no revenue. That stuff should just be rolled into the operating budget of whatever department that employee is FTE in or whatever group the sponsorship/fellowship is benefiting. That's phony financial grouping. Genuinely Intel does do a lot of bullshit stuff and bullshit projects and bullshit sponsorships though and the plug needs to be pulled on that stuff right now. That's the country-club dues of the family that's losing the house. Stop having IEEE sponsorships and distinguished fellows or whatever. 
at least raja finally failed downwards, he's demoted to basically chief architect instead of executive VP in charge of GPU/accelerators. Meaning intel wants less guff about product strategy, more tech stuff and results. Paul MaudDib fucked around with this message at 10:57 on Jan 27, 2023 |
# ? Jan 27, 2023 05:05 |
|
Since the "Network and Edge" category is so vague and barely in the black, but every server board made still has a smattering of Intel controllers on it, I wonder if they're taking a bath on Barefoot. Turns out nobody in white box land wants a programmable switching ASIC if you don't release the driver code, and nobody in black box land who doesn't already have their own custom ASICs cares enough to spend twice as much per ASIC as they would on a Broadcom or Mellanox. Who knew?
|
# ? Jan 27, 2023 05:19 |
|
Paul MaudDib posted:
Intel pay usually isn’t great. They’ve targeted lower cost geos (i.e. not SV), which worked out well (for both sides actually) until poo poo like direct competitors setting up shop across the street in Hillsboro and the pandemic changing Folsom employment possibilities. But they still have a lot of smart people there, it’s just they still have so much middle management rot and empire building that persists that they won’t ever be able to execute WhyteRyce fucked around with this message at 05:46 on Jan 27, 2023 |
# ? Jan 27, 2023 05:42 |
|
I know a lot of people that got pulled to AMD and they created a 200+ person design centre in the area from nothing basically in the last year or so. The strange thing with the AMD location is that there are people in all different teams, like semicustom to IP to server chips. It is much less structured around a single business unit than the intel locations I know about. A lot of them are interlinked though.
|
# ? Jan 27, 2023 05:49 |
|
hobbesmaster posted:If you want a premise for a Clancy like thriller there is one scenario where Intel isn’t hosed: an invasion of Taiwan. and to be clear this is why intel will never be allowed to fail and intel will never divest themselves of the fabs because it might allow circumstances in which they could be allowed to fail. intel is the US's sole leading-edge fab (except for that tiny national-security tsmc fab that will be years behind even intel let alone TSMC taiwan when it comes online) and there's zero chance intel lets it go down and have korea and taiwan dominate the picture.
|
# ? Jan 27, 2023 07:22 |
|
Yowsa, I remember the datacenter overtaking the client group revenue. SPR delays really hosed over their revenues this year.
|
# ? Jan 27, 2023 10:38 |
|
Paul MaudDib posted:it's actually a hole to relieve pressure during the soldering process, so yeah, pretty much. the server dies were soldered and it was a decently large heatspreader so they had to give it a vent. quote:if it dies, who cares, though. 2699v3s are under $50 now, my 2697v3s have lost over half their value in the last 3 or 4 months, from $50 all the way down to like $20 or less lol. Waiting for 2697v4s to come down since that's the best-in-socket for my boards, but those are still $225 a pop which nah I'll keep watching. quote:2011-3 is really a solid platform, the socket is still small enough to do 24-DIMM dual-socket builds in a normal-ish consumer form factor (EE-ATX or similar) and it's actually really got a lot of features turned on with X99 WS boards (but single-socket only, of course) with RDIMM support and (with v3s and X99) all-core-turbo unlock. Not a high clocker on memory but hey, 32GB RDIMMs are $40 a pop and it's got a ton of IO and a bunch of cheap server chips now. It's a real fun homelab/tinkerer platform imo. I've got two set-ups: a normal ATX-sized dual-CPU (2696v4s) board for virtualization (games), and this tiny box for my wife. I built it with spare parts I had lying around (mostly RDIMMs and the NCase) and some eBay finds like that $90 P2200. She does some CAD work on it, so it's actually spot on for her.
|
# ? Jan 27, 2023 10:51 |
|
Paul MaudDib posted:and to be clear this is why intel will never be allowed to fail and intel will never divest themselves of the fabs because it might allow circumstances in which they could be allowed to fail. Does the US military/government ACTUALLY design custom ASICs for their applications? The only ones I am aware of are the RF chips for the RF front-end electronics in military radios/RADARs, which are comparatively much lower tech than computer chips.
|
# ? Jan 27, 2023 12:15 |
|
|
silence_kit posted:Does the US military/government ACTUALLY design custom ASICs for their applications? The only ones I am aware of are the RF chips for the RF front-end electronics in military radios/RADARs, which are comparatively much lower tech than computer chips. Even if they don't, they need a CPU supplier that's on-shore to build current-gen war stuff.
|
# ? Jan 27, 2023 13:40 |