pofcorn
May 30, 2011
Wait, I thought Intel NICs were the preferred option compared to Realtek?


repiv
Aug 13, 2009

they absolutely were in the gigabit era, but their consumer 2.5gb chipset has been and continues to be a disaster

realtek had teething problems with 2.5gb as well but the complaints about that seem to have dried up after 2021

repiv
Aug 13, 2009

the issues with intel's chipset have flown under the radar to a degree because they mostly crop up when running in 2.5gb mode, which is still relatively niche

apparently it's also more prone to crapping out when connected directly to a router, and putting a dumb switch in between tends to make it behave for some reason
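For anyone bitten by this, a hedged workaround sketch (the interface name `eth0` and the specific knobs are assumptions; reports vary by driver and board revision). Pinning the link at gigabit, or disabling EEE, is what people commonly try first:

```shell
# Assumed interface name; check yours with `ip link`.
# Stopgap: restrict the link to 1 Gb/s so the flaky 2.5 Gb/s mode is never used.
sudo ethtool -s eth0 speed 1000 duplex full autoneg on

# Also commonly reported to help: turn off Energy-Efficient Ethernet.
sudo ethtool --set-eee eth0 eee off
```

Neither is a fix, just a way to keep the link stable until a driver/firmware update lands.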

movax
Aug 30, 2008

repiv posted:

they absolutely were in the gigabit era, but their consumer 2.5gb chipset has been and continues to be a disaster

realtek had teething problems with 2.5gb as well but the complaints about that seem to have dried up after 2021

In hunting down which adapter to use for 2.5Gb on an Apple Silicon Mac, the RTL8156B seems to be the preferred option, but there are still a bunch of complaints about it.

82574 was the last great Intel controller IMO.

Ihmemies
Oct 6, 2012

My 10GbE Intel dual-port NIC seems to work just fine. I didn't dare to try my mobo's integrated 2.5G Intel solution.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

Ihmemies posted:

My 10GbE Intel dual-port NIC seems to work just fine. I didn't dare to try my mobo's integrated 2.5G Intel solution.

Yeah, I went with a dual-port X540 simply to have 10Gbit before the inevitable cavalry charge.

BlankSystemDaemon
Mar 13, 2009



The use-case for 2.5Gbps seems to be if you've got cat5 (not cat5e, which can do 10G up to 25-30m, longer if you used S/STP - which has been the recommendation for many years) installed in walls, but somehow didn't ensure that you could pull new cable.
Also known as making it a problem for your future self or someone else.
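For concreteness, the spec reach figures as I remember them (treat the exact numbers as assumptions; the point of 802.3bz was that 2.5G runs the full 100 m over installed cat5e, while 10GBASE-T wants cat6a for 100 m):

```python
# Nominal channel reach by cable class and BASE-T standard.
# Figures are from memory of 802.3an/802.3bz; combos not listed
# aren't rated by the spec (though short runs often work anyway).
REACH_M = {
    ("cat5e", "2.5GBASE-T"): 100,
    ("cat6",  "5GBASE-T"):   100,
    ("cat6",  "10GBASE-T"):  55,
    ("cat6a", "10GBASE-T"):  100,
}

def max_reach(cable, standard):
    """Spec reach in metres, or None if the combo isn't rated."""
    return REACH_M.get((cable, standard))

print(max_reach("cat5e", "2.5GBASE-T"))  # 100
```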

If anyone wants to know why Realtek were and are a complete disaster, I'd recommend this, this, and the comments in this and this.
And best of all, Realtek still regularly pulls the kind of poo poo that no other vendor dares, up to and including shipping driver patches to work around PCB trace issues without documenting it in their technical specifications, issuing product change notifications, or mentioning it in any of their (absolutely terribly written) opensource drivers, which are designed to be as obfuscated as possible while technically still qualifying as opensource.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

2.5g exists because hyperscalers wanted to do 4 2.5g lanes off a 10g switch port in 2012

Cygni
Nov 12, 2005

raring to post

2.5 is on your motherboard and makes it easier to drag things back and forth to your NAS. Unfortunately consumer and early enterprise multigig switches are finicky and every one seems to have some weird compatibility issue. Newer stuff does seem to be better in my limited testing, but I’m not running this stuff on mission critical hardware.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G?

phongn
Oct 21, 2006

Don't 2.5GBASE-T and 5GBASE-T exist so that you can run WiFi 6/6E APs with existing wiring and with formal support for PoE++?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I just run OC48 links for everything.

repiv
Aug 13, 2009

Twerk from Home posted:

Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G?

an intel 2.5g chipset costs $2.87 now, they're so cheap that even budget boards have abandoned gigabit despite the fact that almost no consumers actually need multi-gig networking

10g chipsets are still expensive enough to make a major dent in the BOM of a motherboard

repiv fucked around with this message at 23:42 on Jan 23, 2023

Khorne
May 1, 2002

Twerk from Home posted:

Why on earth did we stop at 2.5G when 10GBaseT has been around for ages, and older 10GBaseT switches won't even negotiate at 2.5G?
10gb ports for copper (rj45) have non-trivial power requirements & heat dissipation. They aren't that expensive anymore, but manufacturers don't like adding cost for a feature most people don't care about. The average user has a device/devices connected to wifi and is bottlenecked by their not-even-1gbe-saturating isp.

A good consumer cost comparison is checking out mikrotik's switch offerings. They don't really charge a premium. There are other brands and some good used switch options too.

I caved and bought some $10-$20 mellanox sfp+ nics (well branded by hp/hpe as infiniband but they flash to normal) on ebay and then bought a cheapish switch with 4 sfp+ 10gb ports and some 1gbe rj45 ports.

A bunch of people in this sub did this 5+ years earlier than I did, but there are some good <=20w new "small business" or "home" switch offerings now in this space that are passively cooled.

Khorne fucked around with this message at 00:04 on Jan 24, 2023

BlankSystemDaemon
Mar 13, 2009



repiv posted:

an intel 2.5g chipset costs $2.87 and a 10g chipset still costs >$100
Are you sure you aren't comparing RJ45 vs SFP+?

Anyway, the used market is absolutely flooded with SFP+ daughterboards.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

in a well actually posted:

2.5g exists because hyperscalers wanted to do 4 2.5g lanes off a 10g switch port in 2012

I thought it was because the 2.5G SERDES links were bonded to make the 10G link, same as how you take 4 10G links now to make a 40G, or 4 25s to make a 100G?

BobHoward
Feb 13, 2012


Methylethylaldehyde posted:

I thought it was because the 2.5G SERDES links were bonded to make the 10G link, same as how you take 4 10G links now to make a 40G, or 4 25s to make a 100G?

Even copper 1GbE is four 250M links, one per twisted pair. They use some scheme I don't remember the details of to enable simultaneous transmission on each pair from each end of the link without having to do CSMA/CD backoff.

e: that said, I don't know whether 2.5G copper ethernet is 1 pair worth of 10G copper ethernet, or something else
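To the edit: as I understand the specs, 2.5GBASE-T isn't one pair's worth of 10G — it's the whole four-pair 10GBASE-T PHY clocked at a quarter of the symbol rate (the NBASE-T trick; and the scheme that avoids CSMA/CD on 1000BASE-T is echo cancellation, for what it's worth). A sketch of the per-pair arithmetic, with bits-per-symbol as effective post-coding figures from memory:

```python
# How the BASE-T standards split the data rate across four twisted pairs.
# name: (pairs, symbol_rate_mbaud, effective_bits_per_symbol)
STANDARDS = {
    "1000BASE-T": (4, 125, 2.0),    # PAM-5, 2 data bits/symbol/pair
    "2.5GBASE-T": (4, 200, 3.125),  # 10GBASE-T PHY at 1/4 symbol rate
    "5GBASE-T":   (4, 400, 3.125),  # ...at 1/2 symbol rate
    "10GBASE-T":  (4, 800, 3.125),  # PAM-16 / DSQ128
}

def data_rate_mbps(name):
    pairs, baud, bits = STANDARDS[name]
    return pairs * baud * bits

for name in STANDARDS:
    print(f"{name}: {data_rate_mbps(name):.0f} Mb/s")
```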

Kivi
Aug 1, 2006
I care
I've had an Intel Xeon (2650v3) sitting idle on my desk and there's a pillar of TIM emerging from the tiny hole on the heat spreader. Is this why they have this hole? Some sort of valve to exhaust heat/pressure/excess TIM?

Boat Stuck
Apr 20, 2021

I tried to sneak through the canal, man! Can't make it, can't make it, the ship's stuck! Outta my way son! BOAT STUCK! BOAT STUCK!
I upgraded to 2.5GbE and I'm very happy with it. Large transfers to/from my NAS are much less annoying now than before. 2.5GbE switches are also much less expensive than 10GbE ones.
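A back-of-envelope feel for why the upgrade matters for NAS transfers (the 90% of line rate efficiency is an assumption; real numbers depend on disks and protocol):

```python
# Wall-clock time to move a bulk transfer at various link speeds.
def transfer_minutes(size_gb, link_gbps, efficiency=0.9):
    size_gbit = size_gb * 8          # bytes -> bits
    return size_gbit / (link_gbps * efficiency) / 60

for speed in (1.0, 2.5, 10.0):
    print(f"{speed:>4} Gb/s: {transfer_minutes(100, speed):.1f} min for 100 GB")
```

At those assumptions, 100 GB drops from roughly fifteen minutes at gigabit to about six minutes at 2.5G.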

Echophonic
Sep 16, 2005

ha;lp
Gun Saliva
The Intel NIC thing can also be an issue negotiating with other hardware, like an ONT. There's a long-standing problem with the I225-V and the recent Fios ONTs. It's something with the IPv6 checksumming. Turning off offloading fixes it for those, but no such luck with the I226. I just gave up fighting with my I226-V on my Z790 Tomahawk and bought a cheap 1x-slot Realtek 2.5GbE card.
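A sketch of the offload workaround mentioned above (interface name is hypothetical, and the exact feature flag that matters may differ per driver):

```shell
# See what's currently offloaded (assumed interface name eth0):
ethtool -k eth0 | grep -i checksum

# Disable IPv6 TX checksum offload -- the workaround that reportedly
# helps I225-V-to-ONT setups (and, per the post above, does NOT help
# the I226):
sudo ethtool -K eth0 tx-checksum-ipv6 off
```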

WhyteRyce
Dec 30, 2001

Boat Stuck posted:

I upgraded to 2.5Gbe and I'm very happy with it. Large transfers to/from my NAS are much less annoying now than before. 2.5G switches are also much less expensive than 10Gbe.

Yeah, I went 2.5 once I found a passively cooled switch for $100. I tried looking for used 10G equipment on eBay like was suggested here, but it was still more expensive and usually SFP-based and/or had an active fan

This was a few years ago though. I could look again but 2.5 is more than enough for the connection between my main desktop and Plex/blue iris server

WhyteRyce fucked around with this message at 07:39 on Jan 24, 2023

WhyteRyce
Dec 30, 2001

Stock talk but lol Intel stock

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kivi posted:

I've had a Intel Xeon (2650v3) on my desk sitting idle and there's a pillar of TIM emerging from the tiny hole on the heat spreader. Is this why they have this hole? Some sort of valve to exhaust heat/pressure/excess TIM?

it's actually a hole to relieve pressure during the soldering process, so yeah, pretty much. the server dies were soldered and it was a decently large heatspreader so they had to give it a vent.

but there's no problem with it being full of thermal paste in general. I haven't heard of it doing a "pillar of TIM" and have not observed that on my X99 5820Ks or 1660v3 or 2697v3s so far. It's probably fine, but perhaps it's a sign of pumping, like from thermal change, but that still might not really be a problem, just an oddity of your CPU?

if it dies, who cares, though. 2699v3s are under $50 now, my 2697v3s have lost over half their value in the last 3 or 4 months, from $50 all the way down to like $20 or less lol. Waiting for 2697v4s to come down since that's the best-in-socket for my boards, but those are still $225 a pop which nah I'll keep watching.

Also remember that 1650v3, 1660v3, and 1680v3 are basically 5930K and 5960X with xeon feature bits turned on - they are multiplier unlocked and can use high RAM clocks too (probably need X99), but also can take RDIMMs for large capacity (again, X99 WS boards are p. neat, they will do ECC and multiplier unlock lol). And they're like $50 or less for the 1650v3 last time I checked, probably the 1660v3 is down there too.

And that was why Intel decided no more Xeons on consumer chipsets after Haswell/Haswell-E lol. Cutting off that flood of cheap upgrades as the server market dumps.

2011-3 is really a solid platform, the socket is still small enough to do 24-DIMM dual-socket builds in a normal-ish consumer form factor (EE-ATX or similar) and it's actually really got a lot of features turned on with X99 WS boards (but single-socket only, of course) with RDIMM support and (with v3s and X99) all-core-turbo unlock. Not a high clocker on memory but hey, 32GB RDIMMs are $40 a pop and it's got a ton of IO and a bunch of cheap server chips now. It's a real fun homelab/tinkerer platform imo.

Paul MaudDib fucked around with this message at 02:22 on Jan 27, 2023

priznat
Jul 7, 2009


WhyteRyce posted:

Stock talk but lol Intel stock

Dat earnings report :stare:

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
The only intel product I want to buy is optane and of course they are killing it.

repiv
Aug 13, 2009

Perplx posted:

The only intel product I want to buy is optane and of course they are killing it.

that's too hard, can we interest you in NICs that don't work

shrike82
Jun 11, 2005

wow revenue down 30% yoy
:rip:

Paul MaudDib
May 3, 2006


WhyteRyce posted:

Stock talk but lol Intel stock

intel is so hosed in the long term, they're a huge-rear end company with a lotta billz that gotta get paid in order to have a path back to competitiveness. A company like that doesn't go down instantly but the trajectory is so bad for them and there are so many things they'd have to execute well on to even hit optimistic (but still not great) projections, and they're obviously not executing well on literally anything. It's obviously important to be vertically integrated and be able to build the whole system (software and cpu and accelerators, plus interconnect and advanced packaging) to be competitive in HPC or advanced computing say 10 years from now, that's why AMD bought Xilinx and NVIDIA tried to buy ARM and why everyone from tesla to google to amazon is building their own uarchs for neural accelerators/etc. But just operating the cpu division and the gpu division and Altera and the fabs is going to be a massive drain on operating funds, and there were rumors about GPU being cut, or cut back to enterprise too, although I hope they don't because I think it'll be real tough without GPU and other accelerators. And this assumes that all those groups actually execute well. Intel is genuinely in deep poo poo simply because they have so much going out just to keep the lights on, and their revenue is terrible and their products are behind and not getting much better.

They're genuinely in the reverse position of AMD all those years ago, and coming into a recession just the same. And I'm not even sure how much selling the fabs would help, even under normal circumstances. With the leading edge slowing a lot, it's ironically a chance for Samsung and Intel to regain some ground if it turns out that everyone's stalled at like 2nm for a while (due to economics of leading-edge development or profound technical problems). And they rely a lot on churning out a shitload of embedded chips and chipsets and network chips (lol, lmao) etc. But now the fab is gonna be expensive to run during a huge recession and also probably nobody is going to buy it nor is there really a route to financial viability spinning it off GloFo style most likely, especially during a huge recession.

They will never be allowed to go under, they're way too strategically important to let die, so they'll be fine in the long term, but like, they actually are hosed in the short term because it's not gonna stop going down, and hosed in the medium term simply because it's gonna take 5+ years to really turn things around absolute minimum and it's gonna take a ton of money in the meantime, and there's not really any sign that anything is going well over there. datacenter running at 0% margin last quarter was the smoke coming out of the building and now it's completely ablaze, they're tilting heavily into overall loss and ain't like the market is improving or intel is catching AMD on literally anything anytime soon

they're so hosed, not really a stock trader but it's crazy it's not more than 10%, their long-term prognosis is just so awful, it seems like a crazy place to park your money just because the prognosis is so relentlessly negative. but maybe that's what people said about boeing, or it's already priced in (with an expectation of bailouts if it became necessary), etc

edit:

Paul MaudDib fucked around with this message at 04:49 on Jan 27, 2023

Twerk from Home
Jan 17, 2009


priznat posted:

Dat earnings report :stare:

I don't think they're technically earnings if you lost money.

priznat
Jul 7, 2009


Twerk from Home posted:

I don't think they're technically earnings if you lost money.

lol, lack thereof

Also Paul I heard the GPU division was on the block rumours too from industry connected folks, but who knows. It would fit with their pattern of killing off non-core businesses before they even have a chance to be successful though (optane etc)

hobbesmaster
Jan 28, 2008

Paul MaudDib posted:

intel is so hosed in the long term,

If you want a premise for a Clancy like thriller there is one scenario where Intel isn’t hosed: an invasion of Taiwan.

Paul MaudDib
May 3, 2006


priznat posted:

lol, lack thereof

Also Paul I heard the GPU division was on the block rumours too from industry connected folks, but who knows. It would fit with their pattern of killing off non-core businesses before they even have a chance to be successful though (optane etc)

the accelerator/gpu group is burning money like crazy, I can 100% understand the guess/instinct/desire (depending on actual credibility) to kill the GPU division, or kill the consumer GPU division. Graphics/Accelerator group is losing 441M on a revenue of 247M, so they're operating at almost a -200% margin in the previous 3 months. converting that portion of headquarters into an incinerator and dumping dollar bills in with forklifts and dumptrucks would be cheaper.

perversely maybe that's a sign of their commitment too though. we'll know pretty quick I guess. you don't run -200% margin on a $250m group for too long if you're not serious.
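Spelling out the margin arithmetic from the quarterly numbers above:

```python
# AXG operating margin for the quarter: operating result over revenue.
revenue_m = 247   # revenue, $M
result_m = -441   # operating result, $M

margin_pct = result_m / revenue_m * 100
print(f"{margin_pct:.0f}%")  # about -179%, i.e. "almost -200%"
```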

the first-gen intel Arc GPUs are a dumpster fire in terms of silicon usage, Intel is using a 3070-sized piece of silicon, and it's also TSMC 6nm instead of dirt-cheap low-density samsung 8nm, to compete with a 3060, at 3050 pricing. they clearly are paying an insane transistor/area penalty for wave-8 design in terms of things like scheduler overhead and cache tagging and memory controller/SM scheduler complexity etc etc. tbh it seems like maybe that's a design meant to flower in later gens with higher node density where logic becomes comparatively much cheaper (and cache becomes comparatively less effective as a strategy compared to 6nm/7nm-family) because wow that really makes very little sense at 6nm.

But it is very interesting in a GPGPU sense as an argument for reducing divergence. Yeah divergence sucks, but if you can allow Volta-style per-thread-instruction-ptr (so warps only sync at a warp-fence, either implicitly via a warp-collective call or explicitly via a fence intrinsic) and you're only diverging and syncing groups of 8, that's easier - smaller groups and fewer threads waiting/diverging at a given time. And you have this fancy facility for throwing off promise/future operations into a queue that gets realigned opportunistically based on what's actually in the async op queue. It's a very very compute-driven approach, this is like GCN times a million. This is a very serious look at the "divergence sucks, how do we fix this" and coming up with at least a novel argument. Smaller groups to handle a little more sparseness and async promise/future queues to handle really sparse/divergent things, with rebatching/realignment whenever possible, and just build a bigger machine that does smaller warps to try and carve out higher efficiency.

Maybe it's an attempt to skate to where the puck is going to be, and design for where you're going to be in 2 nodes at 2.5-3x logic density and 1.1x cache density rather than being great at this node with 1.0x logic and 1.0x cache. When I hear him say "the average task latency [number of RT BVH intersection levels] is X and it gets longer with wider warps and holds everything up until it returns" I hear that as being a more general analysis that says they think utilization, memory coherency, etc with narrower warps is better when measured as total divergence vs latency, and they think they can keep warp fences far enough apart to make narrower warps work and be worth the incremental scheduling overhead etc. if you can code the bottom of your loop efficiently without too many warp fences (relative to warp size), and just run the sparse code async so the sparse stuff happens efficiently, and realign your random accesses opportunistically based on what’s in-flight in the memory controller, it works. With much higher logic density, that might end up being better than it plays right now, I don't see a reason wave-8 is compelling on 6nm otherwise, the logic overhead of bigger wave-8 partitions has to be insane right now.
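A toy model of the narrow-warp argument (my own illustration, not anything from Intel): if each lane independently takes a branch with probability p, the warp pays for both sides unless all lanes agree, so the expected number of serialized paths is 2 − p^w − (1−p)^w. Narrower warps agree more often:

```python
def expected_paths(w, p):
    """Expected serialized branch paths for warp width w when each
    lane takes the branch independently with probability p."""
    return 2 - p**w - (1 - p)**w

# With 90%-coherent branching, a wave-8 warp diverges noticeably
# less often than a 32-wide one:
for w in (8, 32):
    print(f"warp {w:>2}: {expected_paths(w, 0.9):.2f} paths on average")
```

Real divergence costs are messier (reconvergence points, scheduler overhead), but the direction matches the argument above.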

And they're lighting money on fire writing the drivers. Obviously. But tbh they have to do a lot of that anyway to make a go of it with a premium integrated laptop/desktop graphics platform. It all goes together - to me this only makes sense if you do the whole thing, enterprise GPGPU (and OneAPI), discrete gaming, and integrated gaming. Otherwise they might as well license RDNA or Adreno and move on, because that's enough graphics for microsoft word. But it'd be a strategic mistake too, I don't think they can be taken seriously without the enterprise stack and the consumer and enthusiast stuff all is interlocked enough that you might as well do those too if you're going to do enterprise.

But enterprise-and-license-adreno strategy and killing consumer graphics entirely is also a valid answer too, I guess. But tbh $600m is less than half of what Intel is losing right now, even if they killed the whole GPU division they'd still have problems and the long-term strategic position would also weaken.

"all other" is very obviously just everything they'd rather not pay, like non-base-salary employee compensation, and they are having trouble enough retaining talent, imagine going to help put out that dumpster fire let alone you have a lovely low salary (because intel has been bottom feeding forever) and then you lose your bonus or whatever. gently caress it I'll go make 50% more at AMD or triple my salary at Apple. But the "other" category is explicitly designed to make you go "spend less on candles", it's deliberately all cost centers and no revenue. That stuff should just be rolled into the operating budget of whatever department that employee is FTE in or whatever group the sponsorship/fellowship is benefiting. That's phony financial grouping.

Genuinely Intel does do a lot of bullshit stuff and bullshit projects and bullshit sponsorships though and the plug needs to be pulled on that stuff right now. That's the country-club dues of the family that's losing the house. Stop having IEEE sponsorships and distinguished fellows or whatever.

at least raja finally failed downwards, he's demoted to basically chief architect instead of executive VP in charge of GPU/accelerators. Meaning intel wants less guff about product strategy, more tech stuff and results.

Paul MaudDib fucked around with this message at 10:57 on Jan 27, 2023

Kazinsal
Dec 13, 2011
Since the "Network and Edge" category is so vague and barely in the black, but every server board made still has a smattering of Intel controllers on it, I wonder if they're taking a bath on Barefoot.

Turns out nobody in white box land wants a programmable switching ASIC if you don't release the driver code, and nobody in black box land who doesn't already have their own custom ASICs cares enough to spend twice as much per ASIC as they would on a Broadcom or Mellanox. Who knew?

WhyteRyce
Dec 30, 2001

Paul MaudDib posted:


"all other" is very obviously just everything they'd rather not pay, like non-base-salary employee compensation, and they are having trouble enough retaining talent, imagine going to help put out that dumpster fire let alone you have a lovely low salary (because intel has been bottom feeding forever) and then you lose your bonus or whatever. gently caress it I'll go make 50% more at AMD or triple my salary at Apple. But the "other" category is explicitly designed to make you go "spend less on candles", it's deliberately all cost centers and no revenue. That stuff should just be rolled into the operating budget of whatever department that employee is FTE in or whatever group the sponsorship/fellowship is benefiting. That's phony financial grouping.


Intel pay usually isn't great. They've targeted lower-cost geos (i.e. not SV), which worked out well (for both sides, actually) until poo poo like direct competitors setting up shop across the street in Hillsboro and the pandemic changing Folsom employment possibilities. But they still have a lot of smart people there; it's just that there's so much middle-management rot and empire building that they won't ever be able to execute

WhyteRyce fucked around with this message at 05:46 on Jan 27, 2023

priznat
Jul 7, 2009

I know a lot of people that got pulled to AMD and they created a 200+ person design centre in the area from nothing basically in the last year or so. The strange thing with the AMD location is that there are people in all different teams, like semicustom to IP to server chips. It is much less structured around a single business unit than the intel locations I know about. A lot of them are interlinked though.

Paul MaudDib
May 3, 2006


hobbesmaster posted:

If you want a premise for a Clancy like thriller there is one scenario where Intel isn’t hosed: an invasion of Taiwan.

and to be clear this is why intel will never be allowed to fail and intel will never divest themselves of the fabs because it might allow circumstances in which they could be allowed to fail.

intel is the US's sole leading-edge fab (except for that tiny national-security tsmc fab that will be years behind even intel let alone TSMC taiwan when it comes online) and there's zero chance the US lets it go down and lets korea and taiwan dominate the picture.

Beef
Jul 26, 2004
Yowsa, I remember the datacenter group overtaking the client group in revenue. SPR delays really hosed over their revenues this year.

Kivi
Aug 1, 2006

Paul MaudDib posted:

it's actually a hole to relieve pressure during the soldering process, so yeah, pretty much. the server dies were soldered and it was a decently large heatspreader so they had to give it a vent.

but there's no problem with it being full of thermal paste in general. I haven't heard of it doing a "pillar of TIM" and have not observed that on my X99 5820Ks or 1660v3 or 2697v3s so far. It's probably fine, but perhaps it's a sign of pumping, like from thermal change, but that still might not really be a problem, just an oddity of your CPU?
I'm not worried about it, it just looked funny. Sadly, my wife just wiped it away and said "look, it's fine now" while I was trying to snap a photo of it :v:

quote:

if it dies, who cares, though. 2699v3s are under $50 now, my 2697v3s have lost over half their value in the last 3 or 4 months, from $50 all the way down to like $20 or less lol. Waiting for 2697v4s to come down since that's the best-in-socket for my boards, but those are still $225 a pop which nah I'll keep watching.

Also remember that 1650v3, 1660v3, and 1680v3 are basically 5930K and 5960X with xeon feature bits turned on - they are multiplier unlocked and can use high RAM clocks too (probably need X99), but also can take RDIMMs for large capacity (again, X99 WS boards are p. neat, they will do ECC and multiplier unlock lol). And they're like $50 or less for the 1650v3 last time I checked, probably the 1660v3 is down there too.
I already upgraded it to a 2640v4 that I got for $20 on eBay. Same number of cores, somewhat higher clocks, less power. The main goal was to make the power consumption a bit better.

quote:

2011-3 is really a solid platform, the socket is still small enough to do 24-DIMM dual-socket builds in a normal-ish consumer form factor (EE-ATX or similar) and it's actually really got a lot of features turned on with X99 WS boards (but single-socket only, of course) with RDIMM support and (with v3s and X99) all-core-turbo unlock. Not a high clocker on memory but hey, 32GB RDIMMs are $40 a pop and it's got a ton of IO and a bunch of cheap server chips now. It's a real fun homelab/tinkerer platform imo.
I really like the versatility of the platform. NVMe just works, single core perf isn't that awful and you can get MBs from mITX to full size EE-ATX. Need tons of PCIe lanes for NVMe? You can find 2609v3 or whatever usually for just postage fee. Need more cores? MCC 2680v4s are not that bad at around sixty, and 2696v4s were mere $100 a pop when I bought mine.

I've got two set-ups, a normal ATX sized dual CPU (2696v4s) board for virtualization (games) and this tiny box for my wife:



I built it with spare parts I had lying around (mostly RDIMMs and the NCase) and some eBay finds like that $90 P2200. She does some CAD work on it, so it's actually spot on for her.

silence_kit
Jul 14, 2011

by the sex ghost

Paul MaudDib posted:

and to be clear this is why intel will never be allowed to fail and intel will never divest themselves of the fabs because it might allow circumstances in which they could be allowed to fail.

intel is the US's sole leading-edge fab (except for that tiny national-security tsmc fab that will be years behind even intel let alone TSMC taiwan when it comes online) and there's zero chance intel lets it go down and have korea and taiwan dominate the picture.

Does the US military/government ACTUALLY design custom ASICs for their applications? The only ones I am aware of are the RF chips for the RF front-end electronics in military radios/RADARs, which are comparatively much lower tech than computer chips.


Lyesh
Apr 9, 2003

silence_kit posted:

Does the US military/government ACTUALLY design custom ASICs for their applications? The only ones I am aware of are the RF chips for the RF front-end electronics in military radios/RADARs, which are comparatively much lower tech than computer chips.

Even if they don't, they need a CPU supplier that's on-shore to build current-gen war stuff.
