Happy_Misanthrope
Aug 3, 2007

"I wanted to kill you, go to your funeral, and anyone who showed up to mourn you, I wanted to kill them too."

Klyith posted:

Look at the Apple M1. It's spanking both Alder Lake and Ryzen in efficiency. But when you ask for something other than perf/watt it starts looking weak. Against real desktops the high-end M1s aren't as dominant, even in highly multi-core stuff like cinebench and video encoding. It can't shove more watts into each core, so it loses to x86 chips with fewer cores that can effectively use 150W.
What are you basing this on, other than Cinebench? The M1s absolutely thrive in video encoding.

Here's a guy comparing the 'entry level' Mac Studio (M1 Max, not Ultra) to his Threadripper + 2080 Ti setup. I get that for 3D work the PC still has an advantage, but the M1s have been hard to beat in video editing for a while, and this comparison uses DaVinci Resolve rather than something like Final Cut Pro, where the M1 is even better optimized.

Happy_Misanthrope fucked around with this message at 18:57 on Apr 17, 2022

Dr. Video Games 0031
Jul 17, 2004

We saw a preview of what a DDR4-only comparison would've looked like in Hardware Unboxed's initial review, and it did not favor Intel. Some games see a huge uplift (+10% or more) with fast DDR5, which is the only thing that keeps Intel competitive in the averages.

Klyith
Aug 3, 2007

GBS Pledge Week

Happy_Misanthrope posted:

What are you basing this on, other than Cinebench? The M1s absolutely thrive in video encoding.

Here's a guy comparing the 'entry level' Mac Studio (M1 Max, not Ultra) to his Threadripper + 2080 Ti setup. I get that for 3D work the PC still has an advantage, but the M1s have been hard to beat in video editing for a while, and this comparison uses DaVinci Resolve rather than something like Final Cut Pro, where the M1 is even better optimized.

Results from this review. Note that this is with CPU-based encoding, as a test of CPU vs. CPU. (And the CPUs in that comparison are mobile parts, so a good deal slower than desktop CPUs.)

Is the guy in your video using the M1's hardware encoder vs. the Threadripper? That's a totally sensible comparison to make from a real-world-use perspective, but it's not exactly an apples-to-apples situation.
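For anyone who wants to see the gap on their own machine, here's a rough sketch of that comparison (assumes ffmpeg is installed with both the software encoder and a hardware one; h264_videotoolbox is the Apple hardware path, h264_nvenc the Nvidia one, and clip.mp4 is just a placeholder for your own test file):

```python
# Rough sketch: time a CPU (software) encode against a hardware-encoder run with ffmpeg.
# libx264 is the CPU path; h264_videotoolbox (Apple) or h264_nvenc (Nvidia) are the
# fixed-function hardware paths. "clip.mp4" is a placeholder input file.
import subprocess
import time

SOURCE = "clip.mp4"  # hypothetical test clip

def time_encode(encoder: str, output: str) -> float:
    """Run one ffmpeg encode and return elapsed wall-clock seconds."""
    cmd = [
        "ffmpeg", "-y",      # overwrite the output without asking
        "-i", SOURCE,
        "-c:v", encoder,     # which H.264 encoder to use
        "-an",               # drop audio so only video encoding is measured
        output,
    ]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

cpu_seconds = time_encode("libx264", "out_cpu.mp4")
hw_seconds = time_encode("h264_videotoolbox", "out_hw.mp4")  # swap for h264_nvenc / h264_qsv as needed
print(f"CPU encode:      {cpu_seconds:6.1f} s")
print(f"Hardware encode: {hw_seconds:6.1f} s")
```

The hardware path barely touches the CPU cores at all, so even when it wins on wall-clock time it tells you very little about CPU-vs-CPU performance.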

As a complete package the M1 is excellent for a whole lot of stuff! Not saying it isn't. But it's an example of how one CPU design / architecture can't do everything -- an M1 can't usefully employ as many watts, and it can't boost cores as hard. Intel is making efficiency cores because x86 has generally had problems scaling down in power use, and I think they're just as worried about ARM as they are about AMD.

FuturePastNow
May 19, 2014


Maybe one of these reviewers will do a memory scaling comparison with the 5800X3D to see how much speed and latency matter. Probably less than they do with CPUs that have less cache, but it would be good to see some testing.

hobbesmaster
Jan 28, 2008

Does anyone have experience with the Arctic Liquid Freezer II? I got a new 5900X and a new Liquid Freezer II 280, but it's not cooling the CPU: it idles in the 50s-60s, hits the 70s under light loads, clocks bounce around randomly in the 2-3 GHz range under light load, and all cores drop to 700 MHz under full load. I believe I've mounted everything properly because the thermal compound shows good coverage when I remove it, so I'm kinda puzzled.

I'd just wait for Arctic's tech support first, but they warned me it could be up to 9 days for a reply!

Kibner
Oct 21, 2008

Acguy Supremacy
Maybe there is a better thread for this, but does anyone know anything about Micron Rev. R RAM? I am trying to equip my 5950X system with some ECC UDIMMs, and the two options I have found are both from Kingston. I verified that my motherboard does support ECC UDIMMs.

KSM32ED8/16HD (Hynix D-Die): https://www.cdw.com/product/kingston-server-premier-ddr4-module-16-gb-dimm-288-pin-3200-mhz/6200231

KSM32ED8/16MR (Micron Rev. R): https://www.cdw.com/product/kingston-server-premier-ddr4-module-16-gb-dimm-288-pin-3200-mhz/6739199?enkwrd=KSM32ED8%2b16MR

The price difference on CDW is minimal: $94 for the Hynix and $100 for the Micron. If their OC potential is the same, I am leaning towards the Micron because the spec sheet on CDW says it supports temperature monitoring and the Hynix does not.

e: They are both rated for 3200 with 22-22-22 timings @ 1.2 V, but I want whichever kit will hit the tightest timings when I tune them by hand.

Kibner fucked around with this message at 22:15 on Apr 17, 2022

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I have the KSM32ES8/8ME, which are single-rank 8GB sticks with Micron E-die. I set them to 18-20-20-40 with a bump to 1.25V to be on the safe side. I suppose I could tighten the timings a bit more.

--edit: And they have temp sensors.

Combat Pretzel fucked around with this message at 23:12 on Apr 17, 2022

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer

Happy_Misanthrope posted:

What are you basing this on, other than Cinebench? The M1s absolutely thrive in video encoding.

The M1 is a really outstanding chip... at ~45 watts. It's still gonna get obliterated by these '105W' high-end desktop chips. Apple isn't magic; they're just picking a very specific set of trade-offs.

Canine Blues Arooo fucked around with this message at 23:28 on Apr 17, 2022

Dr. Video Games 0031
Jul 17, 2004

hobbesmaster posted:

Does anyone have experience with the Arctic Liquid Freezer II? I got a new 5900X and a new Liquid Freezer II 280, but it's not cooling the CPU: it idles in the 50s-60s, hits the 70s under light loads, clocks bounce around randomly in the 2-3 GHz range under light load, and all cores drop to 700 MHz under full load. I believe I've mounted everything properly because the thermal compound shows good coverage when I remove it, so I'm kinda puzzled.

I'd just wait for Arctic's tech support first, but they warned me it could be up to 9 days for a reply!

Put your ear close to the socket while the computer is running and see if you can hear the pump going. The pump should be audible, so if you can't hear it at all, then either it's not plugged in properly or the pump came dead on arrival. If you can hear the pump, then double check that you removed the plastic film from the water block before mounting it. If you can hear the pump and you've verified that the film is off and that the water block is making contact with the CPU (there's thermal paste on the water block), then there may be a blockage somewhere in the tubing or radiator. If that's the case, or if the pump is dead, you're gonna need to return it.
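If listening for the pump is tricky, a software-side check can back it up: log the CPU temperature and the reported fan/pump RPMs for a minute, then put a load on it. With a dead pump the package temp will rocket toward throttle territory within seconds even while the fans read normal. A rough sketch, assuming a Linux box with lm-sensors installed (sensors -j prints JSON; label names vary by board) -- on Windows, HWiNFO shows the same readings:

```python
# Rough sketch (Linux + lm-sensors): log CPU temps and reported fan/pump RPMs every 2 s.
# A dead AIO pump usually shows up as the package temp spiking toward throttle limits
# within seconds of load, even though the fans still report normal RPMs.
# Sensor and label names vary from board to board.
import json
import subprocess
import time

def read_sensors() -> dict:
    """Return the parsed JSON output of `sensors -j` (lm-sensors 3.5+)."""
    out = subprocess.run(["sensors", "-j"], capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def readings(data: dict, prefix: str):
    """Yield (label, value) pairs for *_input readings whose key starts with `prefix`."""
    for chip, features in data.items():
        if not isinstance(features, dict):
            continue
        for label, subfeatures in features.items():
            if not isinstance(subfeatures, dict):
                continue
            for key, value in subfeatures.items():
                if key.startswith(prefix) and key.endswith("_input"):
                    yield f"{chip}/{label}", value

while True:
    snapshot = read_sensors()
    temps = " ".join(f"{lbl}={val:.0f}C" for lbl, val in readings(snapshot, "temp"))
    fans = " ".join(f"{lbl}={val:.0f}rpm" for lbl, val in readings(snapshot, "fan"))
    print(time.strftime("%H:%M:%S"), temps, "|", fans)
    time.sleep(2)
```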

Klyith
Aug 3, 2007

GBS Pledge Week

hobbesmaster posted:

Does anyone have experience with the Arctic Liquid Freezer II? I believe I've mounted everything properly because the thermal compound shows good coverage when I remove it, so I'm kinda puzzled.

In addition to Dr. VG0031's ideas: do you have the pump plugged into a PUMP header, or into a fan header with enough current output and set to 100% speed?

Dr. Video Games 0031 posted:

Put your ear close to the socket while the computer is running and see if you can hear the pump going.

Heh, possibly easier said than done, since the Liquid Freezer II is the one with the little VRM fan on it.

I guess you could just stick your finger onto the VRM fan to stop it while you listen.

hobbesmaster
Jan 28, 2008

Klyith posted:

In addition to Dr. VG0031's ideas: do you have the pump plugged into a PUMP header, or into a fan header with enough current output and set to 100% speed?

CPU_FAN. That's a good point though; it has a somewhat unique setup where the pump, its little VRM fan, and the two 140 mm fans are all run off the single 4-pin that goes to the motherboard. Still, the 140 mm fans seem to be getting plenty of power.

quote:

Heh, possibly easier said than done, since the Liquid Freezer II is the one with the little VRM fan on it.

I guess you could just stick your finger onto the VRM fan to stop it while you listen.

Yeah, I was trying to puzzle out how to do that. The closest I could figure was to sit in the BIOS, set the CPU "fan" to 0, and see what happened. The temp did start to climb, so it's doing something. We have a stethoscope, but there's nowhere flat to put it!

Rollie Fingers
Jul 28, 2002

hobbesmaster posted:

Does anyone have experience with the Arctic Liquid Freezer II? I got a new 5900X and a new Liquid Freezer II 280, but it's not cooling the CPU: it idles in the 50s-60s, hits the 70s under light loads, clocks bounce around randomly in the 2-3 GHz range under light load, and all cores drop to 700 MHz under full load. I believe I've mounted everything properly because the thermal compound shows good coverage when I remove it, so I'm kinda puzzled.

I'd just wait for Arctic's tech support first, but they warned me it could be up to 9 days for a reply!

I also have a 5900X and an Arctic Liquid Freezer II 280. My CPU idles at ~33°C.

If the fans are working fine and you've ruled out a bad mount (unmount and mount it again just to be sure), then it certainly sounds like a pump issue.

SwissArmyDruid
Feb 14, 2014

by sebmojo

PC LOAD LETTER posted:

I think all the chip companies do this when they get stuck due to the long production and design cycles and their competition is beating them. There's always some exec out there who gets the bright idea to pump the power/clocks to win on a few benches and claim some sort of marketing victory.

The classic x86 examples were the P4s and the higher-clocked Bulldozer chips.

Given the way process tech is starting to really run out of headroom, I kind of expect everyone's stuff to use more power in general going forward, though.

There are already hints of this in the way power is climbing for GPUs.

300W for a GPU at stock clocks used to be considered nuts not too long ago, but I think when the R7xxx and NV4xxx GPUs come out that will either be the norm or even look low in comparison, given some of the rumors about how much power the top-end NV4xxx GPUs are going to use (700W+ = WTFFF). That, and AMD is apparently going to have some 170W stock-TDP chips for AM5.

I dunno if 200W+ default-TDP CPUs will ever become normal for everyday PCs (I'd actually expect those to stay around 90W or less at stock), but I could see it becoming common for enthusiast and power users' PCs over the next few years.

Once the process tech is tapped out, mo' power to pump clocks is the path of least resistance to more performance. Major redesigns matter, but they take too long.

Am I crazy for feeling like Nvidia is hitting the power/frequency juice too soon? The RTX chips aren't THAT old yet, are they? Or am I still stuck thinking in terms of purely rasterizing boards? Because RDNA3 boards are on the horizon, and we haven't heard anything nearly that insane out of AMD.

Never mind that the RDNA architecture was built around the expectation of a certain power budget due to its inclusion in the consoles.

SwissArmyDruid fucked around with this message at 03:23 on Apr 18, 2022

shrike82
Jun 11, 2005

not really - you can lower the TDP on Nvidia cards significantly and still see very good performance

for whatever reason, they're caught up in some benchmark war with AMD where they feel the need to literally pump up the heat to get a marginal performance improvement

hobbesmaster
Jan 28, 2008

Maybe Super Flower is giving kickbacks to increase sales of the 1600W power supplies.

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

Am I crazy for feeling like Nvidia is hitting the power/frequency juice too soon? The RTX chips aren't THAT old yet, are they? Or am I still stuck thinking in terms of purely rasterizing boards? Because RDNA3 boards are on the horizon, and we haven't heard anything nearly that insane out of AMD.

Nvidia isn't just in a battle with AMD; they have to fight their own previous generation to sell new cards. Remember how the 2000 series didn't have much performance gain over the Ti refreshes of the 1000s, but it had ray tracing? And it sold fairly poorly, despite no amazing competition from AMD? Yeah. Lesson learned.

Anyways GPUs are a bit different from CPUs. On a GPU, more transistors = more performance in a fairly direct way. And more power consumption, all else equal. The 3000s are power-hungry because they're loving massive.

PC LOAD LETTER
May 23, 2005
WTF?!

SwissArmyDruid posted:

Am I crazy for feeling like Nvidia is hitting the power/frequency juice too soon?
The rumor mill is that AMD hit a home run with RDNA3, at least as far as pure raster performance goes, and that with sane and sensible TDPs NV4xxx would lose to it by a fair amount.

So, again according to rumors, NV flipped out and pushed their clocks up as high as possible to try to maintain a lead, or at least reach performance parity. So the power budget got blown out. Again, this is for pure raster performance.

RDNA3 is still supposed to lose to NV4xxx at ray tracing... but not by much. It's supposed to be around 3-4x faster than RDNA2 at ray tracing, so they got a major boost there too.

Meanwhile RSR isn't perfect, but it appears to be close enough to DLSS that NV doesn't think DLSS will be as effective a draw for NV4xxx sales anymore.

They're reeeeeally worried about another NV2xxx sales slump coinciding with a buttcoin bust. And NV4xxx is going to be real expensive to produce due to its (even by GPU standards) quite large monolithic die plus being on 5nm, which is still really pricey, which limits their ability to compete on price and still stay profitable.

Basically they're worried about a potential perfect storm of problems, combined with stiff competition from AMD, coming together to gently caress their bottom line, and they're trying to get ahead of it any way they can.

Realistically, so long as they're close enough to RDNA3 in pure raster perf, have a good cooler (think something like the 3090 Ti's cooler for those heat loads), and keep their prices sensible, they should sell just fine even if the power usage is as high as rumored. Not many people buy the top-end, OC'd-by-default cards anyway, so for the market at large it doesn't really matter if a few 700W+ TDP cards exist.

PC LOAD LETTER fucked around with this message at 07:27 on Apr 18, 2022

shrike82
Jun 11, 2005

Nvidia's handicapped by needing to support its various types of users (data center, desktop compute, gamers etc) in a way that AMD and Intel aren't. The low-hanging fruit that benefited all user categories - namely sped-up FP16 compute (and DLSS) - was a one-off, so they've switched tack to push RT heavily for gamers.

They're still able to push >100% performance increases for enterprise AI compute every gen, but it's difficult to do the same for pure raster, and increasingly so for RT.

SwissArmyDruid
Feb 14, 2014

by sebmojo

PC LOAD LETTER posted:

Realistically, so long as they're close enough to RDNA3 in pure raster perf, have a good cooler (think something like the 3090 Ti's cooler for those heat loads), and keep their prices sensible, they should sell just fine even if the power usage is as high as rumored. Not many people buy the top-end, OC'd-by-default cards anyway, so for the market at large it doesn't really matter if a few 700W+ TDP cards exist.

I'm looking at it and man, the bar is low for AMD here. Just so long as RDNA3 doesn't need triple-slot coolers, because gently caress me, I can't read "Fatboy" on that line of XFX SKUs without laughing.

PC LOAD LETTER
May 23, 2005
WTF?!

shrike82 posted:

Nvidia's handicapped by needing to support its various types of users (data center, desktop compute, gamers etc) in a way that AMD and Intel aren't.
AMD doesn't sell HPC-oriented GPUs or support GPGPU in their consumer GPUs since when?

AMD certainly has a much smaller market share in GPU HPC, but that is a whole other issue.

SwissArmyDruid posted:

I'm looking at it and man, the bar is low for AMD here. Just so long as RDNA3 doesn't need triple-slot coolers,
The top-end RDNA3 cards will still need them. The rumor mill is saying around 400W for something like a 7970XT. If you're fine with a 7900XT, that miiiight be doable with a good 2-slot cooler, since it's expected to be around 300W TDP.

PC LOAD LETTER fucked around with this message at 09:47 on Apr 18, 2022

orcane
Jun 13, 2012

Fun Shoe
Well, two issues (no resources for competitive professional drivers/support, and CUDA) :v:

Cygni
Nov 12, 2005

raring to post

You kinda don't need to overanalyze this. Performance isn't free, and 4K, 120 Hz+ displays, and esports going mainstream have massively increased consumer demand for performance. In response, the big 3 are all juicing their designs to be more performant, because people like the big line on the bar chart, and consumers on average are much more tolerant of heat, power draw, and cost than they were 10 years ago. Hence hugely performant, hot, expensive parts that are selling better (with tastier margins) than ever before.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

SwissArmyDruid posted:

Am I crazy for feeling like Nvidia is hitting the power/frequency juice too soon?

It seems that every new rumor about the 4xxx series has the power profile moving downwards ("new" rumors have the 4090 at 600W now), so it's quite possible that the info that's been dribbling out so far has been about unoptimized test boards, and the final product will demand something more in line with existing lineups. It's also entirely possible that it's due to even more fine-tuned self-overclocking routines that squeeze every last ounce of performance out of the card, and you'll still be able to get 95% of the performance at -200W or something equally silly. Which seems to be the way everyone is going these days: have it max the gently caress out of itself out of the box to post the highest performance numbers possible, and anyone who cares about wattage can spend 5 minutes with a slider dragging it back to a more sane "sweet spot."

Everyone trivially understands that the bigger FPS bar is better. Very few are going to give a gently caress that the perf/watt bar is slightly better. No one wants to be stuck trying to sell cards based on "FPS per inch," either.

There's also the part where the 4090 (600W?) is supposed to be packing 18k cores, compared to the 3090 Ti's 10k (450W) or the 2080 Ti's 4k (250W). So still a spicy meatball if true, but jfc is that a lot of cores.
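Quick back-of-the-envelope with those rounded/rumored figures (they're the numbers above, not measurements):

```python
# Back-of-the-envelope using the rounded/rumored figures quoted above:
# 250W / ~4k cores (2080 Ti), 450W / ~10k (3090 Ti), 600W / ~18k (4090 rumor).
cards = {
    "2080 Ti (250W, ~4k cores)":     (250, 4_000),
    "3090 Ti (450W, ~10k cores)":    (450, 10_000),
    "4090 rumor (600W, ~18k cores)": (600, 18_000),
}

for name, (watts, cores) in cards.items():
    print(f"{name}: {watts / cores * 1000:.0f} mW per core")

# Prints roughly 63, 45, and 33 mW per core: power per core keeps dropping each
# generation, and the total climbs because the core count roughly doubles.
```

Which is the "they're loving massive" point from a few posts up: the power budget is going into more silicon, not hotter individual cores.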

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

DrDork posted:

and you'll still be able to get 95% of the performance at -200W or something equally silly. Which seems to be the way everyone is going these days: have it max the gently caress out of itself out of the box to post the highest performance numbers possible, and anyone who cares about wattage can spend 5 minutes with a slider dragging it back to a more sane "sweet spot."

Props to AMD for making this easy without 3rd party software.

shrike82
Jun 11, 2005

"just lower TDP" sucks if the boards are juiced because they're going to be big 3-4 slot monsters like Ampere

orcane
Jun 13, 2012

Fun Shoe
600W is not a downward move; 600W is what it has been for a while and what all the current signs point towards (the connector, the board layout, the 3090 Ti's VRM design smoothing voltage spikes so the 480W card doesn't spike above 600W anymore).

Even if it ends up slightly lower, the idiot AIBs will juice the gently caress out of the chips for their gaming models as usual (for their margins and longest-bar purposes), so they will use 600W if they're allowed to, even if that only makes them, say, 5% faster than at 350W. But yeah, they will be 4-slot monsters that cost 50% more than "MSRP". And it usually takes a bit more than 5 minutes of moving a slider to actually undervolt these cards properly. Sure, just pulling the TDP setting back a bit takes 5 minutes, but that comes with side effects - you want to at least edit the power curve, and that's beyond what the majority of people will do with the SUPRIM GAMING XXX FTW OC cards they bought for over $2000.
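For the crude "pull the TDP setting back" part you don't even need vendor software; nvidia-smi can query and cap the board power limit (the power-curve editing mentioned above still needs something like MSI Afterburner). A minimal sketch, assuming an Nvidia card with the driver's nvidia-smi on PATH and admin/root rights for the set call; the 250W target is just an arbitrary example:

```python
# Minimal sketch: read the current GPU power figures and cap the board power limit
# via nvidia-smi. Requires an Nvidia card, nvidia-smi on PATH, and admin/root for
# the set call. The 250 W value is an arbitrary example, not a recommendation.
import subprocess

def query_power() -> str:
    """Return name, current draw, and the enforced/min/max power limits as CSV."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,power.draw,power.limit,power.min_limit,power.max_limit",
         "--format=csv"],
        capture_output=True, text=True, check=True)
    return out.stdout

def set_power_limit(watts: int) -> None:
    """Cap the board power limit; note it resets on reboot or driver reload."""
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

print(query_power())
set_power_limit(250)   # e.g. rein a ~350 W card back to 250 W
print(query_power())
```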

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

shrike82 posted:

Nvidia's handicapped by needing to support its various types of users (data center, desktop compute, gamers etc) in a way that AMD and Intel aren't.

Gaming vs. data center is exactly the reason for the RDNA/CDNA split in Radeon products. But also Nvidia is in no way, shape, or form, "handicapped" in any area.

PC LOAD LETTER posted:

AMD doesn't ... support GPGPU in their consumer GPU's since when?

Driver support, yes. But based on my experience, OpenCL is somewhere between "second-class citizen" and "lol" in terms of programmer/product support these days (unless you're, say, a national lab and have a cadre of undergrads/postdocs you can task with writing your code). CUDA is the default because Nvidia made a good bet and put huge piles of person-hours and money behind its development, tooling, and education during the years between Opteron and Ryzen, when AMD was on financial life support.

SwissArmyDruid
Feb 14, 2014

by sebmojo
And while AMD has tools to port CUDA code to OpenCL..... well. Turns out there's still no replacement for running CUDA on CUDA.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

shrike82 posted:

"just lower TDP" sucks if the boards are juiced because they're going to be big 3-4 slot monsters like Ampere

I've no doubt there will be sensible 2-slot options for the 4070, and probably for the 4080. You can very reasonably look at it as "I can get the power of a 3090 in the form factor of a xx70 card" if that's where your personal values drive you.

If you want top-of-the-line power in the form of a xx90 but don't want a 3-slot cooler... I don't know what to tell ya. The xx80 Ti/xx90 has effectively been a 3-slot design for a while. Yeah, there have been first-party 2-slot offerings in the form of the FEs, but every AIB has gone bigger because 2-slot coolers can't really cool that much effectively (and haven't been able to for years), and if you're spending that amount of cash on the fastest card out there, a big honkin' HSF is probably what you're also after.

If you want an "efficient" card, the halo option has never been a good pick, in any generation.

As much as it personally annoys me, I think most people are willing to accept additional power use in exchange for Moar Numberz. At least undervolting is way easier than overvolting/overclocking unless you're the same type of person who really feels the need to dial in the perfect curve.

e; but back to CPUs, I'm probably gonna slam buy on a 5800X3D because I have poor impulse control, have zero interest in video encoding, and already have an AM4 setup I'm planning on keeping until DDR5 prices get under control and speeds shoot up to something compelling. Here's hoping I can flip my 5600x on eBay for $200 and at least reduce the cost of this silly exercise.

DrDork fucked around with this message at 20:36 on Apr 18, 2022

Cygni
Nov 12, 2005

raring to post

shrike82 posted:

"just lower TDP" sucks if the boards are juiced because they're going to be big 3-4 slot monsters like Ampere

Why does that suck, though? A big heatsink means big thermal mass and more surface area, so you don't have to crank loud, high-RPM fans. As someone who has been running a GeForce4 Ti 4200 with a tiny little high-RPM fan on my retro bench lately, it is insane how loud and annoying cards used to be 100% of the time. Even the more recent retro cards I have, like a 2-slot HD 7700, are annoying to listen to. Give me a big ol' silent chonker any day.

The only folks I can think of that really feel the hurt with the mondo heatsinks are the SFF enthusiasts, but we (I will include myself in that group) tend to have to select from specific cards anyway due to length or case design. And there are a bunch of 3-slot-capable case options on the market too.

Klyith
Aug 3, 2007

GBS Pledge Week

Cygni posted:

The only folks I can think of that really feel the hurt with the mondo heatsinks are the SFF enthusiasts,

There are also people who have more than one PCIe card in their system, you know.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Klyith posted:

There are also people who have more than one PCIe card in their system, you know.

Then buy a motherboard set up to enable that sort of thing? That's the same complaint people had when 2-slot cards started coming out and people were trying to shove that, a NIC, a sound card, and maybe another random PCI card all in at once--and it has the same solution.

I say, as someone who had to buy a 3080 FE because it was the only card that would physically fit (and even then, only when watercooled) in my awkwardly shaped special snowflake SFF case of thermal terror.



But seriously, I think it's weird to argue that companies shouldn't offer a big honkin' balls to the wall card. If you don't want it / can't fit it / whatever...just don't buy it, and pick up something further down the stack. It's not like if they didn't make a 600W 4-slot xx90 that benchmarks at 10k FancyMarks then they'd make a 300W 2-slot xx90 making the same 10k FancyMarks. No, you'd get a 300W 2-slot card making like 7k FancyMarks, and you're already going to get it: it'll be called a xx70.

You're still going to have sensibly sized, thermally reasonable cards showing enormous performance uplift over the current generation. The fact that both AMD and Nvidia (much like Intel on the CPU side) are gonna throw out some cranked-to-11, wildly inefficient options for the people who really want the highest performance at any cost doesn't change that, and the complaint comes off more as not wanting other people to be able to have a faster card because said card doesn't fit your particular use case.

hobbesmaster
Jan 28, 2008

It's less of a concern now that SLI is dead. mATX has space for a 3-slot GPU and something in the bottom PCIe slot. Full-size ATX would have room for 2 more even after a theoretical 4-slot card. Though you'd think a hybrid CLC would be cheaper than 4 slots' worth of copper fins.

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?
When (if ever) will PCIe be replaced?

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.

Rinkles posted:

When (if ever) will PCIe be replaced?

We're gonna go back to AGP tomorrow

orcane
Jun 13, 2012

Fun Shoe
Yes, the manufacturers that defaulted to 3-slot designs for almost anything north of a 3060 will definitely release regular-sized cards again for a next generation that supposedly uses up to 25% more power. And if any sort of supply-chain bottleneck hits again (why wouldn't it), the super-juiced monstrosity that trades $20 in materials and 50% more power for 5% longer bars at a $100 markup is the one that gets pushed to manufacturing.

hobbesmaster posted:

It's less of a concern now that SLI is dead. mATX has space for a 3-slot GPU and something in the bottom PCIe slot. Full-size ATX would have room for 2 more even after a theoretical 4-slot card. Though you'd think a hybrid CLC would be cheaper than 4 slots' worth of copper fins.
They're almost never using copper or an appropriate number of heatpipes. AIBs are known to downsize their heatsink BoM at all costs: if the cheap aluminium block can't do it, we'll just stick a third small fan onto the card and spin it faster. We can still sell this as the BIG DADDY OC edition.

shrike82
Jun 11, 2005

Yeah people are being a little glib about the issues with both CPUs and GPUs getting hotter

hobbesmaster
Jan 28, 2008

Zedsdeadbaby posted:

We're gonna go back to AGP tomorrow

There will be a completely different form factor, but it'll be called PCIe 6.2 Gen 2x2

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

shrike82 posted:

Yeah people are being a little glib about the issues with both CPUs and GPUs getting hotter

Personal opinions and all, but I don't really think so.

Yes, things are getting hotter... when run full-out. Which, on a desktop CPU, is very rarely for most people, and on a GPU is the entire point of the thing in the first place. In the past we had people juicing the hell out of their chips/cards to get that last 5% through overclocking, volt mods, whatever silliness. Now that's all basically baked in. If you want to return to the good ol' days... just drag that slider left a little. If you don't want a 200W+ CPU, don't buy one--there are tons of exceptionally good offerings at much lower TDPs, after all.

I get the annoyance of trying to shove huge GPU coolers into SFF cases (again, the only viable option I had for a high-end card in mine was a water-cooled 3080 FE--if I were a reasonable person I'd have replaced the case with something not so terrible). But numerous AIBs put out competent, almost pedestrian 3060s and 3070s in 2-slot cooler configurations which worked just fine. Several put out slightly-over-2-slot 3080s which worked fine. The FE 3070 and 3080 were 2-slot and worked well.

So, yeah, honestly this sounds more like "I want there to be nothing more powerful than the 250W 2-slot card that can fit in my particular case" than anything, given that you can trivially rein in the actual power draw with 5 minutes of work, and that you can still find 2-slot cards for the small portion of the market that's rocking a motherboard with more than one PCIe slot in use and yet not enough space between them to fit a 3-slot cooler. Or, you know, just don't buy the absolute top-of-the-stack product.

Again: there will still be xx60 and xx70 (and maybe xx80) 2-slot designs that provide more performance than you can get in today's fatter cards, so...what exactly is the issue here?

Theris
Oct 9, 2007

Rinkles posted:

When (if ever) will PCIe be replaced?

Very soon. In only a year or two you'll be able to buy CPUs and motherboards that support a new expansion bus standard with double the bandwidth of current PCIe devices. The coolest thing about the replacement standard is that it uses the same connector and is fully backwards compatible with existing PCIe devices.
