Klyith
Aug 3, 2007

GBS Pledge Week

silence_kit posted:

Are the continuous power dissipation numbers going up for real applications, or only for fake applications which are designed to turn on all of the sub-circuits of the computer chip?

Real applications that use the CPU. For a lot of regular enthusiasts, games are the only thing that does it.

Most apps don't really put that much CPU pressure on the system while active.

K8.0 posted:

Also I think we're past the days of benches being the judge of motherboards, but maybe I'm wrong.

intel has made them relevant again


Klyith
Aug 3, 2007

GBS Pledge Week
Anyways the last 4-5 pages of this thread are retrospectively super funny now.

Intel: We want everyone to use ATX12VO so PCs will be more efficient #green
Also Intel: Here's our new 500W CPU!

Klyith
Aug 3, 2007

GBS Pledge Week

Shipon posted:

to be fair someone stuck at home on their gaming computer pulling down ~800 W from the wall is still using about as much electricity as it takes a tesla to drive a little over 3 miles so if someone's hobby or entertainment has them driving more than 10 miles or so per outing, tsk tsk

to be fair what if their hobby was actually only 1 mile away and instead of just going there they spent 10 minutes looping around the neighborhood racing between the lights for no reason?

500W CPU : 150W CPU :: rolling coal pickup : prius

Klyith
Aug 3, 2007

GBS Pledge Week

silence_kit posted:

Yeah, I wonder how power usage during a game compares with power usage during a benchmark software which is designed to turn on all of the sub-circuits of the computer chip.

That's not how benchmarks work. Benchmarks can't just make every transistor turn on. Some benchmarks are artificial in ways that mean their power results are way higher than any real-world task. (FurMark for GPUs is the best example of this.)

Other benchmarks just do stuff that you, a regular desktop user, never do. You probably don't crunch numbers or run AI with AVX-512 or render 3D CGI all day. So those power numbers aren't relevant to you. But hey, that also means you don't need an 11900K or 5950X in the first place! Save your money and buy a cheaper, less power-hungry CPU.

And some benchmarks, like 3Dmark, are trying to be like real-world tasks but more intense because they're looking to the future when games & stuff will be doing that.

BlankSystemDaemon posted:

Considering most games don't even utilize all of the cores 100%, it's a lot less than some people seem to be expecting.

AFAIK the highest power draw isn't with all-core loads, it's during max boost to a subset of cores. Clock boosting pushes extra volts for clock speed, which generates a ton of waste heat. It can't do that on all-core loads. But when only a couple cores are loaded, they can boost to Ludicrous Speed and effectively use the rest of the silicon as a heatsink.

Results may vary depending on Intel vs AMD, chip size, game choice, and other factors. But in general I would say that a game isn't necessarily using less power than other workloads that you might assume to be heavier because they use more cores. That's kinda why Intel is doing Ludicrous Watts, they want to keep their Highest FPS title.


(OTOH the highest power draws on Intel mostly come from using AVX-512, which isn't games. And the maximum 500W numbers are momentary peaks, not sustained.)

Klyith
Aug 3, 2007

GBS Pledge Week

VorpalFish posted:

I don't believe this is the case - in general you are going to run into a frequency wall such that you can't stably match the power consumption of most all core workloads on 1-2 cores, even though you are boosting higher.

See: https://www.guru3d.com/articles_pages/intel_core_i9_11900k_processor_review,5.html

Ugh, yeah, I don't know for sure with Intel; on AMD's CPUs the highest power is not with all-core loads, especially with PBO. You'd need charts on that page for loads in between single-thread & all-core to see it. And I think Intel was the same pre-Rocket Lake.

But the Intel "Adaptive Boost Technology" is probably throwing a wrench into that and pushing way more power in all-core loads:

quote:

When in a turbo mode, if 3 or more cores are active, the processor will attempt to provide the best frequency within the power budget, regardless of the TB2 frequency table. The limit of this frequency is given by TB2 in 2-core mode. ABT overrides TVB when 3 or more cores are active.

So that's just saying gently caress it, max the frequency all-core until we either run out of juice or hit the thermal limit.

Klyith
Aug 3, 2007

GBS Pledge Week
I've been wrong the whole time and what I've been thinking about is max observed temperature. Naively max temp would equal max heat generation would equal max power consumption, but conduction is important.

Klyith
Aug 3, 2007

GBS Pledge Week

Perplx posted:

An even more optimistic view, Apple has an architectural advantage over intel and is competitive while being behind a node.

Huh? M1s are on TSMC 5nm, so they're a node ahead of AMD on TSMC 7nm and "Intel 7".

silence_kit posted:

People have also speculated that Apple has a system-level advantage over its competitors because, being a computer systems company and not a computer chip company, they can spend more money on larger area chips. They aren’t as pressured on chip price as computer chip companies, who need to make a profit on the chip.

1 bigass SOC is definitely more efficient than 2-4 chips with their own packaging.

However, Apple also has an ecosystem-level advantage over the competition and that's their biggest advantage. They designed a CPU that is really great at a more limited set of systems and applications. They can do this because Apple only cares about being competitive in some areas and is ok not giving a poo poo about the rest. They don't have to, they're Apple.

An M1 is not going to compete with Intel & AMD desktop CPUs if you put a big heatsink on it and shove 90 watts down its mouth. And the M1, while having a good GPU that makes it pretty good vs the competition in ultrabook laptops, is not a great gaming CPU. It's wide and shallow, with a massive out-of-order window and instruction cache. They had priorities when designing the chip, and "dominate web & javascript benches" was near the top of the list. "Dominate AAA gaming" was not.

Klyith
Aug 3, 2007

GBS Pledge Week

BobHoward posted:

lol that you've come up with a mental model of the world where dominating web and javascript workloads somehow cannot translate to other things.

lol that you can't see some mild hyperbole without becoming the apple defender.


M1 is great for a whole lot of things that can be loosely grouped under "productivity" -- javascript, compiling code, cinebench, encoding, and many others. It's a great chip for that! Though some of Apple's choices were pretty pointed:

AnandTech posted:

On the cache hierarchy side of things, we’ve known for a long time that Apple’s designs are monstrous, and the A14 Firestorm cores continue this trend. Last year we had speculated that the A13 had 128KB L1 Instruction cache, similar to the 128KB L1 Data cache for which we can test for, however following Darwin kernel source dumps Apple has confirmed that it’s actually a massive 192KB instruction cache. That’s absolutely enormous and is 3x larger than the competing Arm designs, and 6x larger than current x86 designs, which yet again might explain why Apple does extremely well in very high instruction pressure workloads, such as the popular JavaScript benchmarks.

But no, not all tasks are the same and performance is not universal. If it were, then Bulldozer wouldn't have sucked, Vega would be king poo poo of GPUs, we'd have Cell processors in everything, and a descendant of Netburst or Itanium would be powering Intel's current architecture. Being really good at javascript and compiling code does not mean you're equally good at everything. Some things don't benefit from a super-wide design that trades some clockspeed and latency for a ginormous OoO buffer and 4+4 ALUs & FPUs, because they don't fill that width. Among tasks that consumers care about, video games are a prominent example. Some of this comes down to inherent tradeoffs of CPU design that go back a looooong time.


Apple can make that tradeoff. And while Apple's C-suite doesn't give a poo poo about games other than how much they can rake from the ios store, Tim Cook isn't designing the CPU. I'm sure that Apple's engineers are eager to have an advantage wherever they can get one. If they could have designed a CPU that spanked Intel & AMD in both productivity and games, they would have. If the Rosetta people could make games perform better, they would. If the M1 actually was amazing when you put it under a big desktop cooler and attached a dedicated GPU, they'd have demoed it doing that.

Klyith
Aug 3, 2007

GBS Pledge Week
meteor lake will apparently have L4 cache

cache rules everything around me

Klyith
Aug 3, 2007

GBS Pledge Week

VelociBacon posted:

Is the UHD 770 iGPU significantly better than the 730 on the lower end chips?

If all you care about on the GPU is the video de/encode, there's no difference.

Klyith
Aug 3, 2007

GBS Pledge Week

Cygni posted:

Also... what in the world is with IO die having its own 2 crestmont cores?

They are there to run the all new next-generation RGB effects on connected devices.

(Real answer: ultra-low-power efficiency cores.)

Klyith
Aug 3, 2007

GBS Pledge Week

hobbesmaster posted:

The question sounded more like they want to learn the material that would be in a university “intro to computer organization” (ie Patterson & Hennessy) class to get some of the general basics of what computers do.

Yeah, and that is also potentially a lot more useful than knowing how an 8-bit chip from the 1970s is built with individual transistors. There's a whole lot of stuff between the transistors and how a modern CPU works. Like, if you know how a modern chip works from transistors up, you probably know enough to be a lead architecture engineer or some poo poo.


And particularly if you know some programming / are a programmer, knowing about what pipelines are and how a branch predictor functions is good stuff.

I have a friend who worked on stuff for the nintendo DS, so a 32-bit ARM CPU that's not fast. One of the things they used was a standard addition to IF for when you needed every bit of speed, called IF UNLIKELY. It made sure the branch predictor wouldn't waste time on a path that probably wouldn't be taken. Pretty simple, and still very top-level for "how a CPU works", but much more useful than knowing how to build an 8-bit adder from nand gates.
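
Here's a minimal sketch of what that kind of hint looks like in C, assuming a GCC/Clang-style compiler; the LIKELY/UNLIKELY macro names are my own illustration, not whatever the DS toolchain actually called them:

code:
#include <stdio.h>

/* Branch hints in the spirit of "IF UNLIKELY": __builtin_expect tells the
 * compiler which way the branch usually goes, so the hot path gets laid out
 * as the fall-through and the predictor isn't fighting the code layout. */
#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

static int process(int value)
{
    if (UNLIKELY(value < 0)) {
        /* Rare error path: kept out of the hot path. */
        fprintf(stderr, "bad value: %d\n", value);
        return -1;
    }
    /* Common case: falls straight through. */
    return value * 2;
}

int main(void)
{
    printf("%d\n", process(21));
    return 0;
}
Same basic idea as C++20's [[likely]]/[[unlikely]] attributes, just spelled with a compiler builtin.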



simmyb posted:

Am I staring down a path of madness?

absolutely yes, but that doesn't mean you shouldn't do it!

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

I don't know why this is a surprise. Blindly copying without understanding the reason why things are done the way they are done, with all the little nuances and details is peak Chinese.

It's peak china, but in a very different way. I guarantee you these are Intel chips, but with the IHS sanded down and their own label re-printed. Probably they muck with the bios / OS to report the CPU name as "Powerstar P3" as well.

That way they can pretend it's their own domestic chip, and sell PCs to the government -- who are mandated to only buy 'chinese' chips. Everyone in the government pretends not to notice that this chip, that performs way better than other domestic chips, is an Intel i3 with a new label. Hooray, Made in China 2025 is very success, all hail great leader!


(All of this is more about the constant face-saving idiocy of authoritarian regimes than anything uniquely Chinese.)



There was also a deal AMD was doing with a chinese company, where the chinese fab made first-gen Zen chips for the domestic market. I dunno if those are still being made.

lmao: "AVX/AVX2 was also disabled, but the research has suspected that it happened due to a bug rather than was done intentionally."

Klyith
Aug 3, 2007

GBS Pledge Week

Rexxed posted:

Is Solidigm going to be around to handle warranty requests in five years? I'm tempted to pick one up but always cautious about new companies even if they're spun off. I guess there's really no way to predict the future like that.

Solidigm is a subsidiary of SK Hynix, one of the big 3 memory manufacturers, and itself a part of a huge south korean conglomerate. So not a new company, just a new brand name. And not going anywhere.

Klyith
Aug 3, 2007

GBS Pledge Week
Where exactly was this dumbass seeing everyone say that 12th gen Intel or Ryzen 3000 were "less snappy"? Because that is not a thing that happened.


And his demos of Windows being unresponsive when looking at a bunch of files in Explorer have nothing to do with anything. The problem is that by default Explorer reads files for metadata (ID3 tags etc) when looking at music / video folders. 6:27 is a perfect example: he opens a folder, Explorer stalls out, then switches to the media view with different column headers and slowly populates the list. It's annoying as poo poo, but has zero to do with hardware.

(The fix is to change to list view in a music/video folder and then do "apply this view to all folders of this type". Or customize the column headers to not be metadata and just be the standard date, size & type ones.)

Klyith
Aug 3, 2007

GBS Pledge Week

redeyes posted:

Hey speaking of this, the lag sucks. I cant seem to make the fix work for me. Dunno why. Is there a reg flag or something to just stop explorer from parsing all this extra crap?

By default it saves the view settings for every folder you've previously looked at.

To clear that, regedit to:
HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell
and delete the Bags and BagMRU sub-keys. Log off/on or reboot.

At that point all the media folders will pick up the standard view for their type, which you've set to not read metadata. All the other view customizations you've intentionally done are also gone, which sucks, but at least Explorer won't take 30 seconds to display a file list.
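
If you'd rather script it than click around regedit, here's a rough sketch of the same thing using the Win32 registry API (link with Advapi32.lib). Same keys as above; this is my own illustration, not a Microsoft-blessed procedure, and it wipes all your saved folder views, so back the key up first:

code:
#include <windows.h>
#include <stdio.h>

/* Deletes Explorer's saved per-folder view settings (the Bags / BagMRU keys
 * under HKCU), forcing every folder back to the default view for its type. */
int main(void)
{
    const wchar_t *keys[] = {
        L"Software\\Classes\\Local Settings\\Software\\Microsoft\\Windows\\Shell\\Bags",
        L"Software\\Classes\\Local Settings\\Software\\Microsoft\\Windows\\Shell\\BagMRU",
    };

    for (int i = 0; i < 2; i++) {
        LSTATUS rc = RegDeleteTreeW(HKEY_CURRENT_USER, keys[i]);
        if (rc == ERROR_SUCCESS || rc == ERROR_FILE_NOT_FOUND)
            wprintf(L"cleared %ls\n", keys[i]);
        else
            wprintf(L"failed on %ls (error %ld)\n", keys[i], (long)rc);
    }
    /* Log off/on (or restart explorer.exe) afterwards so Explorer rebuilds them. */
    return 0;
}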


The stupidest thing about this is how they were able to make explorer not poo poo itself over thumbnails. It may take a minute to display them, but you get the file list instantly. But they somehow didn't think to use the same method for metadata.

Klyith
Aug 3, 2007

GBS Pledge Week

Beef posted:

It make sense seeing how the pinout is a bottleneck. Each gen is an opportunity to cram even more pins in the socket.

lmao at this point they're adding more pins for power than data

the bottleneck is the contact resistance against 150 amps

Klyith
Aug 3, 2007

GBS Pledge Week
I still use an A9-something craptop. And get this, most of the time I limit the CPU speed to 70%. :sickos:

(Because it sits next to my bed and I don't want the fan to turn on.)

If I want to watch a youtube at higher than 720p I download it with yt-dlp, because a browser decoder drops frames.



It sucks, but the two things I do with it are watch video or type plain text, and given that, it's actually ok. The GPU has better video decode than whatever Celeron was in $300 laptops in 2018. TBQH I don't hate it!


FuturePastNow posted:

I had an FX-8350 and it was fine for daily use and gaming, but when it died (power supply failed so badly it fried the CPU) I replaced it with a cheap used 6c/12t X58 Gulftown system that performed much better despite being a year older.

The later FXes papered over the worst deficits of the Construction arch with clockspeed and extravagant TDP.

Funny how that pattern keeps repeating!

Klyith
Aug 3, 2007

GBS Pledge Week
Sᴀɴᴛᴀ Cʟᴀʀᴀ (Bloomberg Business Wire) Intel today moved to sell its unprofitable "Intel" brand of CPUs, graphics chips, and semiconductor fabs. Going forward the company will concentrate on the remaining profitable sectors: bunnysuit plushies, youtube 90s tech nostalgia channels, and royalties from the valuable "Moore's Law" trademark.

Klyith
Aug 3, 2007

GBS Pledge Week

shrike82 posted:

does upping e-core count do much for desktop usage?

Generally, no. A few things yes (code compile, encoding). But like, the 13400F had pretty good desktop performance and the thing holding it back wasn't that it had 4 E cores instead of the 13600K's 8. It was the lower boost and having less L2/L3.


TBQH for a general desktop / gaming user I wouldn't be excited for more P or E cores. 6P seems to be plenty for current and near-future (ie next several years) games. E cores are a nice value-add, not a decision factor. I would get excited for C.R.E.A.M in the midrange parts. Cache rules everything around me.

Klyith fucked around with this message at 03:16 on Aug 14, 2023

Klyith
Aug 3, 2007

GBS Pledge Week

priznat posted:

My work is locked in to using RPis for controlling setups and I'm trying to convince people that we should just be using cheap x86 systems instead if only to get off those drat sd cards.

Get them to move to Compute Modules; that's what those are made for. Built-in eMMC means you have reliable storage.

Klyith
Aug 3, 2007

GBS Pledge Week

Kibner posted:

1) You have to pick and choose whether you want to attach a pcie4 m.2 to the cpu or a pcie5 m.2 that will cut the pcie5 lanes in the gpu slot from 16x to 8x. You can't use both, either, as there just isn't the physical room. At least it comes with three other pcei4 m.2 slots, for a total of four usable m.2 slots.

This is universal on all Z790 boards with a gen 5 m.2 slot, because the CPU only has 16 gen 5 lanes.

(which, lmao, I guess means Z790 will be another one-and-done chipset/CPU combination, since if they were planning a more upgradable platform they'd have just done a single gen 5 "supported" slot.)



Regardless, it does not matter at all: gen 5 NVMe drives barely exist yet, and gen 4 drives already have more performance than anything desktop & gaming use can take advantage of. History says that worrying about which PCIe gen your slots have is almost always a waste of time.

Klyith
Aug 3, 2007

GBS Pledge Week

priznat posted:

That's just bizarre. What a massive waste of time!

Every news article like this I ask myself "what the hell were intel doing for the last ten years?" and I think of this ancient Penny Arcade

Klyith
Aug 3, 2007

GBS Pledge Week

Twerk from Home posted:

Am I crazy or did the -K parts previously have a higher price premium? The answer to extreme power usage in the default configuration of any -K part is to save a few bucks and get a non-K part, but those seem pretty dang expensive now compared to what I think I remember it used to be.

You are not crazy, Ks used to be about +$25-40.


I don't know whether to credit competition with AMD, or the fact that as shown by that chart a K would be an obviously lousy value if it still had any price premium.

Klyith
Aug 3, 2007

GBS Pledge Week

SpaceDrake posted:

Having been somewhat detached from the CPU rat race for the past half-decade or so, I'm kind of curious how we got here, even as I sit here and gawk at the shrines to thermal waste the Intel Corporation has decided to throw into the high end market.

"If your tech can't compete, shove more power into that bitch!" is a strategy with a loooong history:

James Mickens posted:

John began to attend The Church of the Impending Power Catastrophe. He sat in the pew and he heard the cautionary tales, and he was afraid. John learned about the new hyperthreaded processor from AMD that ran so hot that it burned a hole to the center of the earth, yelled “I’ve come to rejoin my people!”, discovered that magma people are extremely bigoted against processor people, and then created the Processor Liberation Front to wage a decades-long, hilariously futile War to Burn the Intrinsically OK-With-Being-Burnt Magma People.


DoombatINC posted:

It's basically the story of the tortoise and the hare, except here the hare spent their near-decade lead in the race putting all their profits up their nose and now they're in a shrieking panic trying to get back into first place

When ryzen first came out I posted something like "it's great that AMD is competitive again, but I'm sure Intel hasn't spent the last 10 years just sitting on its rear end doing zero R&D so this may be a pretty short revival." Lmao.

The fact that I now worry about the market becoming uncompetitive in the other direction is just :psyduck:

Klyith
Aug 3, 2007

GBS Pledge Week

Cygni posted:

Thats with the hardware encoders explicitly disabled (so no QuickSync for Intel), so its basically putting all of the CPUs to their power limit for the length of the encode. The 14900K has its comical 250W power limit, so yeah... its gonna look extremely bad. Comically, i bet you could limit it 95W and not have a huge change in the length of the encode. Probably would land around the M3 Max.





Oh, hmmm, something I just noticed there: 1080p encoding does not use a ton of threads with x264. (Which is what I presume they're doing, though they just say 'MP4'. But pretty much all modern encoders have resolution-dependent limits on how many threads they can effectively use.)

So in that test the M3 Max is saturated and the 14900K has idle cores, which means the M3 is operating at best efficiency and the 14900K has extra headroom to shove watts at.

If instead they tested a 4k encode, I believe the results would look somewhat different: the high-core-count CPUs (including the M2 Ultra) would pull much further ahead of the M3 in encode time. The 14900K would still use 250W, but the E cores would get more use and it would be a bit more efficient. And instead of the results being "you spent 4 times the power to go 10% faster" it might be "you spent 3x the power to go 25% faster". The 14900k still looks insane in power budget, but it stops looking quite so much like you get nothing from it.

Klyith
Aug 3, 2007

GBS Pledge Week

BlankSystemDaemon posted:

For x264, that depends entirely on the preset used; with slow or slower, the implementation can do multithreaded encoding of h264.

Pretty sure this is outdated info, x264 changed their threading model and all presets use threading now (and can employ more threads than the old model).

x264 --fullhelp posted:

- fast:
--rc-lookahead 30 --ref 2 --subme 6
--weightp 1
- medium:
Default settings apply.
- slow:
--direct auto --rc-lookahead 50 --ref 5
--subme 8 --trellis 2
- slower:
--b-adapt 2 --direct auto --me umh
--partitions all --rc-lookahead 60
--ref 8 --subme 9 --trellis 2

So none of the presets touch the --threads parameter, which by default these days is multi-threaded and chooses the number of threads based on resolution + number of threads supported by the CPU.

Klyith
Aug 3, 2007

GBS Pledge Week

SamDabbers posted:

Is there a technical benefit from CNVi over regular PCIe? They just split out the MAC block and moved it into the southbridge so only the PHY is still on the M.2 card, right? Power savings? Cost for integrators? Platform lock-in for Intel?

I'm sure cost. Same as how the ethernet chips on most Intel mobos used to be just PHY chips. I guess now that wifi is more popular than ethernet, they've switched to putting wifi MAC & processing on the chipset instead of ethernet?

Also I would not be surprised if the RF signal processing stuff preferred a different process, so you have a mismatch between the RF wanting bigger transistors and the data processing that works fine with normal ones. Which would again be cost, but if you're already making the chipset chip it's way cheaper to add more stuff to that than make a second chip on the wifi module.


Only technical benefit I can think of is saving PCIe lanes, now that those have value.

Definitely not platform lock-in: nobody's gonna say "well I have to stick with Intel for my next upgrade so I can re-use my $25 wifi card". The only people really affected are those buying cheap ebay wifi modules that turn out to be platform-specific.

Klyith
Aug 3, 2007

GBS Pledge Week

FuturePastNow posted:

does anything other than Geekbench actually use OpenCL

does anything other than mac reviews and headline-baiting "leaks" actually use Geekbench?


Upcoming Product Spotted on Geekbench with BIGNUM Score is not something I'd base any expectations on

Klyith
Aug 3, 2007

GBS Pledge Week
"Correlates well with SPECint" is not as big a plus to me as you seem to think. SPECint is also high on my list where, if someone showed me one number being lots bigger, I'd shrug and wait for real desktop application benchmarks. And doubly so when those numbers were provided by a marketing department.

CPU designers are free to use whatever tools they like. You could design a CPU using Chip's Challenge 95 as the performance target. I'm not a CPU designer and I don't run specint or geekbench for fun. It's not that the tool is objectively bad, it's that it is:
a) trying to test a vague general performance that is not necessarily correlated to any particular real app (SPEC says this themselves)
b) much easier for marketing departments to abuse by putting their fingers on the scales than real apps are

And that second bit is the part where geekbench, and more particularly tech news that reports the appearance of pre-release submitted benchmarks in the geekbench database, is utterly worthless. You're nuts if you believe that some engineer ran geekbench and then was like "oops, I accidentally clicked upload score, now everyone knows about our secret prototype!" That's the marketing department uploading it, and they could be immersing the thing in a tank of liquid nitrogen for all you know. And it is totally deniable -- if the Meteor Lake GPU turns out to suck, nobody who pre-ordered based on news hype can claim false advertising.



As for geekbench itself, any time I've noticed a geekbench results lineup with x86 CPUs that I have other performance metrics for, my reaction has been "well, that's not the order I'd put those CPUs in". That's not an indictment of geekbench at all; there are plenty of other CPU benchmarks, like sci compute and server stuff, that I don't give a poo poo about and gloss over. But geekbench gets extra prominence in mac reviews so I notice it more.

Klyith
Aug 3, 2007

GBS Pledge Week

gradenko_2000 posted:

calling AMD snake oil and used car salesmen is... strong language, but it's also kinda funny because they sort of did this to themselves?

Selling an old CPU with a model number that hides the fact that it's a warmed-over refresh in the third digit: bad.

Selling an old CPU and calling it the new "14th generation" to hide the fact that it's a warmed-over refresh: fine.


(AMD's model number thing is bullshit, but lmao)

Klyith
Aug 3, 2007

GBS Pledge Week

slidebite posted:

Honest question: how important is hypertheading?

I seem to recall it being a big deal years ago, but I have no clue if it's even utilized anymore..? Have CPU speeds increased so much it's really no longer relevant?

CPU core counts have increased so much that it's not as cool as it was. It's still very relevant and a big deal if you have more threads of work to do than your CPU has physical cores. But for desktop stuff, we are getting to the point where even midrange CPUs have more cores than our apps can use even without hyper-threading.

That doesn't make it useless. You can do some useful tricks -- for example, mix lightweight and heavy threads on the same cores and run those at max boost, leaving other cores completely idle so you can put more of the power & thermal budget into clockspeed.


The rentable units thing is like, we have more silicon than we can shake a stick at so let's just give every P core a dedicated E core as a co-processor.

Klyith
Aug 3, 2007

GBS Pledge Week

Hasturtium posted:

To my knowledge AMD hasn't made a peep about removing it as an architectural feature

Seeing as they went in the direction of C cores that are full-fat on features but downclocked to hit greater efficiency and a compact size, they don't need to. For now at least.

And since Intel still has a pretty big advantage on the software side, IMO this makes sense for AMD. Intel has the means to get major changes into OS schedulers; AMD does not. So it makes sense for AMD to be conservative on stuff like very-heterogeneous P/E cores or Airbnb For Threads.

Klyith
Aug 3, 2007

GBS Pledge Week

Paul MaudDib posted:

14th gen is bad because it’s not a performance step, and because of what a rebrand-gen says about intel’s health as a company, not because it’s deceptive, anymore than RX500 series was deceptive, or the GTX 700 series was deceptive, or APUs are deceptive etc. It seems like people collectively forgot that yearly cadence (with rebrands if needed) is a thing oems want and that and low-end SKUs exist.

Did anyone say that 14th was deceptive? I think everyone just said it was no more or less deceptive than the Ryzen they were bitching about.

Now if you want actually deceptive, go back to that presentation and check the two CPUs that Intel is putting up in bar charts for performance comparisons, and then go check the prices that laptops containing those CPUs are selling for. The Intel one rarely sells for under $600, the AMD one rarely goes above $400. I would bet there's a substantial difference in tray cost for those two, and if Intel had picked two equal-cost CPUs to compare, the whole "latest" thing would be a moot point.

Klyith
Aug 3, 2007

GBS Pledge Week

mdxi posted:

Fab 11X had previously been the home of Optane (rest in piss), and I legit have no idea what's going on there now.

https://www.anandtech.com/show/20042/intel-foundry-services-to-make-65nm-chips-for-tower-semiconductor

Analog and RF processors, apparently.

Klyith
Aug 3, 2007

GBS Pledge Week
Huh, I've generally seen that the one good thing about homes built in the last 20-30 years has been ample circuits.

You want to see a whole lot of rooms working off a single 15a breaker, try living in an old house.

Klyith
Aug 3, 2007

GBS Pledge Week

Twerk from Home posted:

What's the preferred burn in tool to test if CPU/RAM are actually stable these days? There's no way that it's Prime95 anymore, right?

I still like good old prime95 & memtest, because while they're not perfect these days they're at least definitive when they detect a failure. Add OCCT for whole-system stress testing both CPU and GPU at the same time to test power and overall stability.

poo poo is so complex now, and so many of the recent bugs have been "well this one specific operation under these conditions is a problem", that there's no one tool that can prove your system is stable.


Twerk from Home posted:

Also, is Epic telling people to turn their 6ghz processors that they paid very dearly for down to 5.3ghz, a more than 10% loss?

They're telling people to turn down their 6ghz processors to prove that it's not their fault, it's the processor that's hosed.

From there you have to make your own decision -- live with a 10% loss on a CPU that you paid dearly for, get a refund, or do RMA exchanges with Intel until you get a non-faulty CPU.

Klyith
Aug 3, 2007

GBS Pledge Week

Twerk from Home posted:

I've got one more thought: Is there a motherboard vendor or product line that one can buy to have it Just Work at stock settings, including Intel's turbo as designed?

"As defined" for Intel's turbo on K processors is very fuzzy. Turbo is "we'll run at 6ghz as long as the motherboard can deliver the juice", but the spec for socket power is based on what a non-K CPU can use, so there's some negotiation involved.

(This is not unique to Intel, though AMD has much less slop these days because they're shoving enough electricity into a CPU to do a DBZ power up montage.)

Klyith
Aug 3, 2007

GBS Pledge Week

GiantRockFromSpace posted:

What the gently caress are exactly e-cores in 12th Gen and above Intel CPUs? From what I can gather they're related to thread scheduling optimization or something. Can their behavior be manipulated in the BIOS without it being a permanent thing or something? Basically from what I can gather they are the reason my i7-12700KF CPU is having issues with some games cause they're coded like poo poo and don't handle their behaviour properly and wanted to know if there was something I could directly do to help without relying on external programs messing with CPU behaviors.

They're CPU cores with very different internal guts from the "normal" P-cores -- no hyperthreading, physically smaller, highly optimized for low power use. They make your games run like poo poo if the important game threads land on them.

You can fix this by:
1. updating to Windows 11 if you haven't already, because 11 has a better thread scheduler that is aware of non-uniform cores and won't send high-performance threads to those cores
2. using Process Lasso to keep the threads on the P cores
3. some mobos have options to disable E-cores in BIOS, but this is a pretty drastic step and not the best idea (It's strictly worse than 1 or 2 because then the OS threads and other stuff that doesn't need max power cores can't use an E-core, which means your CPU thermal budget is being wasted.)


Klyith
Aug 3, 2007

GBS Pledge Week

GiantRockFromSpace posted:

Quick and concise, thanks a lot! Being more specific, it seems to be an issue on 12th and 13th gen Intel CPUs with some Koei-Temco games and others that use the same engine, that seems it keeps switching threads between P and E cores so it stutters constantly despite having no trouble running the game otherwise. Cause no other game has done that for me and I usually play beefier games with 8 browser tabs open lol.

CPU scheduling: :iiam:

GiantRockFromSpace posted:

I guess if Process Lasso is safe and the options I want are in the free version that's my solution. I'm not one of those insane schizos who are still running Windows 7 but I'd prefer for more kinks to be ironed out and more compatibility being tested before moving from 10 to 11. also I can use my dad as a canary, if he of all people updates there's no reason for me not to

Process Lasso is def safe, really it's just a nice UI on standard windows functions that control CPU affinity. You can do this in task manager or command line if you wanted.
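
For the curious, here's a bare-bones sketch of the kind of standard Win32 call that's underneath all of those tools. It pins an existing process to the first 16 logical CPUs; the 0xFFFF mask is an assumption that those are the P-core threads on a 12700K-style chip, so verify your own topology (Task Manager will show you) before trusting it:

code:
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Pin an already-running process (by PID) to logical CPUs 0-15.
 * Assumption: on this CPU those are the P-core threads -- check your
 * own core/thread layout before using this mask. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD_PTR mask = 0xFFFF; /* logical CPUs 0-15 (assumed P-core threads) */
    if (!SetProcessAffinityMask(proc, mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(proc);
        return 1;
    }

    printf("pinned pid %lu to CPUs 0-15\n", (unsigned long)pid);
    CloseHandle(proc);
    return 0;
}
Task Manager's "Set affinity" dialog does the same thing one process at a time, which is the "do it by hand" option; Process Lasso just remembers and reapplies it for you.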


As for Win11, what I would say is, you're gonna have to move to 11 in 2025 anyways so it's worth thinking about the options.

FWIW, as someone who switched to linux on my main machine rather than use 11, I don't think compatibility is an issue. Anything that works on 10 should work on 11, they didn't change that much under the hood. The problems are not "this OS is new and hasn't had the kinks ironed out". They are "MS is looking for new and exciting ways to monetize the OS and doesn't care how annoying they are". And that's unlikely to change in the next year.
linux is pretty rad these days
