Khorne
May 1, 2002

Harik posted:

Unlikely to be able to exploit this via you have to not only convince it to emit the exact opcodes required to trigger it but then to somehow reveal the half of the register it wasn't using where the data was leaked. Maybe a zero-fill loop using wide registers that ends up filling with non-zero data? It'd be one hell of a talk at a security conference if someone pulls it off.
JavaScript could do Meltdown and Spectre. Browsers added specific protections to stop it from happening.

There's also WebAssembly as a potential way to do this.

quote:

e: what on earth, is there a wordfilter
If you put the devil's language with a colon after it, then under a number of circumstances browsers will execute arbitrary JS code. This is probably a lazy forums fix on top of other lazy fixes, to stop people from injecting it into HTML elements that get added by some of the bbcode/link parsing stuff.

Khorne fucked around with this message at 18:53 on Jul 26, 2023


Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

Khorne posted:

JavaScript could do Meltdown and Spectre. Browsers added specific protections to stop it from happening.

There's also WebAssembly as a potential way to do this.

If you put the devil's language with a colon after it, then under a number of circumstances browsers will execute arbitrary JS code. This is probably a lazy forums fix on top of other lazy fixes, to stop people from injecting it into HTML elements that get added by some of the bbcode/link parsing stuff.

ahh that makes sense.

Meltdown and Spectre were timing side-channels, which were patched by removing JavaScript's access to cycle-accurate timers. This one requires you to convince JavaScript to (essentially) read uninitialized memory. WebAssembly doesn't help you much here because it's not x86_64 assembly but an idealized virtual machine that gets compiled to the underlying platform. You'd still have to convince it to emit the exact opcodes, but if you can manage that, reading back the leaked data is a lot easier.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop
I think the disconnect on JavaScript is that, from a security standpoint, you have to assume it's possible until mitigations are proven to stop it, because people keep coming up with incredibly clever ways to pull this poo poo off. It makes sense to issue that as a warning, since the impact is fairly major if achieved.

There are already workarounds for non-Epyc Zen 2 processors, but they involve a larger performance hit than the microcode patch would.

Cygni
Nov 12, 2005

raring to post

https://www.reddit.com/r/Amd/comments/159xoao/stable_128gb4x32gb_ddr56000_cl30_on_agesa_1007b/

A Reddit rando is reporting they can run 4x32GB at DDR5-6000 CL30 in coupled mode on AM5 with the latest AGESA. 2DPC, 4RPC on AM5 - didn't think we would see the day until Zen 5. YMMV and take with a grain of salt cause of the source, obvi.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Cygni posted:

https://www.reddit.com/r/Amd/comments/159xoao/stable_128gb4x32gb_ddr56000_cl30_on_agesa_1007b/

A Reddit rando is reporting they can run 4x32GB at DDR5-6000 CL30 in coupled mode on AM5 with the latest AGESA. 2DPC, 4RPC on AM5 - didn't think we would see the day until Zen 5. YMMV and take with a grain of salt cause of the source, obvi.

I’ve got 64 more gigabytes of RAM that has been waiting for this news!

BlankSystemDaemon
Mar 13, 2009



Khorne posted:

JavaScript could do Meltdown and Spectre. Browsers added specific protections to stop it from happening.

There's also WebAssembly as a potential way to do this.

If you put the devil's language with a colon after it, then under a number of circumstances browsers will execute arbitrary JS code. This is probably a lazy forums fix on top of other lazy fixes, to stop people from injecting it into HTML elements that get added by some of the bbcode/link parsing stuff.
If by specific protections you mean they slowed down JavaScript implementations so that the timings weren't tight enough to be able to do the side-channel attacks, sure.

repiv
Aug 13, 2009

pretty sure no JS implementation slowed down actual execution speed to mitigate spectre, they tackled it by making it harder to measure passage of time with high accuracy

the high resolution timer API had its resolution quantized and WASM threads were disabled in shared contexts since a background thread counting in a tight loop was a good enough approximation of a high res timer
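The quantization idea can be sketched with a toy model (Python purely for illustration; the function and numbers here are mine, not any browser's actual code): rounding timestamps to a coarse grain makes events that are microseconds apart read as identical, which is what breaks cache-timing measurement.

```python
# Toy illustration (not browser code) of coarsening a high-resolution
# timestamp, in the spirit of what browsers did to performance.now().
def quantize(t_us: float, resolution_us: float = 100.0) -> float:
    """Round a timestamp in microseconds down to a multiple of resolution_us."""
    return (t_us // resolution_us) * resolution_us

# A cached vs. uncached read differs by at most a few microseconds;
# both collapse to the same quantized value, so the side channel closes.
t_fast, t_slow = 1234.0, 1239.0
print(quantize(t_fast), quantize(t_slow))  # both quantize to 1200.0
```

The second half of the mitigation was exactly the shared-memory-counter problem described above: without a shared context, a background thread can't expose its tight-loop counter as a substitute clock.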

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

BlankSystemDaemon posted:

If by specific protections you mean they slowed down JavaScript implementations so that the timings weren't tight enough to be able to do the side-channel attacks, sure.

Who slowed down what?

BlankSystemDaemon
Mar 13, 2009



Subjunctive posted:

Who slowed down what?
Well, as repiv pointed out, they didn't - I was simply misremembering stuff.

Klyith
Aug 3, 2007

GBS Pledge Week
Pretty much every other meltdown / spectre / retbleed mitigation has produced some performance impact, and for whatever reason javascript benchmarks are some of the most heavily affected. It's not the javascript<->VM layer that's the problem, it's the VM<->CPU level.


I have the retbleed specific mitigations turned off -- retbleed has (or had at time of release) a pretty big performance loss, versus an extremely low danger for anyone but cloud / vm operators. If zenbleed mitigations also suck for performance I may just go whole hog mitigations=off and say gently caress it.

PC LOAD LETTER
May 23, 2005
WTF?!

Cygni posted:

A Reddit rando is reporting they can run 4x32GB at DDR5-6000 CL30 in coupled mode on AM5 with the latest AGESA. 2DPC, 4RPC on AM5 - didn't think we would see the day until Zen 5. YMMV and take with a grain of salt cause of the source, obvi.

Yeah this is the first BIOS that actually made a difference in my RAM overclocking.

Not that it mattered much for me.

I can get 2x 32GB sticks to DDR5-6400 stable with loosened timings, and 6600 unstable with even looser timings, but it's still not worth going past 6000, which is where I set it back to. So it's kinda cool but mostly pointless IMO unless you're looking to break records.

Just be aware the 1st training boot time is l o o o o n g after you change some memory settings. After that it's about the same as normal though.

BlankSystemDaemon
Mar 13, 2009



I can imagine that 192GB of memory can take quite a long time to train up to 6000MT/s, yeah.

Cao Ni Ma
May 25, 2010



Yeah, the memory training is longer even when using stock EXPO.

Toalpaz
Mar 20, 2012

Peace through overwhelming determination
I upgraded to the 5800X3D and it feels very similar to my 1800X. I would not recommend. I don't think I'm getting any extra frames on my visual novel and Chrome. Plus I had to press del on my computer and touch the BIOS to update it. Chrome still only can open 426 tabs before it starts chugging. Selling my system for a wintel rig.🤧0/10

LRADIKAL
Jun 10, 2001

Fun Shoe
Who are you parodying?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Dr. Video Games 0031 posted:

Right, Intel still relies heavily on kernel-level scheduling, but I doubt it would perform anywhere near as well without the thread director. AMD lacks anything like that, and it's not something they can just patch in either.

I know this is a necro but I think you & others have overestimated how significant Thread Director is, probably because Intel's marketing has encouraged it and tech media has mostly gone along. I read the spec once and it's just an engine to automate collection of several per-core statistics (instruction mix, current clock speed, power, temperature, etc).

The resulting data can be useful to a scheduler, but the scheduler's still 100% in charge of making the actual decisions. If and when AMD does heterogeneous core types, they may not need something like this at all - Intel's need is driven by things which are probably going to remain unique to Intel. (What I'm thinking of: their overcomplicated DVFS, poorly matched core types, and use of SMT in the big cores. Speaking of SMT, when big cores are running 2 threads and those threads are competing for execution resources, making optimal scheduling decisions gets really complicated. This is probably why Intel chose to capture instruction mix.)

Even if AMD ends up needing this, it ain't that complicated. Both Intel and AMD have had the raw data sources (performance counters, sensors) for a long time. The only fundamentally new thing in Thread Director is that to minimize data collection overhead, Intel threw in a tiny microcontroller core whose only job is to periodically scan performance counters and sensors and dump the data into a table for easy ingestion by the kernel.
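That division of labor can be sketched roughly like this (Python purely for illustration; the field names and numbers are invented, not Intel's real interface): a helper fills a table of per-core stats, and the scheduler alone decides placement by reading it.

```python
# Rough sketch of the split described above: the helper (Intel's is a tiny
# microcontroller) only populates a stats table; every placement decision
# still belongs to the OS scheduler. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class CoreSample:
    core_id: int
    freq_mhz: int
    temp_c: float
    vector_ratio: float  # fraction of retired instructions that were vector ops

def collect(sensors):
    """Build the shared stats table from raw per-core readings
    (the part Thread Director automates)."""
    return {s["id"]: CoreSample(s["id"], s["mhz"], s["temp"], s["vec"])
            for s in sensors}

table = collect([
    {"id": 0, "mhz": 5200, "temp": 71.0, "vec": 0.30},  # big core, vector-heavy
    {"id": 8, "mhz": 3900, "temp": 55.0, "vec": 0.02},  # small core, mostly scalar
])

# The scheduler, not the helper, makes the call: e.g. keep a vector-heavy
# thread on the core already showing the most vector throughput.
best = max(table.values(), key=lambda c: c.vector_ratio)
print(best.core_id)  # 0
```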

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
https://www.youtube.com/watch?v=1nu-GKq58Og

minisforum appears to have stepped it up. this is a really nice looking mini pc, all metal, quiet, upgradeable, but using LM in the cooler so don't open it. 2x usb4/thunderbolt allow either dock/display expansion or some interesting networking possibilities.

the triangle networking (with 2-port) is a very common strategy for small clusters in general. works with 2-port enterprise ethernet adapters or infiniband adapters in homelab servers too, just direct-connect your way to success. QSFP/QSFP28 copper DAC or SFP+ copper DAC cables are cheap, as are QSFP28 to SFP+ breakout cables.

Kazinsal
Dec 13, 2011
The whole "efficiency cores can't execute AVX-512" problem is the stupidest thing. It's something that could be solved in the scheduler: CPU topology discovery is a solved problem, so the scheduler could just flag e-cores as unable to run anything with AVX-512, and the first time an AVX-512 instruction causes a #UD fault on an e-core, the offending thread gets flagged as needing to run on a p-core. But because people are stupid and refuse to update their operating systems, 12th and 13th gen Intel desktop processors can't do AVX-512.

Wibla
Feb 16, 2011

People don't like Win11, which is entirely fair. Don't buy intel if you want to stay on Win10.

repiv
Aug 13, 2009

Kazinsal posted:

The whole "efficiency cores can't execute AVX-512" problem is the stupidest thing. It's something that could be solved in the scheduler: CPU topology discovery is a solved problem, so the scheduler could just flag e-cores as unable to run anything with AVX-512, and the first time an AVX-512 instruction causes a #UD fault on an e-core, the offending thread gets flagged as needing to run on a p-core. But because people are stupid and refuse to update their operating systems, 12th and 13th gen Intel desktop processors can't do AVX-512.

the trouble with that approach is it would lead to weird performance cliffs where every thread ends up getting pinned to the P cores, possibly regressing performance compared to just using AVX2 and running on both the P and E cores.

if any task dispatched to your work scheduler can use AVX512 then every thread in the worker pool will end up getting pinned. if your libc has AVX512 versions of common routines like memcpy then every thread will end up getting pinned. any 3rd party libraries you call might run AVX512 code and pin the calling thread without you knowing.

in practice it's something that software would have to be aware of and work around, so it would be a breaking change in spirit even if the code technically runs, and if you're making a breaking change you might as well go all the way (see AVX10)
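The cascade argument above is easy to see with a toy simulation (my own illustration in Python, not real scheduler code; the 5% AVX-512 rate and pool size are arbitrary assumptions): once pinning is sticky per thread, even rare AVX-512 use eventually pins the whole pool.

```python
# Toy model of the pinning cascade: any task that touches AVX-512
# permanently pins its worker thread to P-cores, so even a low AVX-512
# rate eventually pins every thread in the pool.
import random

def simulate(n_threads: int = 8, n_tasks: int = 1000,
             p_avx512: float = 0.05, seed: int = 0) -> int:
    """Return how many of n_threads end up pinned after n_tasks dispatches."""
    rng = random.Random(seed)
    pinned = set()
    for _ in range(n_tasks):
        thread = rng.randrange(n_threads)   # task lands on a random worker
        if rng.random() < p_avx512:         # task happens to run AVX-512
            pinned.add(thread)              # #UD trap -> pinned to P-cores
    return len(pinned)

print(simulate())  # with these (arbitrary) numbers, nearly all 8 threads pin
```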

Klyith
Aug 3, 2007

GBS Pledge Week
So I was like "ARM android and ios have been doing big.little for a decade, how do they handle this stuff?"

And apparently the answer is that every time in that decade there was a possibility to have two cores with different instruction set compatibility, they said "gently caress no!". They ran big cores in aarch32 mode when the little cores were still 32bit. They've been doing the same thing as AVX10 (big cores run full width at full speed, little cores run half speed) since the beginning of NEON.

It is kinda astonishing that Intel took that history and decided to ship the CPU the way they did.


Wibla posted:

People don't like Win11, which is entirely fair. Don't buy intel if you want to stay on Win10.

At this point that's just a two year delay, unless you have Enterprise LTSC.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Klyith posted:

At this point that's just a two year delay, unless you have Enterprise LTSC.

Ridin' that train to 2032! Yeah!

Wibla
Feb 16, 2011

Klyith posted:

At this point that's just a two year delay, unless you have Enterprise LTSC.

Considering how long it has taken them to unfuck the taskbar, I don't mind waiting a little longer :colbert:

They're rolling out W11 to everyone at work soon, that'll be fun.

repiv
Aug 13, 2009

Klyith posted:

They've been doing the same thing as AVX10 (big cores run full width at full speed, little cores run half speed) since the beginning of NEON.

AVX10 isn't about running instructions at different rates, it's about gating the especially wide 512bit instructions so that small cores don't have to implement them at all

the implication is that intel believes 512bit operations are too heavy for E-cores even if they're decomposed into 2x256bit ops (like they did with early AVX, implemented as 2x128bit), whether due to the very large amount of register state required, or some instructions not being decomposable, or both. so presumably AVX10 on big/little chips will only support 256bit ops, and AVX10 on server chips will support the optional 512bit ops.

Klyith
Aug 3, 2007

GBS Pledge Week

repiv posted:

AVX10 isn't about running instructions at different rates, it's about gating the especially wide 512bit instructions so that small cores don't have to implement them at all

Oh, duh, it's AMD that's doing AVX512 as doubled-up 256.

repiv posted:

the implication is that intel believes 512bit operations are too heavy for E-cores even if they're decomposed into 2x256bit ops (like they did with early AVX, implemented as 2x128bit), whether due to the very large amount of register state required, or some instructions not being decomposable, or both. so presumably AVX10 on big/little chips will only support 256bit ops, and AVX10 on server chips will support the optional 512bit ops.

Man that feels really stupid. If I make a piece of software that uses the 512bit width, how am I supposed to communicate CPU compatibility? I can't say "requires AVX10" like I could with AVX-512, it's now "requires one of this list of chips: :words:". Like I know this is pretty theoretical because hardly anything uses 512 bits. But that's not going to be the case forever!


For gently caress's sake, just make an E core that can run 512 even if it takes ten cycles to complete an operation and makes the E core emit an audible shrieking noise. Whatever thread is doing that will peg the E core to 100% load and get shifted to a P-core ASAP. The scheduler knows how to handle that. The most pessimistic case would be a program that infrequently uses a burst of 512 ops on an otherwise low-budget thread, but that seems... unlikely.

repiv
Aug 13, 2009

re-reading the announcement it's not actually clear (to me at least) whether intel is going to expose the optional 512bit mode on a per-core basis, or make it consistent across the entire chip

they could make it so that a thread can execute 512bit ops only when it's running on a P-core, but that would get pretty messy so i'm not sure if they'll bother

i think the 512bit mode will probably be reserved for the giant xeons that are made entirely of P cores

FuturePastNow
May 19, 2014


I suppose AMD's approach to making smaller cores by simply giving them less cache will work better for PCs, especially since AMD can glue more cache on top of them for gamer versions

Klyith
Aug 3, 2007

GBS Pledge Week

FuturePastNow posted:

I suppose AMD's approach to making smaller cores by simply giving them less cache will work better for PCs, especially since AMD can glue more cache on top of them for gamer versions

It's not just less cache. I saw a neat article about it; a whole lot of what makes them compact is just the removal of "extra" space. Some of that means the compact cores have to run at lower frequencies, due to signal interference between components that are now closer together, and also power density.

But another part is actually wasted space, because the first run of the design is more modular and "blocky" with all the various sub-components. Makes the design and fab prototyping a lot faster when you chop stuff up into highly partitioned modules.


So I don't know if there will ever be an AMD CPU that uses both normal and compact at the same time. It seems likely that the C version will always be a later follow-up to the standard core, analogous to the old tick-tock cycle. The reason they can squish it down is because they've got complete understanding of how the base model worked.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Klyith posted:

It's not just less cache. I saw a neat article about it; a whole lot of what makes them compact is just the removal of "extra" space. Some of that means the compact cores have to run at lower frequencies, due to signal interference between components that are now closer together, and also power density.

But another part is actually wasted space, because the first run of the design is more modular and "blocky" with all the various sub-components. Makes the design and fab prototyping a lot faster when you chop stuff up into highly partitioned modules.


So I don't know if there will ever be an AMD CPU that uses both normal and compact at the same time. It seems likely that the C version will always be a later follow-up to the standard core, analogous to the old tick-tock cycle. The reason they can squish it down is because they've got complete understanding of how the base model worked.

Wendell from Level 1 seems to think that they *can* package a Zen chiplet and a ZenC chiplet together and completely sidestep all the problems that Intel has with P/E switching, and that they should largely present themselves as homogeneous to the OS.

https://www.youtube.com/watch?v=mquzak69LOI

hobbesmaster
Jan 28, 2008

SwissArmyDruid posted:

Wendell from Level 1 seems to think that they *can* package a Zen chiplet and a ZenC chiplet together and completely sidestep all the problems that Intel has with P/E switching, and that they should largely present themselves as homogeneous to the OS.

https://www.youtube.com/watch?v=mquzak69LOI

That’d be cpu migration or cluster switching and not an HMP implementation like Intel, right?

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

Klyith posted:

So I was like "ARM android and ios have been doing big.little for a decade, how do they handle this stuff?"

And apparently the answer is that every time in that decade there was a possibility to have two cores with different instruction set compatibility, they said "gently caress no!". They ran big cores in aarch32 mode when the little cores were still 32bit. They've been doing the same thing as AVX10 (big cores run full width at full speed, little cores run half speed) since the beginning of NEON.

It is kinda astonishing that Intel took that history and decided to ship the CPU the way they did.
https://www.mono-project.com/news/2016/09/12/arm64-icache/
Samsung made big.LITTLE cores that were different enough to cause data corruption (and thus crashes), because basically all the code that dealt with the relevant area assumed all cores were symmetrical.
This specific choice got a Linux kernel workaround.
I'd bet this discouraged CPU designers who saw it from trying weird poo poo like that.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

hobbesmaster posted:

That’d be cpu migration or cluster switching and not an HMP implementation like Intel, right?

The scheduling problems might not be as bad, but any situation where you've got fast cores and slow cores creates scheduling problems. The small Zen cores might have the same instruction set but they don't clock or behave quite the same.

hobbesmaster
Jan 28, 2008

Twerk from Home posted:

The scheduling problems might not be as bad, but any situation where you've got fast cores and slow cores creates scheduling problems. The small Zen cores might have the same instruction set but they don't clock or behave quite the same.

That'd still be HMP; on ARM SoCs implementing CPU migration or cluster switching, it's treated more like DVFS states on either a per-core-pair or per-cluster basis.

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

repiv posted:

AVX10 isn't about running instructions at different rates, it's about gating the especially wide 512bit instructions so that small cores don't have to implement them at all

the implication is that intel believes 512bit operations are too heavy for E-cores even if they're decomposed into 2x256bit ops (like they did with early AVX, implemented as 2x128bit), whether due to the very large amount of register state required, or some instructions not being decomposable, or both. so presumably AVX10 on big/little chips will only support 256bit ops, and AVX10 on server chips will support the optional 512bit ops.

I was wondering if it’d be tenable to break a 512-bit operation into four 128-bit ones, which would align with Gracemont’s 4x128b ALU configuration. Something tells me the answer is “no,” that Intel instead crammed Gracemont alongside Golden Cove in a reactionary way and ran into a hard engineering issue that tanked AVX512 on the architecture. And it’s taken developing AVX10 to make it possible to salvage it. Granted, from what I’ve read as a layman it sounds like AVX512 offers advantages over SSE/AVX2 beyond the operation size, so maybe this all works out in the long term…

repiv
Aug 13, 2009

when intel was designing AVX they did go out of their way to structure it to enable 2x128bit implementations (most horizontal operations treat the 256bit registers as two independent 128bit banks) but i'm not sure if they made the same affordance for AVX512 to be broken down all the way to 4x128bit

no matter how you divide up the compute work AVX512 still demands a huge register file though, between the doubled vector width, doubled number of registers, and extra mask registers it needs more than quadruple the scratch space that AVX did
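The "more than quadruple" figure checks out on paper. A quick back-of-the-envelope tally, counting only architectural register state (physical rename registers multiply the real cost further):

```python
# Architectural vector register state: AVX/AVX2 vs AVX-512.
avx2_bytes = 16 * 256 // 8                  # 16 x 256-bit YMM registers
avx512_bytes = 32 * 512 // 8 + 8 * 64 // 8  # 32 x 512-bit ZMM + 8 x 64-bit k-masks
print(avx2_bytes, avx512_bytes, avx512_bytes / avx2_bytes)  # 512 2112 4.125
```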

repiv fucked around with this message at 19:58 on Jul 30, 2023

Cygni
Nov 12, 2005

raring to post

Klyith posted:

So I don't know if there will ever be an AMD CPU that uses both normal and compact at the same time. It seems likely that the C version will always be a later follow-up to the standard core, analogous to the old tick-tock cycle. The reason they can squish it down is because they've got complete understanding of how the base model worked.

Strix Point, Zen5’s premium mobile part, will be a 4+8 design with Zen5/Zen5c cores and SMT enabled for all cores. It’s already been caught testing a few times. Specs could change by its 2024 launch time tho, obvi.

buglord
Jul 31, 2010

Cheating at a raffle? I sentence you to 1 year in jail! No! Two years! Three! Four! Five years! Ah! Ah! Ah! Ah!

Buglord
I think my idea of what a 7800X3D can do was a bit overstated. I want to upgrade my i7 8700 (non-K) to either an R7 7700 or an R7 7800X3D. The difference between the two on Amazon is ~$125. The local Microcenter has the R7 7700 for similar prices, but you can also get RAM and a motherboard for only $100 more total (which I'd resell because I'm on ITX). In typical cases I would probably get the 7700 and call it a day, but something I've been playing a lot more of (ever since about 2016) is Paradox games. Those apparently benefit more from the X3D suffix, as do things like Dwarf Fortress and Factorio. And while it appears that they do benefit, my wonder is by exactly how much. Here's a link with simulation game benchmarks:

https://www.anandtech.com/show/18795/the-amd-ryzen-7-7800x3d-review-a-simpler-slice-of-v-cache-for-gaming/4

While Factorio seemingly plays noticeably better, it looks like these tests are not necessarily made to show how much better the CPU would be in normal play. Dwarf Fortress shows that the X3D chips are better at worldgen and can shave a decent amount of time off bigger maps, but, again, I'm left wondering how much of anything would be felt after the worldgen.



GamersNexus shows that there's a 10% gap in Stellaris simulation speed between the 7700 and the 7800X3D. That seems significant on paper, but I'm not sure it would actually be perceived unless you put a stopwatch on your desk and did A/B testing. There are also benchmarks floating around of Factorio ticks per second, and I'm gonna wager that the upgrade from an i7 8700 is actually noticeable there.

I guess my ask is, how much should I be considering the 7800X3D? $125 extra to this platform upgrade (especially when this is ITX and AM5 motherboards are oof) is annoying.

Dr. Video Games 0031
Jul 17, 2004

The 7800X3D absolutely crushes flight sims like Microsoft Flight Simulator and DCS, and it's also good with racing sims like Assetto Corsa Competizione.

I agree though that there's a lack of testing around simulation games in general. And when some outlets do test those games, they don't always seem to test them properly, in the way fans of the games might want to see. I've seen a bunch of places test the frame rate in Cities: Skylines, for instance, but not the simulation speed.

edit: When looking for Teardown benchmarks (couldn't find any), I found out that Star Citizen loves X3D. So... there's that. This led me to seek out more space sim benchmarks, and I couldn't find any X4 benchmarks for the 7800X3D, but this post indicates that the 5800X3D offered a huge boost over the 5800X, so the 7800X3D likely offers an even bigger leap in performance. The talk on KSP forums seems to be that those games see a large benefit from X3D chips too, though I can't find any hard data to back that up.

We need someone to start benchmarking Teardown by slicing the Titanic in half with a laser sword

https://www.youtube.com/watch?v=LEy6xnVx5h8&t=6522s

I can only guess, but I imagine this is the kind of thing that's in the wheelhouse of the X3D chips, though there probably won't be a CPU in the next decade that wouldn't chug at least a little here.

Dr. Video Games 0031 fucked around with this message at 20:33 on Jul 30, 2023

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

Wendell from Level 1 seems to think that they *can* package a Zen chiplet and a ZenC chiplet together and completely sidestep all the problems that Intel has with P/E switching, and that they should largely present themselves as homogeneous to the OS.

https://www.youtube.com/watch?v=mquzak69LOI

They definitely *can*, I was only doubtful of *will*. Particularly for standard desktop.

Since the ZenC will always come a half-cycle after the main architecture, I don't think that AMD will make that combo Zen+ZenC as a desktop part. Like, it would need to be a Zen 5 performance chiplet + Zen4c efficiency chiplet. Would people want that? I mean, I guess if they want to shut down Intel on "MOAR CORES!" then a 24-core 48-thread desktop CPU is pretty much the last word.


Cygni posted:

Strix Point, Zen5’s premium mobile part, will be a 4+8 design with Zen5/Zen5c cores and SMT enabled for all cores. It’s already been caught testing a few times. Specs could change by its 2024 launch time tho, obvi.

Yeah, that makes much more sense for mobile since that's also a half-cycle later than the lead desktop and already includes plenty of re-engineering.


BlankSystemDaemon
Mar 13, 2009



Dr. Video Games 0031 posted:

The 7800X3D absolutely crushes flight sims like Microsoft Flight Simulator and DCS, and it's also good with racing sims like Assetto Corsa Competizione.

I agree though that there's a lack of testing around simulation games in general. And when some outlets do test those games, they don't always seem to test them properly, in the way fans of the games might want to see. I've seen a bunch of places test the frame rate in Cities: Skylines, for instance, but not the simulation speed.

edit: When looking for Teardown benchmarks (couldn't find any), I found out that Star Citizen loves X3D. So... there's that. This led me to seek out more space sim benchmarks, and I couldn't find any X4 benchmarks for the 7800X3D, but this post indicates that the 5800X3D offered a huge boost over the 5800X, so the 7800X3D likely offers an even bigger leap in performance. The talk on KSP forums seems to be that those games see a large benefit from X3D chips too, though I can't find any hard data to back that up.

We need someone to start benchmarking Teardown by slicing the Titanic in half with a laser sword

https://www.youtube.com/watch?v=LEy6xnVx5h8&t=6522s

I can only guess, but I imagine this is the kind of thing that's in the wheelhouse of the X3D chips, though there probably won't be a CPU in the next decade that wouldn't chug at least a little here.
In X4, with a late-game PHQ capable of producing everything including small, medium, large and extra-large ships, I was getting 30-32 fps while in the sector - which you can compare to the scores posted in this thread, which is also where I got the test savegame from.

EDIT: To be more precise, I did three different benchmark runs with a cold boot in between, and one got 30, one got 31 and one got 32.

BlankSystemDaemon fucked around with this message at 20:51 on Jul 30, 2023
