gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
trying to look at this from K8.0's perspective:

If you already had a 5600X, and were wondering about moving to a 7600X, or a 5800X3D, or an i5-12600K, or an i9-13900K:

- a baseline of 130 FPS
- a 7600X would require the full cost of the CPU, the motherboard, and the DDR5 RAM; 870 USD for +45 FPS
- a 5800X3D would only require the cost of the CPU; 400 USD for +44 FPS
- an i5-12600K (with DDR4 RAM) would require the cost of the CPU and the board; 510 USD for +12 FPS
- an i9-13900K (with DDR4 RAM) would require the cost of the CPU and the board; 890 USD for +48 FPS

calculating cost-per-uplift, and putting that in order:

1. 5800X3D: 9.09 dollars per 1 FPS of uplift
2. i9-13900K: 18.54 dollars per 1 FPS of uplift
3. 7600X: 19.33 dollars per 1 FPS of uplift
4. i5-12600K: 42.5 dollars per 1 FPS of uplift

HWUB doesn't have the i5-13600K yet, so that's throwing it off, but if we assume something like 3% less FPS than a 13900K, and a cost of 320 USD, then it would land at about 172 FPS for the cost of the CPU and the board, or 460 USD for +42 FPS of uplift, or about 10.95 dollars per 1 FPS of uplift, putting it solidly in second place behind the 5800X3D.
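
to make the arithmetic explicit, here's a quick Python sketch of the same calculation - the FPS and price figures are just the estimates from this post, and the 13600K row is my own guess rather than HWUB data:

code:

# cost-per-uplift sketch; all FPS/USD figures are this post's estimates,
# not HWUB data, and the 13600K entry is a guess (~3% below a 13900K)
BASELINE_FPS = 130  # assumed 5600X baseline

upgrades = {             # name: (platform cost in USD, resulting FPS)
    "5800X3D":          (400, 174),  # CPU only, reuses the AM4 board
    "7600X":            (870, 175),  # CPU + board + DDR5
    "i5-12600K":        (510, 142),  # CPU + board, DDR4
    "i9-13900K":        (890, 178),  # CPU + board, DDR4
    "i5-13600K (est.)": (460, 172),  # CPU + board, guessed numbers
}

ranked = sorted(upgrades.items(),
                key=lambda kv: kv[1][0] / (kv[1][1] - BASELINE_FPS))
for name, (cost, fps) in ranked:
    uplift = fps - BASELINE_FPS
    print(f"{name}: {cost / uplift:.2f} USD per 1 FPS of uplift (+{uplift} FPS for {cost} USD)")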

of course, if you already have a 5600X then I don't really think you'd be hankering for an upgrade - it's a lot more interesting if you're still on something like a Ryzen 1600AF or a Ryzen 3600. We don't have charts for that, though, so you kind of have to eyeball it against, say, TechPowerUp's rankings, where an i3-10100 is 66% of an i5-13600K. That moves the baseline down to 103 FPS, but it also changes the platform costs, because now moving to a 5800X3D requires a board, where it wouldn't if you were already on AM4.

you could defray some of those costs by assuming you can sell the old parts you're moving away from, but then that just complicates the calculation even further, to the point where I'd understand using a baseline of zero because there are otherwise too many factors to account for in a youtube video.


lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.
the big problem (as K8.0 already said) is that HUB's RAM and motherboard prices are pretty unrealistic for the average buyer, because they're trying to maximise performance rather than represent a decent-value build with those chips, which throws the whole comparison off

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

Cygni posted:

My point is that it's still cost per frame, but it's cost per frame with an arbitrarily adjusted value. From a reviewer's standpoint, that arbitrary adjustment is relative to the literal thousands of different CPUs someone might own (if any at all). That seems tough to meaningfully address in a standardized environment. As a reviewer, the best thing you can do for a reader is to assume a zero starting point and do a... cost/frame graph using your standardized platforms and data, with outside variables as controlled as you can make them. Gathering the retail cost and performance mean onto one slide is useful at-a-glance comparative information for lots of consumers with different uses and starting points, not just in-situ upgrades from other recent products, which is only one use case.

On the "uplift"/price concept specific to upgrades, attempting to calculate comparative numbers based on looking at your current CPU model's performance in someone else's test configuration (or maybe worse, trying to run the same software at home and calculate with those numbers), then trying to compare it to a speculative purchase mathematically is going to inherently create bad data. The purpose of benchmarking in a standardized configuration is to compare those devices to each other across that test suite in those conditions, not directly to yours at home with its own configuration and 42 instances of BonziBuddy. At best, those calculated numbers would create a ball park maybe sorta kinda expectation of performance uplift, but extremely imprecise. At worst, you are doing math with made up numbers mixed in and producing nothing of value. That seems like more of a "garbage metric" than a reviewers price/performance chart based on actual data, to me. But thats just my opinion as someone who hasn't taken stats in like two decades, maybe im wrong! Someone please correct me if im missing something.

Ignoring the catty part of this, I will again say that I don't believe the majority or even a plurality of DIY PC builders are regularly flipping their old PC parts on eBay. I would also probably say that by the time most people upgrade, very few parts are going to be worth hundreds of dollars each. A 6700K is like $80 on eBay. Ryzen 1700s are often in the $60s after shipping. Personally, I am not bothering with listing, packing, boxing, shipping, or dealing with rando strangers and potential scammers unless I get a fairly significant amount of money. I would frankly rather do anything else with my time, and I'm not that well off. They got beer at the store, man.

All that said, if you as a buyer want to factor this into your own decisions, and you flip fast enough that you aren't leaving lots of value on the table and don't want to pass it to a family member or whatever, have at it! I just don't think that's most people.

I wasn't being catty. I didn't mean you specifically, just any buyer. There are a bunch of people here with income high enough not to care, but the average review consumer is not that person. And yes, it's not realistic for reviewers to try to estimate numbers based on resale, but they should be reminding viewers to take it into account any time they are talking about upgrade value.

There's nothing stopping them from benching some older configurations. GN benched a 1700X for their review. Throw in a few CPUs that sold a lot, like the 2700X, 8700K, 3600X. Unlike the new hardware, you don't need to bench older systems for as many passes: expected performance is already known, so as long as your data falls in line with it, you can do one or two passes and be done. Even if you're off some, it doesn't matter that much. This isn't hard science, and reviewers are constantly off from each other by several percent. Just being in the right ballpark will set the baseline effectively enough that people can understand what they're actually getting for their money. The closer you get to a realistic baseline, the better the data is, not worse - especially when the average person considering a CPU already owns one that delivers at least 60-80% of its gaming performance.

Additionally, that kind of work can generally be done ahead of time, meaning it doesn't have to contribute to the massive benchmarking grind of a short review embargo. And once you have the data, producing several different charts showing uplift value relative to several baselines is trivial if you have a sane workflow (see the sketch below). You can throw them up on screen for a few seconds each and let people pause and consider the closest approximation for them. I'm fine with also showing a value from zero in that scenario, because there are plenty of people who are building systems from scratch - I'm just not fine with it being the only thing, because it creates a grossly inaccurate impression of value in the minds of people who lack the critical thinking to understand what's wrong with it.
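
To show how little work that charting step is once the bench data exists, here's a hypothetical Python sketch - every number in it is a placeholder, not anyone's real data:

code:

# Hypothetical sketch: given bench results, an uplift-value table per
# baseline CPU is one nested loop. All numbers below are placeholders.
baselines = {"1700X": 90, "8700K": 110, "3600X": 115}        # name: FPS
candidates = {"5800X3D": (400, 174), "13600K": (460, 172)}   # name: (USD, FPS)

for base, base_fps in baselines.items():
    print(f"--- upgrade value vs. a {base} ({base_fps} FPS) ---")
    for name, (cost, fps) in candidates.items():
        uplift = fps - base_fps
        print(f"{name}: +{uplift} FPS at {cost / uplift:.2f} USD per FPS")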

Yes, users estimating where their current system falls based on benches of a few older systems is going to be approximating where performance lies - but it's a hell of a lot more accurate and meaningful than implying that a 5600X is almost as good of an upgrade as a 5800X3D, or that a 7700X is a worse value than a 5800X. For anyone with an existing system, neither of those is even in the ballpark of reality. If you're going to bother producing value data, it should not be done the way HWUB does it - although I do give them credit for picking motherboard and memory prices, even if they aren't always realistic.

K8.0 fucked around with this message at 04:57 on Oct 21, 2022

Josh Lyman
May 24, 2009


Reusing your old hardware obviously affects which upgrade offers best bang for buck, but reselling doesn’t affect the rank order of which new system is best bang for buck.

In any case, don’t the majority of people build all new machines these days? It’s not like the Athlon days where you’d want to upgrade your CPU every year because you’d get huge improvements in real world usage. Quite a few people, including myself, are on DDR3 systems so we won’t be reusing anything.

forest spirit
Apr 6, 2009

Frigate Hetman Sahaidachny
First to Fight Scuttle, First to Fall Sink


I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat

Yudo
May 15, 2003

Josh Lyman posted:

Reusing your old hardware obviously affects which upgrade offers best bang for buck, but reselling doesn’t affect the rank order of which new system is best bang for buck.

In any case, don’t the majority of people build all new machines these days? It’s not like the Athlon days where you’d want to upgrade your CPU every year because you’d get huge improvements in real world usage. Quite a few people, including myself, are on DDR3 systems so we won’t be reusing anything.

I am using the case and PSU from a Haswell build, as well as some of the storage. Like you, I had to ditch the MB and RAM, but I've recycled nearly every other part. My AM4 MB has been host to two different Ryzen CPUs, now a 5900X that was on Amazon firesale and should be a nice boost to ride out the first generation of AMD DDR5 parts. That AM4 has been so rock solid (knocking on wood) makes me quite reluctant to jump ship. I have been using the same ex-miner 1080 for years, over 3 different CPUs, and the same DDR4 for... a long time. Ditto with the cooler, the fans, storage, etc.

Prices have sucked for so long that reusing as many parts as possible is necessary for me not to break a budget, or even just not to feel ripped off. In the next 6 months or so I want a new PSU and video card, but there will still be parts in my PC that are nearly a decade old.

CaptainSarcastic
Jul 6, 2013



forest spirit posted:

I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat

That's basically what I did - dropped a 5800X3D into my main system (X570) and upgraded the GPU, RAM, and case, and I'm building a secondary system in my old case around the 3600X, RAM, and 2070 Super I upgraded from. I already had a 650W EVGA PSU on hand, so the only things I bought were a B550 motherboard and a new CPU cooler. Eh, I guess I also bought a couple NVMe drives for it, but that wasn't strictly necessary - I have spare SATA SSDs on hand, too. Between the two systems the perf/cost ratio probably averages out okay.

ijyt
Apr 10, 2012

hobbesmaster posted:

The problem with all these games is that a real-time game loop’s “tick” rate and Amdahl’s law conspire to make that not work as well as you’d hope.

Could you give a quick'n'simple rundown of that law?

Cantide
Jun 13, 2001
Pillbug

ijyt posted:

Could you give a quick'n'simple rundown of that law?

https://en.wikipedia.org/wiki/Amdahl%27s_law

Amdahl's law gives the theoretical speedup in the latency of a program's execution as a function of the number of processors executing it. The speedup is limited by the serial part of the program: for example, if 95% of the program can be parallelized, the theoretical maximum speedup from parallel computing is 20 times.

Edit: games are probably more in the "50% can be parallelized" region of that graph ...
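
If you want to poke at the formula itself, here's a minimal sketch - just the textbook definition, nothing game-specific:

code:

# Amdahl's law: speedup of a program when a fraction p of it is
# parallelized across n processors; the serial part (1 - p) caps the gain.
def amdahl_speedup(p: float, n: float) -> float:
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, float("inf")))  # 20.0  (the 95% example above)
print(amdahl_speedup(0.50, float("inf")))  # 2.0   (the ~50% games region)
print(amdahl_speedup(0.50, 16))            # ~1.88 (16 cores barely help at 50%)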

Cantide fucked around with this message at 09:02 on Oct 21, 2022

BlankSystemDaemon
Mar 13, 2009



ijyt posted:

Could you give a quick'n'simple rundown of that law?
To add to what Cantide said (none of which I disagree with), I think it's important to remember that in basically every game published to date, the performance bottleneck comes from the part of the engine responsible for making sure things get rendered in time to be displayed on a screen.

If you optimize for its logic state to update at 60 Hz, that puts a hard realtime limit of around 17 ms per tick - which isn't exactly a whole lot of time.
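
To make that budget concrete, here's a toy fixed-timestep loop sketch - hypothetical, not lifted from any real engine:

code:

import time

TICK_HZ = 60
BUDGET_S = 1.0 / TICK_HZ  # ~16.7 ms of wall time per logic tick

def run_logic_loop(update_game_state, n_ticks=600):
    # any tick that overruns its slice is, in a real engine,
    # a dropped frame and visible stutter
    for tick in range(n_ticks):
        start = time.perf_counter()
        update_game_state(tick)  # all simulation work for this tick
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET_S:
            print(f"tick {tick}: {elapsed * 1e3:.1f} ms blew the 16.7 ms budget")
        else:
            time.sleep(BUDGET_S - elapsed)  # idle out the rest of the slice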

Add to that that you have to program to avoid lock contention and race conditions, and to make the best use of the OS's job control and scheduler pre-emption - all problems with no one-size-fits-all solution - and it's perhaps easier to understand why throwing more cores at games isn't ever gonna make them much faster.

What some game developers have gotten better at is putting non-critical tasks on separate threads and having the fast path of the code be as light as possible, but since a lot of game development is still proprietary, this knowledge isn't really shared broadly so everyone gets to reinvent the wheel.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
having lots of cores is nice because you can watch all the boxes in Task Manager :)

Truga
May 4, 2014
Lipstick Apathy

gradenko_2000 posted:

having lots of cores is nice because you can watch all the lines in top :)
fyp
realtalk though, I bought a 3950x a couple years ago and it's loving amazing. i don't load it as much these days and will probably upgrade to a 5800x3d when the 7800x3d kills its price, but it's still really, really nice to be able to just ssh home and run something at 4.4ghz and 32 threads for work when I want to, and wait a couple hours less, because all the cheap 16-core epycs at work top out at 2.4ghz :v:

ijyt
Apr 10, 2012

Learned something new today, thanks both!

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

hobbesmaster posted:

The problem with all these games is that a real-time game loop’s “tick” rate and Amdahl’s law conspire to make that not work as well as you’d hope.

I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. It might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging), but it seems like such a perfect case.

Kibner
Oct 21, 2008

Acguy Supremacy

Subjunctive posted:

I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. It might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging), but it seems like such a perfect case.

I do believe the devs have talked about this before and the problem they have been unable to solve is the dependency chain. Things have to be updated in a certain order or everything breaks and they have not been able to untangle that. At least, that is what I remember the last time this came up a year or two ago.

distortion park
Apr 25, 2011


The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270 EUR vs 440 EUR. They're both gonna be good value, I think, until new-generation mobo and RAM prices come down (although maybe the 13600k, and the 13400 when released, change that?)

hobbesmaster
Jan 28, 2008

Subjunctive posted:

I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. It might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging), but it seems like such a perfect case.

An update at tick x in a simulation generally requires the state at tick x-1. The processes would have to be completely independent of outside game state.
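
One common workaround (roughly the MVCC-ish idea mentioned above - this is a toy sketch, not how DF actually works) is double-buffering: every system reads the frozen tick x-1 snapshot and writes only its own slice of the tick x state, so the reads never race:

code:

# Toy double-buffered tick update: systems read the immutable previous
# state and each writes a disjoint slice of the next state, so they can
# run in parallel without racing on reads.
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy

def step(prev_state, systems):
    next_state = deepcopy(prev_state)
    with ThreadPoolExecutor() as pool:
        # every system sees the same frozen tick x-1 snapshot
        futures = [pool.submit(s, prev_state, next_state) for s in systems]
        for f in futures:
            f.result()  # all systems must finish before tick x+1 starts
    return next_state

def thermal(prev, nxt): nxt["heat"] = prev["heat"] * 0.99   # touches "heat" only
def growth(prev, nxt):  nxt["plants"] = prev["plants"] + 1  # touches "plants" only

state = {"heat": 100.0, "plants": 0}
for _ in range(3):
    state = step(state, [thermal, growth])
print(state)  # {'heat': 97.0299, 'plants': 3}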

Klyith
Aug 3, 2007

GBS Pledge Week
Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything.

In principle these types of sim games with lots of independent or semi-independent systems are a good place for multithreading... but you have to design for that from the start. And basically all of these games that have gotten big started out as tiny or small-ish indie projects. DF, Cities Skylines, Factorio, etc. They don't run into scaling problems until they're expanding everything 10x bigger and better with all the money they got from their game being super-popular.

By then it's way too late.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Klyith posted:

Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything.

In principle these types of sim games with lots of independent or semi-independent systems are a good place for multithreading... but you have to design for that from the start. And basically all of these games that have gotten big started out as tiny or small-ish indie projects. DF, Cities Skylines, Factorio, etc. They don't run into scaling problems until they're expanding everything 10x bigger and better with all the money they got from their game being super-popular.

By then it's way too late.

If these games get successful enough, they get Switch ports, where you have to find a way to live with 1GHz ARM cores that are much slower per clock than desktop cores to boot: https://www.factorio.com/blog/post/factorio-on-nintendo-switch

Klyith
Aug 3, 2007

GBS Pledge Week

Twerk from Home posted:

If these games get successful enough, they get Switch ports, where you have to find a way to live with 1GHz ARM cores that are much slower per clock than desktop cores to boot: https://www.factorio.com/blog/post/factorio-on-nintendo-switch

Yeah, and Factorio is single-threaded. I doubt that will change on Switch. Which is why they say:

quote:

But don't expect to be able to build mega-bases without UPS starting to drop, sometimes significantly.

Factorio has fine performance even single-threaded; the maps people build to benchmark CPU performance are insane. If you are just playing the game like a normal person, the 1 GHz Switch CPU will be fine, or at least acceptable.

Yudo
May 15, 2003

distortion park posted:

The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270 EUR vs 440 EUR. They're both gonna be good value, I think, until new-generation mobo and RAM prices come down (although maybe the 13600k, and the 13400 when released, change that?)

Either will be fine for some time, though the 5800x3d is perhaps a better long-term bet for games vs. the 5700x. As for the Intel stuff, you can use an older motherboard and DDR4: you don't need the latest and greatest if you want to save money.

lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.

distortion park posted:

The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270 EUR vs 440 EUR. They're both gonna be good value, I think, until new-generation mobo and RAM prices come down (although maybe the 13600k, and the 13400 when released, change that?)
the 13600K is out already and B660/Z690 motherboard prices aren't too bad (Z790 is totally unnecessary). it's only the AM4 motherboards that are priced terribly at the moment

sensible starting points are 5800X3D if you already have a compatible motherboard, 13600K if you want the best value new CPU, and 5600 if you're on a budget. upgrading beyond the 13600K doesn't make too much sense unless you have a heavy productivity workload

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Klyith posted:

Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything.

Up until fairly recently, didn't he have no way to see what the execution time of each section of code was? I remember in the DF thread everyone was flabbergasted that he couldn't actually check the performance gain/loss of a change without playtesting it and going 'yeah, seems faster to me'.

Klyith
Aug 3, 2007

GBS Pledge Week

Methylethylaldehyde posted:

Up until fairly recently, didn't he have no way to see what the execution time of each section of code was? I remember in the DF thread everyone was flabbergasted that he couldn't actually check the performance gain/loss of a change without playtesting it and going 'yeah, seems faster to me'.

I have not paid much attention to DF in quite a while, but the particular bit I know about was that they put out the source of an older game, which shared the same graphics code for rendering sprites, so the community could help get that redone in SDL. And it was :chloe:

Dwarf Fortress code is about as sane and organized as Dwarf Fortress dwarves

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Yeah, he needs to put out little “challenge” programs like that so the community can hyper-optimize them and then he can take the little wins back to the real game.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Klyith posted:

I have not paid much attention to DF in quite a while, but the particular bit I know about was that they put out the source of an older game, which shared the same graphics code for rendering sprites, so the community could help get that redone in SDL. And it was :chloe:

Dwarf Fortress code is about as sane and organized as Dwarf Fortress dwarves

"Why do forts get so slow the longer you play them?"

One of the top "I swear to god if this is true I will scream" guesses was: "The emotion system runs through every item the dwarf sees and experiences, does a bunch of math based on the dwarf's individual tastes, sorts that list using the worst implementation of a simple sort possible, then goes down the ranking adjusting various moods. It does this per frame, as part of the core game logic loop, wedged slightly behind and underneath the path-finding code, because if a dwarf sees something distressing enough, we want him to turn around and go another way!"

Someone tested it with a bare fort and a fort covered in thousands of intricate engravings, and the second one was measurably slower.

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord

forest spirit posted:

I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat

Yep. The last hurrah for AM4. I plan on a long run with this one. Cheers!

KingKapalone
Dec 20, 2005
1/16 Native American + 1/2 Hungarian = Totally Badass
My friend was taking out his big cooler to make room in his case but it ripped the 5800x3d out of the socket when he lifted it out. He had just installed it. Very strange that the paste alone stuck them together.

Anyway, no pins broke but tons are bent. He went and bought a new one since he's impatient and they're selling out. I'm going to take the old one. What's the best way to bend them back and is it risky to install it in my mobo? Would it just not boot or would it fry it?

ijyt
Apr 10, 2012

KingKapalone posted:

My friend was taking out his big cooler to make room in his case but it ripped the 5800x3d out of the socket when he lifted it out. He had just installed it. Very strange that the paste alone stuck them together.

Anyway, no pins broke but tons are bent. He went and bought a new one since he's impatient and they're selling out. I'm going to take the old one. What's the best way to bend them back and is it risky to install it in my mobo? Would it just not boot or would it fry it?

No real risk. The best way to bend them back, in my experience, is a metal ruler run between the rows of pins and/or a mechanical pencil tip.

And this is why you should twist the cooler first before lifting :eng101:

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
I've used razor blades to straighten pins on three different AMD CPUs so far

CaptainSarcastic
Jul 6, 2013



gradenko_2000 posted:

I've used razor blades to straighten pins on three different AMD CPUs so far

Yeah, that was my usual go-to for fixing bent pins.

kliras
Mar 27, 2021
a boxcutter is probably a little safer. it's what i used, on top of some small flat iFixit screwdriver heads, to fix some of the worse cases

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Klyith posted:

It's like, generally correct but when it's wrong it's really wrong. For ex bulldozer getting perversely better over time, or the 8600k getting hosed by having only 4 threads.

bulldozer never got perversely better over time. bulldozer scoring 10fps in some random title where a 2500K only scores 8fps is not the win people think it is.

nobody wanted to be using an 8150 a day past 2017

like good lord imagine holding a candle for loving bulldozer lol

also, the 8600K having 6 threads isn't really a problem in any title except Far Cry 5, which is the title where a stock 2C4T Pentium outperforms a 5.2 GHz 8600K - that is completely and obviously something wrong with either the game or the benchmark. other than that, like... it's not the best performer in Battlefield V/2042 I guess, but that series completely imploded on itself so who cares.

8600K generally performs the same as a 7700K at equivalent clocks, and while a 7700K is obviously on the slower side as far as MT perf these days, it's still very playable in the overwhelming majority of titles. The exceptions are some games that just poo poo themselves inexplicably, like FC5, and I'm not convinced that isn't just some quirk of the engine given that (again) it's being outperformed by A Literal Pentium 2C.

and in contrast, remember that the 1600 was and is garbage at gaming too. You really don't want to game on a 1600 in modern titles either. Single-thread perf still very much matters.

Paul MaudDib fucked around with this message at 04:52 on Oct 23, 2022

Indiana_Krom
Jun 18, 2007
Net Slacker
Far Cry 3/4/5 (I don't have 6) are heavily single-thread dependent and don't scale well, or sometimes at all, with more cores. They probably also hit memory bandwidth/cache really hard, especially the encrypted-DRM flavors of the later games (not only is the engine poorly optimized, but it has to have its memory and executable constantly decrypted/encrypted on the fly in software).

Like seriously, Far Cry 3 doesn't perform any better on a 9900K/RTX 3080 Ti than it does on a 2700K/GTX 680. It is really odd given that Far Cry 2/Dunia 1 was one of the first game engines to show a major benefit from going to 4 cores instead of 2.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The DRM in Far Cry 6 was so bad that it created really hard stutter, to the point the audio cut out. Far Cry 5 ran flawlessly.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Combat Pretzel posted:

The DRM in Far Cry 6 was so bad that it created really hard stutter, to the point the audio cut out. Far Cry 5 ran flawlessly.

that's why Chinese gamers have a term called "legit version victim"

lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.
yeah, both far cry 5 & 6 significantly benefit from the 5800x3d's extra cache - makes sense if goofy drm stuff is why

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

Indiana_Krom posted:

Far Cry 3/4/5 (I don't have 6) are heavily single-thread dependent and don't scale well, or sometimes at all, with more cores. They probably also hit memory bandwidth/cache really hard, especially the encrypted-DRM flavors of the later games (not only is the engine poorly optimized, but it has to have its memory and executable constantly decrypted/encrypted on the fly in software).

Like seriously, Far Cry 3 doesn't perform any better on a 9900K/RTX 3080 Ti than it does on a 2700K/GTX 680. It is really odd given that Far Cry 2/Dunia 1 was one of the first game engines to show a major benefit from going to 4 cores instead of 2.

FWIW, I saw a massive boost in FC3 going from a 6400 to an 11600K.

Rinkles fucked around with this message at 14:31 on Oct 23, 2022

Alchenar
Apr 9, 2008

We're at the stage where everyone who bought AM5 is starting to complain about regular crashes - one more reason to wait a little and let the BIOS versions mature a bit.


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I hope the X3D versions are a hardware revision fixing some egregious poo poo they identified shortly before the release of the current ones.
