|
trying to look at this from K8.0's perspective: If you already had a 5600X, and were wondering about moving to a 7600X, or a 5800X3D, or an i5-12600K, or an i9-13900K:

- a baseline of 130 FPS
- a 7600X would require the full cost of the CPU, the motherboard, and the DDR5 RAM; 870 USD for +45 FPS
- a 5800X3D would only require the cost of the CPU; 400 USD for +44 FPS
- an i5-12600K (with DDR4 RAM) would require the cost of the CPU and the board; 510 USD for +12 FPS
- an i9-13900K (with DDR4 RAM) would require the cost of the CPU and the board; 890 USD for +48 FPS

calculating cost-per-uplift, and putting that in order:

1. 5800X3D: 9.09 dollars per 1 FPS of uplift
2. i9-13900K: 18.54 dollars per 1 FPS of uplift
3. 7600X: 19.33 dollars per 1 FPS of uplift
4. i5-12600K: 42.50 dollars per 1 FPS of uplift

HWUB doesn't have the i5-13600K yet, so that's throwing it off, but if we assume something like 3% less FPS than a 13900K, and a cost of 320 USD, then it would land at about 172 FPS for the cost of the CPU and the board, or 460 USD for +42 FPS of uplift, or about 10.95 dollars per 1 FPS of uplift, putting it solidly in second place behind the 5800X3D.

of course, if you already have a 5600X then I don't really think you'd be hankering for an upgrade - it's a lot more interesting if you're still on something like a Ryzen 1600AF or a Ryzen 3600, but we don't have charts for that so you kind of have to eyeball it against, say, TechPowerUp's rankings, where an i3-10100 is 66% of an i5-13600K, which moves the baseline down to 103 FPS, but also changes around the platform costs because now moving to a 5800X3D requires a board, where it wouldn't if you were already on AM4.

you could defray some of those costs by assuming you can sell the old parts you're moving away from, but then that just complicates the calculation even further, to the point where I'd understand using a baseline of zero because there are otherwise too many factors to account for in a youtube video.
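the arithmetic above as a quick script, if anyone wants to plug in their own numbers - the prices and FPS figures are the ones quoted, so treat them as a snapshot, not current pricing:

```python
# cost-per-uplift for each upgrade path, using the quoted platform costs
# and FPS numbers (snapshot prices, not current ones)
baseline_fps = 130  # 5600X

upgrades = {
    "5800X3D":   (400, 174),  # CPU only (reuses AM4 board + DDR4)
    "7600X":     (870, 175),  # CPU + AM5 board + DDR5
    "i5-12600K": (510, 142),  # CPU + board (DDR4)
    "i9-13900K": (890, 178),  # CPU + board (DDR4)
}

# rank by dollars per FPS of uplift over the baseline, cheapest first
ranked = sorted(
    (cost / (fps - baseline_fps), name) for name, (cost, fps) in upgrades.items()
)
for dollars_per_fps, name in ranked:
    print(f"{name}: {dollars_per_fps:.2f} USD per +1 FPS")
```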
|
# ? Oct 21, 2022 04:29 |
|
|
|
the big problem (as K8.0 already said) is that HUB's RAM and motherboard prices are pretty unrealistic for the average buyer because they're trying to maximise performance not represent like, a decent value build with those chips, which throws the whole comparison off
|
# ? Oct 21, 2022 04:36 |
|
Cygni posted:My point is that it is still cost per frame, but it's cost per frame with an arbitrarily adjusted value. From a reviewer standpoint, that arbitrary adjustment is relative to the literal thousands of different CPUs someone might own (if any at all). That seems tough to meaningfully address in a standardized environment. As a reviewer, the best thing you can do for a reader would be to assume a zero starting point and do a... cost/frame graph using your standardized platforms and data, with outside variables as controlled as you can make them. Gathering the retail cost and performance mean onto one slide is useful at-a-glance comparative information for lots of consumers with different uses and starting points, not just in situ upgrades from other recent products, which is only one use case.

I wasn't being catty. I didn't mean you specifically, just any buyer. There are a bunch of people here with income high enough not to care, but the average review consumer is not that person. And yes, it's not realistic for reviewers to try to estimate numbers based on resale, but they should be reminding viewers to take it into account any time they are talking about upgrade value.

There's nothing stopping them from benching some older configurations. GN benched a 1700X for their review. Throw in a few CPUs that sold a lot, like the 2700X, 8700K, 3600X. Unlike the new hardware, you don't need to bench older systems for as many passes. Expected performance is already known, so as long as your data falls in line with it you can do one or two passes and be done. Even if you're off some, it doesn't matter that much. This isn't hard science, and reviewers are constantly off from each other by several percent. Just being in the right ballpark will set the baseline effectively enough that people can understand what they're actually getting for their money. The closer you get to a realistic baseline, the better the data is, not worse. Especially when the average person considering a CPU has a CPU that is already at least 60-80% of its performance for games.

Additionally, that kind of work can generally be done ahead of time, meaning it doesn't have to contribute to the massive benchmarking grind of a short review embargo. And once you have the data, producing several different charts showing uplift value relative to several baselines is trivial if you have a sane workflow. You can throw them up on screen for a few seconds each and let people pause and consider the closest approximation for them.

I'm fine with also showing a value from zero in that scenario, because there are plenty of people who are building systems from scratch - I'm just not fine with it being the only thing, because it creates a grossly inaccurate impression of value in the minds of people who lack the critical thinking to understand what's wrong with it. Yes, users estimating where their current system falls based on benches of a few older systems is going to be approximating where performance lies - but it's a hell of a lot more accurate and meaningful than implying that a 5600X is almost as good of an upgrade as a 5800X3D, or that a 7700X is a worse value than a 5800X. For anyone with an existing system, neither of those is even in the ballpark of reality.

If you're going to bother producing value data, it should not be done the way HWUB does it - although I do give them credit for picking motherboard and memory prices, even if they aren't always realistic.

K8.0 fucked around with this message at 04:57 on Oct 21, 2022 |
# ? Oct 21, 2022 04:52 |
|
Reusing your old hardware obviously affects which upgrade offers best bang for buck, but reselling doesn’t affect the rank order of which new system is best bang for buck. In any case, don’t the majority of people build all new machines these days? It’s not like the Athlon days where you’d want to upgrade your CPU every year because you’d get huge improvements in real world usage. Quite a few people, including myself, are on DDR3 systems so we won’t be reusing anything.
|
# ? Oct 21, 2022 05:09 |
|
I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat
|
# ? Oct 21, 2022 05:12 |
|
Josh Lyman posted:Reusing your old hardware obviously affects which upgrade offers best bang for buck, but reselling doesn’t affect the rank order of which new system is best bang for buck. I am using the case and PSU from a Haswell build as well as some of the storage. Like you I had to ditch the MB and RAM, but nearly every other part I have recycled. My AM4 MB has been host to two different Ryzen CPUs, now a 5900x that was on Amazon firesale that should be a nice boost to ride out the first generation AMD DDR5 parts: that AM4 has been so rock solid (knocking on wood) makes me quite reluctant to jump ship. I have been using the same ex-miner 1080 for years, over 3 different CPUs and the same DDR4 for...a long time. Ditto with the cooler, the fans, storage, etc. Prices have sucked for so long that it is necessary to reuse as many parts as possible for me not to break a budget or even just not to feel ripped off. In the next 6 months or so, I want a new PSU and video card, but there will still be parts in my PC that are nearly a decade old.
|
# ? Oct 21, 2022 05:29 |
|
forest spirit posted:I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat That's basically what I did - dropped a 5800X3D in my main system (X570) and upgraded the GPU, RAM, and case, and I'm building a secondary system in my old case around the 3600X, RAM, and 2070 Super I upgraded from. I already had a 650W eVGA PSU on hand, so the only things I bought were a B550 motherboard and a new CPU cooler. Eh, I guess I also bought a couple NVME drives for it, but that wasn't strictly necessary - I have spare SATA SSDs on-hand, too. Between the two systems the perf/cost ratio probably averages out okay.
|
# ? Oct 21, 2022 07:34 |
|
hobbesmaster posted:The problem with all these games is that a real time game loop’s “tick” rate and Amdahl’s law conspire to make that not work as well as you’d hope. Could you give a quick'n'simple rundown of that law?
|
# ? Oct 21, 2022 08:09 |
|
ijyt posted:Could you give a quick'n'simple rundown of that law? https://en.wikipedia.org/wiki/Amdahl%27s_law The theoretical speedup of the latency of the execution of a program as a function of the number of processors executing it, according to Amdahl's law. The speedup is limited by the serial part of the program. For example, if 95% of the program can be parallelized, the theoretical maximum speedup using parallel computing would be 20 times. Edit: Games are probably more in the 50% can be parallelized graph region ... Cantide fucked around with this message at 09:02 on Oct 21, 2022 |
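the formula itself is simple enough to sketch in a few lines (this is just standard Amdahl's law; the example fractions are illustrative):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Theoretical speedup when only part of the work can be parallelized.

    The serial part (1 - parallel_fraction) always runs at full length,
    which is what caps the speedup no matter how many cores you add.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# 95% parallelizable: approaches the 20x cap even with an absurd core count
print(round(amdahl_speedup(0.95, 1_000_000), 2))
# 50% parallelizable (closer to a game loop): stuck under 2x
print(round(amdahl_speedup(0.50, 16), 2))
```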
# ? Oct 21, 2022 08:59 |
ijyt posted:Could you give a quick'n'simple rundown of that law? If you optimize for the game's logic states to update at 60Hz, that puts a hard realtime limit at around 17 ms per tick - which isn't exactly a whole lot of time. Add to that the fact that you have to program to avoid lock contention and race conditions, and to make the best use of the OS's job control/scheduler pre-emption - problems with no one-size-fits-all solution - and it's perhaps easier to understand why throwing more cores at games isn't ever gonna make them much faster. What some game developers have gotten better at is putting non-critical tasks on separate threads and keeping the fast path of the code as light as possible, but since a lot of game development is still proprietary, this knowledge isn't really shared broadly, so everyone gets to reinvent the wheel.
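as a toy illustration of that "keep the fast path light, push non-critical work elsewhere" pattern - this is a Python stand-in of my own, not how any particular engine actually does it:

```python
import queue
import threading
import time

# background worker for non-critical tasks (logging, autosave, etc.)
background = queue.Queue()
results = []

def worker():
    while True:
        job = background.get()
        if job is None:  # sentinel: shut down
            break
        job()  # runs off the hot path
        background.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

FRAME_BUDGET = 1.0 / 60  # ~16.7 ms hard realtime budget at 60 Hz

for tick in range(3):
    start = time.perf_counter()
    # ... critical game-state update would run here, kept as light as possible ...
    background.put(lambda tick=tick: results.append(tick))  # defer the rest
    elapsed = time.perf_counter() - start
    # sleep off whatever budget remains before the next tick
    time.sleep(max(0.0, FRAME_BUDGET - elapsed))

background.join()   # wait for the deferred work to drain
background.put(None)
t.join()
print(results)  # the deferred jobs completed without blocking the loop
```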
|
|
# ? Oct 21, 2022 09:35 |
|
having lots of cores is nice because you can watch all the boxes in Task Manager
|
# ? Oct 21, 2022 10:01 |
|
gradenko_2000 posted:having lots of cores is nice because you can watch all the lines in top

realtalk though, I bought a 3950X a couple years ago and it's loving amazing. i don't load it as much anymore these days and will probably upgrade to a 5800X3D when the 7800X3D kills its price, but it's still really really nice to be able to just ssh home and run something at 4.4GHz and 32 threads for work when I want to, and wait a couple hours less, because all the cheap 16-core EPYCs at work top out at 2.4GHz
|
# ? Oct 21, 2022 10:47 |
|
Learned something new today, thanks both!
|
# ? Oct 21, 2022 10:50 |
|
hobbesmaster posted:The problem with all these games is that a real time game loop’s “tick” rate and Amdahl’s law conspire to make that not work as well as you’d hope. I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. it might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging) but it seems like such a perfect case.
|
# ? Oct 21, 2022 17:23 |
|
Subjunctive posted:I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. it might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging) but it seems like such a perfect case. I do believe the devs have talked about this before and the problem they have been unable to solve is the dependency chain. Things have to be updated in a certain order or everything breaks and they have not been able to untangle that. At least, that is what I remember the last time this came up a year or two ago.
|
# ? Oct 21, 2022 17:34 |
|
The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270eur vs 440eur. They're both gonna be good value though I think until the new generation mobo and ram prices come down (although maybe the 13600k and the 13400 when released change that?)
|
# ? Oct 21, 2022 18:27 |
|
Subjunctive posted:I don’t expect linear improvement, obviously, but DF at least has so many independent systems that are updated every N ticks that I think it could fan out quite widely. Processing all the thermal transfer, fluid propagation, pathing and “detection” for every moving thing, decay on each item, mood updates, food/drink/healing/disease, animal reproduction, plant growth, etc. it might need to be built on something like MVCC if it doesn’t want to race freely (hard for debugging) but it seems like such a perfect case. An update at tick x in a simulation generally requires the state at tick x-1. The processes would have to be completely independent of outside game state.
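one standard way around that cross-tick dependency (an assumption on my part - not something DF actually does) is double-buffering: every system reads only tick x-1's snapshot and writes into a fresh buffer, so independent systems can run in parallel *within* a tick while tick x still strictly depends on tick x-1:

```python
from concurrent.futures import ThreadPoolExecutor

# Double-buffering sketch: both systems read the previous tick's state
# and never see each other's in-progress writes. The system functions
# and state fields here are made up for illustration.

def grow_plants(prev):
    return {"plants": prev["plants"] + 1, "heat": prev["heat"]}

def diffuse_heat(prev):
    return {"heat": prev["heat"] * 0.9, "plants": prev["plants"]}

def tick(prev):
    with ThreadPoolExecutor() as pool:
        plants = pool.submit(grow_plants, prev)   # runs in parallel...
        heat = pool.submit(diffuse_heat, prev)    # ...against the same snapshot
    # merge each system's writes into the new buffer for tick x
    return {"plants": plants.result()["plants"], "heat": heat.result()["heat"]}

state = {"plants": 0, "heat": 100.0}
for _ in range(3):
    state = tick(state)
print(state)  # after 3 ticks: plants == 3, heat has decayed to ~72.9
```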
|
# ? Oct 21, 2022 18:47 |
|
Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything. In principle these types of sim games with lots of independent or semi-independent systems are a good place for multithreading... but you have to design for that from the start. And basically all of these games that have gotten big started out as tiny or small-ish indie projects. DF, Cities Skylines, Factorio, etc. They don't run into scaling problems until they're expanding everything 10x bigger and better with all the money they got from their game being super-popular. By then it's way too late.
|
# ? Oct 21, 2022 19:58 |
|
Klyith posted:Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything. If these games get successful enough, they get Switch ports, where you have to find a way to live with 1GHz ARM cores that are much slower per clock than desktop cores to boot: https://www.factorio.com/blog/post/factorio-on-nintendo-switch
|
# ? Oct 21, 2022 20:03 |
|
Twerk from Home posted:If these games get successful enough, they get Switch ports, where you have to find a way to live with 1GHz ARM cores that are much slower per clock than desktop cores to boot: https://www.factorio.com/blog/post/factorio-on-nintendo-switch Yeah, and Factorio is single-threaded. I doubt that will change on Switch. Which is why they say: quote:But don't expect to be able to build mega-bases without UPS starting to drop, sometimes significantly. Factorio has fine performance even single-threaded; the maps that people build to benchmark CPU performance are insane. If you are just playing the game like a normal person, the 1GHz Switch CPU will be fine, or at least acceptable.
|
# ? Oct 21, 2022 20:09 |
|
distortion park posted:The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270eur vs 440eur. They're both gonna be good value though I think until the new generation mobo and ram prices come down (although maybe the 13600k and the 13400 when released change that?) Either will be fine for some time, though the 5800x3d is perhaps a better long term bet for games vs. the 5700x. As for the Intel stuff, you can use an older motherboard and DDR4: you don't need the latest and greatest if you want to save money.
|
# ? Oct 21, 2022 21:03 |
|
distortion park posted:The price difference between the 5700x and 5800x3d is bigger than I was expecting - about 270eur vs 440eur. They're both gonna be good value though I think until the new generation mobo and ram prices come down (although maybe the 13600k and the 13400 when released change that?) sensible starting points are 5800X3D if you already have a compatible motherboard, 13600K if you want the best value new CPU, and 5600 if you're on a budget. upgrading beyond the 13600K doesn't make too much sense unless you have a heavy productivity workload
|
# ? Oct 21, 2022 22:40 |
|
Klyith posted:Dwarf Fortress is not a well-architected codebase (by the creator's own admissions) and is a pretty bad example for anything. Up until fairly recently, didn't he not even have a way to see what the execution time of each section of code was? I remember in the DF thread everyone was flabbergasted that he couldn't actually check the performance gain/loss on a change without actually playtesting it and going 'yeah, seems faster to me'.
|
# ? Oct 22, 2022 00:21 |
|
Methylethylaldehyde posted:Up until fairly recently, didn't he not even have a way to see what the execution time of each section of code was? I remember in the DF thread everyone was flabbergasted that he couldn't actually check the performance gain/loss on a change without actually playtesting it and going 'yeah, seems faster to me'.

I have not paid much attention to DF in quite a while, but the particular bit I know about was that they put out the source of an older game, which shared the same graphics code for rendering sprites, so the community could help get that redone in SDL. And the upshot was that Dwarf Fortress code is about as sane and organized as Dwarf Fortress dwarves
|
# ? Oct 22, 2022 01:34 |
|
Yeah, he needs to put out little “challenge” programs like that so the community can hyper-optimize them and then he can take the little wins back to the real game.
|
# ? Oct 22, 2022 01:37 |
|
Klyith posted:I have not paid much attention to DF in quite a while, but the particular bit I know about was that they put out the source of a older game, which was shared the same graphics code for rendering sprites, so the community could help get that redone in SDL. And it was "Why do forts get so slow the longer you play them?" One of the top "I swear to god if this is true I will scream" guesses was "The emotion system runs through every item the dwarf sees and experiences, does a bunch of math based on the Dwarf's individual tastes, sorts that list using the worst implementation of simple sort possible, then goes down the ranking adjusting various moods. It does this per frame, as part of the core game logic loop, wedged slightly behind and underneath the path-finding code because if a dwarf sees something distressing enough, we want him to turn around and go another way!" Someone tested it with a bare fort, and a fort covered in thousands of intricate engravings, and the 2nd one was measurably slower.
|
# ? Oct 22, 2022 02:28 |
|
forest spirit posted:I'm going to get the 5800x3d to swap out my 3600, which I got at release to let my Xeon monstrosity die. I think a lot of people are in the same boat Yep. The last hurrah for AM4. I plan on a long run with this one. Cheers!
|
# ? Oct 22, 2022 08:13 |
|
My friend was taking out his big cooler to make room in his case but it ripped the 5800x3d out of the socket when he lifted it out. He had just installed it. Very strange that the paste alone stuck them together. Anyway, no pins broke but tons are bent. He went and bought a new one since he's impatient and they're selling out. I'm going to take the old one. What's the best way to bend them back and is it risky to install it in my mobo? Would it just not boot or would it fry it?
|
# ? Oct 22, 2022 10:54 |
|
KingKapalone posted:My friend was taking out his big cooler to make room in his case but it ripped the 5800x3d out of the socket when he lifted it out. He had just installed it. Very strange that the paste alone stuck them together. No real risk - the best way to bend them back (in my experience) is a metal ruler run between the rows of pins and/or a mechanical pencil tip. And this is why you should twist the cooler loose before lifting
|
# ? Oct 22, 2022 11:16 |
|
I've used razor blades to straighten pins on three different AMD CPUs so far
|
# ? Oct 22, 2022 13:10 |
|
gradenko_2000 posted:I've used razor blades to straighten pins on three different AMD CPUs so far Yeah, that was my usual go-to for fixing bent pins.
|
# ? Oct 22, 2022 21:03 |
|
boxcutter is probably a little safer. it's what i used on top of some small flat ifixit screwdriver heads just to fix some of the worse cases
|
# ? Oct 22, 2022 21:07 |
|
Klyith posted:It's like, generally correct but when it's wrong it's really wrong. For ex bulldozer getting perversely better over time, or the 8600k getting hosed by having only 4 threads.

bulldozer never got perversely better over time. bulldozer scoring 10fps in some random title where a 2500K only scores 8fps is not the win people think it is. nobody wanted to be using an 8150 a day past 2017, like good lord imagine holding a candle for loving bulldozer lol

also, the 8600K having 6 threads isn't really a problem in any title except Far Cry 5, really, which is the title where a stock 2C4T Pentium is outperforming a 5.2 GHz 8600K - which is completely and obviously something wrong with either the game or the benchmark. other than that like... it's not the best performer in Battlefield V/2042 I guess, but that series completely imploded on itself so who cares. the 8600K generally performs the same as a 7700K at equivalent clocks, and while a 7700K is obviously on the slower side as far as MT perf these days, it's still very playable in the overwhelming majority of titles. the exceptions are some games that just poo poo themselves inexplicably like FC5, and I'm not convinced that isn't just some quirk of the engine given that (again) it's being outperformed by A Literal Pentium 2C.

and in contrast, remember that the 1600 was and is garbage at gaming too. You really don't want to play a 1600 in modern titles either. Single-thread still very much matters.

Paul MaudDib fucked around with this message at 04:52 on Oct 23, 2022 |
# ? Oct 23, 2022 04:45 |
|
Far Cry 3/4/5 (don't have 6) are heavily single thread dependent and don't scale well, or sometimes at all, with more cores. And they probably also hit memory bandwidth/cache really hard, especially the encrypted DRM flavors of the later games (not only is the engine poorly optimized, but it has to have its memory and executable constantly decrypted/encrypted on the fly in software). Like seriously, Far Cry 3 doesn't perform any better on a 9900K/RTX 3080 Ti than it does on a 2700K/GTX 680. It is really odd, given that Far Cry 2/Dunia 1 was one of the first game engines that showed a major benefit from going to 4 cores instead of 2.
|
# ? Oct 23, 2022 12:20 |
|
The DRM in Far Cry 6 was so bad that it created really hard stutter, to the point the audio cut out. Far Cry 5 ran flawlessly.
|
# ? Oct 23, 2022 13:16 |
|
Combat Pretzel posted:The DRM in Far Cry 6 was so bad that it created really hard stutter, to the point the audio cut out. Far Cry 5 ran flawlessly. that's why Chinese gamers have a term for this: "legit version victim"
|
# ? Oct 23, 2022 13:18 |
|
yeah both far cry 5 & 6 significantly benefit from the 5800x3d's extra cache, makes sense if goofy drm stuff is why
|
# ? Oct 23, 2022 13:28 |
|
Indiana_Krom posted:Far Cry 3/4/5 (don't have 6) are heavily single thread dependent and don't scale well or sometimes at all with more cores. And they probably also hit memory bandwidth/cache really hard, especially the encrypted DRM flavors of the later games (not only is the engine poorly optimized, but it has to have its memory and executable constantly decrypted/encrypted on the fly in software). FWIW, I saw a massive boost in FC3 going from a 6400 to an 11600K. Rinkles fucked around with this message at 14:31 on Oct 23, 2022 |
# ? Oct 23, 2022 14:29 |
|
We're at the stage where everyone who bought AM5 is starting to complain about the regular crashes, so one more reason to wait a little and let the bios versions mature a bit.
|
# ? Oct 23, 2022 14:33 |
|
|
|
I hope the X3D versions are a hardware revision, fixing some egregious poo poo that was identified shortly before the release of the current ones.
|
# ? Oct 23, 2022 15:07 |