|
Get the 3700x, you'll have an upgrade path to zen3 without having to do a board swap. DDR5 is going to be a wash performance-wise compared to mature DDR4 so that isn't something to be holding your breath on. I haven't seen anything with regards to zen2 clocking that wasn't down to iffy motherboard firmware that can be corrected or insufficient cooling capacity.
|
# ? Jul 22, 2019 15:46 |
|
uhhhhahhhhohahhh posted:Having a nightmare deciding between building a 3700X or 9900k system for gaming. There's £36 difference between them according to pcpartpicker at the moment. I'm playing at x1440 144hz at the moment, but considering moving to a 1080 240hz at some point. Them few extra FPS the 9900k gets are taunting me, and the boosting not working properly on the ryzen3000s - maybe ever? - got me second guessing it. But them Intel vulnerabilities and the possibility of upgrading to Zen2020 that could comfortably beat the 9900k at some point is making me hesitate also FWIW I built an 8700K system when it launched, have a 1440p 165Hz screen and would definitely buy a 3700X over a 9900K today*. Ryzen is far less memory dependent due to the large cache, and wringing out a perfect overclock with good memory speed/timings on Intel is a science in itself unless you buy very expensive RAM and XMP happens to work. Ongoing security issues with Hyperthreading should also be a concern. The original Zen 1 launch was also a bit rocky but AMD eventually got things sorted out after a few months. * I'd probably buy a 3600 and upgrade to a refreshed, higher core count SKU next year
|
# ? Jul 22, 2019 15:54 |
|
eames posted:* I'd probably buy a 3600 and upgrade to a refreshed, higher core count SKU next year Deffo what I'm going for personally. Zen2's pretty much demonstrated AMD's stuff is not likely to get worse all of a sudden for no reason on AM4 chipsets and if Zen3 winds up being not that much better eh...I still got a nice 3600 that's super reliable on low-FPS and will last me a fairly long time anyway. And if it does wind up actually matching Intel's lineup in FPS performance, then I just get one of their higher-tier X versions whenever I need it and I'm good for like half a decade most likely. But ^^ is also true. I mean...in my case I'm still gonna go for 144Hz maximum and often on 1440p in terms of my gaming preference, but if you don't see yourself doing anything other than gaming at 1080p at +144Hz for the next five years, that argument can still be made. CrazyLoon fucked around with this message at 16:51 on Jul 22, 2019 |
# ? Jul 22, 2019 16:47 |
|
For high-end gaming (especially at high fps) GN still recommends the 9700K. The Ryzen 3600 for everyone else.
|
# ? Jul 22, 2019 16:48 |
|
sauer kraut posted:For pure gaming (especially high fps) GN still recommends the 9700K Only at high fps, since no processor will be a bottleneck at 60fps, and only for a build with the budget for it.
|
# ? Jul 22, 2019 16:52 |
|
uhhhhahhhhohahhh posted:Having a nightmare deciding between building a 3700X or 9900k system for gaming. There's £36 difference between them according to pcpartpicker at the moment. I'm playing at x1440 144hz at the moment, but considering moving to a 1080 240hz at some point. Them few extra FPS the 9900k gets are taunting me, and the boosting not working properly on the ryzen3000s - maybe ever? - got me second guessing it. But them Intel vulnerabilities and the possibility of upgrading to Zen2020 that could comfortably beat the 9900k at some point is making me hesitate also I went with the 9900k over the 3900x because I was going to have to get an AIO cooler anyway, since I want my machine to be quiet and I know from experience that with my 2700x the Wraith Prism is louder than I wanted. Sold my 2700x and the motherboard and picked up a 9900k, RTX 2080, motherboard, and H115i Pro AIO cooler and I'm very happy. This thing will last me many years now. I mostly game but also do some web dev with Docker. Edit: also availability was a factor. 9900k was available when I wanted it. 3900x not so much.
|
# ? Jul 22, 2019 16:55 |
|
eames posted:* I'd probably buy a 3600 and upgrade to a refreshed, higher core count SKU next year I'm seriously considering doing this and then passing the 3600 down to my brother. In light of that, are there any good reasons for me to buy an X470/X570 board over a B450 Tomahawk (Max)? My needs are: at least 1 NVMe slot, 4 SATA ports and at least 4 (preferably 6) PWM fan headers with good BIOS control. I'd slightly prefer buying a more expensive mobo ($200-ish) now with the 3600 and buying a 4000 with an inexpensive B-series board for my bro next fall.
|
# ? Jul 22, 2019 17:06 |
|
uhhhhahhhhohahhh posted:Having a nightmare deciding between building a 3700X or 9900k system for gaming. There's £36 difference between them according to pcpartpicker at the moment. I'm playing at x1440 144hz at the moment, but considering moving to a 1080 240hz at some point. Them few extra FPS the 9900k gets are taunting me, and the boosting not working properly on the ryzen3000s - maybe ever? - got me second guessing it. But them Intel vulnerabilities and the possibility of upgrading to Zen2020 that could comfortably beat the 9900k at some point is making me hesitate also Do you mean 9700k, cause 9900k's are like 150 pounds more than a 3700x (£480 vs £320).
|
# ? Jul 22, 2019 17:43 |
|
Arzachel posted:Do you mean 9700k, cause 9900k's are like 150 pounds more than a 3700x (£480 vs £320). I was assuming he meant the price he can actually get it for at the moment since there are markups when the thing is available at all.
|
# ? Jul 22, 2019 18:02 |
|
I meant the price of the total build, because the X570 board I picked is like ~£210, and the equivalent Z390 was £160. The CPU is £405 on Amazon UK at the moment. I'll be going with an AIO this time as well, normally I'm against them but the case I picked won't fit the higher end air coolers so I thought I'd give one a go. Thanks all. I have been leaning towards the 3700X myself. I'm mainly trying to weigh up the gamble of which one is going to last the longest, but it's based on a bunch of unknowns at the moment, vs a bit more FPS if I decide to go x1080 240hz soon. If they sort out the boosting behaviour that might be a nice bonus, Windows and game patches might give a bit more and maybe the next line of consoles using Zen cores might make a difference. My thinking was buying a 9700k would be risky for the same reason, so I wasn't considering it. It's hard not to think long term when I got 8 years out of my current rig. I'm in an annoying situation where I'm still on a 2600k system (was a 2500k until I swapped it out just before BFV released) and an R9 Fury. BFV is the only game I'm playing a lot - despite it being poo poo - and it runs really badly, dropping as low as 50fps sometimes with everything on low. Replacing my GPU does nothing for it, because I'm completely CPU bound. I can't play other games because my SSDs are completely full and I can't make any space, but other games would be better served by a GPU upgrade (which I don't really want to do at the moment) and another SATA SSD, but I'd like to give an NVMe one a go -> can't do that without upgrading everything -> don't want to upgrade my CPU just for a bad game like BFV -> cycle continues. uhhhhahhhhohahhh fucked around with this message at 18:26 on Jul 22, 2019 |
# ? Jul 22, 2019 18:23 |
|
uhhhhahhhhohahhh posted:I meant the price of the total build, because the X570 board I picked is like ~£210, and the equivalent Z390 was £160. The CPU is £405 on Amazon UK at the moment. I'll be going with an AIO this time as well, normally I'm against them but the case I picked won't fit the higher end air coolers so I thought I'd give one a go. If all you do is game and you don't mind the price increase, I would just get the 9900K or 9900KF. It's slightly faster than the 3700X in gaming, especially at higher refresh rates, and I don't see the 3700X becoming magically faster over time. That said, the 3700X will still give you a good gaming experience.
|
# ? Jul 22, 2019 18:47 |
|
Yeah, high framerate gaming is the niche where Coffee Lake still makes sense if you want to squeeze out the last bit of performance. OverclockersUK seem to have the 3700X in stock for MSRP, if you're not tied to Amazon.
|
# ? Jul 22, 2019 18:57 |
|
Also, AMD's boosting behavior is in fact already incredibly sorted. It's essentially just as good as manually overclocking. There's no reason to expect any sort of frequency gains.
|
# ? Jul 22, 2019 19:09 |
|
K8.0 posted:Also, AMD's boosting behavior is in fact already incredibly sorted. It's essentially just as good as manually overclocking. There's no reason to expect any sort of frequency gains. Yeah, but how will anyone else know how big of an e-penis I have if I can't improve on stock?
|
# ? Jul 22, 2019 19:23 |
|
Are most 3000 ryzens now boosting to the advertised clocks?
|
# ? Jul 22, 2019 19:32 |
|
If you have a good chip you can get some multithreaded gains, especially with per-CCX OC on the 3900X, because it seems like one CCD is a better bin than the other. For gaming though stock is usually the best because you want the maximum single core turbo. fargom posted:Are most 3000 ryzens now boosting to the advertised clocks? Some of the 3900Xs I have seen are like 20-30MHz short of their rated max boost but for the most part yeah they are.
|
# ? Jul 22, 2019 19:37 |
|
fargom posted:Are most 3000 ryzens now boosting to the advertised clocks? 25-50MHz over, given sufficient cooling, from what I have seen from reviewers.
|
# ? Jul 22, 2019 19:41 |
|
it may be all hyperbole but there's a ton of salty people around because their CPUs almost never hit the advertised boost, even in non-threaded games, or hit it for a couple of seconds before downclocking. Seen a bunch of posts where people report that one of the newer BIOS versions lowers boosting by 100-200MHz compared to a previous one, but that might be ASUS specific.
|
# ? Jul 22, 2019 20:05 |
|
fargom posted:Are most 3000 ryzens now boosting to the advertised clocks? I still see a lot of people with problems on r/AMD but at this point the mods are just deleting any thread that discusses the topic under their "no tech support" rule. der8auer put out another video discussing the problem (and the AMD community's reaction to anyone who points out there is a problem) just yesterday. it kinda is what it is, AMD's stock clocks are now subject to silicon lottery just like OC has always been. What they're advertising is a theoretical maximum, not something you will necessarily see on every single chip. A lot will get close (25-100 MHz away) especially for brief periods but relatively few will reliably sit at boost clocks under real-world loads. The 3600 specifically ships with a cooler that's not sufficient to keep it from throttling, and the box coolers really are not that great in general. Better than Intel, worse than a cheapo aftermarket cooler, far worse than a Scythe Fuma you could get with the $50 that "X" will cost you. Regardless, the difference is not enough to get most chips to advertised clocks even on a single core, for the most part. The performance still stands on its own merits but you might see let's say 5% less performance than a reviewer who got a golden sample, or 5% more than a reviewer who got a turd. And there is no longer any OC headroom left on the table, the chips are pretty much maxed out of the box. SiliconLottery has commented that if this trend keeps up they're out of a job, there is practically no spread on most parts for them to work with. Aggressive binning means that a good 3700X gets sold as a 3800X, a good 3800X gets sold as a 3950X, and so on. They commented they're not going to be binning any 3700Xs at all anymore, it just wasn't worth it with this batch. Long term the silicon quality will probably pick up, just like with the 1000 series, and you will see more chips reliably boosting to their advertised level. 
Part of the underlying problem here is that the silicon quality of this launch batch is really really poor and that will probably get better over time. Better AGESA versions may help a little bit too, etc etc. But right now no, most chips don't hit their advertised clocks. Paul MaudDib fucked around with this message at 20:27 on Jul 22, 2019 |
# ? Jul 22, 2019 20:20 |
|
HalloKitty posted:AMD's already passed a monumental barrier: they've achieved better IPC. AMD needs to get clocks a little up and latencies a little down, and then there's nothing really standing in their way. On the other hand, Intel seems to have made an enormous mess of things. It's almost unbelievable: Intel's profits and overall turnover dwarfs that of AMD's, AMD shouldn't even have a reasonable chance, but here we are Did they suffer brain drain from their CPU design department over the years of milking it since Sandy Bridge? They sound a lot like IBM internally (which I previously worked for). I still think ARM is a massive threat to Intel right now and is part of the reason Intel is going into dedicated GPUs. I think X86/AMD64 in general is dying a very slow death (as in 10 years). ARM can do general end-user computing (Office, web browsing), GPU can do AI/Big data/Scalable crunching. Little room for X86 except for legacy apps/gaming once ARM catches up to mainstream Intel laptop CPUs, which they almost have.
|
# ? Jul 22, 2019 20:27 |
|
Someone on r/Amd was able to get their 3900X to 4.625 GHz by running a program that was just an infinite loop of nops.
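For context, a NOP loop is about the lightest load you can put on a single core (no memory traffic, minimal heat), which is the best case for the boost algorithm to sustain peak clocks. The program in that post was presumably native code issuing literal NOP instructions; the sketch below is only a rough Python stand-in, a bounded busy-wait you could run while watching clocks in a monitoring tool, since Python can't emit NOPs directly:

```python
import time

def spin(seconds: float) -> int:
    """Busy-wait doing no useful work for `seconds`.

    Rough stand-in for an 'infinite loop of NOPs': a steady, lightweight
    single-core load. (The interpreter still does far more per iteration
    than a native NOP would, so treat this as an approximation.)
    """
    deadline = time.perf_counter() + seconds
    iterations = 0
    while time.perf_counter() < deadline:
        iterations += 1  # no memory-heavy or vector work, just spinning
    return iterations

if __name__ == "__main__":
    # Run for a second and watch the single-core boost clock in your
    # monitoring tool of choice.
    print(f"{spin(1.0):,} iterations")
```

Pinning the process to one core (e.g. with taskset on Linux) makes it easier to see which core is doing the boosting.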
|
# ? Jul 22, 2019 20:28 |
|
uhhhhahhhhohahhh posted:R9 Fury It won't matter which processor you get between any Zen 2 or 9th gen Intel with that GPU. You'll be GPU limited with any modern game. If you are interested in 240hz gaming with something like a RTX 2080 Ti or better in the future, go Intel. With a lesser GPU the differences are likely to be minimal. I have a Ryzen 5 3600 that regularly hits 4.3GHz (+100 from the stock max boost) with games. I do get all-thread app benchmark scores like 1-2% below what I see on reviews, though.
|
# ? Jul 22, 2019 20:32 |
|
Alpha Mayo posted:Did they suffer brain drain from their CPU design department over the years of milking it since Sandy Bridge? They sound a lot like IBM internally (which I previously worked for). They have a bunch of pent-up uarch advancements ready to go... but they're all designed around the 10nm node. If 10nm had been ready, Cannon Lake would have shipped in like 2015 or something. Ice Lake will probably put their IPC back over the top of Zen2, maybe on par with Zen3; the problem is they can hardly produce laptop chips, let alone performance desktop. And 10nm will probably not clock tremendously higher than 7nm did. In hindsight they should have started porting these architectural improvements back to 14nm when they started having problems, but why spend the money when 10nm was only a year or two away? Without backporting or without 10nm all they can do is keep stamping out additional Kaby Lake cores. BK apparently really cut back on R&D and 10nm fell through as a result, and it's just been screwing them super hard ever since. Paul MaudDib fucked around with this message at 20:38 on Jul 22, 2019 |
# ? Jul 22, 2019 20:36 |
|
Alpha Mayo posted:Did they suffer brain drain from their CPU design department over the years of milking it since Sandy Bridge? They sound a lot like IBM internally (which I previously worked for). "ARM taking over x86" is the next "tablets are going to replace laptops". x86 is way too entrenched in the consumer and corporate world, and the last time we saw a major maker try to switch archs, we got Itanium. I can see it making some inroads into the laptop market, but at that point you're effectively talking about a chromebook. iospace fucked around with this message at 21:18 on Jul 22, 2019 |
# ? Jul 22, 2019 21:14 |
|
ARM software support has improved dramatically in the past few years and ARM core performance is pretty close to x86 now. That doesn't mean they're going to take over but there is definitely a lot of interest and resources being put into making ARM viable for servers and laptops. It might not be successful in the end but I wouldn't write it off; software support was the largest barrier and that is finally starting to change.
|
# ? Jul 22, 2019 21:24 |
|
things probably aren't going to get better without IP law reform, it's just not tenable long-term to let two companies have a legal monopoly on producing computers. This is really just the tip of the problems with patents and that really is going to have to be addressed at some point. It probably won't be, because big companies have big patent portfolios and are relatively unaffected while it effectively serves as a protectionist measure to keep out upstarts, but that just means our IP development will keep getting strangled until it does... not to mention putting US companies at a disadvantage against countries that just don't give a gently caress about IP laws at all. Patents really need to be reduced to something like 5 years, 20 years made sense when you might make the same piece of farm equipment for 100 years or something but it just doesn't work with modern 12/18-month product cycles. By the time modern patents expire the products they cover will have been out of service for 20 years. And yeah you can nominally use ARM or RISC-V, if you don't mind throwing away all your software and rewriting everything. The only way I could see that working is if someone like Amazon subsidized the poo poo out of development and instance costs of a competitor. But realistically it's just not going to happen, legacy codebases are a kind of natural monopoly and it's just not cost-effective (or even possible) for a competitor to re-write everything. More realistically I guess the chromebook idea is probably the actual next step. Everything runs on the web, your web browser becomes your OS, and software is thus platform-agnostic. Javascript becomes your "instruction set", and that's controlled by a council of interested parties that is at least relatively neutral. It's not going to be a complete transition though, there is too much stuff like games that really depend on native performance, the ability to peep at other processes in the system for anticheat, highly secure DRM, etc.
WebGL and such just aren't going to be good enough for companies to commit. Paul MaudDib fucked around with this message at 21:46 on Jul 22, 2019 |
# ? Jul 22, 2019 21:36 |
|
fargom posted:Are most 3000 ryzens now boosting to the advertised clocks? My 3700x is sitting at 4.2GHz constantly in games and hitting 4.3 out of the 4.4 rated boost speed. This is in a B450 Tomahawk and it takes 10 seconds on power-on for the CPU error LED to turn off before it POSTs. I've counted; it's a long time. With it being a beta BIOS I expect the boot time to get sorted and I'm hopeful it's a BIOS issue for the last 100MHz, but if I don't get it I'm still really happy with this CPU. It's hilariously fast compared to my 2500k. I could also just have cooling issues, it's the middle of a heatwave and the room isn't exactly cool even with the computer off. I'm positive this thing can hit 4.4 if I want to mess with settings.
|
# ? Jul 22, 2019 21:51 |
|
I guess we should be happy that Intel in 2012-2017 didn't follow the pharma model of 50% YoY price hikes 😣
|
# ? Jul 22, 2019 22:13 |
|
MaxxBot posted:I guess we should be happy that Intel in 2012-2017 didn't follow the pharma model of 50% YoY price hikes 😣 It would have been tougher for Intel than pharma - you can always use old CPUs for a while longer but it's extremely difficult to recapture drugs from your urine stream.
|
# ? Jul 22, 2019 22:18 |
There are lots of boards out there that either haven't been updated to an AGESA where boosting works properly or have had a BIOS update that actually breaks boosting again. If you have good cooling and aren't hitting advertised boosts it's more likely that's why than silicon quality. (Though it's entirely possible you did get a turd)
|
# ? Jul 22, 2019 22:22 |
|
Just noticed when tinkering with CPU-Z that having a game launcher open in the background (Steam, EGS) means my 3600 is always running at 4.2GHz / 1.4V. Is this normal? Closing them immediately drops it to 3.6GHz and under 1V. Should I be worried? I generally have my launchers open in the background as I stream from my PC to the TV downstairs.
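If you want a second opinion beside CPU-Z, on Linux you can poll the kernel's own per-core frequency view directly. A minimal sketch (the sysfs paths are the standard cpufreq ones; on systems that don't expose them it just returns an empty dict, and bear in mind that any polling, this included, can itself wake cores out of idle):

```python
from pathlib import Path

def core_clocks_mhz() -> dict:
    """Current clock per core, read from Linux cpufreq sysfs (kHz -> MHz).

    Returns {} where /sys/.../cpufreq isn't exposed (non-Linux, some VMs),
    so callers can fall back to another tool.
    """
    clocks = {}
    base = Path("/sys/devices/system/cpu")
    for f in sorted(base.glob("cpu[0-9]*/cpufreq/scaling_cur_freq")):
        try:
            core = f.parent.parent.name               # e.g. "cpu0"
            clocks[core] = int(f.read_text()) / 1000.0  # kHz -> MHz
        except OSError:
            continue  # core went offline between glob and read
    return clocks

if __name__ == "__main__":
    for core, mhz in sorted(core_clocks_mhz().items()):
        print(f"{core}: {mhz:.0f} MHz")
```

Running this with and without the launchers open would show whether the sustained 4.2GHz is real or just a monitoring-tool artifact.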
|
# ? Jul 22, 2019 22:29 |
|
Paul MaudDib posted:And yeah you can nominally use ARM or RISC-V, if you don't mind throwing away all your software and rewriting everything. The only way I could see that working is if someone like Amazon subsidized the poo poo out of development and instance costs of a competitor. But realistically it's just not going to happen, legacy codebases are a kind of natural monopoly and it's just not cost-effective (or even possible) for a competitor to re-write everything. I know this is a total "I'll believe it when I see it" thing but I think it's pretty likely we'll see Apple try it sometime in the next 5 years, just based on years of supply chain leaks, etc.
|
# ? Jul 22, 2019 23:01 |
|
I'm all about Apple destroying the last remnants of their desktop user base, hope they go for it.
|
# ? Jul 22, 2019 23:06 |
|
Party Boat posted:Just noticed when tinkering with CPU-Z that having a game launcher open in the background (Steam, EGS) means my 3600 is always running at 4.2GHz / 1.4V. Is this normal? Closing them immediately drops it to 3.6GHz and under 1V. Should I be worried? I generally have my launchers open in the background as I stream from my PC to the TV downstairs. lots of people are having similar issues. some people are hitting 60c while idling from it - but the main focus has been on monitoring programs causing it. messing with the power plans and disabling pbo fixes it for some people from what i've seen, but it seems bios and setup dependent. the AMD guy who posts on their subreddit has said it's normal and not to worry about it but that they're also looking into it. he's supposed to be posting an update about it this week. uhhhhahhhhohahhh fucked around with this message at 23:54 on Jul 22, 2019 |
# ? Jul 22, 2019 23:19 |
|
Why wouldn't Apple just do the same thing they did when moving away from PowerPC to x86, and emulate things until native copies come out? Isn't that what the latest Windows 10 on ARM does?
|
# ? Jul 22, 2019 23:24 |
|
Yes, but only for 32 bit software. And it's slow as hell.
|
# ? Jul 22, 2019 23:24 |
|
Electric Bugaloo posted:I know this is a total "I'll believe it when I see it" thing but I think it's pretty likely we'll see Apple try it sometime in the next 5 years, just based on years of supply chain leaks, etc. Apple, every five years: Here's some conveniently leaked benchmarks we ran on ARM and AMD chips. So, Intel, how about that contract we have that we get massive discounts from you from. Wanna keep doing it? Sure? Sounds good to me!
|
# ? Jul 22, 2019 23:59 |
|
Alpha Mayo posted:Why wouldn't apple just do the same thing they did when moving away from PowerPC to X86, and emulate things until native copies come out? Isn't that what the latest Windows 10 ARM does? Emulation is bad for performance and battery life, which would be the primary driver for arm adoption. Apple has the advantage of a non-poo poo app store ecosystem that people actually use, so they have much more leverage to force devs to recompile to support a new architecture, and to some extent have been training them to expect it between the ppc->x86 migration followed by the mandatory 64-bit binaries on iOS. If anyone can pull it off, it would be Apple. Windows' win32 legacy will kill any effort to do the same.
|
# ? Jul 23, 2019 00:52 |
|
BangersInMyKnickers posted:Emulation is bad for performance and battery life, which would be the primary driver for arm adoption. Apple has the advantage of a non-poo poo app store ecosystem that people actually use, so they have much more leverage to force devs to recompile to support a new architecture, and to some extent have been training them to expect it between the ppc->x86 migration followed by the mandatory 64-bit binaries on iOS. If anyone can pull it off, it would be Apple. Windows' win32 legacy will kill any effort to do the same. I think the biggest advantage Apple has is a lot of the apps are capable of running on ARM. Consider what the iPhone and iPad run on
|
# ? Jul 23, 2019 00:56 |
|
BangersInMyKnickers posted:Emulation is bad for performance and battery life, which would be the primary driver for arm adoption. Apple has the advantage of a non-poo poo app store ecosystem that people actually use, so they have much more leverage to force devs to recompile to support a new architecture, and to some extent have been training them to expect it between the ppc->x86 migration followed by the mandatory 64-bit binaries on iOS. If anyone can pull it off, it would be Apple. Windows' win32 legacy will kill any effort to do the same. Pretty sure you're wrong and Apple has a poo poo app store ecosystem.
|
# ? Jul 23, 2019 00:57 |