Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

njsykora posted:

Well that sucks, feels super weird to do a single region product launch in this day and age.

I'm just glad that AMD is selling these rather than throwing them in the garbage. I love weirdo runt products from the clippings.

Twerk from Home
Jan 17, 2009

Kerbtree posted:

Hah, looked at the 5800x3d userbenchmark page for a giggle and they're certainly saying the quiet parts out loud:

:shepface:

God I wish userbenchmark had been around during the Pentium 4 days.

Twerk from Home
Jan 17, 2009

BlankSystemDaemon posted:

RT is great for screenshots, but I sure as poo poo don't notice it when I'm playing a game where it's enabled.

It's entirely possible that's a me problem, though.

You're not alone; most of the things driving ultra graphics settings over the last few years feel like marginal improvements at huge performance costs.

Then occasionally something comes out like TLOU where the only settings options are "looks amazing but needs 12GB VRAM" or "PS2 textures".

Twerk from Home
Jan 17, 2009

CodFishBalls posted:

To chirp in, Micro Center is likely going to do a 5600X3D bundle, pairing it with a B550 board (my suspicion is the TUF B550, since they've got quite a bit in stock at my local store) and 16GB of 3200 CL16 G.Skill RAM (probably Ripjaws again, since that's what they have a lot of).

Speculation is the bundle price will be around $329ish, which, compared to $229 for the 5600X3D, $170 for a B550, and $40 for RAM, is not bad, but it locks you into the AM4 ecosystem.

drat, hell of a bundle if that's it, maybe enough to make me road trip.

Twerk from Home
Jan 17, 2009

Cygni posted:

isn’t there like 1 thing that actually uses avx512 that anyone cares about, and it’s a ps3 emulator?

Why do all the work to use hardware features that aren't common? In the one place where AVX-512 should have mattered, Intel Xeons, they've hosed that up too.

Low-end Xeons only have one AVX-512 FMA port per core, so they have half the throughput of the more expensive Xeons. And if you do pony up for the more expensive Xeons, those reduce clocks significantly if you actually use AVX-512 instructions: https://travisdowns.github.io/blog/2020/08/19/icl-avx512-freq.html

By the time Intel finally got their act together with Ice Lake, AMD had better-performing processors around that don't have AVX-512, and was starting to make huge gains in server market share.

If we start actually getting AVX-512 adoption in software, it's going to be because of AMD. Hell, I've hand-written some vectorized code with intrinsics recently, and AVX-512 was way, way nicer to work with than AVX2. My worse, less efficient AVX2 code path ended up being faster on Skylake, and we have a lot of Skylake. The AVX-512 path was slightly faster on Ice Lake, but not enough to keep around.

I wish they'd called it AVX3 and made the 512-bit part optional. That seems like the main source of the problems, and there are a lot of nice instructions in there that are perfectly usable with 256-bit vectors.

Twerk from Home
Jan 17, 2009

AARP LARPer posted:

Hi everyone,

I'm living in an apartment with terrible power, and I'd like to reduce my wattage during heavy gaming. I've heard about a voltage "offset" and I went poking around, but I don't know where these settings might be or what they're called. I've got an MSI B550 Tomahawk with a 5800X3D.

I'm not looking to push the envelope, but would like to know what a safe-and-sane reduction might be... and where I might find it.

Thanks!

You can use Ryzen Master to set power targets, and that's going to be more effective at reducing wattage than messing around with offsets: https://support.punchtechnology.co.uk/hc/en-us/articles/6438162546461-How-to-set-a-TDP-PPT-Power-Limit-on-AMD-Ryzen-CPU-Processors.

Is your machine on a UPS or any other device that measures power usage? How low are you looking to get? What power supply are you using, and is it a correctly sized, efficient one?

Twerk from Home
Jan 17, 2009

hobbesmaster posted:

That’d be cpu migration or cluster switching and not an HMP implementation like Intel, right?

The scheduling problems might not be as bad, but any mix of fast cores and slow cores creates them. The small Zen cores might have the same instruction set, but they don't clock or behave quite the same.

Twerk from Home
Jan 17, 2009

Josh Lyman posted:

I remember when someone, maybe from HardOCP, sold a baybus where you had to drill holes in your own front plate to install it. I think I got one back in like 2001?

I loved the DIY scene back then but fan curves are better.

A few years after that I had an off-the-shelf Cooler Master cooler with a rheostat that slotted into a PCI slot cover, so you could reach behind your PC to crank it up for game time.

Twerk from Home
Jan 17, 2009

Paul MaudDib posted:

pre-Haswell/pre-Ryzen is coincidentally also when NVMe support started to be rolled out, and you can't play the game without NVMe storage either. it's easy to justify not writing that pre-AVX2 path when the users mostly can't play anyway for other reasons.

(technically you can use an adapter card but that's either going to cut your GPU lanes in half, or be on a chipset slot that stands a decent chance of being like pcie 2.0x1 or at best pcie 2.0x4. And while you technically don't need actual BIOS support (especially if you are not booting from the SSD/have a SATA drive for a system drive) from what I remember the Oprom thing is pretty clunky and a lot of drives don't ship with oproms anymore cause nobody actually uses them anymore. I honestly have no idea if oproms are even viable anymore for boards without native support.)

Wait what, games actually require NVMe storage now? TLOU at that? Wow, I assumed that SATA SSDs would just make for slightly slower load times.

Twerk from Home
Jan 17, 2009

ConanTheLibrarian posted:

Epycs only allow one DIMM per channel and the number of people who need 4 DIMMs is a drop in the ocean so AMD don't have a lot of motivation to make a memory controller that can handle 4 DIMMs well.

All Epycs have been designed to support 2 DIMMs per channel, and AMD says they're able to fix their latest generation's memory problems with just a BIOS update: https://www.tomshardware.com/news/amd-responds-to-claims-of-epyc-genoa-memory-bug-says-update-on-track

2DPC gets Epyc to 48 DIMMs in a 2-socket server (12 channels × 2 DIMMs × 2 sockets).

Twerk from Home
Jan 17, 2009

VorpalFish posted:

Call me crazy, but I suspect now that they're putting igpus just good enough for office pcs on the io dies of chiplet ryzens maybe there won't be any more socketed monolithic APUs.

Given that every console and laptop part is still monolithic and will continue to be, that laptops need big iGPUs more than desktops do, and that the current fastest iGPUs are all laptop-only, AMD could pretty easily bring them to desktop if they wanted to, but they may not choose to.

If they don't sell them socketed they'll still be available in mini-PCs.

Twerk from Home
Jan 17, 2009

BlankSystemDaemon posted:

Modern mainframes have over a GB of distributed L3 cache.

https://www.anandtech.com/show/17323/amd-releases-milan-x-cpus-with-3d-vcache-epyc-7003

You can get 1.5GB of L3 cache in a 2 socket commodity server as of last year.

Twerk from Home
Jan 17, 2009

Shaocaholica posted:

So X3D cache is accessible by all cores with the same penalty and its slower than L3 but orders of magnitude faster than main memory? And X3D is not available on Epic/Threadripper...yet but there are plans?

X3D has been available on Epyc for a while; Milan-X released in March 2022: https://www.anandtech.com/show/17323/amd-releases-milan-x-cpus-with-3d-vcache-epyc-7003

The 3D cache is part of L3, and it makes the whole L3 a little slower than on a non-3D part, but usually having all that extra cache helps way more than the small latency hit.

Twerk from Home
Jan 17, 2009

Dr. Video Games 0031 posted:

Not sure where else to post this, but details on Qualcomm's upcoming Snapdragon X Elite have leaked: https://videocardz.com/newz/snapdragon-x-elite-leak-12-core-oryon-cpu-4-6tf-adreno-gpu-lpddr5x-av1-and-wifi7

My understanding is that this thing is meant to power Windows ARM laptops. Is the ARM branch of Windows no longer a joke? Won't most apps and games be wholly incompatible? I'm confused as to how this is supposed to be a good idea. What's the benefit of using an ARM processor over an x86 one?

Is there a native port of MS Office yet? I'd bet that for a whole lot of use cases, if you've got an ARM browser and MS Office, then substandard emulation for the long tail of other applications is fine. Come to think of it, anything that's .NET and JIT-ed anyway will already be completely fine too.

I know that ARM is making inroads and gaining territory in the server segment, and clearly Apple has had enormous success. The last time that x86 and ARM went head to head was the Atom Phone days, but that was a weird market and time and Intel may well have sabotaged themselves. For Windows laptops, even if ARM doesn't gain market share quickly it'll limit AMD and Intel's profit margins.

Twerk from Home
Jan 17, 2009

Farmer Crack-rear end posted:

shsc just loves segmentation, there's like three separate work chat threads. see klyith mentioning overclock chat when, hey, there's already an overclocking megathread! :buddy:


"nuhhhh there's weird fiddly platform bullshit" there's nothing stopping that being discussed in a combined x86 thread. ridiculous

I'd love to see a unified x86 thread, ideally with an OP that mentions Centaur and VIA.

Twerk from Home
Jan 17, 2009

VorpalFish posted:

To play devil's advocate, that's kind of how max turbo speeds end up working on everything, especially low power mobile chips. You only ever see it with 1c or 2c load, and that gets dispatched to a preferred core.

On the other hand, 7000 series APU naming was already imo the worst of anyone's current offerings, with a bunch being straight up rebranding of 6000 and 5000 series chips to make them look newer.

Friend, have you taken a look at the Intel thread and their "14th generation" desktop chips?

Twerk from Home
Jan 17, 2009

Mental Hospitality posted:

The laptop Phoenix APUs seem quite performant in those mini-PCs I keep seeing on YouTube. I wonder if you'll be able to go hog wild with the power limits to get even more performance from the desktop parts. I'm sure you'd reach a point of diminishing returns, but if someone already has a beefy air cooler it would be neat to see just how much performance you could extract from the chip.

I'd posit "not much" given that we already saw the 5800H laptop vs 5700G desktop comparison and it looks broadly like this: https://www.cpubenchmark.net/compare/3907vs4323/AMD-Ryzen-7-5800H-vs-AMD-Ryzen-7-5700G



If you're able to throw way more power at the GPU, you also start running smack into memory bandwidth limitations. Two-channel DDR5 just isn't anywhere near GPU memory bandwidth.

Twerk from Home
Jan 17, 2009

Rinkles posted:

speaking of, the Steam Deck refresh (OLED + die shrink) apparently performs slightly better thanks to having faster memory.

5500 v 6400.

With how modern CPU clock and power management works, a die shrink alone will probably yield only a very small performance boost by itself. They also added a bigger heatsink and fan; it's a general tweak, not just the shrink.

Twerk from Home
Jan 17, 2009

Cross-Section posted:

With all the tech sales coming up, should I buy a good pair of DDR5 sticks in preparation for the eventual Ryzen 9000 CPUs or is it likely that RAM prices in late 2024 (or whenever these CPUs drop) will be as low as they currently are?

I wouldn't do this for DDR5. Memory speeds, voltages, and support are still improving, and if you want a bunch of memory or really fast memory, I'd wait and see what next-gen processors want.

I do think this is the bottom for DDR4 prices though!

Twerk from Home
Jan 17, 2009

BurritoJustice posted:

In terms of factorio, with large bases it is completely memory bottlenecked. This means that the VCache still helps, as cache increases effective memory bandwidth, but Intel's much faster memory subsystem gives them the win in general. Amusingly, I do actually have the world record on the FactorioBox 50K benchmark (61UPS), despite using a 7950X3D, but I cheated a little by using Linux and a bunch of kernel tweaks that help massively (custom malloc with huge pages and much more). You can get similar performance using Intel on bone stock windows.

Is Factorio more memory latency sensitive or bandwidth sensitive? Does it do any better on 4-8 channel DDR5 setups, or does the higher latency that bigger memory controllers and potential RDIMMs bring hurt performance?

Twerk from Home
Jan 17, 2009

Anime Schoolgirl posted:

Z1 NUClike in development

https://www.youtube.com/watch?v=uGaIO6Jbqg4

my eyes are darting about looking around for a 7020u-based minipc

What's the appeal? 7020U is Zen 2, right? Wouldn't you be better off with the 5800H, Zen 3, which you can still get with 16GB of RAM and 500GB of SSD for under $300?

Twerk from Home
Jan 17, 2009

SpaceDrake posted:

So I had the itch to play around today, and I was a little surprised to see that 2001SE actually booted and ran on my 7800X3D/4070 Win11 system... but 3DMark 03, 05 and 06 wouldn't. (I was less surprised to see that 2000 didn't start correctly, and 99 refused to even install. :v:) Sadly, they give extremely nonspecific errors when starting up, so it's hard to troubleshoot what's going on. I'm tempted to download Vantage and see how well the system crunches through that, but 2001SE was funny enough as it was, and the 11 result was impressive enough for my blood.

Funnily enough, though, I actually do get a sound error from 2001SE's demo even though the rest runs fine. I guess 2001SE doesn't play nice with the Realtek audio on the B650 Pro RS.

Maybe it wanted Creative EAX? Get yourself an Audigy, friend.

poo poo, you just reminded me that I still have a dedicated sound card in my desktop because the onboard realtek made me mad. Asus Xonar to the rescue!

Twerk from Home
Jan 17, 2009

gradenko_2000 posted:

is AMD really going to be shipping product where only the left-most digit is moved up from a 7 to an 8, but all the rest of the digits (meaning the actual technical specifications) are the same?

It's just tradition; every vendor does it. Remember the Nvidia 8800 GT / 9800 GT?

Twerk from Home
Jan 17, 2009

OhFunny posted:

Apple did this too. iPhone numbering jumped from 8 to 10, so not to be a number behind Samsung.

I thought it was because naive software checks for "iPhone 9x" would think it was running on iPhone '95.

Twerk from Home
Jan 17, 2009

Canned Sunshine posted:

Zen 5 is also moving from 4 to 6 ALUs, so I'm wondering now if Zen 5c will see that, or will remain at 4 if it's based off of Zen 4.

That's disappointing if it's the case.

Ostensibly the 4c/5c cores are competing against Intel E-cores, which not only clock lower but also only have 128-bit-wide vector units, right? They have no need to worry about getting outperformed.

Twerk from Home
Jan 17, 2009

Jeff Fatwood posted:

Where are the 5700X3D reviews, that drat thing released today and there's no coverage.

I think I'm going to drop one in to replace a 1700X.

Does it come with a cooler? I'm using a stock Wraith, I wonder if box coolers have gotten any better in the last 7 years. Intel's new ones look pretty sick.

Twerk from Home
Jan 17, 2009

Anime Schoolgirl posted:

all X3D chips don't have coolers bundled.

That's a tightwad move for some expensive chips.

Twerk from Home
Jan 17, 2009

Shipon posted:

absolutely no one buying an X3D was going to use the stock cooler anyway, it would have been a waste of money

Should I not? I was planning on it, after doing a hand-me-down / leftovers build where the only other new parts were $50 worth of RAM and a $40 case, only to discover that a 1700X is slower than I thought and get tempted to blow the budget and upgrade. My kids want to play Planet Zoo and other stuff that won't run on the Intel iGPU they started gaming on.

$249 feels like a pretty decent value, I guess I could cheap out and get a 5600X for half that but the X3D and 8 cores sure is neat.

Edit: I have an AMD branded cooler with an RGB LED ring around it that looks a hell of a lot like the Phenom II X4 965 box cooler I had in 2009. I thought this was a wraith and what came with the 1700X, but I got this CPU, cooler, and MB given to me by my brother in law and don't know the whole history. Rather than taking a photo and showing y'all, I googled and apparently this is a Wraith Prism. It looks like it should do OK with a 5700X3D?

Twerk from Home fucked around with this message at 01:36 on Feb 1, 2024

Twerk from Home
Jan 17, 2009

repiv posted:

RE Engine calls checkerboard rendering interlacing for some reason

That feels accurate enough, neat. I thought checkerboarding had fallen by the wayside now that we have better upscaling tech?

Twerk from Home
Jan 17, 2009

The same thing happened with Intel: the 12700K is faster than, and has at times been cheaper than, the 12700 non-K.

Twerk from Home
Jan 17, 2009

Josh Lyman posted:

Would it make sense to think of my 7900X as 2x 6 core CPUs rather than 1x 12 core? If so, is Intel’s P+E architecture better about this type of thing?

Intel's P+E architecture in client chips has a single L3 cache, yes. However, Intel's setup has its own big problems with caching, coherency, and inter-core bandwidth. When the E-cores wake up, the ring bus clocks down, slowing how fast the P-cores can access L3 or memory: https://chipsandcheese.com/2021/12/16/alder-lake-e-cores-ring-clock-and-hybrid-teething-troubles/

Also, the E-cores in a cluster share an L2 cache but have to synchronize through L3, so core-to-core latency is worse between E-cores that share an L2 than between P-cores, which each have independent L2 caches.

Twerk from Home
Jan 17, 2009

forbidden dialectics posted:

...is it possible to learn this power?

Look at your mobo's QVL RAM list, pick the highest validated speed you see on the list, crank the voltage one notch past that, and run tight timings.

Every board's QVL has some insane poo poo on there, like running a 2400 kit at DDR4-4400 or faster at 1.6V or something.

Twerk from Home
Jan 17, 2009

Klyith posted:

I can think of lots of good reasons.

It's a thing average people do with PCs that actually requires high performance. If you don't play games and don't need grunt for some sort of professional work that relatively few people do, you can be happy with a very inexpensive CPU.

It's a thing that has been totally crippling for non-x86 platforms in the past, and you can't just get a native compile.

MacBooks still suck at games (certainly due more to Apple neglect than hardware), so that's a place they can get a win and show off their thing being "faster" than a much more expensive M2 or M3 laptop.

Macs hold their own at games as long as you buy some $75 software to run Windows games that may or may not continue to work in the future: https://www.codeweavers.com/crossover/

It's incredibly annoying that a bunch of games that used to work natively on Mac no longer do. We lost a lot when Apple removed the ability to run 32 bit applications, and are losing way more as OpenGL keeps rotting.

I do wonder if Apple is going to go all the way and just kill OpenGL in the next few years.

My experience has been that tons of games that I expected to run on macOS do not. CS:GO, TF2, Guild Wars 2: none of them run on Macs at this point.

Twerk from Home fucked around with this message at 05:25 on Apr 9, 2024

Twerk from Home
Jan 17, 2009

VorpalFish posted:

It's more than that - the die with vcache on it is much harder to cool so they dial clocks and voltages way back - the 7950x3d has a die without vcache and that's almost certainly how it hits those clocks.

You probably will not see peak boost on the 7950x3d for any of the cores on the die with the extra cache, so your scheduler kinda has to pick whether an application will scale better with clock speed or cache when deciding where to put a thread.

I didn't realize that the X3D chips were exposed to the OS as heterogeneous; I thought part of the issue was that the OS, and thus the scheduler, has no idea there are two different kinds of cores there?

Twerk from Home
Jan 17, 2009

Subjunctive posted:

I actually have a use-case at work for this sort of setup right now (run many tiny web assembly functions, as parallel as we can get, over small data) but I doubt I can get GCP to deploy them for us!

Isn't that use case of "tons of cores without significant I/O or memory bandwidth" about to be completely devoured by ARM64 (whether Ampere or in-house) and by E-core Intel? I haven't looked at the GCP pricing comparison, but I do feel like we're heading for a future where those kinds of workloads aren't cost-effective to run on mainstream Intel or AMD.

I guess we'll see how Zen 5c SKUs and pricing shake out, or if the big cloud guys buy from Ampere or just roll their own ARM.
