Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
If I were a betting man, I'd say Zen6 will be out before DDR6 is sufficiently mature and available for AMD to adopt it. And the evidence suggests that if Zen6 sticks with DDR5, it'll use the existing socket.


Cygni
Nov 12, 2005

raring to post

FuturePastNow posted:

Well, the guy who put AM5+ on GitHub and caused this rumor simply forgot AM3+ existed and is sorry about the confusion

https://twitter.com/platomaniac/status/1776982918977532393

Yeah, I think this is the answer.

On the general topic of "would AMD do a +": older + revisions were to add features, normally a faster bus or more advanced power support for big architectural changes, while the bigger desktop socket steps were to add support for new RAM generations. I don't see anything on the Zen5 change list that suggests a + would already be necessary, as it's using the same IO die as Zen4. Especially after the amount of whining the board makers did about early AM5 costs. So I think AMD could bring the practice back in theory (and they almost did when they tried to bifurcate AM4), but now doesn't seem like the time.

For a bigger socket step, DDR6/PCIe 6.0 sounds like a likely break point, but that's a ways out: probably late 2025 for server and 2026 for desktop if I had to guess.

On the Intel side, Arrow Lake will get a new socket with its new chiplet/tile design, and it looks like Intel will ride that socket for a while as well... perhaps to DDR6.

Klyith
Aug 3, 2007

GBS Pledge Week

Subjunctive posted:

What’s the motivation for moving power management onto the DIMM for DDR6? Is there some limit that’s being hit when pushing it from the motherboard, or do they just want to avoid motherboards loving with voltages badly?

Seeing some references that they're gonna do dynamic frequency & voltage? That would be cool if true; main memory is one of the last things that doesn't adjust to match power use and required performance.

But there's not a lot of info on DDR6 yet.

BlankSystemDaemon
Mar 13, 2009



Fun fact: if you forget to set DRAM Performance Mode on ASRock boards to Competitive or Aggressive, the CAS latency will be 50 instead of what the EXPO profile defines, which for me is 32, because that was the only kit available when I built this system.

So it’s still possible to experience the heady days of 90s overclocking, when changing the strobe rate meant a noticeable performance increase.

Arivia
Mar 17, 2011
"the strobe rate" :catdrugs:

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Subjunctive posted:

It has the Neural Engine NPU (16 cores/18T op/s in M3) but it’s not clear exactly what it’s being used for. There are some PyTorch conversion tools, but not much direct API for accessing it AIUI.

This gives sort of an overview.

https://machinelearning.apple.com/research/neural-engine-transformers

Instead of explicitly requesting the ANE, you just ask Core ML to run a model and it chooses whether to use CPU, GPU, ANE, or even a combination of those resources. Makes it so that in theory, programmers don't have to worry about whether the device they're running on even has an ANE, but lots of people complain because they do want an explicit "run this on the ANE" interface.
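For anyone curious, here's roughly what that looks like from Python via the coremltools package (a hedged sketch: "model.mlpackage" is a placeholder filename, and prediction only actually runs on Apple hardware). Note there is no ANE-only option, just constraints on which compute units Core ML may pick, which is exactly the complaint:

code:
# Sketch: loading a Core ML model with a constraint on compute units.
import coremltools as ct

# Default: Core ML is free to pick CPU, GPU, ANE, or a mix of them.
model = ct.models.MLModel("model.mlpackage",
                          compute_units=ct.ComputeUnit.ALL)

# Closest thing to "please use the ANE": restrict to CPU + Neural Engine.
# Core ML may still fall back to CPU for ops the ANE doesn't support.
model_ne = ct.models.MLModel("model.mlpackage",
                             compute_units=ct.ComputeUnit.CPU_AND_NE)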

Most of the macOS/iOS features Apple builds on Core ML seem to be about images in one way or another. Photos.app automatically classifies your picture library and somehow manages to learn things like your cat's name (it might have asked, I forget) so you can search by image content. (It's both pretty good and inaccurate; it has a hard time telling the difference between my tuxedo cat and other tuxedo cats.) They also use ML to do real-time image enhancement on the video feed from the laptop's webcam so you look better in video calls.

My actual unironic favorite is built-in near-zero-UI OCR. In a lot of contexts, you can now briefly hover the mouse cursor over text in an image (or even a paused video) and it will turn to the text selection cursor; you can then select text in the image to copy and paste it elsewhere. It handles a surprising number of languages; my main use of it is to C&P into Google Translate. Kinda neat to be able to translate signage and so forth in images you find on the Internet.

Dr. Video Games 0031 posted:

They haven't said. There's a 50/50 chance that Microsoft waffles long enough that the AI boom wanes before any of this becomes reality, makes only a half-hearted attempt at pushing "AI PCs," and then drops the matter and pretends it never happened within 5 years.

Yuuup. The best theory I've seen is that Microsoft's in it to sell expensive Azure cloud compute time to suckers who've bought into the AI hype and need to train big models. Pushing the "AI PC" is them trying to stimulate (or simulate?) demand. Once the next AI winter comes - AI has been doing boom and bust cycles since it first became a thing in the 1960s - they'll probably drop it.

Which is a shame, as there really are useful things you can do with an on-device NPU, as Apple's been showing since 2017. Nothing revolutionary, nothing actually "intelligent", just nice little quality-of-life features. Things you could even do without an NPU, but it would be inconvenient due to battery drain.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Subjunctive posted:

What’s the motivation for moving power management onto the DIMM for DDR6? Is there some limit that’s being hit when pushing it from the motherboard, or do they just want to avoid motherboards loving with voltages badly?

Probably just wanting to move the regulators closer to the devices they're powering in order to provide better stability and tighter regulation tolerances.

Ohm's law says V = I*R. Translated: the voltage drop across a resistor is equal to the current flowing through it times its resistance. The resistance of a conductor is proportional to its length, meaning the further a regulator is from the device it's powering, the more error there will be in the voltage the device sees.

Regulator circuits often use a feedback line to accurately sense the voltage at the point of load. This lets them compensate for IR droop on the main supply line: the regulator simply raises its output above the nominal level until the voltage at the device is correct. The reason this works despite the distance is Ohm's law again, from the other side: the sense wire carries next to zero current, so the voltage drop across it is also next to zero.

However, IDK if they want to get into doing that kind of thing across a DIMM interface, particularly in multi-DIMM systems where each DIMM is going to be a different distance from the regulator. Physically placing the regulator as close as possible to its load is the best choice from a performance standpoint, so there you are.
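To put toy numbers on that (all values below are invented for illustration, not real DDR specs):

code:
# Toy IR-droop arithmetic illustrating the post above. Values are made up.
V_set = 1.1     # regulator output setpoint (V)
I_load = 5.0    # current drawn by the DRAM (A)
R_trace = 0.01  # supply-trace resistance (ohms); grows with distance

droop = I_load * R_trace  # Ohm's law: V = I * R -> 0.05 V
print(f"voltage at the DIMM: {V_set - droop:.3f} V")  # 1.050 V

# Remote sense: the feedback wire carries ~0 A, so its own IR drop is ~0
# and it reports the true point-of-load voltage. The regulator then raises
# its output until the load sees the nominal value.
print(f"compensated regulator output: {V_set + droop:.3f} V")  # 1.150 V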

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Love your posts, BTW.

BobHoward posted:

My actual unironic favorite is built-in near-zero-UI OCR. In a lot of contexts, you can now briefly hover the mouse cursor over text in an image (or even a paused video) and it will turn to the text selection cursor; you can then select text in the image to copy and paste it elsewhere.

I did this the other day to move a license identifier from a non-networked computer to another computer via iPhone camera, and it mostly worked but it skipped one character in the middle which caused me to waste a license for the software.

Generally it is crazy good, though. Never had that happen before.

shrike82
Jun 11, 2005

Anyone following the Snapdragon X Elite previews? If they deliver on Apple Silicon performance for Windows laptops including the Rosetta-equivalent for non-native code, it seems like it'd be a good buy for ultrabooks.

The previews are odd in that they're trying to show off gaming performance, which isn't the main reason people would go for these laptops.

BurritoJustice
Oct 9, 2012

They already moved power management onto the DIMM with DDR5: UDIMMs take 5V, RDIMMs take 12V, and all the regulation is done on-DIMM.

Dynamic voltage/frequency scaling is coming, though. Intel experimented with it on DDR5 via XMP 3.0, but nobody uses it because it sucks to have your RAM dropping back to JEDEC speeds all the time.

Klyith
Aug 3, 2007

GBS Pledge Week

shrike82 posted:

Anyone following the Snapdragon X Elite previews? If they deliver on Apple Silicon performance for Windows laptops including the Rosetta-equivalent for non-native code, it seems like it'd be a good buy for ultrabooks.

The previews are odd in that they're trying to show off gaming performance, which isn't the main reason people would go for these laptops.

I can think of lots of good reasons.

It's a thing average people do with PCs that actually requires high performance. If you don't play games and don't need grunt for some sort of professional work that relatively few people do, you can be happy with a very inexpensive CPU.

It's a thing that has been totally crippling for non-x86 platforms in the past, and you can't just get a native compile.

MacBooks still suck at games (certainly due more to Apple neglect than hardware), so that's a place they can get a win and show off their thing being "faster" than a much more expensive M2 or M3 laptop.

repiv
Aug 13, 2009

it's also a way to show that their DX12 drivers are passable, even for software that was never tested on their architecture during development

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Klyith posted:

I can think of lots of good reasons.

It's a thing average people do with PCs that actually requires high performance. If you don't play games and don't need grunt for some sort of professional work that relatively few people do, you can be happy with a very inexpensive CPU.

It's a thing that has been totally crippling for non-x86 platforms in the past, and you can't just get a native compile.

MacBooks still suck at games (certainly due more to Apple neglect than hardware), so that's a place they can get a win and show off their thing being "faster" than a much more expensive M2 or M3 laptop.

Macs hold their own at games as long as you buy some $75 software to run Windows games that may or may not continue to work in the future: https://www.codeweavers.com/crossover/

It's incredibly annoying that a bunch of games that used to work natively on Mac no longer do. We lost a lot when Apple removed the ability to run 32-bit applications, and we're losing way more as OpenGL keeps rotting.

I do wonder if Apple is going to go all the way and just kill OpenGL in the next few years.

My experience has been that tons of games I expected to run on macOS do not. CS:GO, TF2, Guild Wars 2: none of them run on Macs at this point.

Twerk from Home fucked around with this message at 05:25 on Apr 9, 2024

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
The subset of games that run without Crossover isn't large (but probably bigger than people realize), but the stuff that does run runs REALLY well. BG3 ran incredibly on my M1 MBP.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Ironically, Microsoft getting Windows on ARM to have good DX12 performance might actually be a decent boon to macOS gaming via Crossover, Parallels, and VMware Fusion.

It is frustrating that gaming isn't more than just a bullet point for Apple, since they're cash-rich enough to make a significant push by supporting developer porting efforts. And by all accounts, MetalFX upscaling is superior to FSR :haw:

repiv
Aug 13, 2009

Twerk from Home posted:

I do wonder if Apple is going to go all the way and just kill OpenGL in the next few years.

didn't they already scrap their native opengl drivers and switch over to a compatibility shim that runs on top of metal

hence why things got flakier, because the shim isn't bug for bug compatible with the old drivers

movax
Aug 30, 2008

I've moved to Apple for most of my day-to-day productivity as I become an old + Windows keeps getting worse and worse... I thought my most recent build with a 3960X would be my next 10-year PC (I built it after having a 2600K for 10 years), but 1) I don't use it that much, and 2) there's enough stuff I seem to have to do with Process Lasso and the like that it's becoming a pain. I initially got it to have oodles of RAM (128GB) and PCIe lanes (for a ton of NVMe), but now that I have a NAS + will build a separate VM box... I don't need that much RAM (probably never did) and don't need rear end loads of NVMe lanes.

I think I'll get whatever Zen 5 SKU has the highest clocks / fewest CCDs to just avoid any issues. I imagine the new X3D parts will repeat the one-die-with, one-die-without thing, though? How well did AMD's optimizer work on the Zen 4 SKUs to put games on the 'right' CCD?

Now, to just hope some mobo manufacturer puts a 10Gbit SFP+ port on an mATX board... highly unlikely (I'll probably get 10GbE w/ a Marvell NIC), but it would save a PCIe slot. Doubt it will happen on a desktop board though.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

the 7800X3D is the part you want: one CCD, good clocks, X3D cache

movax
Aug 30, 2008

Subjunctive posted:

the 7800X3D is the part you want: one CCD, good clocks, X3D cache

I was looking at the previous generation... why doesn't it boost as high as the 7950X3D (per wiki), or is that boost listed for the non V-Cache CCD?

If naming holds, I imagine the 9800X3D will be the popular / sold out choice in Q3 this year.

Kibner
Oct 21, 2008

Acguy Supremacy

movax posted:

I was looking at the previous generation... why doesn't it boost as high as the 7950X3D (per wiki), or is that boost listed for the non V-Cache CCD?

If naming holds, I imagine the 9800X3D will be the popular / sold out choice in Q3 this year.

The X3D chiplet on the 7800X3D doesn't boost as high as the 7950X3D's X3D chiplet to upsell people on the 7950X3D. It doesn't really matter in games. It is balanced out by the 7800X3D having only one chiplet drawing power in that tiny space instead of two + interconnect + whatever else.

e: all the previous generation (5000-series) X3D parts were only on chips with a single CCD. so there was no process lassoing or anything similar involved for them

Kibner fucked around with this message at 14:52 on Apr 12, 2024

movax
Aug 30, 2008

Kibner posted:

The X3D chiplet on the 7800X3D doesn't boost as high as the 7950X3D's X3D chiplet to upsell people on the 7950X3D. It doesn't really matter in games. It is balanced out by the 7800X3D having only one chiplet drawing power in that tiny space instead of two + interconnect + whatever else.

e: all the previous generation (5000-series) X3D parts were only on chips with a single CCD. so there was no process lassoing or anything similar involved for them

Oh duh. Yeah, in that case, one CCD with the entire package's PPT / thermal envelope + water cooling will make that a non-issue.

Sorry 3960X, you just came into my life at a time when WFH was ending + I lost absolutely all interest in using Win10 as a productivity platform. Tiny mATX box mostly for gaming, here I come...

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
If you want to build a tiny box go ahead, but do so with the understanding that you're yet again building a PC that is valuing things you probably won't care about for very long over things that you probably will care about in the long run, like upgrading GPUs or achieving quieter operation by having better airflow.

Klyith
Aug 3, 2007

GBS Pledge Week

K8.0 posted:

If you want to build a tiny box go ahead, but do so with the understanding that you're yet again building a PC that is valuing things you probably won't care about for very long over things that you probably will care about in the long run, like upgrading GPUs or achieving quieter operation by having better airflow.

mATX is generally fine for GPU compatibility and airflow. Though mATX isn't what I'd call "tiny": versus something like the C-type Fractals, an mATX case is 15" tall instead of 17", big whoop.


If movax meant ITX instead, there are decent options for ITX cases that still support all but the most chonky GPUs and work fine with air cooling. They're just big ITX.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, most mATX cases are like "standard big-OEM desktop" size. Noticeably smaller than a full tower, but not really "small form factor". It is a nice sweet spot if you're confident that you won't need all of the ATX expansion slots but you still want a relatively normal build experience, especially since mATX boards tend to be a little cheaper than full ATX boards with the same feature set and they're definitely cheaper than ITX.

You can find some SFF-ish cases out there for them though, especially if you're willing to go low profile. I use a Silverstone ML03 for my HTPC and it's great since it looks kinda like a DVR and fits easily into a TV console, but it wouldn't be ideal for a gaming build.

Eletriarnation fucked around with this message at 16:28 on Apr 12, 2024

VorpalFish
Mar 22, 2007
reasonably awesometm

Kibner posted:

The X3D chiplet on the 7800X3D doesn't boost as high as the 7950X3D's X3D chiplet to upsell people on the 7950X3D. It doesn't really matter in games. It is balanced out by the 7800X3D having only one chiplet drawing power in that tiny space instead of two + interconnect + whatever else.

e: all the previous generation (5000-series) X3D parts were only on chips with a single CCD. so there was no process lassoing or anything similar involved for them

It's more than that - the die with vcache on it is much harder to cool so they dial clocks and voltages way back - the 7950x3d has a die without vcache and that's almost certainly how it hits those clocks.

You probably will not see peak boost on the 7950x3d for any of the cores on the die with the extra cache, so your scheduler kinda has to pick whether an application will scale better with clock speed or cache when deciding where to put a thread.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

VorpalFish posted:

It's more than that - the die with vcache on it is much harder to cool so they dial clocks and voltages way back - the 7950x3d has a die without vcache and that's almost certainly how it hits those clocks.

You probably will not see peak boost on the 7950x3d for any of the cores on the die with the extra cache, so your scheduler kinda has to pick whether an application will scale better with clock speed or cache when deciding where to put a thread.

I didn't realize that the X3D chips were exposed to the OS as heterogeneous; I thought part of the issue is that the OS, and thus the scheduler, has no idea that there are 2 different kinds of cores there?

Kibner
Oct 21, 2008

Acguy Supremacy

VorpalFish posted:

It's more than that - the die with vcache on it is much harder to cool so they dial clocks and voltages way back - the 7950x3d has a die without vcache and that's almost certainly how it hits those clocks.

You probably will not see peak boost on the 7950x3d for any of the cores on the die with the extra cache, so your scheduler kinda has to pick whether an application will scale better with clock speed or cache when deciding where to put a thread.

No, I am saying the chiplet with the 3D cache on the 7950X3D will boost higher than the chiplet with the 3D cache on the 7800X3D. This is independent of the non-3D cache chiplet on the 7950X3D which boosts higher than the 3D cache chiplet on the same CPU.

However, in practice, the higher boost on the 3D cache chiplet of the 7950X3D rarely, if ever, comes into play because of all the other stuff I mentioned affecting the power and heat envelopes the 3D cache chiplet is allowed. Things that the 7800X3D version doesn't have to worry about. Thus, they are usually equal in gaming performance, and sometimes the 7800X3D even squeaks out some wins (even when using process lasso and similar utilities).

Kibner
Oct 21, 2008

Acguy Supremacy

Twerk from Home posted:

I didn't realize that the X3D chips were exposed to the OS as heterogeneous; I thought part of the issue is that the OS, and thus the scheduler, has no idea that there are 2 different kinds of cores there?

afaik, Windows can't really tell. It just knows that if Xbox Game Bar recognizes an app and the CPU is a 7950X3D or a 7900X3D, then it puts the app's threads on cores 0-6/8/12/16 (depending on which CPU and whether hyperthreading is turned on).

But maybe that has changed or my understanding has always been wrong!

VorpalFish
Mar 22, 2007
reasonably awesometm

Kibner posted:

No, I am saying the chiplet with the 3D cache on the 7950X3D will boost higher than the chiplet with the 3D cache on the 7800X3D. This is independent of the non-3D cache chiplet on the 7950X3D which boosts higher than the 3D cache chiplet on the same CPU.

However, in practice, the higher boost on the 3D cache chiplet of the 7950X3D rarely, if ever, comes into play because of all the other stuff I mentioned affecting the power and heat envelopes the 3D cache chiplet is allowed. Things that the 7800X3D version doesn't have to worry about. Thus, they are usually equal in gaming performance, and sometimes the 7800X3D even squeaks out some wins (even when using process lasso and similar utilities).

The difference in clocks between the vcache CCD on the 7950 and the 7800 will not be nearly as dramatic as you would expect looking at the spec sheet (700 MHz), which IIRC was what threw the person in the original comment. Yes, the 7950X3D is probably better binned, but I don't believe you'll see 5.7 on any vcache core.

VorpalFish
Mar 22, 2007
reasonably awesometm

A bunch of reviewers did previews of the 7800x3d at launch by literally disabling the higher clocking CCD on a 7950. This is what I'm talking about :

https://www.guru3d.com/review/ryzen-7800x3d-preview-7950x3d-one-ccd-disabled/page-5/#power-consumption

With only the vcache CCD enabled, clocks peaked at around 5200 MHz, vs 5700 MHz with the non-vcache CCD. Yes, it clocks higher than the 7800X3D, but the difference is much smaller than the spec sheet might lead you to believe.

AMD relies on the non vcache CCD to hit those boost clocks.

Edit:

movax posted:

I was looking at the previous generation... why doesn't it boost as high as the 7950X3D (per wiki), or is that boost listed for the non V-Cache CCD?

If naming holds, I imagine the 9800X3D will be the popular / sold out choice in Q3 this year.

This was really what I was referring to; to answer the question directly, yes the boost listed is for the non vcache CCD.

Kibner is correct that the 7950X3D's CCD with cache will still probably boost higher than the 7800X3D's, but there's no way it hits what's on the spec sheet; the difference is more like 200 MHz.

VorpalFish fucked around with this message at 18:09 on Apr 12, 2024

Kibner
Oct 21, 2008

Acguy Supremacy

VorpalFish posted:

The difference in clocks between the vcache CCD on the 7950 and the 7800 will not be nearly as dramatic as you would expect looking at the spec sheet (700 MHz), which IIRC was what threw the person in the original comment. Yes, the 7950X3D is probably better binned, but I don't believe you'll see 5.7 on any vcache core.

Yeah, I think we are mostly in agreement and just talking past each other or something. The advertised boost is definitely for the non-3D-cache chiplet. The 3D cache chiplet also has a higher boost limit on the 7950X3D but, yeah, it's only like 150-250 MHz and it is rarely ever hit. It's inconsequential.

BurritoJustice
Oct 9, 2012

Kibner posted:

afaik, Windows can't really tell. It just knows that if Xbox Game Bar recognizes an app and the CPU is a 7950X3D or a 7900X3D, then it puts the app's threads on cores 0-6/8/12/16 (depending on which CPU and whether hyperthreading is turned on).

But maybe that has changed or my understanding has always been wrong!

It doesn't actually do any thread affinity or pinning; the way the driver works is this:

- Detect game is running thanks to game bar
- Change CPPC Preferred Core order to be all cache cores before all frequency cores (by default, non-vcache cores are first so that single threaded tasks can get full boost clock).
- Change the Windows power plan policy to park 50% of cores, which will turn off the bottom 50% of cores in the CPPC preferred cores order (which are now all the non-vcache cores).
- If load on active cores exceeds a certain threshold, disable core parking until load goes below another threshold.

The key differences between this and actually doing core affinity are:

- Your second CCD doesn't do anything while the game is running.
- If you do load up a multithreaded task in the background, it'll just turn off the parking, and because there is no affinity, nothing will keep the game from moving over to the other cores (which it usually does immediately, killing game performance).
- If you do intermittent stuff in the background, it'll also end up starting and stopping the parking, which leads to stuttery performance.
- If you ever install the VCache driver and later turn off the second CCD in the UEFI, you'll amusingly end up parking half your cache cores whenever you play a game (down to 3/4 cores!). It's weirdly insidious; you basically need to reinstall Windows to remove it.

If you want a 7950X3D for the use case of running background MT tasks while gaming, you've got to get used to managing affinity yourself (see the sketch below). I have genuinely no clue why AMD doesn't use affinity in the first place.
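For reference, a minimal sketch of doing the affinity yourself from Python with the third-party psutil package. The PID and the assumption that logical CPUs 0-15 are the V-cache CCD's sixteen SMT threads are both hypothetical; check your own topology before using this:

code:
# Hypothetical sketch: pin an already-running game to the V-cache CCD.
# Assumes logical CPUs 0-15 belong to CCD0 (8 cores x 2 SMT threads);
# verify against your own system first.
import psutil

def pin_to_cache_ccd(pid: int, cache_cpus=tuple(range(16))) -> None:
    """Restrict the process to the cache CCD's logical CPUs."""
    psutil.Process(pid).cpu_affinity(list(cache_cpus))

# pin_to_cache_ccd(12345)  # replace 12345 with the game's real PID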

And to add more specific numbers to the frequency discussion, the fused Fmax for each of the SKUs is as follows:

7800X3D: 5050MHz
7900X3D: 5150MHz/5650MHz
7950X3D: 5250MHz/5750MHz

The higher VCache clocks on the dual-CCD SKUs aren't just at the top end of the VFT curve; the whole curve is brought up, so you're typically always running around that +100/+200 MHz range over the 7800X3D.

SpaceDrake
Dec 22, 2006

I can't avoid filling a game with awful memes, even if I want to. It's in my bones...!

movax posted:

Now, to just hope some mobo manufacturer puts a 10Gbit SFP+ port on an mATX board... highly unlikely (I'll probably get 10GbE w/ a Marvell NIC), but it would save a PCIe slot. Doubt it will happen on a desktop board though.

To come back to this: integrated 10Gbit is very rare on consumer motherboards and is usually kept to the "ultra-luxury" lines. On AM5, it's offered on the MSI MEG ACE, Gigabyte AORUS EXTREME, MSI MEG GODLIKE, and Asus ProArt CREATOR models. The cheapest of those is the Asus at US$440. They're all full ATX or extended ATX, too.

I would definitely say a sub-$150 mATX board of some description, with a 10Gbit NIC plugged into the second PCIe slot like we're living in the 1990s, is the play. Motherboard stratification has absolutely become a problem (even for basic features like a god drat POST code readout), and it's not going to reverse any time soon, so integrated 10Gbit on an mATX board (which are treated purely as "value" boards these days) is out of the question.

Anyway, yeah, the 7800X3D is exactly what you're looking for. There are a ton of really good airflow-centric mATX cases, too. Liquid cooling isn't even necessary for the 7800X3D, generally, unless you just prefer it for build and aesthetic purposes.

(Also, good to see you posting in PC threads again, Movax. :3: )

Kibner
Oct 21, 2008

Acguy Supremacy

Ooh, thanks for that write up! I actually hadn't heard some of those specifics before. Especially the part with the 3D cache driver.

I need to go read up on how the 7950X3D behaves on *nix since that is what I'm running it on right now. I imagine there isn't anything done automatically to put game threads on the X3D cores.

Tuna-Fish
Sep 13, 2017

SpaceDrake posted:

I would definitely say a sub-$150 mATX board of some description, with a 10Gbit NIC plugged into the second PCIe slot like we're living in the 1990s, is the play.

I'd like to note that 10GbE is starting to become legacy in the server world, which is why you can get the relevant networking gear used for very reasonable prices. There are usually only two issues:

1. Older server adapters have obsolete PCIe connectors that are quite wide. x4 slots that are physically x16 are great for this; if your mobo only has physical x1 slots, you cannot use server network cards on it.

2. Server AIBs are generally designed for a high-airflow environment. If you use one, you need to make sure there is direct airflow over the card's heatsink.

That being said, if you are fine with SFP, you can go a lot faster than 10Gbps for reasonable prices used, with things like 56Gbps QSFP28 adapters available for below €50.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Tuna-Fish posted:

That being said, if you are fine with SFP, you can go a lot faster than 10Gbps for reasonable prices used, with things like 56Gbps QSFP28 adapters available for below €50.

yeah but switches for that… :homebrew:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Based on these comments I checked out prices again. drat, the ConnectX-4s have come down in price a lot over the last few months.

I've been eyeing an upgrade from my ConnectX-3 to a 4 for minor performance improvements and longevity.

Tuna-Fish
Sep 13, 2017

Subjunctive posted:

yeah but switches for that… :homebrew:

You can occasionally pick up used switches the same way too, although they tend to spend a lot less time on the market than the network cards. Makes sense: home-lab people who buy this crap used probably have a lot fewer devices with network cards per switch than actual server rooms do.

BurritoJustice
Oct 9, 2012

Kibner posted:

Ooh, thanks for that write up! I actually hadn't heard some of those specifics before. Especially the part with the 3D cache driver.

I need to go read up on how the 7950X3D behaves on *nix since that is what I'm running it on right now. I imagine there isn't anything done automatically to put game threads on the X3D cores.

Yep, there is nothing specific on Linux, and if you enable preferred cores (which will be on by default with the 6.9 kernel) it'll actually put games on the frequency cores first. I just launch with

code:
 taskset -c 0-7 {game} 
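If you're unsure which CPU numbers map to the cache CCD on your box, one way to check is to group logical CPUs by shared L3 (a hedged sketch, Linux sysfs only; the paths are standard but verify on your kernel):

code:
# Sketch: report which logical CPUs share each L3 slice, so you can spot
# the V-cache CCD (the one with the much larger L3) and feed it to taskset.
from pathlib import Path

l3 = {}
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    idx3 = cpu / "cache" / "index3"
    if idx3.exists():
        size_kb = int((idx3 / "size").read_text().strip().rstrip("K"))
        cpus = (idx3 / "shared_cpu_list").read_text().strip()
        l3[cpus] = size_kb

for cpus, size_kb in sorted(l3.items(), key=lambda kv: -kv[1]):
    print(f"L3 {size_kb // 1024} MiB shared by CPUs {cpus}")
# The biggest group is the cache CCD; e.g. taskset -c 0-7,16-23 {game}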


movax
Aug 30, 2008

Oops… 3D cache derail! The points made all make sense to me, though. Knowing myself, what would drive me insane is verifying that games are running on the right cores, so taking away the problem entirely by having one CCD wins over that part of my brain.

On the 3960X, it’s annoying enough watching games or ST tasks not live up to what they should be doing… it's not a gaming CPU, but BATTLETECH or Civ really shouldn’t be performing about the same as they did on my 2600K.

My background MT tasks at this point are usually a web browser with a few hundred tabs / Discord / Slack / whatever, which doesn't really count compared to people who, I imagine, are streaming or doing other things.

I have a Meshify 2C I built a custom WC loop in, and I designed around an ATX mobo with all PCIe slots used (creative fittings, let me tell you…), so I don't need to go smaller but thought it might be nice to have a bit more room. You'd need a PCIe 4.0-compliant Ethernet controller to get away with 10Gbit in an x1 slot; I have a Chelsio T520 in my machine right now and its primary contribution is filling the Windows Event Log with thousands of warnings per day. Probably will just get a cheap X520-based card and call it a day.
