mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

https://www.anandtech.com/show/15324/amd-ryzen-4000-mobile-apus-7nm-8core-on-both-15w-and-45w-coming-q1

General news about Zen2 APUs.

https://www.laptopmag.com/reviews/asus-zephyrus-rog-g14-hands-on-review

Specific news about an interesting gaming laptop.

* 8 core/16 thread CPU; 2.9GHz base/4.2GHz boost; 35W
* FHD, 120Hz display
* RTX 2060 DGPU
* Slim. 3.5 lbs. Does not look like it was designed by an 8-year-old with a Dragonball addiction.
* Other than that, pretty standard, except for the optional programmable lid LED array

movax
Aug 30, 2008

So does Intel just pay off the OEMs again this time to cockblock AMD, or ... ?

CrazyLoon
Aug 10, 2015

"..."
If so, they clearly didn't pay ASUS quite enough considering they went AMD for their flagship prebuilt. Still, this is going into laptops and not mobile phones yet. My prediction is that it'll be at that point that Intel goes: "Throw ALL the money at it!" and I guess we'll see how far cold hard cash and inferior products can get you.

CrazyLoon fucked around with this message at 06:04 on Jan 7, 2020

Drakhoran
Oct 21, 2012

EmpyreanFlux posted:

I...I have no idea why they didn't demo RDNA2 unless it's not ready, or it's for the same reason as Zen3: their sales of the RX 5000 series are actually really good based on internal numbers, and the last thing they want to do is cut off that cash flow too early.

It would be a bit strange to talk about RDNA2 before the final RDNA1 card (the 5600) is even available. I really wouldn't expect replacements for the current cards until second half of the year. Though if the "Big Navi" rumors are true I suppose they could launch a flagship Radeon RXX 6900XTX EXXXTREME for a thousand bucks a few months ahead of the rest of their RDNA2 lineup.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

6969XXX

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Drakhoran posted:

I suppose they could launch a flagship Radeon RXX 6900XTX EXXXTREME for a thousand bucks a few months ahead of the rest of their RDNA2 lineup.

If I were AMD marketing, I would adapt the Radeon VII strategy with what you can learn about AMD fanboys by lurking in r/amd:

* Take existing Navi10 chips
* Overvolt them to within a micrometer of their lives
* Put them on a board with a blower cooler design (which r/amd hates, but...)
* Feature the AMD SE Asia waifu characters Example 1, Example 2, or better yet resurrect Ruby on the front of the shroud
* Etch Lisa Su's signature into the backplate, with LED backlighting
* Sell for $600

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Advanced Waifu Devices

Actuarial Fables
Jul 29, 2014

Taco Defender

mdxi posted:

* Feature the AMD SE Asia waifu characters Example 1, Example 2, or better yet resurrect Ruby on the front of the shroud

The world makes less sense every day.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Drakhoran posted:

It would be a bit strange to talk about RDNA2 before the final RDNA1 card (the 5600) is even available. I really wouldn't expect replacements for the current cards until second half of the year. Though if the "Big Navi" rumors are true I suppose they could launch a flagship Radeon RXX 6900XTX EXXXTREME for a thousand bucks a few months ahead of the rest of their RDNA2 lineup.

Yeah, part of me was thinking they'd drop the 6900 XT first in like, July (tease it at Computex), and the 6700 XT and 6500 XT would follow in September and October. With the rumors of Nvidia dropping Ampere in the June-August timeframe, if the 6900 XT can at least match the RTX 3080 Ti (tall order) AND launch before it, then it'll take the wind out of Ampere's sails. I'm just a little weirded out by the complete lack of mention; they know they're behind Nvidia right now, so touting RDNA2 in some way might deflect attention from Ampere. AMD needs to target near-linear scaling while dropping power consumption 40-60% to be plausibly in line with Ampere, I think, which would be like...a Bulldozer-to-Zen moment, and I'd think they'd be more vocal about it. So uh, good luck AMD/RTG.

eames
May 9, 2009

Interesting to see the two different strategies at work: Ice Lake on 10nm while the rest remains on 14nm, as opposed to Renoir on the “old” 7nm while other chips are expected to move to 7nm+ (partial EUV) very soon.

Yet Renoir looks like it'll be competitive in at least some segments, and I have little doubt that AMD will be able to ship them in large volumes.

I just hope they got their idle power consumption/power gating under control.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Tech Jesus does a combo video on AMD and ASUS: https://www.youtube.com/watch?v=SnHD8g4z0EQ

mdxi posted:

* Feature the AMD SE Asia waifu characters Example 1, Example 2, or better yet resurrect Ruby on the front of the shroud

I assume one is CPU, the other is GPU, therefore why is one of those not a mouldering corpse

SwissArmyDruid fucked around with this message at 10:23 on Jan 7, 2020

Khorne
May 1, 2002

EmpyreanFlux posted:

With Zen3, I get it. They're probably having record sales and the last thing they want to do is Osborne themselves - people are already thirsty asf about Zen3. Zen3 doesn't need the demo hype Zen2 got, IMHO, and they can talk about it when it's in a more complete state.

I...I have no idea why they didn't demo RDNA2 unless it's not ready, or it's for the same reason as Zen3: their sales of the RX 5000 series are actually really good based on internal numbers, and the last thing they want to do is cut off that cash flow too early.

CES was about mobile and a 64c chip. Those are both giant leaps that people are hyped about.

They avoided dumping too much information and stealing attention away from the laptop narrative, which is potentially truckloads of money for them.

They'll talk about Zen3 and RDNA2 at some other event, by late Q2/early Q3 at the latest. Likely they'll have big things to talk about at all the major events this year while Intel goes "uhh, umm", because Intel won't be able to strike back until 2021-2022.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Yeah, it makes sense for AMD to just talk about mobile and mobile-related products during the Consumer Electronics Show. We can take the 3990X as a bone thrown to the enthusiasts.

The beefy stuff is probably being held for Computex in the summer.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
good post-CES interview with Dr Su

Shaocaholica
Oct 29, 2002

Fig. 5E
Zen2 ryzen doesn't support multiple sockets right? Have to go Epyc?

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.

Shaocaholica posted:

Zen2 ryzen doesn't support multiple sockets right? Have to go Epyc?
Only non-P variants of Epyc will do dual-socket; Epyc P, Threadripper, and Ryzen are strictly one-socket.

Cygni
Nov 12, 2005

raring to post

Shaocaholica posted:

Zen2 ryzen doesn't support multiple sockets right? Have to go Epyc?

Yeah, Epyc only. And only 2 socket max. Of course 2 socket Epyc can be 128 cores, hah.

Shaocaholica
Oct 29, 2002

Fig. 5E
Are multi socket 'desktop' platforms going away since you can now stick 64 cores into a single socket?


edit: but then again, why have massive multithreaded performance on the desktop when you can have twice as much? I guess there's always a niche need.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Is it possible to bridge two PCs via shared PCIe bus? :black101:


Shaocaholica posted:

Are multi socket 'desktop' platforms going away since you can now stick 64 cores into a single socket?

I think they mostly did, back around the time 4 cores was a thing

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

taqueso posted:

Is it possible to bridge two PCs via shared PCIe bus? :black101:

Yes, it's called InfiniBand RDMA: you can poke right into the memory of another PC (or another PCIe device on another PC) across the PCIe bus

https://en.wikipedia.org/wiki/Remote_direct_memory_access

Paul MaudDib fucked around with this message at 23:24 on Jan 7, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E
The Intel PCIe NUC looked interesting if you can just add CPU+mem modules via PCIe bus. NUMA tho

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

taqueso posted:

Is it possible to bridge two PCs via shared PCIe bus? :black101:

Yes, it’s called non-transparent bridging, and it uses mailboxes and doorbells and stuff to control accesses between multiple hosts, like way more than 2 even.
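
For a feel of that mailbox/doorbell handshake, here's a toy Python sketch of the pattern; the class names, register layout, and bit assignments are all made up for illustration, this is not real NTB driver code:

```python
# Toy model of the NTB-style doorbell/mailbox handshake between two hosts.
# Real non-transparent bridges expose scratchpad (mailbox) registers and a
# doorbell register across the link; this just mimics the pattern in software.

class ToyNtbBridge:
    """Pretend bridge: a few shared scratchpad slots plus a doorbell bitmask."""
    def __init__(self, num_scratchpads=4):
        self.scratchpad = [0] * num_scratchpads  # mailbox registers
        self.doorbell = 0                        # one bit per pending event

class ToyHost:
    def __init__(self, name, bridge, doorbell_bit):
        self.name = name
        self.bridge = bridge
        self.doorbell_bit = doorbell_bit  # the bit this host listens on

    def send(self, peer, slot, value):
        # Write the payload into a mailbox slot, then ring the peer's doorbell.
        self.bridge.scratchpad[slot] = value
        self.bridge.doorbell |= 1 << peer.doorbell_bit

    def poll(self, slot=0):
        # "Interrupt handler": if our doorbell bit is set, read the mailbox
        # slot and clear (ack) the bit.
        if self.bridge.doorbell & (1 << self.doorbell_bit):
            value = self.bridge.scratchpad[slot]
            self.bridge.doorbell &= ~(1 << self.doorbell_bit)
            print(f"{self.name} got mailbox value {value:#x}")

bridge = ToyNtbBridge()
host_a = ToyHost("host A", bridge, doorbell_bit=0)
host_b = ToyHost("host B", bridge, doorbell_bit=1)

host_a.send(host_b, slot=0, value=0xCAFE)  # A posts a message for B
host_b.poll()                              # B services its doorbell
```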

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Well, someone should pack that up into a single motherboard with 2-4 sockets.

e: with like 10 doorbells and 5 bathrooms

Shaocaholica
Oct 29, 2002

Fig. 5E
https://www.youtube.com/watch?v=gtuqSXbNDsM

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
making the CPU element itself follow PCIe form factor is kind of bizarre since it means the CPU cooling solution has to squeeze into a 2-slot form factor and exhaust out the top/back, instead of a DAN A4-SFX style solution where both CPU and GPU are back to back and could both suck fresh air through the side panel

it's also undoubtedly going to lead to someone plugging it into an actual PCIe slot at some point and probably blowing something up when two CPUs both try to master the bus and sink each other's signals

what they need is basically the PCIe equivalent of the BTX form factor for motherboards where everything is on the opposite side of the PCB relative to normal, and to make the element use that eICP standard.

Paul MaudDib fucked around with this message at 00:10 on Jan 8, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E
Why would it be limited to 2 slots tho?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Shaocaholica posted:

Why would it be limited to 2 slots tho?

it's not, you could hypothetically leave yourself as many slots of room as you want, but the CPU cooler has to face towards the GPU because that's how the PCIe spec works, versus flipping it the other way and sucking fresh air into the cooler without a card restricting airflow.

practically speaking though, 2 slots is the only config that anyone really cares about, since this is basically an SFF PC and nobody wants an SFF PC that's not small. If you get up into mATX or mITX cube case size then using a standard mobo instead of a backplane thingy is an obvious choice.

Paul MaudDib fucked around with this message at 00:08 on Jan 8, 2020

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

If you could pop 4 CPU modules and 4 GPU modules into an ATX sized backplane I think people would be interested.

Shaocaholica
Oct 29, 2002

Fig. 5E
Have a big flat power supply on the other side like the HP Z8.

https://www.youtube.com/watch?v=gy7_iREnEBo&t=8s

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

taqueso posted:

If you could pop 4 CPU modules and 4 GPU modules into an ATX sized backplane I think people would be interested.

"backplane" is a little misleading because you'd basically have slots like: Acpu-Agpu Bcpu-Bgpu Ccpu-Cgpu Dcpu-Dgpu with no interconnection between A/B/C/D like you would in a backplane. In other words it's not like you would have 8 slots that all interconnect; that's not something you could do with off-the-shelf PCIe spec hardware. Even if you could, they would interfere with each other's bus bandwidth anyway.

So effectively what you would have is 4 ghost canyon nucs that happen to share a single case and a single PCB for their respective backplanes.

cyber cafes might like it I guess.

Not really seeing a huge advantage vs taking a 3960X and putting 4 GPUs in it and virtualizing, though (apart from Looking Glass being a mess, I guess). Or just having 4 separate PCs in some cheapass cases; the case isn't the expensive part there anyway.

Paul MaudDib fucked around with this message at 00:22 on Jan 8, 2020

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

I guess you need a way to have the CPU/chipset able to be a slave on some lanes and a master on others. Then you could build a ring or matrix of connections between CPUs.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

taqueso posted:

I guess you need a way to have the CPU/chipset able to be a slave on some lanes and a master on others. Then you could build a ring or matrix of connections between CPUs.

that's a pretty big mod that would require both hardware and software changes. if we're doing non-standard hardware configs (this is no longer PCIe) then how 'bout we start with putting the CPU board the right way around? ;)

the end state of having multiple CPU/GPU modules that network over a PCIe-like protocol is cool though. just a lot of work to get there.

you could hypothetically do a backplane using off-the-shelf PCIe devices with some kind of switching chip I guess, with all the slots wired to talk to the switch and the switch deciding what gets to talk to what, but something that can switch PCIe slots at full speed (even in a latched configuration/not changing in realtime) sounds expensive.

one interesting application of having multiple CPUs/GPUs on a plain old backplane would be for benchmarking though. You could plug four different CPUs and four different GPUs in at a time, and only boot one pair of them at a time. That would make it vastly easier to do the kinds of testing "batches" that reviewers need to do, where you test 27 different kinds of GPUs paired with 14 different CPUs or whatever. You could have them all plugged in and have some sort of BMC that can control which pair gets combined: boot from an iSCSI image, run an automated test script, then move on to the next hardware combination. just automate the poo poo out of everything
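
something like this loop, sketched in Python with stand-in stubs where the real BMC, iSCSI boot, and benchmark plumbing would go (none of it targets a real BMC API, purely illustrative):

```python
# Sketch of an automated review-bench loop over a hypothetical multi-slot
# backplane whose BMC can power up exactly one CPU+GPU pair at a time.
# The bmc_/boot_/run_ functions are stand-in stubs, not a real BMC API.
import itertools
import json
import time

NUM_CPU_SLOTS = 4
NUM_GPU_SLOTS = 4

def bmc_select_pair(cpu_slot, gpu_slot):
    # Stand-in: would tell the (hypothetical) BMC which pair to power up.
    print(f"[bmc] powering CPU slot {cpu_slot} + GPU slot {gpu_slot}")

def boot_from_iscsi_and_wait():
    # Stand-in: would trigger a network boot from the shared iSCSI image
    # and block until the OS answers over SSH.
    time.sleep(0.1)

def run_benchmark_suite():
    # Stand-in: would run the scripted benchmark pass on the booted system
    # and return its numbers. Placeholder values only.
    return {"avg_fps": None, "p1_low_fps": None}

results = {}
for cpu_slot, gpu_slot in itertools.product(range(NUM_CPU_SLOTS),
                                            range(NUM_GPU_SLOTS)):
    bmc_select_pair(cpu_slot, gpu_slot)
    boot_from_iscsi_and_wait()
    results[f"cpu{cpu_slot}+gpu{gpu_slot}"] = run_benchmark_suite()

print(json.dumps(results, indent=2))
```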

Paul MaudDib fucked around with this message at 00:40 on Jan 8, 2020

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

Yeah, I was thinking hypothetically, not Ghost Canyon specifically. I think you should be able to glue things together with PLX PCIe chips, so each CPU would devote some lanes to a set of PLX endpoints that each connect to a second PLX chip that is an endpoint for another CPU. Then you get to write a Linux driver/NUMA thing. Oh, and figure out what the glue between the PLXes is. Maybe an FPGA.

Paul MaudDib posted:

one interesting application of that idea using simplistic hardware would be for benchmarking though. You could plug four different CPUs and four different GPUs in at a time, and only boot one pair of them at a time. That would make it vastly easier to do the kinds of testing "batches" that reviewers need to do, where you test 27 different kinds of GPUs paired with 14 different CPUs or whatever. You could have them all plugged in and have some sort of BMC that can control which pair gets combined.

That's a good idea. Especially the automation part you expanded on.

SwissArmyDruid
Feb 14, 2014

by sebmojo

taqueso posted:

If you could pop 4 CPU modules and 4 GPU modules into an ATX sized backplane I think people would be interested.

At what point are we just doing blade servers all over again?

At any rate, I, for one, am curious as to how nicely it will play with being plugged into a proper desktop's motherboard.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

taqueso posted:

That's a good idea. Especially the automation part you expanded on.

I've thought of using RDMA to do that. PCIe traffic is just DMA anyway; RDMA lets you talk to one device and have it dump your traffic onto another PCIe bus in another system.

So basically software-defined PCIe configurations across RDMA switching: virtualize your CPU+OS on one server, virtualize your GPU on another server, and have them connected by InfiniBand RDMA. In principle you should be able to treat them like they were physically plugged together, but what your PC thinks is a "GPU" is actually an RDMA device that portals the DMA traffic over to the other machine and dumps it onto the PCIe bus there, as if they were physically connected (plus a little bit of latency). There could conceivably be an InfiniBand switch in the middle.

Conceptually it seems like it should be possible, but I'm 99% sure the software to do that doesn't exist in today's hypervisors. It's also an open question how much the latency would hurt as far as gaming goes; IB guarantees very low latency due to its switched-fabric nature (IB EDR is about 0.5 usec), but it's non-zero.
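
quick back-of-the-envelope on that, using the ~0.5 usec EDR figure from above; the round-trips-per-frame number is a pure guess, just to show the scale:

```python
# Rough scale check: how much does InfiniBand latency eat out of a frame?
# The 0.5 us one-way EDR figure is from the post above; the number of
# round trips per frame is a made-up guess, just to see the order of magnitude.
IB_ONE_WAY_US = 0.5          # ~EDR switched-fabric latency, one way
ROUND_TRIPS_PER_FRAME = 100  # guess: doorbells, fences, completions, etc.

added_us = ROUND_TRIPS_PER_FRAME * 2 * IB_ONE_WAY_US  # 100 us per frame

for fps in (60, 144):
    frame_budget_us = 1_000_000 / fps
    pct = 100 * added_us / frame_budget_us
    print(f"{fps} fps: frame budget {frame_budget_us:.0f} us, "
          f"added fabric latency ~{added_us:.0f} us ({pct:.1f}% of the frame)")
```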

Paul MaudDib fucked around with this message at 01:02 on Jan 8, 2020

Shaocaholica
Oct 29, 2002

Fig. 5E

SwissArmyDruid posted:

At what point are we just doing blade servers all over again?

At any rate, I, for one, and curious as to how nice it will play with being plugged into a proper desktop's motherboard.

Blade servers but in a smaller format, for kids. Baby's first server rack.

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

RPi 4s in red, yellow, and blue primary colors

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Multi-socket is niche. HPC is one giant fabric designed to interconnect thousands of nodes.

Unless you're Chinese and 8P servers are lucky but 4P aren't, there's not much to gain by going up in sockets. Especially since Epyc can already address 4TB, only really high-memory DBs need the 4P memory addressing.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Shaocaholica posted:

Blade servers but in a smaller format, for kids. Baby's first server rack.

Honestly, I think they're slavishly holding to the PCIe spec just a little too much.

Imagine, if you would, a world where said... what's it called, Ghost Canyon? NUC were built on the opposite side of the PCB instead, sitting on top instead of hanging down.

You could put your graphics card right up against the back of the NUC (plus a little room so that heat coming off the backplate of the GPU doesn't radiate over to the NUC) but have so much better ventilation and airflow for your NUC. Like an ITX ultra-SFF case, but with much shorter PCIe runs, and a hard riser instead of a well-shielded flexible one.

NewFatMike
Jun 11, 2015

Paul MaudDib posted:

Not really seeing a huge advantage vs taking a 3960X and putting 4 GPUs in it and virtualizing, though (apart from Looking Glass being a mess, I guess). Or just having 4 separate PCs in some cheapass cases; the case isn't the expensive part there anyway.

What's up with Looking Glass? Anything outside of it just being a late alpha/early beta thing?
